Tuesday, 30 June 2020

Using the Linux ‘find’ command with multiple filename patterns


Someone asked me the other day how they could search for files with different names with one Linux find command. They wanted to create a list of all files that ended with the extensions .class and .sh.

Although this is actually very easy to do with the find command, the syntax is obscure and probably not well documented, so let's look at how to do this.

Linux find command - two filename patterns


Here's an example of how to search in the current directory and all subdirectories for files ending with the extensions .class and .sh using the find command:

find . -type f \( -name "*.class" -o -name "*.sh" \)

That should work on all types of Unix systems, including vanilla Unix, Linux, BSD, FreeBSD, AIX, Solaris, and Cygwin.

Finding files with three different filename extensions


While I'm in the neighborhood, here is an example of how to search the current directory for files that end in any of three different file extensions:

find . -type f \( -name "*cache" -o -name "*xml" -o -name "*html" \)

(FWIW, I ran that one on a Mac OS X machine.)

In these examples I always include the -type f option, which tells find to only display files, and specifically not directories. That's not always necessary, but when someone tells me they want to find files, I always include it.
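If you also want the match to be case-insensitive (so files named A.SH or Foo.Class are caught too), the GNU and BSD versions of find support -iname as a drop-in replacement for -name. Note that -iname is an extension, not part of strict POSIX find, so it may be missing on older Unixes:

```shell
# Case-insensitive variant of the same search; -iname is a
# GNU/BSD extension to find, so it may not exist everywhere
find . -type f \( -iname "*.class" -o -iname "*.sh" \)
```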

Saturday, 27 June 2020

Linux ‘locate’ command examples


Linux FAQ: Can you share some examples of how to use the Linux locate command?

Background


Sure. The locate command is used to find files by their filename. The locate command is lightning fast because there is a background process that runs on your system that continuously finds new files and stores them in a database. When you use the locate command, it then searches that database for the filename instead of searching your filesystem while you wait (which is what the find command does).

To find files on Unix and Linux systems, I've historically fired up my old friend, the Linux find command. For instance, to find a file named tomcat.sh, I used to type something like this:

find / -name tomcat.sh -type f

This is a lot of typing, and although the results are current, it takes a long time to run on a big system. Recently a friend told me about the Linux locate command, and I haven't looked back since.

Using the locate command


Using the command is easy; just type locate followed by the name of the file you're looking for, like this:

locate tomcat.sh

Or, add the -i option to perform a case-insensitive search, like this:

locate -i springframework

Your Linux system will very quickly list all of the places it has been able to find (locate) the file with the name you specified. I emphasize the "very quickly" part, because this is so much faster than using the Linux find command that you'll never look back.

Another thing to note: if you type in a name like foo (or any other name), locate by default returns the name of every file on the system whose path contains the string "foo". So whether the filename is foo, foobar, or barfoo, locate will return all of them as matches.
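Because of that substring behavior, it can help to pipe locate through grep when you want an exact filename. The trailing $ anchors the match to the end of the path; the filename foo here is just a placeholder:

```shell
# Show only paths whose final component is exactly "foo";
# "foo" is a placeholder filename for illustration
locate foo | grep '/foo$'
```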

How the Linux locate command works


The locate command works so fast because it runs a background process to cache the location of files in your filesystem. Then, when you want to find the file you're looking for, you can just use the command like I showed previously. It's that easy. Here's a quick blurb from the locate man page:

The locate program may fail to list some files that are present, or may list files
that have been removed from the system.  This is because locate only reports files
that are present in the database, which is typically only regenerated once a week
by the /etc/periodic/weekly/310.locate script.

Use find(1) to locate files that are of a more transitory nature.

The locate database is typically built by user "nobody" and the locate.updatedb(8) utility skips directories which are not readable for user "nobody", group "nobody", or world.  For example, if your HOME directory is not world-readable, none of your files are in the database.

So, as the man page states, if you can't find a file using the locate command, it may be that the database is out of date, and when this happens, you can always use the find command. The find command will be slower because it scans all the files you specify in real time (it doesn't have a database backing it up), but you can also perform more powerful searches with it.
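One pattern that follows from this is to try locate first and fall back to find when the database hasn't caught up yet. The filename myfile.txt is a placeholder; this relies on locate exiting with a nonzero status when nothing matches, which is the behavior of the common implementations:

```shell
# Try the fast database lookup first; if it finds nothing,
# scan the live filesystem with find (slow but current)
locate myfile.txt || find / -name myfile.txt -type f 2>/dev/null
```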

Thursday, 25 June 2020

Linux ‘find’ command recipes


Thinking about my own work when using Linux and Unix systems, a lot of the work is based around files, and when you're working with files, tools like the Linux find command are very helpful. So, I've decided to put together this list of find command examples/recipes that I'll update from time to time when I use the find command in different ways.

How to find all files beneath the current directory that end with the .jsp extension:

find . -type f -name "*.jsp"

How to find all files in the /Users/al directory that end with the .jsp extension:

find /Users/al -type f -name "*.jsp"

How to find all the files (no directories) beneath the current directory and run the ls -l command on those files:

find . -type f -exec ls -l {} \;

How to find all the directories (no files) beneath the current directory and run the ls -ld command on those directories:

find . -type d -exec ls -ld {} \;

Note that the d option of the ls command is needed there to keep ls from printing the contents of the directory. I often don't want that; I just want to see some attributes of the directory itself.

Find and delete


Here's how to find all files beneath the current directory that begin with the letters Poop and delete them. Be very careful with this command, it is dangerous(!), and not recommended for newbies, especially if you don't have a backup.

find . -type f -name "Poop*" -exec rm {} \;

This one is even more dangerous. It finds all directories named CVS, and deletes them and their contents. Just like the previous command, be very careful with this command, it is dangerous(!), and not recommended for newbies, or if you don't have a backup.

find . -type d -name CVS -exec rm -r {} \;
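A safer habit for either of these deletes is to run the same find without the -exec action first, review the list, and only then add the delete:

```shell
# Step 1: preview exactly what would be removed
find . -type d -name CVS

# Step 2: only after reviewing the list, delete
# (find may warn that directories it just removed no longer
# exist as it tries to descend into them; that is harmless)
find . -type d -name CVS -exec rm -r {} \;
```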

find and chmod


Here are two examples using the find command and chmod together. This first example finds all files beneath the current directory and changes their mode to 644 (rw-r--r--):

find . -type f -exec chmod 644 {} \;

This example shows how to find all directories beneath the current directory and change their mode to 755 (rwxr-xr-x):

find . -type d -exec chmod 755 {} \;

find command aliases


I use the Unix find command so often that I usually have at least one alias created to help cut down on the typing. Here is an alias named ff ("file find"):

alias ff="find . -type f -name "

Once that alias is defined I can invoke it like this:

ff myfile.foo
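An alias like this works, but the shell may expand a wildcard argument before find ever sees it. A shell function (sketched here under the same ff name as the alias above) sidesteps that by quoting the argument inside the function:

```shell
# Function version of "ff": the quoted "$1" ensures find,
# not the shell, performs the wildcard matching
ff() { find . -type f -name "$1"; }

# usage: quote the pattern so the shell passes it through intact
ff "*.foo"
```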

Don’t forget the ‘locate’ command


If you need to find a file somewhere on the filesystem and can't remember where it is, but you can remember part of the filename, the locate command is often faster than the find command. Here's an example showing how to use the locate command to find a file named something like lost-file on my local system:

locate lost-file

Tuesday, 23 June 2020

Soft and Hard links in Unix/Linux

A link in UNIX is a pointer to a file. Like pointers in programming languages, links in UNIX point to a file or a directory. Creating a link is a way of creating a shortcut to access a file. Links allow more than one filename to refer to the same file.


There are two types of links:

1. Soft Link or Symbolic links
2. Hard Links

These links behave differently when the source of the link (what is being linked to) is moved or removed. A symbolic link is not updated (it merely contains a string which is the pathname of its target); a hard link still refers to the file's data, even if the original name is moved or removed.

For example, suppose we have a file a.txt. If we create a hard link to the file and then delete the file, we can still access it through the hard link. But if we create a soft link to the file and then delete the file, we can't access it through the soft link; the soft link becomes dangling. Basically, a hard link increases the reference count of a location, while a soft link works as a shortcut (as in Windows).
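That difference is easy to see in a quick shell session (run it in an empty scratch directory):

```shell
echo hello > a.txt
ln a.txt hard.txt        # hard link: a second name for the same inode
ln -s a.txt soft.txt     # soft link: a file containing the path "a.txt"
rm a.txt
cat hard.txt             # still works: prints "hello"
cat soft.txt             # fails: the link now dangles
```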

1. Hard Links


◈ Each hard-linked file is assigned the same inode value as the original, therefore they reference the same physical file location. Hard links are more flexible and remain linked even if the original or linked files are moved throughout the file system, although hard links are unable to cross different file systems.

◈ The ls -l command shows all the links, with the link count column showing the number of links.

◈ Hard links have the actual file contents.

◈ Removing any link just reduces the link count; it doesn't affect the other links.

◈ We cannot create a hard link to a directory; this restriction exists to avoid recursive loops.

◈ If the original file is removed, the link will still show the content of the file.

◈ Command to create a hard link is:

$ ln  [original filename] [link name]

2. Soft Links


◈ A soft link is similar to the file shortcut feature used in Windows operating systems. Each soft-linked file contains a separate inode value that points to the original file. As with hard links, any change to the data in either file is reflected in the other. Soft links can be linked across different file systems, although if the original file is deleted or moved, the soft link will no longer work correctly (it becomes a dangling link).

◈ The ls -l command shows the link with an l as the first character of the first column, and shows which original file the link points to.

◈ A soft link contains the path of the original file, not its contents.

◈ Removing a soft link doesn't affect anything, but when the original file is removed, the link becomes a "dangling" link that points to a nonexistent file.

◈ A soft link can link to a directory.

◈ Link across filesystems: If you want to link files across the filesystems, you can only use symlinks/soft links.

◈ Command to create a Soft link is:

$ ln  -s [original filename] [link name]
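After creating a soft link you can inspect it with ls -l (the mode begins with l and an arrow shows the target) or with readlink, which prints the stored path. Here /etc/passwd is just a convenient target that exists on most systems:

```shell
ln -s /etc/passwd pw
ls -l pw        # mode begins with "l", and shows: pw -> /etc/passwd
readlink pw     # prints the stored target path: /etc/passwd
```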

Saturday, 20 June 2020

Linux Essentials: Overview

Linux adoption continues to rise world-wide as individual users, government entities and industries ranging from automotive to space exploration embrace open source technologies. This expansion of open source in enterprise is redefining traditional Information and Communication Technology (ICT) job roles to require more Linux skills. Whether you’re starting your career in Open Source, or looking for advancement, independently verifying your skill set can help you stand out to hiring managers or your management team.


The Linux Essentials Professional Development Certificate (PDC) is a great way to show employers that you have the foundational skills required for your next job or promotion. It also serves as an ideal stepping-stone to the more advanced LPIC Professional Certification track for Linux Systems Administrators.

The Linux Essentials Professional Development Certificate validates a demonstrated understanding of:

◈ FOSS, the various communities, and licenses

◈ knowledge of open source applications in the workplace as they relate to closed source equivalents

◈ basic concepts of hardware, processes, programs and the components of the Linux Operating System

◈ how to work on the command line and with files

◈ how to create and restore compressed backups and archives

◈ system security, users/groups and file permissions for public and private directories

◈ how to create and run simple scripts

Current Version: 1.6 (Exam code 010-160)

Prerequisites: There are no prerequisites for this certification

Requirements: Passing the Linux Essentials 010 exam

Validity Period: Lifetime

Languages: English, German and Dutch.

About Objective Weights: Each objective is assigned a weighting value. The weights range roughly from 1 to 10 and indicate the relative importance of each objective. Objectives with higher weights will be covered in the exam with more questions.

Linux Essentials Exam Topics


◈ The Linux community and a career in open source

◈ Finding your way on a Linux system

◈ The power of the command line

◈ The Linux operating system

◈ Security and file permissions

Detailed Linux Essentials Objectives Version 1.5

LPI Linux Essentials Test Center Portal


For the new Linux Essentials exam LPI is using a new method of internet-based testing (IBT) through accredited testing locations. This method of test delivery requires a computer lab, a proctor, and a secure browser. The IBT Linux Essentials testing solution is offered by LPI Inc.

Thursday, 18 June 2020

Cat command in Linux with examples


The cat (concatenate) command is very frequently used in Linux. It reads data from files and prints their contents as output. It helps us create, view, and concatenate files. So let us see some frequently used cat commands.

1) To view a single file

Command:

$cat filename

Output

It will show the contents of the given file.

2) To view multiple files

Command:

$cat file1 file2

Output

This will show the content of file1 and file2.

3) To view the contents of a file preceded by line numbers.

Command:

$cat -n filename

Output

It will show the contents with line numbers. Example:

$cat -n lpi.txt
     1  This is lpi
     2  A unique array

4) Create a file

Command:

$ cat >newfile

Output

Will create a file named newfile; type the content, then press Ctrl+D to save it.

5) Copy the contents of one file to another file.

Command:

$cat [filename-whose-contents-is-to-be-copied] > [destination-filename]

Output

The contents will be copied into the destination file.

6) Cat command can suppress repeated empty lines in output

Command:

$cat -s lpi.txt

Output

Will suppress repeated empty lines in output

7) Cat command can append the contents of one file to the end of another file.

Command:

$cat file1 >> file2

Output

Will append the contents of one file to the end of another file

8) The contents of a file can be displayed in reverse order (last line first) using the tac command.

Command:

$tac filename

Output

Will display content in reverse order

9) Cat command can mark the end of each line.

Command:

$cat -E "filename"

Output

Will display a $ character at the end of each line.

10) If you want to use the -v, -E and -T option together, then instead of writing -vET in the command, you can just use the -A command line option.

Command

$cat -A  "filename"

11) Cat command to open files whose names begin with a dash.

Command:

$cat -- "-dashfile"

Output

Will display the content of -dashfile

12) Cat command when the file has a lot of content that can't fit in the terminal.

Command:

$cat "filename" | more

Output

Will show as much content as fits in the terminal, and will prompt to show more.

13) Cat command to merge the contents of multiple files.

Command:

$cat "filename1" "filename2" "filename3" > "merged_filename"

Output

Will merge the contents of the files in the given order and write the result into "merged_filename".

14) Cat command to display the content of all text files in the folder.

Command:

$cat *.txt

Output

Will show the content of all text files present in the folder.

Tuesday, 16 June 2020

The Linux file command


Linux file information FAQ: How can I tell what type of file a file is on a Unix or Linux system?

The Linux file command shows you the type of a file, or multiple files. It's usually used when you're about to look at some type of file you've never seen before. When I first started working with Unix and Linux systems I used it a lot to make sure I wasn't about to open a binary file in the vi editor, amongst other things.

You issue the Linux file command just like other commands, like this:

file /etc/passwd

The output of the file command looks something like this:

/etc/passwd: ASCII English text

This is telling you that this is a plain text file. If you use the file command on a gzip'd file, the output will include text like this:

gzip compressed data

If you issue this command on a directory the output will say "directory", a PDF document will be reported as "PDF document", and if you issue it on a special Linux device file (typically under the /dev directory) it will look like this:

/dev/ttyp0: character special (4/0)

You can also issue the Linux file command on more than one file at a time, so you can do this to issue the file command on all files in the current directory:

file *

or this to look at all files in the /etc directory:

file /etc/*

and like this to look at all files in the /dev directory:

file /dev/*
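file also combines well with find when you want to classify every file in a whole directory tree. The {} + form hands file many names per invocation, which is faster than running it once per file:

```shell
# Classify every regular file under /etc; errors from
# unreadable files are discarded
find /etc -type f -exec file {} + 2>/dev/null | head
```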

Saturday, 13 June 2020

comm command in Linux with examples


comm compares two sorted files line by line and writes to standard output the lines that are common and the lines that are unique.

Suppose you have two lists of people and you are asked to find out the names available in one and not in the other, or even those common to both. comm is the command that will help you to achieve this. It requires two sorted files which it compares line by line.

Before discussing anything further first let’s check out the syntax of comm command:


Syntax:



$comm [OPTION]... FILE1 FILE2

◉ Since comm compares two files, its syntax takes two filenames as arguments.

◉ With no OPTION used, comm produces three-column output, where the first column contains lines unique to FILE1, the second column contains lines unique to FILE2, and the third column contains lines common to both files.

◉ The comm command only works correctly if you are comparing two files which are already sorted.
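If your files are not already sorted, you don't have to create sorted copies on disk; in bash (and other shells that support process substitution, a non-POSIX feature) you can sort them on the fly:

```shell
# Sort both inputs on the fly with process substitution;
# this is a bash/zsh/ksh feature, not plain POSIX sh
comm <(sort file1.txt) <(sort file2.txt)
```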

Example: Let us suppose there are two sorted files file1.txt and file2.txt and now we will use comm command to compare these two.

// displaying contents of file1 //
$cat file1.txt
Apaar
Ayush Rajput
Deepak
Hemant

// displaying contents of file2 //
$cat file2.txt
Apaar
Hemant
Lucky
Pranjal Thakral

Now, run comm command as:

// using comm command for
comparing two files //
$comm file1.txt file2.txt
                Apaar
Ayush Rajput
Deepak
                Hemant
        Lucky
        Pranjal Thakral

The above output consists of three columns: the first column (no leading tab) contains names only present in file1.txt, the second column (one leading tab) contains names only present in file2.txt, and the third column (two leading tabs) contains names common to both files.
This is the default output produced by the comm command when no option is used.

Options for comm command:


1. -1: suppress first column (lines unique to first file).
2. -2: suppress second column (lines unique to second file).
3. -3: suppress third column (lines common to both files).
4. --check-order: check that the input is correctly sorted, even if all input lines are pairable.
5. --nocheck-order: do not check that the input is correctly sorted.
6. --output-delimiter=STR: separate columns with the string STR.
7. --help: display a help message, and exit.
8. --version: output version information, and exit.

Note: The options 4 to 8 are rarely used but options 1 to 3 are very useful in terms of the desired output user wants.

Using comm with options


1. Using the -1, -2 and -3 options: The use of these three options can be easily explained with the help of an example:

//suppress first column using -1//
$comm -1 file1.txt file2.txt
         Apaar
         Hemant
 Lucky
 Pranjal Thakral

//suppress second column using -2//
$comm -2 file1.txt file2.txt
        Apaar
Ayush Rajput
Deepak
        Hemant

//suppress third column using -3//
$comm -3 file1.txt file2.txt           
Ayush Rajput
Deepak     
        Lucky
        Pranjal Thakral

Note that you can also suppress multiple columns using these options together as:

//...suppressing multiple columns...//

$comm -12 file1.txt file2.txt
Apaar
Hemant

/* using -12 together suppressed both first
and second columns */

2. Using the --check-order option: This option is used to check whether the input files are sorted or not; if either of the two files is wrongly ordered, the comm command will fail with an error message.

$comm --check-order f1.txt f2.txt

The above command produces the normal output if both f1.txt and f2.txt are sorted, and just gives an error message if either of the two files is not sorted.

3. Using the --nocheck-order option: If you don't want comm to check whether the input files are sorted, use this option. This can be explained with the help of an example.

//displaying contents of unsorted f1.txt//

$cat f1.txt
Pranjal
Kartik

//displaying contents of sorted file f2.txt//

$cat f2.txt
Apaar
Kartik

//now use --nocheck-order option with comm//

$comm --nocheck-order f1.txt f2.txt
Pranjal
        Apaar
                Kartik

/*as this option forced comm not to check
 the sorted order that's why the output
comm produced is also
not in sorted order*/

4. Using the --output-delimiter=STR option: By default, the columns in the comm command output are separated by tabs, as explained above. However, if you want, you can change that and have a string of your choice as the separator. This can be done using the --output-delimiter option, which requires you to specify the string that you want to use as the separator.

Syntax:

$comm --output-delimiter=STR FILE1 FILE2

EXAMPLE:

//...comm command with --output-delimiter=STR option...//

$comm --output-delimiter=+ file1.txt file2.txt
++Apaar
Ayush Rajput
Deepak
++Hemant
+Lucky
+Pranjal Thakral

/*+ before content indicates content of
second column and ++ before content
indicates content of third column*/

So, that’s all about the comm command and its options.

Thursday, 11 June 2020

TDC teaches how to deliver a vibrant interactive experience


How does a conference make the transition to virtual participation in the age of the COVID-19 virus? And suppose you have less than two months to do so?

As social distancing loomed as a dire necessity in February and March 2020, many organizations simply threw up their hands and canceled their conferences. But some were prepared to put their offering online. The major Brazilian organization "The Developer's Conference" is particularly worth examining for their online work. Building on previous experiences with hybrid conferences (mixed on-site and online), they rethought everything about what they were doing and produced a unique experience that they are now teaching others to reproduce.

The amount of interaction among speakers, sponsors, and attendees at TDCOnline was as much as ever, perhaps more so. And the 30 staff gracefully handled 6,750 attendees.

This article, based on a conversation I had with event director Yara Senger, will show how they did it.

TDC started in 2007. As early as 2009, they offered a cost-free online component along with their regular on-site conference. But the online component was modest: just video feeds of the speakers. Attendance was growing gradually over the years, and toward the end was about 11,000 on-site attendees with an additional 5,000 to 6,000 online participants.

Organizers could tell that there was not much interaction by online participants. In particular, the online participants offered little benefit to sponsors, whose satisfaction is crucial to make conferences work financially. If they just extended this format to an all-virtual conference, the experience would not be particularly positive. They worked from the premise that people need to feel comfortable with every aspect of an online experience in order to attract them year after year.

So the organizers started from the ground up, producing a very different kind of conference. Some of the important changes follow.

◉ The traditional experience of being greeted when you enter the door of an event was reproduced by banks of employees who approached every attendee as they logged into the conference. Every question received a prompt answer. If someone couldn't get into a room for some technical reason, they could get help in a chat.

◉ Video was available, so that you could see the people you were talking to. I asked Senger whether bandwidth was a problem, because I assumed that many areas of Brazil would be poorly provisioned with network connections. She said that the organizers heard only a few complaints, and that bandwidth was not an issue. This is a tribute, I think, to the brilliant engineering that developers have put into designing and implementing streaming protocols.

◉ Many parallel activities were offered on virtual streams that the organizers called "rooms." Thus, there would be a room for each track, where people could stay for multiple sessions.

Tracks, I should explain, form the basic structure of TDC. Typical tracks in 2020 included Agile, Agile for business, Artificial Intelligence and data, programming language, etc.—the kinds of topics that are popular at all computer conferences today. Attendees pay separately for each track they want to attend.

Some sponsors also rented "rooms" to offer demos and other content. Registration and questions were also handled in a dedicated room.

◉ If participants wanted to connect with a speaker after a talk, they could leave the track's room and visit a separate room for informal discussion. These went very well, showing that some 200 people could interact in an orderly fashion online. Some of the sessions went on until 11:00 at night, with perhaps 20 stalwart attendees lasting the whole time.

Senger said that interactions in the rooms were probably more formal than face-to-face meetings, which is a drawback of virtual meetings. But they allowed many more people to participate for a longer time than when sessions are on-site.

◉ Participants could network with each other in a serendipitous way, a little like hallway chats at face-to-face conferences. At TDCOnline, you could enter a room with another randomly chosen participant and introduce yourselves to each other for three minutes.

◉ Participants could transition between rooms with a single click. This is one of the advantages of the software used by TDC to administer the conference, Hopin. Hopin was available at a reasonable price and offered important features TDC needed. It let them set up separate tracks (one per room, as said before) and set participants' permissions so they had access only to the tracks they paid for.

◉ Session monitors followed the presentations in real time and quickly put resources up in chat boxes. When a speaker mentioned a resource, the monitor could post a link to it within a couple of seconds. They were so effective that some participants assumed that an NLP-powered chatbot was doing it.

◉ Each track during the on-site conference used to continue for eight hours, and people were encouraged to stay for the whole day. (They were always charged for the entire track.) Organizers realized that eight hours of conferencing online would be grueling (even if some participants were delighted to stay till 11:00 at night), so tracks were reduced to four hours.

◉ Along with the reduction in the length of a track, sessions were reduced to 25 minutes each. This shifted much of the discussion from the formal session to the informal follow-up. And this, in turn, allowed the discussion to reflect better the interests of the attendees.

◉ There were also more panels than in previous years. The debates there were lively and productive, according to Senger. Some panels went on for three hours. Attendees could submit questions and vote on the questions submitted by others, so the most popular questions got plenty of attention.

What was the payoff for all this insight and planning? The conference ultimately drew a respectable number of 6,750 attendees, who logged in from every region of Brazil—bringing much broader participation geographically than ever before—and included people from 21 other countries too. (These countries are marked by red dots in the article image) Remember that the sessions were all in Portuguese, so this reveals a thriving international community of Brazilians and other people fluent in Portuguese. About 15 sponsors participated. A sparkling, fast-paced video conveys some of the experience.

I asked whether different types of people came this year, because it was online. Except for the great geographic spread, Senger said people seemed to be basically like the ones who came to earlier TDCs. The only difference is a slight decrease in experience levels. Only 39% of attendees had ten or more years of experience in their fields, compared to an average of 44% before. But there was a big increase among those with 6-10 years of experience. This is understandable, because the conference was four times as large this year, and there are a lot more people with 6-10 years of experience than with ten or more.

As you can probably tell from my description of the conference, it depended heavily on the responsiveness and expertise of the staff. I asked Senger how they could handle 6,750 with only 30 staff, and she answered that the staff are well-trained and have been running their conferences for years. The speakers need a great deal of support, starting by testing their connections two weeks before the conference, ensuring they were ready the day before, and setting up their session right before it started.

I also asked how easily attendees picked up the rules for attendance. Senger said that it required a lot of explanation. The organizers are preparing a video that attendees can view before future conferences to grasp what they can do to make their participation as effective as possible.

TDCOnline was successful because of the unique structure and tools provided to speakers, sponsors, and attendees. The organizers are now consulting with other organizations to teach the lessons of the first TDCOnline and help others produce engaging and educational conferences during this period of physical distancing, and perhaps even after that period has ended.

Source: lpi.org

Tuesday, 9 June 2020

How to use the Linux 'lsof' command to list open files


Linux “open files” FAQ: Can you share some examples of how to show open files on a Linux system (i.e., how to use the lsof command)?

lsof command background


The Linux lsof command lists information about files that are open by processes running on the system. The lsof command itself stands for “list of open files.” In this article I’ll share some lsof command examples.


I assume you’re logged in as root

One other note: in these examples I'll assume that you're logged in as the Unix/Linux root user. If not, your lsof command output may be significantly limited. If you're logged in as a non-root user, either su to root, or use sudo to run these commands.

Basic Linux lsof command examples


Typing the lsof command by itself lists all open files belonging to all active processes on the system:

$ lsof

On my current Mac OS X system, which has been running for a long time, this shows a lot of open files, 1,582 to be specific:

$ lsof | wc -l
    1582

Note that I didn’t have to be logged in as the root user to see this information on my Mac system.

Adding the head command to lsof shows what some of this output looks like:

$ lsof | head

COMMAND     PID USER   FD     TYPE     DEVICE  SIZE/OFF      NODE NAME
loginwind    32   Al  cwd      DIR       14,2      1564         2 /
loginwind    32   Al  txt      REG       14,2   1754096 243026930 /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow
loginwind    32   Al  txt      REG       14,2    113744   3190067 /System/Library/LoginPlugins/FSDisconnect.loginPlugin/Contents/MacOS/FSDisconnect
loginwind    32   Al  txt      REG       14,2    425504 117920371 /System/Library/LoginPlugins/DisplayServices.loginPlugin/Contents/MacOS/DisplayServices
loginwind    32   Al  txt      REG       14,2      3144   3161654 /System/Library/ColorSync/Profiles/sRGB Profile.icc
loginwind    32   Al  txt      REG       14,2     96704 242998403 /System/Library/PrivateFrameworks/MachineSettings.framework/Versions/A/MachineSettings
loginwind    32   Al  txt      REG       14,2     51288 251253153 /private/var/folders/h5/h59HESVvEmG+3I4Q8lOAxE+++TI/-Caches-/mds/mdsDirectory.db
loginwind    32   Al  txt      REG       14,2    724688 117923285 /System/Library/LoginPlugins/BezelServices.loginPlugin/Contents/MacOS/BezelServices
loginwind    32   Al  txt      REG       14,2    329376 117923166 /System/Library/Extensions/IOHIDFamily.kext/Contents/PlugIns/IOHIDLib.plugin/Contents/MacOS/IOHIDLib

Common lsof options


As mentioned, these details go on for 1,582 lines, so it helps to have some way to weed through that output, whether that involves using the grep command, or some of the lsof options shown below.

This command lists all open files belonging to PID (process ID) 11925:

$ lsof -p 11925

This command lists all open files belonging to processes owned by the user named "al":

$ lsof -u al

This command lists files that are open in the directory specified, but it does not descend into sub-directories:

$ lsof +d '/Users/al'

The next command lists files that are open in the directory specified, and also descends into sub-directories. Beware: this can take a very long time to run for large directory structures:

$ lsof +D '/Users/al'
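The options above go from a process to its files; you can also go the other way. Passing a filename directly to lsof shows which processes have that file open, and the -t option trims the output down to bare PIDs, which is handy in scripts. Here's a small self-contained sketch using a temporary file (the file and descriptor number are just for illustration):

```shell
# Create a temporary file and hold it open from this shell
tmpfile=$(mktemp)
exec 3<>"$tmpfile"

# List the processes that have the file open; -t prints PIDs only
lsof -t "$tmpfile"

# Release the file descriptor and clean up
exec 3>&-
rm -f "$tmpfile"
```

If no process has the file open, lsof simply prints nothing and returns a non-zero exit status.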

Saturday, 6 June 2020

Sed Command in Unix and Linux Examples

Sed is a stream editor used for modifying files in Unix (or Linux). Whenever you want to make changes to a file automatically, sed comes in handy. Most people never learn its full power; they simply use sed to replace text. But you can do many things apart from replacing text with sed. Here I will describe the features of sed with examples.

Consider the below text file as an input.

>cat file.txt
unix is great os. unix is opensource. unix is free os.
learn operating system.
unixlinux which one you choose.

Sed Command Examples


1. Replacing or substituting string

Sed command is mostly used to replace the text in a file. The below simple sed command replaces the word "unix" with "linux" in the file.

>sed 's/unix/linux/' file.txt
linux is great os. unix is opensource. unix is free os.
learn operating system.
linuxlinux which one you choose.

Here the "s" specifies the substitution operation, and the "/" characters are delimiters. "unix" is the search pattern and "linux" is the replacement string.

By default, the sed command replaces only the first occurrence of the pattern in each line; it won't replace the second, third, or later occurrences in the line.

2. Replacing the nth occurrence of a pattern in a line.

Append a number flag (1, 2, and so on) after the final delimiter to replace the first, second, etc. occurrence of a pattern in a line. The below command replaces the second occurrence of the word "unix" with "linux" in each line.

>sed 's/unix/linux/2' file.txt
unix is great os. linux is opensource. unix is free os.
learn operating system.
unixlinux which one you choose.

3. Replacing all the occurrence of the pattern in a line.

The substitute flag /g (global replacement) tells the sed command to replace all occurrences of the string in each line.

>sed 's/unix/linux/g' file.txt
linux is great os. linux is opensource. linux is free os.
learn operating system.
linuxlinux which one you choose.

4. Replacing from nth occurrence to all occurrences in a line.

Use a number flag combined with /g to replace every occurrence of a pattern from the nth one onward in a line. The following sed command replaces the third, fourth, fifth... occurrences of the word "unix" with "linux" in each line.

>sed 's/unix/linux/3g' file.txt
unix is great os. unix is opensource. linux is free os.
learn operating system.
unixlinux which one you choose.

5. Changing the slash (/) delimiter

You can use any delimiter other than the slash. For example, suppose you want to change a web URL in a file:

>sed 's/http:\/\//www/' file.txt

In this case the URL itself contains the delimiter character. So you have to escape each slash with a backslash character, otherwise the substitution won't work.

All those backslashes make the sed command look awkward. Instead, you can change the delimiter to another character, as shown in the examples below.

>sed 's_http://_www_' file.txt
>sed 's|http://|www|' file.txt
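To see the alternate-delimiter form end to end, here's a self-contained run. The file urls.txt and its contents are made up for this illustration, and the replacement uses "www." rather than "www" just to keep the output readable:

```shell
# Create a sample file containing a URL (hypothetical content)
printf 'visit http://example.com for docs\n' > urls.txt

# Using | as the delimiter means the slashes in http:// need no escaping
sed 's|http://|www.|' urls.txt
# visit www.example.com for docs
```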

6. Using & as the matched string

There might be cases where you want to search for a pattern and replace it with itself plus some extra characters. In such cases, & comes in handy: it represents the matched string.

>sed 's/unix/{&}/' file.txt
{unix} is great os. unix is opensource. unix is free os.
learn operating system.
{unix}linux which one you choose.

>sed 's/unix/{&&}/' file.txt
{unixunix} is great os. unix is opensource. unix is free os.
learn operating system.
{unixunix}linux which one you choose.

7. Using \1,\2 and so on to \9

The first pair of parentheses in the pattern can be referenced in the replacement string as \1, the second as \2, and so on, up to \9. These backreferences let you reuse parts of the matched text. For example, to double the word "unix" in a line (producing "unixunix"), use the sed command below.

>sed 's/\(unix\)/\1\1/' file.txt
unixunix is great os. unix is opensource. unix is free os.
learn operating system.
unixunixlinux which one you choose.

The parentheses need to be escaped with backslash characters. Another example: if you want to switch the words in "unixlinux" to give "linuxunix", the sed command is

>sed 's/\(unix\)\(linux\)/\2\1/' file.txt
unix is great os. unix is opensource. unix is free os.
learn operating system.
linuxunix which one you choose.

Another example reverses the first three characters of each line:

>sed 's/^\(.\)\(.\)\(.\)/\3\2\1/' file.txt
inux is great os. unix is opensource. unix is free os.
aelrn operating system.
inuxlinux which one you choose.

8. Duplicating the replaced line with /p flag

The /p print flag prints each replaced line twice on the terminal. If a line does not contain the search pattern and so is not replaced, /p prints that line only once.

>sed 's/unix/linux/p' file.txt
linux is great os. unix is opensource. unix is free os.
linux is great os. unix is opensource. unix is free os.
learn operating system.
linuxlinux which one you choose.
linuxlinux which one you choose.

9. Printing only the replaced lines

Use the -n option along with the /p print flag to display only the replaced lines. Here -n suppresses sed's automatic printing of every line, so only the lines printed by the /p flag appear, each one time.

>sed -n 's/unix/linux/p' file.txt
linux is great os. unix is opensource. unix is free os.
linuxlinux which one you choose.

If you use -n alone, without /p, sed prints nothing at all.

10. Running multiple sed commands.

You can run multiple sed commands by piping the output of one sed command as input to another sed command.

>sed 's/unix/linux/' file.txt | sed 's/os/system/'
linux is great system. unix is opensource. unix is free os.
learn operating system.
linuxlinux which one you chosysteme.

Sed also provides the -e option to run multiple sed commands in a single invocation. The above output can be produced with a single sed command, as shown below.

>sed -e 's/unix/linux/' -e 's/os/system/' file.txt
linux is great system. unix is opensource. unix is free os.
learn operating system.
linuxlinux which one you chosysteme.
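When the list of edits grows beyond a couple of -e options, sed's -f option reads the commands from a file, one command per line. The file name commands.sed below is just an example, and this sketch recreates the sample file.txt so it runs on its own:

```shell
# Recreate the sample file from the top of this article
printf '%s\n' \
  'unix is great os. unix is opensource. unix is free os.' \
  'learn operating system.' \
  'unixlinux which one you choose.' > file.txt

# Put one sed command on each line of a script file
printf 's/unix/linux/\ns/os/system/\n' > commands.sed

# -f runs every command in the script against the input
sed -f commands.sed file.txt
# linux is great system. unix is opensource. unix is free os.
# learn operating system.
# linuxlinux which one you chosysteme.
```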

11. Replacing string on a specific line number.

You can restrict the sed command to replace the string only on a specific line number. An example is

>sed '3 s/unix/linux/' file.txt
unix is great os. unix is opensource. unix is free os.
learn operating system.
linuxlinux which one you choose.

The above sed command replaces the string only on the third line.

12. Replacing string on a range of lines.

You can specify a range of line numbers to the sed command for replacing a string.

>sed '1,3 s/unix/linux/' file.txt
linux is great os. unix is opensource. unix is free os.
learn operating system.
linuxlinux which one you choose.

Here the sed command replaces the string on lines 1 through 3. Another example is

>sed '2,$ s/unix/linux/' file.txt
unix is great os. unix is opensource. unix is free os.
learn operating system.
linuxlinux which one you choose.

Here $ indicates the last line in the file, so the sed command replaces the text from the second line through the last line. Note that the first line is left unchanged, because it falls outside the range.

13. Replace on a lines which matches a pattern.

You can give the sed command a pattern to match against each line. Only on lines where that pattern matches does sed look for the string to be replaced; if it finds the string there, sed replaces it.

>sed '/linux/ s/unix/centos/' file.txt
unix is great os. unix is opensource. unix is free os.
learn operating system.
centoslinux which one you choose.

Here the sed command first looks for the lines which has the pattern "linux" and then replaces the word "unix" with "centos".

14. Deleting lines.

You can delete lines from a file by specifying a line number or a range of line numbers.

>sed '2 d' file.txt
>sed '5,$ d' file.txt
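Running the first of those commands against the sample file makes the effect concrete; this sketch recreates file.txt so it runs on its own:

```shell
# Recreate the sample file from the top of this article
printf '%s\n' \
  'unix is great os. unix is opensource. unix is free os.' \
  'learn operating system.' \
  'unixlinux which one you choose.' > file.txt

# Delete line 2; the other lines pass through untouched
sed '2 d' file.txt
# unix is great os. unix is opensource. unix is free os.
# unixlinux which one you choose.
```

You can also delete by pattern rather than by line number; for example, sed '/unix/ d' file.txt removes every line containing "unix".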

15. Duplicating lines

You can make the sed command print each line of a file twice:

>sed 'p' file.txt

16. Sed as grep command

You can make the sed command behave much like the grep command:

>grep 'unix' file.txt
>sed -n '/unix/ p' file.txt

Here the sed command looks for the pattern "unix" in each line of the file and prints the lines that contain it.

You can also make sed work like grep -v, simply by inverting the match with NOT (!):

>grep -v 'unix' file.txt
>sed -n '/unix/ !p' file.txt

The ! here inverts the pattern match.

17. Add a line after a match.

The sed command can add a new line after each line where a pattern match is found. The "a" command tells sed to append a line after a match:

>sed '/unix/ a "Add a new line"' file.txt
unix is great os. unix is opensource. unix is free os.
"Add a new line"
learn operating system.
unixlinux which one you choose.
"Add a new line"

18. Add a line before a match

The sed command can also add a new line before each line where a pattern match is found. The "i" command tells sed to insert a line before a match:

>sed '/unix/ i "Add a new line"' file.txt
"Add a new line"
unix is great os. unix is opensource. unix is free os.
learn operating system.
"Add a new line"
unixlinux which one you choose.

19. Change a line

The sed command can be used to replace an entire matching line with a new line. The "c" command tells sed to change the line:

>sed '/unix/ c "Change line"' file.txt
"Change line"
learn operating system.
"Change line"

20. Transform like tr command

The sed command can translate characters, much like the tr command, using the transform command "y". For example, you can map selected lowercase letters to uppercase:

>sed 'y/ul/UL/' file.txt
Unix is great os. Unix is opensoUrce. Unix is free os.
Learn operating system.
UnixLinUx which one yoU choose.

Here the sed command transforms the letters "u" and "l" into their uppercase equivalents "U" and "L".

Thursday, 4 June 2020

Linux gzip: How to work with compressed files

If you work much with Unix and Linux systems you'll eventually run into the terrific file compression utilities, gzip and gunzip. As their names imply, the first command creates compressed files and the second uncompresses them.

In this post I take a quick look at the gzip and gunzip file compression utilities, along with their companion tools you may not have known about: zcat, zgrep, and zmore.

The Unix/Linux gzip command


You can compress a file with the Unix/Linux gzip command. For instance, if I run an ls -l command on an uncompressed Apache access log file named access.log, I get this output:

-rw-r--r--   1 al  al  22733255 Aug 12  2008 access.log

Note that the size of this file is 22,733,255 bytes. Now, if we compress the file using gzip, like this:

gzip access.log

we end up creating a new, compressed file named access.log.gz. Here's what that file looks like:

-rw-r--r--   1 al  al  2009249 Aug 12  2008 access.log.gz

Notice that the file has been compressed from 22,733,255 bytes down to just 2,009,249 bytes. That's a huge savings in file size, roughly 11 to 1(!).

There's one important thing to note about gzip: The old file, access.log, has been replaced by this new compressed file, access.log.gz. This might freak you out a little the first time you use this command, but very quickly you get used to it. (If for some reason you don't trust gzip when you first try it, feel free to make a backup copy of your original file.)
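If you'd rather not have gzip replace the original at all, the -c option writes the compressed data to standard output, which you can redirect to a new file while the original stays put. This sketch uses a small made-up sample.log instead of the access.log above:

```shell
# Create a small sample log (a stand-in for access.log)
printf 'GET /java/index.html\nGET /index.html\n' > sample.log

# -c sends compressed output to stdout; redirecting it leaves
# the original sample.log in place
gzip -c sample.log > sample.log.gz

ls sample.log sample.log.gz    # both files now exist
```

Recent versions of gzip also support -k ("keep"), which compresses to sample.log.gz without deleting sample.log.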

The Linux gunzip command


The gunzip ("g unzip") command works just the opposite of gzip, converting a gzip'd file back to its original format. In the following example I'll convert the gzip'd file we just created back to its original format:

gunzip access.log.gz

Running that command restores our original file, as you can see in this output:

-rw-r--r--   1 al  al  22733255 Aug 12  2008 access.log

The Linux file compress utilities (zcat, zmore, zgrep)


I used to think I had to uncompress a gzip'd file to work on it with commands like cat, grep, and more, but at some point I learned there were equivalent gzip versions of these same commands, appropriately named zcat, zgrep, and zmore. So, anything you would normally do on a text file with the first three commands you can do on a gzip'd file with the last three commands.

For instance, instead of using cat to display the entire contents of the file, you use zcat to work on the gzip'd file instead, like this:

zcat access.log.gz

(Of course that output will go on for a long time with roughly 22MB of compressed text.)

You can also scroll through the file one page at a time with zmore:

zmore access.log.gz

And finally, you can grep through the compressed file with zgrep:

zgrep '/java/index.html' access.log.gz
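Because zgrep hands its options through to grep, the usual grep flags work on compressed files too. For instance, -c counts matching lines; the compressed sample file here is made up for the illustration:

```shell
# Build a small compressed log to search
printf 'GET /java/index.html\nGET /index.html\nGET /java/index.html\n' \
  | gzip > sample.log.gz

# -c counts the lines that match the pattern
zgrep -c '/java/index.html' sample.log.gz
# 2
```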

There are also two other commands, zcmp and zdiff, that let you compare compressed files, but I personally haven't had the need for them. However, as you can imagine, they work like this:

zcmp file1.gz file2.gz

or

zdiff file1.gz file2.gz

Linux gzip / compress summary


As a quick summary, just remember that you don't have to uncompress files to work on them, you can use the following z-utilities to work on the compressed files instead:

◉ zcat
◉ zmore
◉ zgrep
◉ zcmp
◉ zdiff

Tuesday, 2 June 2020

The Linux 'rm' command (remove files and directories)

Linux FAQ: How do I delete files (remove files) on a Unix or Linux system?


The Linux rm command is used to remove files and directories. (As its name implies, this is a dangerous command, so be careful.)

Let's take a look at some rm command examples, starting from easy examples to more complicated examples.

How to delete files with rm


In its most basic use, the rm command can be used to remove one file, like this:

rm oldfile.txt

You can also use the rm command to delete multiple Linux files at one time, like this:

rm file1 file2 file3

If you prefer to be careful when deleting files, use the -i option with the rm command. The -i stands for "interactive", so when you use this option the rm command prompts you with a yes/no question before actually deleting each file:

rm -i file1 file2 file3


How to delete directories with rm


To delete Linux directories with the rm command, you have to specify the -r option, like this:

rm -r OldDirectory

The -r option means "recursive", so this command recursively deletes the directory named OldDirectory along with all of the files and subdirectories it contains.

As a warning, this command is obviously very dangerous, so be careful. Some people always add the interactive option when deleting directories, like this:

rm -ir OldDirectory

You can also delete multiple directories at one time, like this:

rm -r Directory1 Directory2 Directory3

How to use wildcards with rm


Unix and Linux systems have always supported wildcard characters, so in this case you can delete files and directories even faster. For instance, to delete all HTML files in the current directory, use this command:

rm *.html

Note that unlike DOS, you don't actually need the "." before the "html" in that command, so you can shorten the command like this:

rm *html

Unix and Linux wildcard matching doesn't treat the "." character or filename extensions specially, so the "." is not required. Be aware, though, that *html matches any name ending in "html", even one without a dot, such as "foohtml".

You can also use wildcard characters in the middle of a filename or at the end of the filename. Here's an example where I'm deleting all files in the current directory that begin with the string "index":

rm index*

This command deletes files named index.html, index.php, and in general, any filename that begins with the character string "index".
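Because a wildcard delete is irreversible, one safe habit is to preview what the shell will expand the pattern to before handing it to rm; echo (or ls) shows exactly which files match. The file names here are throwaway examples:

```shell
# Work in a scratch directory with a few throwaway files
cd "$(mktemp -d)"
touch index.html index.php notes.txt

# Preview the expansion first...
echo index*    # index.html index.php

# ...then run the delete once the list looks right
rm index*
ls             # only notes.txt remains
```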

You can also use wildcard characters like this to delete multiple files or directories:

rm Chapter[123].txt

That command deletes the files Chapter1.txt, Chapter2.txt, and Chapter3.txt, all in one command.

More Linux rm commands


There are many more deletions you can perform with the rm command. For instance, you can delete files and directories that aren't in the current directory. Here's an example where I delete a file named foo.txt that's in the /tmp directory:

rm /tmp/foo.txt