Sunday 31 March 2019

shutdown command in Linux with Examples

The shutdown command in Linux is used to bring the system down in a safe way. You can shut down the machine immediately, or schedule a shutdown using the 24-hour format. When a shutdown is initiated, all logged-in users and processes are notified that the system is going down, and no further logins are allowed.


Only the root user (or a user with sudo privileges) can execute the shutdown command.

Syntax of shutdown Command


shutdown [OPTIONS] [TIME] [MESSAGE]

◈ options – Shutdown options such as halt, power off (the default), or reboot the system.
◈ time – The time argument specifies when to perform the shutdown process.
◈ message – The message argument specifies a message which will be broadcast to all users.

Options


-r : Requests that the system be rebooted after it has been brought down.
-h : Requests that the system be either halted or powered off after it has been brought down, with the choice as to which left up to the system.
-H : Requests that the system be halted after it has been brought down.
-P : Requests that the system be powered off after it has been brought down.
-c : Cancels a pending shutdown. TIME cannot be specified with this option; the first argument is interpreted as MESSAGE.
-k : Only send out the warning messages and disable logins, do not actually bring the system down.

How to use shutdown


In its simplest form, when used without any arguments, shutdown will power off the machine (on systemd systems, after a one-minute delay).

sudo shutdown

How to shutdown the system at a specified time


The time argument can have two different formats. It can be an absolute time in the format hh:mm and relative time in the format +m where m is the number of minutes from now.

The following example will schedule a system shutdown at 5 A.M.:

sudo shutdown 05:00

The following example will schedule a system shutdown in 20 minutes from now:

sudo shutdown +20

How to shutdown the system immediately


To shutdown your system immediately you can use +0 or its alias now:

sudo shutdown now

How to broadcast a custom message


The following command will shut down the system in 10 minutes from now and notify the users with message “System upgrade”:

sudo shutdown +10 "System upgrade"

Note that when specifying a custom wall message, you must also specify a time argument.

How to halt your system


This can be achieved using the -H option.

sudo shutdown -H

Halting stops all CPUs but may leave the machine powered on; powering off additionally makes sure the main power is disconnected.

How to make shutdown power-off machine


Although this is the default behavior, you can still use the -P option to explicitly specify that you want shutdown to power off the system.

sudo shutdown -P

How to reboot using shutdown


To reboot the system, the option is -r.

sudo shutdown -r

You can also specify a time argument and a custom message:

sudo shutdown -r +5 "Updating Your System"

The command above will reboot the system after 5 minutes and broadcast “Updating Your System”.

How to cancel a scheduled shutdown


If you have scheduled a shutdown and you want to cancel it, use the -c option:

sudo shutdown -c

When canceling a scheduled shutdown, you cannot specify a time argument, but you can still broadcast a message that will be sent to all users.

sudo shutdown -c "Canceling the reboot"

Saturday 30 March 2019

unexpand command in Linux with Examples

To convert leading spaces into tabs, there exists a command-line utility called unexpand.


By default, the unexpand command converts only the initial (leading) blanks on each line into tabs, writing the produced output to standard output. Here’s the syntax of the unexpand command:

Syntax :


$unexpand [OPTION]... [FILE]...

where OPTION refers to the options compatible with unexpand and FILE refers to the file name.

Using unexpand command


To convert all the space characters into tab characters in the file kt.txt, use unexpand with the -a (all blanks) option:

$cat -vet kt.txt
have    a    nice    day$
always    try    harder$
to    achieve    better$

/* In the below output $ refers to
   the new line feed and ^I refers
   to the tab */
$unexpand -a kt.txt
have^Ia^Inice^Iday$
always^Itry^Iharder$
to^Iachieve^Ibetter$

To save the output produced by unexpand in another file, say dv.txt, use the syntax below:

/* Saving output in file, dv.txt */
$unexpand -a kt.txt>dv.txt

$cat -vet dv.txt
have^Ia^Inice^Iday$
always^Itry^Iharder$
to^Iachieve^Ibetter$

Options for unexpand command


◈ -a, --all option : This option is used to convert all blanks, instead of just the initial blanks (the default).

/* This converts all the blanks
   also into tabs */
$unexpand -a kt.txt>dv.txt

◈ --first-only option : This is used to convert only leading sequences of blanks (overrides the -a option).

/* This converts only the leading
   sequences of blanks */
$unexpand --first-only kt.txt>dv.txt

◈ -t, --tabs=N option : This sets tabs N characters apart instead of the default of 8 (enables the -a option).

/* the -t option with numerical value
   2 forces to change the spaces
   into tabs of only 2 characters */
$unexpand -t2 kt.txt>dv.txt

◈ -t, --tabs=LIST option : This option uses a comma-separated LIST of tab positions (enables the -a option).
◈ --help option : Display a help message and exit.
◈ --version option : Display version information and exit.
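As an illustration of the --tabs=LIST form, the sketch below (the file name and data are made up for this example) sets tab stops at columns 4 and 8 only; blanks beyond the last listed stop are left alone:

```shell
# Illustrative data: two-space gaps between short fields
printf 'ab  cd  ef\n' > cols.txt

# Tab stops at columns 4 and 8 only (implies -a, so internal
# blanks are converted where they reach a listed stop)
unexpand --tabs=4,8 cols.txt
```

Piping the result through cat -vet shows the converted gaps as ^I, while spacing past the last listed stop remains as plain spaces.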

Thursday 28 March 2019

16 Useful ‘cp’ Command Examples for Linux Beginners


In this article we will demonstrate 16 useful cp command examples, especially for Linux beginners. Following is the basic syntax of the cp command:

Copy a file to another file

# cp {options} source_file target_file

Copy File(s) to another directory or folder

# cp {options} source_file target_directory

Copy directory to directory

# cp {options} source_directory target_directory

Let’s jump into the practical examples of cp command,

Example:1) Copy file to target directory


Let’s assume we want to copy the /etc/passwd file to the /mnt/backup directory for backup purposes, so run the cp command below:

root@lpicentral:~# cp /etc/passwd /mnt/backup/
root@lpicentral:~#

Use the command below to verify whether the file has been copied or not:

root@lpicentral:~# ls -l /mnt/backup/
total 4
-rw-r--r-- 1 root root 2410 Feb  3 17:10 passwd
root@lpicentral:~#

Example:2) Copying multiple files at the same time


Let’s assume we want to copy multiple files (/etc/passwd, /etc/group & /etc/shadow) at the same time to the target directory (/mnt/backup):

root@lpicentral:~# cp /etc/passwd /etc/group /etc/shadow /mnt/backup/
root@lpicentral:~#

Example:3) Copying the files interactively (-i)


If you wish to copy files from one place to another interactively, use the “-i” option of the cp command. The interactive prompt only appears if the destination directory already contains the same file; an example is shown below:

root@lpicentral:~# cp -i /etc/passwd /mnt/backup/
cp: overwrite '/mnt/backup/passwd'? y
root@lpicentral:~#

In the above command, one has to manually type ‘y’ to allow the copy operation.

Example:4) Verbose output during copy command (-v)


If you want verbose output from the cp command, use the “-v” option; an example is shown below:

root@lpicentral:~# cp -v /etc/fstab  /mnt/backup/
'/etc/fstab' -> '/mnt/backup/fstab'
root@lpicentral:~#

In case you want to use both interactive mode and verbose mode then use the options “-iv”

root@lpicentral:~# cp -iv /etc/fstab  /mnt/backup/
cp: overwrite '/mnt/backup/fstab'? y
'/etc/fstab' -> '/mnt/backup/fstab'
root@lpicentral:~#

Example:5) Copying a directory or folder (-r or -R)


To copy a directory from one place to another, use the -r or -R option of the cp command. Let’s assume we want to copy the home directory of the lpicentral user to “/mnt/backup”:

root@lpicentral:~# cp -r /home/lpicentral /mnt/backup/
root@lpicentral:~#

In the above command, the -r option copies the files and directories recursively.

Now verify the contents of lpicentral directory on target place,

root@lpicentral:~# ls -l /mnt/backup/lpicentral/
total 24
drwxr-xr-x 2 root root 4096 Feb  3 17:41 data
-rw-r--r-- 1 root root    7 Feb  3 17:41 file_1.txt
-rw-r--r-- 1 root root    7 Feb  3 17:41 file_2.txt
-rw-r--r-- 1 root root    7 Feb  3 17:41 file_3.txt
-rw-r--r-- 1 root root    7 Feb  3 17:41 file_4.txt
-rw-r--r-- 1 root root    7 Feb  3 17:41 file_5txt
-rw-r--r-- 1 root root    0 Feb  3 17:41 file_5.txt
root@lpicentral:~#

Example:6) Archive files and directory during copy (-a)


While copying a directory with the cp command we generally use the -r or -R option, but in place of -r we can use ‘-a’, which copies recursively while also preserving file attributes (archive mode). An example is shown below:

root@lpicentral:~# cp -a /home/lpicentral /mnt/backup/
root@lpicentral:~# ls -l /mnt/backup/lpicentral/
total 24
drwxr-xr-x 2 root root 4096 Feb  3 17:41 data
-rw-r--r-- 1 root root    7 Feb  3 17:39 file_1.txt
-rw-r--r-- 1 root root    7 Feb  3 17:39 file_2.txt
-rw-r--r-- 1 root root    7 Feb  3 17:39 file_3.txt
-rw-r--r-- 1 root root    7 Feb  3 17:39 file_4.txt
-rw-r--r-- 1 root root    7 Feb  3 17:40 file_5txt
-rw-r--r-- 1 root root    0 Feb  3 17:39 file_5.txt
root@lpicentral:~#

Example:7) Copy only when source file is newer than the target file (-u)


There can be scenarios where you want to copy files only if the source files are newer than the destination ones. This can be easily achieved using the “-u” option of the cp command.

In Example:6 we copied the lpicentral home directory to the /mnt/backup folder. The lpicentral home folder contains 5 txt files; let’s edit a couple of them and then copy all the txt files using “cp -u”.

root@lpicentral:~# cd /home/lpicentral/
root@lpicentral:/home/lpicentral# echo "LinuxRocks" >> file_1.txt
root@lpicentral:/home/lpicentral# echo "LinuxRocks" >> file_4.txt
root@lpicentral:/home/lpicentral# cp -v -u  file_*.txt /mnt/backup/lpicentral/
'file_1.txt' -> '/mnt/backup/lpicentral/file_1.txt'
'file_4.txt' -> '/mnt/backup/lpicentral/file_4.txt'
root@lpicentral:/home/lpicentral#

Example:8) Do not overwrite the existing file while copying (-n)


There are some scenarios where you don’t want to overwrite the existing destination files while copying. This can be accomplished using the option ‘-n’ in ‘cp’ command

root@lpicentral:~# cp -i /etc/passwd /mnt/backup/
cp: overwrite '/mnt/backup/passwd'?

As you can see, the above command prompts us to overwrite the existing file. If you use -n, it will neither prompt for the overwrite nor overwrite the existing file.

root@lpicentral:~# cp -n /etc/passwd /mnt/backup/
root@lpicentral:~#

Example:9) Creating symbolic links using cp command (-s)


Let’s assume we want to create a symbolic link to a file instead of copying it; for such scenarios use the ‘-s’ option of the cp command. An example is shown below:

root@lpicentral:~# cp -s /home/lpicentral/file_1.txt /mnt/backup/
root@lpicentral:~# cd /mnt/backup/
root@lpicentral:/mnt/backup# ls -l file_1.txt
lrwxrwxrwx 1 root root 27 Feb  5 18:37 file_1.txt -> /home/lpicentral/file_1.txt
root@lpicentral:/mnt/backup#

Example:10) Creating Hard link using cp command (-l)


If you want to create a hard link to a file instead of copying it, use the ‘-l’ option. An example is shown below:

root@lpicentral:~# cp -l /home/lpicentral/devops.txt /mnt/backup/
root@lpicentral:~#

As we know, with a hard link the source and the linked file share the same inode number. Let’s verify this using the following commands:

root@lpicentral:~# ls -li /mnt/backup/devops.txt
918196 -rw-r--r-- 2 root root 37 Feb  5 20:02 /mnt/backup/devops.txt
root@lpicentral:~# ls -li /home/lpicentral/devops.txt
918196 -rw-r--r-- 2 root root 37 Feb  5 20:02 /home/lpicentral/devops.txt
root@lpicentral:~#

Example:11) Copying attributes from source to destination (--attributes-only)


If you want to copy only the attributes from source to destination using the cp command, use the “--attributes-only” option:

root@lpicentral:/home/lpicentral# cp --attributes-only /home/lpicentral/distributions.txt /mnt/backup/
root@lpicentral:/home/lpicentral# ls -l /home/lpicentral/distributions.txt
-rw-r--r-- 1 root root 41 Feb  5 19:31 /home/lpicentral/distributions.txt
root@lpicentral:/home/lpicentral# ls -l /mnt/backup/distributions.txt
-rw-r--r-- 1 root root 0 Feb  5 19:34 /mnt/backup/distributions.txt
root@lpicentral:/home/lpicentral#

In the above command, we copied the distributions.txt file from the lpicentral home directory to the /mnt/backup folder. If you noticed, only the attributes were copied and the content was skipped: the size of distributions.txt under the /mnt/backup folder is zero bytes.

Example:12) Creating backup of existing destination file while copying (--backup)


The default behavior of the cp command is to overwrite the file at the destination if the same file exists. If you want to make a backup of the existing destination file during the copy operation, use the ‘--backup’ option; an example is shown below:

root@lpicentral:~# cp --backup=simple -v /home/lpicentral/distributions.txt /mnt/backup/distributions.txt
'/home/lpicentral/distributions.txt' -> '/mnt/backup/distributions.txt' (backup: '/mnt/backup/distributions.txt~')
root@lpicentral:~#

As you can see, a backup was created with a tilde appended to the end of the file name. The --backup option accepts the following parameters:

◈ none, off  – never make backups
◈ numbered, t – make numbered backups
◈ existing, nil – numbered if numbered backups exist, simple otherwise
◈ simple, never – always make simple backups
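A quick sketch of the numbered variant (the file names here are hypothetical): each successive copy saves the previous destination file as file.~1~, file.~2~, and so on:

```shell
# Work in a scratch directory (hypothetical file names)
mkdir -p /tmp/backup-demo && cd /tmp/backup-demo
echo "version 1" > note.txt
echo "version 2" > update.txt

# The old note.txt is kept as note.txt.~1~ before being overwritten
cp --backup=numbered update.txt note.txt
ls note.txt*
```

With --backup=simple you would instead get a single note.txt~, overwritten on each copy.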

Example:13) Preserve mode, ownership and timestamps while copying (-p)


If you want to preserve the file attributes like mode, ownership and timestamps while copying then use -p option in cp command, example is demonstrated below,

root@lpicentral:~# cd /home/lpicentral/
root@lpicentral:/home/lpicentral# cp -p devops.txt /mnt/backup/
root@lpicentral:/home/lpicentral# ls -l devops.txt
-rw-r--r-- 1 root root 37 Feb  5 20:02 devops.txt
root@lpicentral:/home/lpicentral# ls -l /mnt/backup/devops.txt
-rw-r--r-- 1 root root 37 Feb  5 20:02 /mnt/backup/devops.txt
root@lpicentral:/home/lpicentral#

Example:14) Do not follow symbolic links in Source while copying (-P)


If you do not want to follow the symbolic links of source while copying then use -P option in cp command, example is shown below

root@lpicentral:~# cd /home/lpicentral/
root@lpicentral:/home/lpicentral# ls -l /opt/nix-release.txt
lrwxrwxrwx 1 root root 14 Feb  9 12:28 /opt/nix-release.txt -> os-release.txt
root@lpicentral:/home/lpicentral#
root@lpicentral:/home/lpicentral# cp -P os-release.txt /mnt/backup/
root@lpicentral:/home/lpicentral# ls -l /mnt/backup/os-release.txt
-rw-r--r-- 1 root root 35 Feb  9 12:29 /mnt/backup/os-release.txt
root@lpicentral:/home/lpicentral#

Note: Default behavior of cp command is to follow the symbolic links in source while copying.

Example:15) Copy the files and directory forcefully using -f option


There can be scenarios where the existing destination file cannot be opened and removed. If you have a healthy file that can be copied in place of the existing destination file, use the cp command along with the -f option, which removes the destination file and tries again:

root@lpicentral:/home/lpicentral# cp -f distributions.txt  /mnt/backup/
root@lpicentral:/home/lpicentral#

Example:16) Copy sparse files using sparse option in cp command


A sparse file is a regular file containing long sequences of zero bytes that do not consume physical disk blocks. One benefit of a sparse file is that it does not consume much disk space, and read operations on it are quite fast.

Let’s assume we have a sparse cloud image named “ubuntu-cloud.img”:

root@lpicentral:/home/lpicentral# du -sh ubuntu-cloud.img
12M     ubuntu-cloud.img
root@lpicentral:/home/lpicentral# cp --sparse=always ubuntu-cloud.img /mnt/backup/
root@lpicentral:/home/lpicentral# du -sh /mnt/backup/ubuntu-cloud.img
0       /mnt/backup/ubuntu-cloud.img
root@lpicentral:/home/lpicentral#

The following values can be used with the sparse parameter of the cp command:

◈ --sparse=auto (the default)
◈ --sparse=always
◈ --sparse=never
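To see the effect without a real cloud image, the sketch below (file names are made up) creates a hole-only sparse file with truncate and compares the apparent size against the actual disk usage after the copy; the exact numbers depend on the filesystem:

```shell
# Create a 10 MB file that is all "hole" (no data blocks)
truncate -s 10M sparse.img

# Copy while keeping it sparse
cp --sparse=always sparse.img copy.img

# Apparent size reports 10M; actual disk usage stays near zero
du -h --apparent-size copy.img
du -h copy.img
```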

Wednesday 27 March 2019

Doing It For The Right Reasons: Millennials, Open Source, and Philosophy

Congratulations on choosing a career in Linux and Open Source! If you're just starting on the journey, I wish you all the best. I'm thrilled to hear that you've decided to pursue a career in Linux and open source software, and I hope that it brings you lots of success, both financially and professionally. However, and you knew that was going to be a "however", I hope you've decided to do this for the right reasons.


Not that there's anything wrong with picking a career that's going to ensure a steady stream of income for the next several years, on which you are going to build not only your future but your family's future, but there's more to selecting open source as a career than just the money.

So, are you going into open source for the right reasons?

Back in 1985, Starship -- the band, not an actual starship -- released a song called, "We Built This City". Starship, if you are too young to remember, was the evolution of Jefferson Starship, itself the evolution of Jefferson Airplane. A rather messy legal battle accompanied said evolution but I won't bore you with that here. The song, "We Built This City," was written by Bernie Taupin, Martin Page, Dennis Lambert, and Peter Wolf, while the vocals for the song came from Grace Slick and Mickey Thomas. I tell you all this because, well, it takes a village, as they say, even when you're building a city on rock and roll.

Since I mentioned a legal battle in the previous paragraph, I'd be remiss not to mention that Linux, the poster child for Open Source software, had its own long-running legal battle against a poser for open source, namely the company formerly known as "SCO". For the record, the good guys won.

When people try to convince you that some moral point of view should grab and hold your attention, it's not uncommon to hear something to the effect that you should "do it for the children." Proponents of Open Source have long believed that free software can be a boon to children at every stage of development, both in terms of age, but also financially. Not far from where I live, there's the "Computer Recycling Centre", where volunteers take old computers, fix them up, load up Linux (usually), and make them available to people who otherwise could not afford them.

In 2005, Nicholas Negroponte spoke at the World Economic Forum in Davos, a purportedly non-profit organization represented by some of the richest people in the history of this planet. Negroponte was there to champion a project to elevate children of distant African villages into participants of the great global village, the Internet. This ambitious child-oriented open source project was OLPC, or "One Laptop Per Child". Much has been written about this bold and grand plan to create a small, insanely durable laptop, and to put untold numbers of these laptops into the hands of disadvantaged children around the world for $100 per laptop (see the image accompanying this article). The project never achieved its lofty goals for reasons that would fill a book, but it did leave some interesting DNA behind in the form of free software.

Despite grand plans and a handful of bright green and white notebooks, the project never really achieved its goal of creating the $100 laptop, leaving behind tantalizing bits of open source DNA and some hard lessons about idealism in a world where questions about who counts the money while playing all those corporation games tends to rise above any attempt to put idealism first.

The Internet itself, the greatest boon to the world's impressive economic engine, is built on open source idealism. Powered by countless Linux servers in enormous corporate clouds, backed up by open protocols, open source domain and routing services, open source email, open source file sharing, open source Web services, and open source pretty much everything, the idealists of open source built the Internet. Even more tantalizing is that with open source powering this economic behemoth, business has wrapped itself in the mantle of open source, generating billions upon billions of dollars and defining, through the awesome power of their wealth, the shape of the world in which we live.

Except that we built this city. When viewed through the lens of the early creators of open source technology, we see a world where the technical and technological power of a handful of closed source companies (e.g. Microsoft, IBM, Sun) was reduced to a level playing field where anyone with the knowledge to code could, with freely available open source tools on a Linux system, create any kind of business (or world) they damn well pleased. In fact, we could imagine a world where open source was the norm and closed source was a kind of relic of an earlier age. When that day came, we would all be free to share in the wealth that was currently held by so few.

Spoiler alert! We did it. Today, open source is ubiquitous, so much so that most people don't realize that they use open source every moment of every day they interact with the digital world. Those crazy open source idealists of 25 or 30 years past managed to create something out of science fiction, a world where you can talk to a little gray speaker on your kitchen table and have it tell you the weather, read you the news, tell you a joke, book an appointment, make a call, or ask who wrote "We Built This City" and then play you the song.

But something happened on the way to the great open world of the future. A few powerful companies were born that knew how to take advantage of the work done by those open source idealists. Heck, some of them were, and may still be to some degree, open source idealists. Trouble is, the influence of people learning about and working with open source has steadily diminished. Those who count the money decide what happens next and most of us, myself included, are basically along for the ride.

Back at the tail end of November of 2018, I read an article titled, "Millennials and Open Source: Don't Know, Don't Care?" and, as they say, it got me thinking. It brings to mind that old saying about those who forget history being doomed to repeat it. Understanding what makes open source special created the present we live in and gave us amazing super powers. Forgetting that means forgetting that you have those powers and letting only the rich and powerful decide what kind of world you will inherit. If you are going to work in this field, you most definitely should care, must care, about open source.

We all need the money. Until we enter into that Star Trek universe where money isn't really used by most people, dinner makes it to the table by virtue of selling your skills to an employer. You learn Linux and open source, maybe specialize in DevOps, and insert yourself into a big company making a mint from open source. But remember that those tools, and the knowledge that you've learned along the way, allow you to build and create a world of wonders as amazing as anything that Google or Facebook or Amazon have built (with our help). If we don't like this city, we can build another, and another, because open source is power and, thanks to some great licensing foresight, we all own it.

Build the next laptop for needy children the world over. Create a security system that protects privacy while delivering the good of the Internet. Think of a way to develop the next level sharing economy that will make your neighbours' poverty disappear as they share in the wealth that open source creates. Find a way, as you work at getting your paycheque, to recapture some of that wide-eyed idealism that comes with understanding that open source gives you the same power as everyone else regardless of how rich and powerful they are.

Tuesday 26 March 2019

md5sum Command in Linux with Examples

The md5sum command is designed to verify data integrity using MD5 (Message-Digest Algorithm 5).


MD5 is a 128-bit cryptographic hash and, if used properly, it can be used to verify file authenticity and integrity. (Note that MD5 is no longer considered collision-resistant, so it should not be relied on where protection against deliberate tampering matters.)

Example :


Input : md5sum /home/xyz/test/test.cpp
Output : c6779ec2960296ed9a04f08d67f64422  /home/xyz/test/test.cpp

Importance :

Suppose you want to install an operating system; to verify that you have the correct CD, it is always a good idea to verify the .iso file using its MD5 checksum, so that you don’t end up installing the wrong software (some sort of virus that could corrupt your filesystem).

Syntax :


md5sum [OPTION]... [FILE]...

It will print or check MD5(128-bit) checksum.

It computes the MD5 checksum for the file “test.cpp”.

Output :

c6779ec2960296ed9a04f08d67f64422  /home/xyz/test/test.cpp

Options :

-b : read in binary mode
-c : read MD5 sums from the FILEs and check them
--tag : create a BSD-style checksum
-t : read in text mode (the default)

The options below are useful when verifying checksums:


--ignore-missing : don’t report status for missing files
--quiet : don’t print OK for each successfully verified file
--status : don’t output anything; the status code shows success
--strict : exit non-zero for improperly formatted checksum lines
-w, --warn : warn about improperly formatted checksum lines
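md5sum also reads from standard input when no FILE is given (or when FILE is “-”), which makes it easy to hash a string or the output of a pipeline; the file name field in the output is then shown as “-”:

```shell
# Hash a string from standard input; printf avoids the trailing
# newline that echo would add (which would change the hash)
printf 'hello' | md5sum
# 5d41402abc4b2a76b9719d911017c592  -
```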

Command usage examples with options :

Example 1: Store the MD5 checksum in a file and then verify it.

# md5sum /home/xyz/test/test.cpp > checkmd5.md5

It will store the MD5 checksum for test.cpp in file checkmd5.md5

# md5sum -c checkmd5.md5

It will verify the contents of file

Output :

/home/xyz/test/test.cpp: OK

After changing the stored checksum in the file checkmd5.md5, the output will be:

/home/xyz/test/test.cpp: FAILED
md5sum: WARNING: 1 computed checksum did NOT match

Example 2: Create a BSD-style checksum with the --tag option

# md5sum --tag /home/xyz/test/test.cpp

Output :

MD5 (/home/xyz/test/test.cpp) = c6779ec2960296ed9a04f08d67f64422

Example 3: The --quiet option can be used when verifying checksums; it doesn’t print OK when verification is successful.

#  md5sum -c --quiet  checkmd5.md5

No output is produced, which means verification was successful.

But if the checksums don’t match, it produces a warning:

# md5sum -c --quiet  checkmd5.md5
/home/xyz/test/test.cpp: FAILED
md5sum: WARNING: 1 computed checksum did NOT match

Example 4: The --warn option can be used to generate warnings for improperly formatted checksum files.

Content of the file checkmd5.md5:

c6779ec2960296ed9a04f08d67f64422 /home/xyz/test/test.cpp

Now, execute the command with the --warn option:

# md5sum -c --warn  checkmd5.md5
/home/xyz/test/test.cpp: OK

It doesn’t produce any warning.

Now, break the formatting of the file checkmd5.md5 by splitting the line in two:

c6779ec2960296ed9a04f08d67f64422
/home/xyz/test/test.cpp

Now, execute the command

# md5sum -c --warn  checkmd5.md5

Output :

md5sum: checkmd5.md5: 1: improperly formatted MD5 checksum line
md5sum: checkmd5.md5: 2: improperly formatted MD5 checksum line
md5sum: checkmd5.md5: no properly formatted MD5 checksum lines found

If --warn is replaced with the --strict option, md5sum will exit non-zero for improperly formatted checksum lines:

# md5sum -c --strict  checkmd5.md5
md5sum: checkmd5.md5: no properly formatted MD5 checksum lines found

Saturday 23 March 2019

Paste command in Linux with examples


The paste command is one of the most useful commands in the Unix or Linux operating system. It is used to join files horizontally (parallel merging) by outputting lines consisting of the corresponding lines of each specified file, separated by a tab delimiter, to standard output. When no file is specified, or a dash (“-”) is given instead of a file name, paste reads from standard input and echoes it until an interrupt [Ctrl-C] is given.

Syntax:


paste [OPTION]... [FILES]...

Let us consider three files named state, capital and number. The state and capital files contain the names of five Indian states and their capitals respectively; the number file contains the numbers 1 to 5.

$ cat state
Arunachal Pradesh
Assam
Andhra Pradesh
Bihar
Chhattisgrah

$ cat capital
Itanagar
Dispur
Hyderabad
Patna
Raipur

Without any options, paste merges the files in parallel: it writes corresponding lines from the files, separated by tabs, to the terminal.

$ paste number state capital
1       Arunachal Pradesh       Itanagar
2       Assam   Dispur
3       Andhra Pradesh  Hyderabad
4       Bihar   Patna
5       Chhattisgrah    Raipur

In the above command, three files are merged by the paste command.

Options:


1. -d (delimiter): The paste command uses the tab delimiter by default for merging files. The delimiter can be changed to any other character with the -d option. If more than one character is specified as the delimiter, paste uses them in a circular fashion, one per file boundary on each line.

Only one character is specified
$ paste -d "|" number state capital
1|Arunachal Pradesh|Itanagar
2|Assam|Dispur
3|Andhra Pradesh|Hyderabad
4|Bihar|Patna
5|Chhattisgrah|Raipur

More than one character is specified
$ paste -d "|," number state capital
1|Arunachal Pradesh,Itanagar
2|Assam,Dispur
3|Andhra Pradesh,Hyderabad
4|Bihar,Patna
5|Chhattisgrah,Raipur

The first and second files are separated by '|' and the second and third by ','.
After that the list is exhausted and reused.

2. -s (serial): We can merge the files sequentially using the -s option. It reads all the lines from a single file and merges them into a single line, with the original lines separated by tabs; these merged lines are then separated by newlines.

$ paste -s number state capital
1       2       3       4       5
Arunachal Pradesh       Assam   Andhra Pradesh  Bihar   Chhattisgrah
Itanagar        Dispur  Hyderabad       Patna   Raipur

In the above command, paste first reads the data from the number file and merges it into a single line with the entries separated by tabs. A newline is then introduced, reading starts from the next file, i.e. state, and the process repeats until all files have been read.

Combination of -d and -s: The following example shows how to specify a delimiter for sequential merging of files:

$ paste -s -d ":" number state capital
1:2:3:4:5
Arunachal Pradesh:Assam:Andhra Pradesh:Bihar:Chhattisgrah
Itanagar:Dispur:Hyderabad:Patna:Raipur

3. –version: This option is used to display the version of paste which is currently running on your system.

$ paste --version
paste (GNU coreutils) 8.26
Packaged by Cygwin (8.26-2)
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later .
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Applications of Paste Command


1. Combining N consecutive lines: The paste command can also merge N consecutive lines from a file into a single line, where N is given by the number of hyphens (-) after paste.

With 2 hyphens
$ cat capital | paste - -
Itanagar        Dispur
Hyderabad       Patna
Raipur

With 3 hyphens
$ paste - - - < capital
Itanagar        Dispur  Hyderabad
Patna   Raipur

2. Combination with other commands: Data can also be piped to paste from the shell. In the example below, the cut command with the -f option cuts out the first field of the state file, and its output is piped to paste, which is given one file name and a hyphen in place of a second file name.

Note: If the hyphen is not specified, the piped input is not pasted.

Without hyphen
$ cut -d " " -f 1 state | paste number
1
2
3
4
5

With hyphen
$ cut -d " " -f 1 state | paste number -
1       Arunachal
2       Assam
3       Andhra
4       Bihar
5       Chhattisgrah

Ordering of pasting can be changed by altering the location of hyphen:

$ cut -d " " -f 1 state | paste - number
Arunachal       1
Assam   2
Andhra  3
Bihar   4
Chhattisgrah    5
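The hyphen trick works with any stream; a minimal sketch using a throw-away file (the path and sample data are illustrative):

```shell
# A small number file, plus names fed to paste on standard input.
printf '1\n2\n3\n' > /tmp/num_demo.txt
printf 'Arunachal\nAssam\nAndhra\n' | paste /tmp/num_demo.txt -   # file column first
printf 'Arunachal\nAssam\nAndhra\n' | paste - /tmp/num_demo.txt   # stdin column first
```

Swapping the position of the hyphen swaps the column order, exactly as in the cut examples above.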

Tuesday 19 March 2019

Will A Robot Eat Your Job?

One of my favourite news shows did a depressing story about jobs and careers that somehow made me feel better about mine.


Among my few regular watches on television these days is “Last Week Tonight”, an HBO news/comedy show hosted by Daily Show alumnus and former Brit John Oliver. (Once a Brit, always a Brit?) What I like most about this show is its interest in the rare art of long-form video journalism. Each “Last Week Tonight” episode takes 20 or more minutes making some of the most boring issues of the day just a little less wonkish, and thus more accessible to people simply trying to make sense of them.

The episode that dragged me to my word processor was about employment, automation, and the “moving jobs to other countries” claims which appear more like fear-mongering and less like the basis for sound policy.

Oliver’s piece tried to make sense of the highly-nuanced issue of how society is coping with the ever-shifting nature of work. During my stint in Geneva, I engaged with a number of people at the International Labour Organization who have been obsessed with studying the issue. They and many others are trying to make sense of everything from the “gig economy” to the effect of 3D printing on global supply chains.

The issue is not easy to navigate. For example, the emergence of Automated Teller Machines had a thoroughly counter-intuitive consequence for human tellers: for years after ATMs spread, teller employment actually grew.

To be sure, I highly recommend a watch of Oliver’s piece. There is an official clip on the Tubes but it’s not viewable in all countries. You could just search “John Oliver Automation”.

The segment does try to make as much sense of a major societal issue as can be done with 20 minutes and a few snarky gags, but one thing stuck out sharply at me from the piece. Near the segment’s end, the case is made that the jobs that WON’T be automated any time soon will be:

“Non-routine tasks that require social intelligence, critical thinking, and creative problem solving”

In a nutshell, that line almost perfectly describes my job when I was a paid Linux system administrator. It involved interfacing between people and their machines, solving problems which (generally) aren’t repetitive and often require ingenuity, research, and invention.

The careers for which LPI designs its certifications are less susceptible to automation and more likely to be in ever-higher demand. Rather than be replaced by machines, we’ll be the ones making sure they do what they’re supposed to. And since Linux is the system powering everything from embedded systems to most of The Cloud, you can safely assume this is the stuff to know.

Because it is unlikely that LPI-targeted careers will be automated away, it’s a great path of retraining and redirecting talents for those who have been displaced in other fields. I’m seeing some examples of Linux and the Open Source software that powers the Web as a career-shift, and novel ways of retraining that I plan to discuss in future blogs. For now, have a look at the John Oliver piece when you can … and keep learning that Linux.

Saturday 16 March 2019

Gzip Command in Linux

The gzip command compresses files. Each file is compressed into its own single output file, which consists of a GNU zip header and the deflated data.


If given a file as an argument, gzip compresses the file, adds a “.gz” suffix, and deletes the original file. With no arguments, gzip compresses the standard input and writes the compressed file to standard output.

Difference between Gzip and zip command in Unix and when to use which command


◈ ZIP and GZIP are two very popular methods of compressing files, in order to save space, or to reduce the amount of time needed to transmit the files across the network, or internet.

◈ In general, GZIP compresses much better than ZIP, especially when compressing a huge number of files.

◈ The common practice with GZIP, is to archive all the files into a single tarball before compression. In ZIP files, the individual files are compressed and then added to the archive.

◈ When you want to pull a single file from a ZIP, it is simply extracted, then decompressed. With GZIP, the whole file needs to be decompressed before you can extract the file you want from the archive.

◈ When pulling a 1MB file from a 10GB archive, it is quite clear that it would take a lot longer in GZIP, than in ZIP.

◈ GZIP’s disadvantage in how it operates is also the source of its advantage. Since the compression algorithm in GZIP compresses one large file instead of multiple smaller ones, it can take advantage of the redundancy across the files to reduce the size even further.

◈ If you archive and compress 10 identical files with ZIP and GZIP, the ZIP file would be over 10 times bigger than the resulting GZIP file.
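This behaviour is easy to observe with throw-away files; a rough sketch (paths are illustrative, and the exact sizes depend on your gzip version):

```shell
# Create ten identical, highly redundant files and archive them into one tarball.
mkdir -p /tmp/gzdemo
for i in 1 2 3 4 5 6 7 8 9 10; do
  yes 'the same line of text' | head -n 200 > "/tmp/gzdemo/file$i"
done
tar -C /tmp/gzdemo -cf /tmp/gzdemo/files.tar \
  file1 file2 file3 file4 file5 file6 file7 file8 file9 file10
gzip -kf /tmp/gzdemo/files.tar      # -k keeps the tarball, -f overwrites an old .gz
ls -l /tmp/gzdemo/files.tar /tmp/gzdemo/files.tar.gz
```

Because gzip sees the redundancy across all ten files at once, the resulting .gz is a tiny fraction of the tarball's size.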

Syntax :


gzip [Options] [filenames]

Example:


$ gzip mydoc.txt

This command will create a compressed file of mydoc.txt named mydoc.txt.gz and delete the original file.

Gzip Command, Linux Tutorial and Material, Linux Certifications, Linux Guides

Options:


1. -f option: Sometimes a file cannot be compressed. Perhaps you are trying to compress a file called “myfile1.txt” but there is already a file called “myfile1.txt.gz”. In this instance, the “gzip” command won’t ordinarily work.

To force the “gzip” command to do its stuff simply use the -f option:

$ gzip -f myfile1.txt

This will forcefully compress myfile1.txt even if a file named myfile1.txt.gz already exists.

2. -k option: By default, when you compress a file using the “gzip” command you end up with a new file with the extension “.gz”. If you want to compress the file and keep the original, run the gzip command with the -k option:

$ gzip -k mydoc.txt

The above command ends up with both “mydoc.txt.gz” and the original “mydoc.txt”.

3. -L option: This option displays the gzip license; no file argument is needed.

$ gzip -L

OUTPUT :

Apple gzip 264.50.1 (based on FreeBSD gzip 20111009)
Copyright (c) 1997, 1998, 2003, 2004, 2006 Matthew R. Green
All rights reserved.

4. -r option: This option can compress every file in a folder and its subfolders. It doesn’t create one file called foldername.gz; instead, it traverses the directory structure and compresses each file in that folder structure.

gzip -r testfolder

This will compress all the files present in the testfolder.

5. -[1-9] option: It allows the compression level to be changed. A file can be compressed in different ways. For instance, you can go for a smaller compression which will work faster, or you can go for maximum compression which has the tradeoff of taking longer to run. The speed and compression level can be varied using numbers between 1 (fastest, least compression) and 9 (slowest, best compression).

$ gzip -1 mydoc.txt

This will get minimum compression at the fastest speed

$ gzip -9 mydoc.txt

To get maximum compression at the slowest speed
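A quick way to see the trade-off is to compress the same file at both levels; a sketch with a throw-away file (exact sizes vary with the gzip version):

```shell
# Redundant sample input, compressed at level 1 and level 9.
yes 'some repetitive line of text' | head -n 1000 > /tmp/level_demo.txt
gzip -1 -kf /tmp/level_demo.txt && mv /tmp/level_demo.txt.gz /tmp/level1.gz
gzip -9 -kf /tmp/level_demo.txt && mv /tmp/level_demo.txt.gz /tmp/level9.gz
ls -l /tmp/level_demo.txt /tmp/level1.gz /tmp/level9.gz
```

On redundant input like this, both levels shrink the file dramatically; level 9 is typically no larger than level 1, just slower on big inputs.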

6. -v option: This option displays the name and percentage reduction for each file compressed or decompressed.

$ gzip -v mydoc.txt

OUTPUT:

mydoc.txt:       18.2% -- replaced with mydoc.txt.gz

7. -d option: This option allows you to decompress a file using the “gzip” command.

$ gzip -d mydoc.txt.gz

This command will decompress the file named mydoc.txt.gz.
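The options combine naturally into a round trip; a sketch with a throw-away file (the -c flag, which writes decompressed data to standard output, is standard gzip but not covered above):

```shell
# Compress while keeping the original, then read the archive back.
rm -f /tmp/mydoc_demo.txt /tmp/mydoc_demo.txt.gz
printf 'hello gzip\n' > /tmp/mydoc_demo.txt
gzip -k /tmp/mydoc_demo.txt         # keep the original, create mydoc_demo.txt.gz
gzip -dc /tmp/mydoc_demo.txt.gz     # decompress to stdout, leaving both files in place
```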

Thursday 14 March 2019

free Command in Linux with examples

While using LINUX there might come a situation when you want to install a new application (big in size) and wish to know the amount of free memory available on your system. LINUX provides a command line utility for this: the free command, which displays the total amount of free space available, along with the amount of memory used, the swap memory in the system, and the buffers used by the kernel.


This is pretty much what free command does for you.


Syntax:



$free [OPTION]

OPTION : refers to the options compatible with the free command.

As free displays the details of the memory of your system, its syntax doesn’t need any arguments to be passed, only options which you can use as you wish.

Using free Command


You can use the free command as:

// using free command
$free
             total       used       free     shared    buffers     cached
Mem:        509336     462216      47120          0      71408     215684
-/+ buffers/cache:     175124     334212
Swap:       915664      11928     903736

/*free command without any
option shows the used
and free space of swap
and physical memory in KB */

When no option is used then free command produces the columnar output as shown above where column:

1. total displays the total installed memory (MemTotal and SwapTotal, i.e. present in /proc/meminfo).

2. used displays the used memory.

3. free displays the unused memory.

4. shared displays the memory used by tmpfs (Shmem, i.e. present in /proc/meminfo; displays zero when not available).

5. buffers displays the memory used by kernel buffers.

6. cached displays the memory used by the page cache and slabs (Cached and Slab in /proc/meminfo).

7. buffers/cache displays the sum of buffers and cache.
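Since these figures come from /proc/meminfo, you can cross-check the total column yourself; a minimal sketch (Linux-specific):

```shell
# MemTotal and SwapTotal (in kB) feed the "total" column of the Mem: and Swap: rows.
awk '/^MemTotal:|^SwapTotal:/ {print $1, $2, $3}' /proc/meminfo
```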

Options for free command


◈ -b, --bytes : It displays the memory in bytes.
◈ -k, --kilo : It displays the amount of memory in kilobytes (default).
◈ -m, --mega : It displays the amount of memory in megabytes.
◈ -g, --giga : It displays the amount of memory in gigabytes.
◈ --tera : It displays the amount of memory in terabytes.
◈ -h, --human : It shows all output columns automatically scaled to the shortest three-digit unit, and also prints the units. The units used are B (bytes), K (kilos), M (megas), G (gigas), and T (teras).
◈ -c, --count : It displays the output c number of times; this option works together with the -s option.
◈ -l, --lohi : It shows detailed low and high memory statistics.
◈ -o, --old : This option disables the display of the buffer-adjusted line.
◈ -s, --seconds : This option allows you to display the output continuously after a delay of s seconds. Internally, the usleep system call is used for microsecond-resolution delay times.
◈ -t, --total : It adds an additional line to the output showing the column totals.
◈ --help : It displays a help message and exits.
◈ -V, --version : It displays version info and exits.

Using free command with options

1. Using -b : It just displays the output in unit bytes.

//using free with -b

$free -b
             total       used       free     shared    buffers     cached
Mem:     521560064  474198016   47362048          0   73826304  220983296
-/+ buffers/cache:  179388416  342171648
Swap:    937639936   12210176  925429760

/*everything now
displayed is in bytes */

2. Using -k : This option displays the result in kilobytes.

//using free with -k

$free -k
             total       used       free     shared    buffers     cached
Mem:        509336     463084      46252          0      72104     215804
-/+ buffers/cache:     175176     334160
Swap:       915664      11924     903740

/*no change in output
if compared to only free
command output cause this
is the by default format
that free uses for the
result */

3. Using -m : This option displays the result in megabytes.

//using free with -m

$free -m
             total       used       free     shared    buffers     cached
Mem:           497        452         45          0         70        210
-/+ buffers/cache:        171        326
Swap:          894         11        882

/*everything now
displayed is in megabytes */

4. Using -g : This option displays the result in gigabytes.

//using free with -g

$free -g
             total       used       free     shared    buffers     cached
Mem:             0          0          0          0          0          0
-/+ buffers/cache:          0          0
Swap:            0          0          0

/*everything now
displayed is in gigabytes */

5. Using -t (total) : This option displays an additional line containing the total of the total, used and free columns.

//using free with -t

$free -t
             total       used       free     shared    buffers     cached
Mem:        509336     463332      46004          0      72256     215804
-/+ buffers/cache:     175272     334064
Swap:       915664      11924     903740
Total:     1425000     475256     949744

/*the line containing
total is added to the
output when -t is used*/


6. Using -s and -c: The -s option allows you to display the output of the free command repeatedly, after a time gap given by the user. It requires a numeric value, which is treated as the number of seconds between updates.

//using free with -s

$free -s 3 -c 3
             total       used       free     shared    buffers     cached
Mem:        509336     469604      39732          0      73260     216068
-/+ buffers/cache:     180276     329060
Swap:       915664      11924     903740

             total       used       free     shared    buffers     cached
Mem:        509336     468968      40368          0      73268     216060
-/+ buffers/cache:     179640     329696
Swap:       915664      11924     903740

             total       used       free     shared    buffers     cached
Mem:        509336     469092      40244          0      73272     216068
-/+ buffers/cache:     179752     329584

/*the above output will
be displayed (only 3 times)
after every 3 seconds */

Now, with -s you can only specify the time gap, not the number of times you want the output displayed. For that, -c is used along with -s, specifying the number of times the output will be displayed.

7. Using -o : This option makes the buffer/cache line go away from the output as shown below.

//using free with -o

$free -o
              total       used       free     shared    buffers     cached
Mem:        509336     463588      45748          0      72376     215856
Swap:       915664      11924     903740

/*now the output
doesn't have the
buffer line in it */

Saturday 9 March 2019

LPIC-1: iproute2 and NetworkManager

The restructuring of the networking objectives is one of the major changes in LPIC-1 version 5.0. The entire topic 109 is dedicated to the connectivity of your system. The new structure divides the topic into networking fundamentals (109.1), persistent network configuration (109.2), troubleshooting and runtime configuration (109.3) and DNS name resolution (109.4). Unlike before, each command is now exclusively assigned to one of these topics, which makes the preparation for the exam easier.


Objective 109.1 covers fundamentals of the internet protocols: IP addresses, subnetting and important TCP and UDP ports. Although it has been on the exam before, make sure you have a fair understanding of IPv6. Focus on the most important aspects of IPv6, make sure you understand the addresses and their components, and try to apply your IPv4 subnetting knowledge to IPv6. However, don’t get lost in the numerous additions to IPv6. Be prepared to solve the aspects mentioned in the objectives using either IPv4 or IPv6.

Persistent network configuration is the focus of objective 109.2. Nowadays, most desktop Linux distributions use NetworkManager as the main network configuration tool. You have probably seen one of its widgets on your Linux desktop. Beyond those widgets, NetworkManager can be controlled from the command line using the program nmcli or the curses interface nmtui.

Taking a closer look at nmcli provides insight into how NetworkManager structures the network configuration. The Fedora Networking Guide does a great job of explaining nmcli. Chapter 2.4 introduces nmcli, including some examples explaining how to connect to a network and how to configure static routes. Don’t underestimate NetworkManager. It is a flexible tool, and it is the core of a weight-four objective. Make sure you practice how to connect to ethernet and wifi networks using NetworkManager from the command line.

If you want to learn more about nmcli beyond the LPIC-1 objectives, review the nmcli(1) manpage and the examples in the ArchLinux wiki.
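As a starting point for that practice, here is a sketch of typical nmcli invocations; the device name wlan0, the SSID MySSID, and the connection name MyConnection are placeholders, and the commands assume a system with a running NetworkManager:

```shell
nmcli device status                       # list devices and their state
nmcli connection show                     # list configured connections
nmcli device wifi list                    # scan for nearby wifi networks
nmcli device wifi connect MySSID password 'secret' ifname wlan0
nmcli connection up id MyConnection       # activate an existing connection
```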

Besides NetworkManager, objective 109.2 includes hostnamectl and awareness of systemd-networkd, which we already covered last week. The objective also includes ifup and ifdown, which are part of the traditional networking configuration of many distributions.

Internally, NetworkManager, ifup and ifdown all configure parameters of your Linux system. You can review, or even manually create or change, these options. This kind of troubleshooting and runtime configuration is covered in objective 109.3.

Historically, the net-tools package provided commands such as ifconfig and route. On Linux, iproute2 offers a modern replacement for these tools. After more than a decade of coexistence, iproute2 has finally become the standard tool on most distributions. In LPIC-1 version 5.0, you’re supposed to be proficient in using iproute2 and some other new tools such as ss. The older net-tools are covered at awareness level, meaning that you should have a basic idea of these tools. If you’re new to the topic and started using iproute2 right away, you should be fine. If you’re still in the net-tools comfort zone, it’s time to switch to iproute2 now. Red Hat’s IP Command Cheatsheet provides a great comparison of net-tools and iproute2 commands. Again, focus on what’s in the LPIC-1 objectives and experiment with interfaces, addresses (both IPv4 and IPv6!) and routes. If you want to go beyond the LPIC-1 objectives, take a look at the iproute2 user guide, which is a collection of examples covering even the less commonly used iproute2 tasks.
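For a feel of the mapping, a few read-only iproute2 commands with their classic counterparts shown as comments (interface names and installed tools vary per system):

```shell
ip addr show       # replaces: ifconfig -a   (all interfaces and their addresses)
ip -6 addr show    # IPv6 addresses only
ip route show      # replaces: route -n      (the routing table)
ip link show       # link-layer state of each interface
ss -tln            # replaces: netstat -tln  (listening TCP sockets)
```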

Finally, objective 109.4 covers the client aspects of DNS. Most of this objective was not changed in version 5.0. As we’ve already seen in last week’s posting, systemd-resolved was added to this objective. In general, make sure you understand not only how to configure a DNS client, but also how your system tries to resolve names, and how to resolve a name from your system’s perspective, which might involve more than just DNS.

I’ve already mentioned practicing once or twice. You won’t gain proficiency from pure reading. Networking is a particularly demanding topic and requires real hands-on experience. Try to set up a test lab with a couple of virtual machines and configure their networking according to the LPIC-1 objectives, both with IPv4 and IPv6, both using NetworkManager and iproute2. As we will see next week, you will be able to leverage your virtualization experience for another new LPIC-1 topic as well.

Friday 8 March 2019

Are you ready to be a DevOps Engineer?

You have just passed the LPI DevOps certification exam. You are now ready for action when it comes to the tool and tech side of DevOps. To become an even better DevOps Engineer, there are three topics that will help you in your daily work but have little to do with IT: mindset, learning, and communication.


As a Linux/Open Source specialist, DevOps can bring a lot of fun to your job. DevOps encourages you to think outside the box and experiment, so it’s very likely you and your team will deploy (a lot of) new open source tools to see if they improve your application or IT-platform. Although this is, of course, exciting, tools will not solve every challenge that comes up during a day at the office.

Technology provides a certain form of comfort. Because you are familiar with how technology works and behaves, you love working with it: writing new lines of code, testing a new release, fine-tuning your deployment scripts, or spinning up your first containers on a K8s cluster. Yep, IT is awesome. As an IT-specialist, you put a lot of effort into automation and improving the company’s IT environment. This way, you help to reduce costs and increase speed. To achieve a truly new level of IT-efficiency though, tight collaboration between developers, operations, and the business is inevitable. The true power of DevOps can only be unleashed when the value stream works like a well oiled machine. So, to become a master in DevOps, you will need to master your soft skills.

Engineering Mindset

A fascinating thing about us humans is that we are not very good at changing, despite the fact that change is something we have to deal with every day. And although it would seem obvious that we become better at it as we get older, the opposite seems to be true. Children are amazingly good at change. This is mostly because they just learn from the outcome of all the experiments they do on a regular basis. When we become more conscious of how things work or should work, we cut down the number of experiments and start to calculate outcomes beforehand, based on intelligence and experience. The expected outcome has a big influence on whether we decide to start experimenting or do nothing at all. So, we basically “rationalize” ourselves out of the change-driven habits of our youth. That’s a pity! With DevOps, it’s important to restart your experiments. You will need to explore again. You will need to fail (a lot, and fast). And yes, your boss should encourage you to fail, and not punish you if you do. An engineering mindset is about embracing failure and using it to work towards fundamental improvements, instead of only applying band-aids. This will be hard and will introduce new forms of risk, but as Alfred said to the young Bruce Wayne in the movie Batman Begins: “Why do we fall? So we can learn to pick ourselves up.” This is a great recipe for loving change instead of hating it.

Keep learning

Since you started experimenting again after reading the previous paragraph, you have also started to learn again. Becoming good at your job requires a lot of effort. It doesn’t matter what kind of job you do; whether you are a Go or Python developer, a Linux sysadmin or an Ansible expert, only with hard work and dedication will you master these skills. As many professional sportsmen say, it is hard to get to the top (and win a race or a match), but it’s even harder to stay there. Because you reached a big goal, maybe the biggest goal you have set in your life so far, it’s very likely you will lose focus and/or motivation. There are also outside forces that influence your drive, like your (corporate) environment. To overcome this, it’s important to keep setting and reviewing your goals. This may be a goal that is even harder to reach or requires a new approach and new skills. It may also be a number of smaller, easier goals that help you develop specific skills. New goals help you keep making progress. The fun thing is that the progress you make towards your new goal will most likely also make you better at other things you do. It will provide you new insights and a fresh look at things. You will meet new people with a different perspective on the things you do.

Max out your communication skills

Wow! You are developing an engineering mindset and learning new things! Now it’s time to help others learn. Grab a cup of coffee and talk to your teammates or a random colleague about what you have learned or what challenges you are dealing with. Ask them for feedback and listen, to understand rather than to respond. Some people are afraid they will no longer be seen as a specialist once they have shared their knowledge with others (because they will not be the “local hero” anymore), but most likely sharing knowledge will only emphasize your status as an expert, because more people will know how much you know! Besides, talking about what you know is also a form of self-reflection. It makes your knowledge sink in. This gives your mind room to gain even more new knowledge.

Sharing knowledge and communicating with your coworkers requires that you leave your comfort zone (again) and act in a different way than before. This may sound scary, but it’s actually quite easy once you start doing it. Look at it as just another experiment. By diving into it, you will make the first important step. After this phase, you will know more about your communicative strengths and weaknesses. It will also give you a good starting point for further improvement of specific soft skills. You might need some help to become better at non-verbal communication, or maybe you need some coaching to improve your ability to give and receive feedback.

Wednesday 6 March 2019

expand Command in LINUX with examples

Whenever you work with files in LINUX, there can be a situation when you are stuck with a file that contains a lot of tabs, and whatever you need to do with the file requires those tabs to be spaces. This task looks quite simple if you are dealing with a small file, but what if the file is very big, or you need to do this for a large number of files? For situations like this, LINUX has a command line utility called expand which allows you to convert tabs into spaces in a file; when no file is specified, it reads from standard input.

Thus, expand is useful for pre-processing character files that contain tab characters, for example before sorting. expand writes its output to standard output, with tab characters expanded to space characters. Backspace characters are preserved in the output and decrement the column count for tab calculations.

Syntax of expand :


//...syntax of expand...//
$expand [OPTION] FILE

The syntax of this is quite simple to understand. It just requires the name of the file (FILE) in which you want to expand tab characters into space characters; if no file name is passed, it reads from standard input and writes the result to standard output.

Example :


Suppose you have a file name kt.txt containing tab characters. You can use expand as:

//using expand//

$expand kt.txt

/* expand will produce
the content of the file in
 output with only tabs changed
to spaces*/

Note: if there is a need to make this type of change in multiple files, just pass all the file names as input and the tabs will be converted into spaces.

You can also redirect the output of the changes to some other file:

$expand kt.txt > dv.txt

/*now the output will get
transferred to dv.txt as
the redirection operator >
is used*/

Options for expand command:


1. -i, --initial option : There can be a need to convert only the tabs at the beginning of lines, leaving unchanged those that appear after a non-blank character. In simple words, this option performs no conversion of tabs after non-blanks.

//using -i option//

$expand -i kt.txt

/*this will only change
the initial tabs, leaving
tabs that appear after
non-blanks unchanged*/


2. -t, --tabs=N option : By default, expand assumes tab stops every 8 columns. It is possible to tweak this using the -t command line option, which requires you to enter the new number of columns (N) to use between tab stops.

//using -t option//

$expand -t1 kt.txt > dv.txt

/*this will convert the
tabs in kt.txt to 1 space
instead of default 8
spaces*/

You can also use it as:

$expand --tabs=1 kt.txt > dv.txt

/*this will also convert tabs to
one space each*/

3. -t, --tabs=LIST option : This uses a comma separated LIST of explicit tab positions.

4. --help : This will display a help message and exit.

5. --version : This will display version information and exit.

There are not many options when it comes to the expand command, so that’s pretty much everything about it.
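Everything above can be checked with a throw-away file; a small sketch (od -c makes the remaining characters visible, so you can see that no tab survives):

```shell
printf 'a\tb\n' > /tmp/kt_demo.txt        # one tab between a and b
expand -t 4 /tmp/kt_demo.txt              # the tab becomes spaces up to column 4
expand -t 4 /tmp/kt_demo.txt | od -c      # confirm no \t remains in the output
```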

Sunday 3 March 2019

comm command in Linux with examples

comm compares two sorted files line by line and writes to standard output the lines that are common and the lines that are unique.


Suppose you have two lists of people and you are asked to find out the names available in one and not in the other, or even the names common to both. comm is the command that will help you achieve this. It requires two sorted files, which it compares line by line.

Before discussing anything further, let’s first check out the syntax of the comm command:

Syntax :

$comm [OPTION]... FILE1 FILE2

◈ As we are trying to compare two files, the syntax of the comm command needs two filenames as arguments.
◈ With no OPTION used, comm produces three-column output: the first column contains lines unique to FILE1, the second column contains lines unique to FILE2, and the third and last column contains lines common to both files.
◈ The comm command only works correctly if the two files being compared are already sorted.

Example: Let us suppose there are two sorted files file1.txt and file2.txt and now we will use comm command to compare these two.

// displaying contents of file1 //
$cat file1.txt
Apaar
Ayush Rajput
Deepak
Hemant

// displaying contents of file2 //
$cat file2.txt
Apaar
Hemant
Lucky
Pranjal Thakral

Now, run comm command as:

// using comm command for
comparing two files //
$comm file1.txt file2.txt
                Apaar
Ayush Rajput
Deepak
                Hemant
        Lucky
        Pranjal Thakral

The above output consists of three columns: the first column, indented by zero tabs, contains names only present in file1.txt; the second column, indented by one tab, contains names only present in file2.txt; and the third column, indented by two tabs from the beginning of the line, contains names common to both files.

This is the default pattern of the output produced by the comm command when no option is used.

Options for comm command:

1. -1 : suppress first column (lines unique to first file).
2. -2 : suppress second column (lines unique to second file).
3. -3 : suppress third column (lines common to both files).
4. --check-order : check that the input is correctly sorted, even if all input lines are pairable.
5. --nocheck-order : do not check that the input is correctly sorted.
6. --output-delimiter=STR : separate columns with string STR.
7. --help : display a help message, and exit.
8. --version : output version information, and exit.

Note : Options 4 to 8 are rarely used, but options 1 to 3 are very useful for producing exactly the output the user wants.

Using comm with options

1. Using -1, -2 and -3 options : The use of these three options can easily be explained with the help of an example :

//suppress first column using -1//
$comm -1 file1.txt file2.txt
         Apaar
         Hemant
 Lucky
 Pranjal Thakral

//suppress second column using -2//
$comm -2 file1.txt file2.txt
        Apaar
Ayush Rajput
Deepak
        Hemant

//suppress third column using -3//
$comm -3 file1.txt file2.txt           
Ayush Rajput
Deepak     
        Lucky
        Pranjal Thakral

Note that you can also suppress multiple columns using these options together as:

//...suppressing multiple columns...//

$comm -12 file1.txt file2.txt
Apaar
Hemant

/* using -12 together suppressed both first
and second columns */
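Because comm insists on sorted input, a handy pattern is to sort the files on the fly with process substitution (a bash feature); a sketch with throw-away files:

```shell
printf 'Hemant\nApaar\nDeepak\n' > /tmp/u1.txt    # deliberately unsorted
printf 'Lucky\nApaar\nHemant\n'  > /tmp/u2.txt
comm -12 <(sort /tmp/u1.txt) <(sort /tmp/u2.txt)  # common lines: Apaar, Hemant
```

This gives the intersection of the two lists without creating intermediate sorted files by hand.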

2. Using --check-order option : This option checks whether the input files are sorted; if either of the two files is wrongly ordered, the comm command fails with an error message.

$comm --check-order f1.txt f2.txt

The above command produces the normal output if both f1.txt and f2.txt are sorted, and gives an error message if either of the two files is not sorted.

3. Using --nocheck-order option : If you don’t want comm to check whether the input files are sorted, use this option. This can be explained with the help of an example.

//displaying contents of unsorted f1.txt//

$cat f1.txt
Pranjal
Kartik

//displaying contents of sorted file f2.txt//

$cat f2.txt
Apaar
Kartik

//now use --nocheck-order option with comm//

$comm --nocheck-order f1.txt f2.txt
Pranjal
        Apaar
                Kartik

/* because this option forces comm not to
check the sort order, the output it
produces is not in sorted order either */

4. Using the --output-delimiter=STR option : By default, the columns in the comm command output are separated by tabs. However, if you want, you can change that and have a string of your choice as the separator. This can be done using the --output-delimiter option, which requires you to specify the string to use as the separator.

Syntax:

$comm --output-delimiter=STR FILE1 FILE2

EXAMPLE:

//...comm command with --output-delimiter=STR option...//

$comm --output-delimiter=+ file1.txt file2.txt
++Apaar
Ayush Rajput
Deepak
++Hemant
+Lucky
+Pranjal Thakral

/* a single + before a line marks the
second column, and ++ marks the
third column */
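A delimiter also makes the output easy to post-process in scripts. A sketch using a comma as the separator (the file contents are inferred from the outputs above):

```shell
# recreate the sorted sample files used in the examples above
printf 'Apaar\nAyush Rajput\nDeepak\nHemant\n' > file1.txt
printf 'Apaar\nHemant\nLucky\nPranjal Thakral\n' > file2.txt

# use ',' as the column separator instead of tabs
comm --output-delimiter=',' file1.txt file2.txt
# ,,Apaar
# Ayush Rajput
# Deepak
# ,,Hemant
# ,Lucky
# ,Pranjal Thakral

# e.g. extract only the common lines (third field)
comm --output-delimiter=',' file1.txt file2.txt | awk -F',' 'NF==3 {print $3}'
# Apaar
# Hemant
```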

Friday 1 March 2019

dmesg command in Linux for driver messages


The dmesg command, also called "driver message" or "display message", is used to examine the kernel ring buffer and print the kernel's message buffer. The output of this command contains the messages produced by the device drivers.

Usage of dmesg :


When the computer boots up, a lot of messages (logs) are generated during system start-up.
You can read all these messages using the dmesg command. The contents of the kernel ring buffer are also stored in the /var/log/dmesg file.

The dmesg command can be useful when the system encounters a problem during start-up: by reading its output you can find out where the problem occurred (as there are many steps in the system boot-up sequence).

Syntax :


dmesg [options]


Options :



-C, --clear : clear the ring buffer.
-c, --read-clear : clear the ring buffer after printing its contents.
-D, --console-off : disable printing messages to the console.
-E, --console-on : enable printing messages to the console.
-F, --file file : read the messages from the given file.
-h, --help : display help text.
-k, --kernel : print kernel messages.
-t, --notime : do not print kernel timestamps.
-u, --userspace : print userspace messages.

You can check more options in the dmesg man page (man dmesg).

Since the output of the dmesg command is very large, it is better to pipe it through less or grep when looking for specific information:

dmesg | less

or

dmesg | grep "text_to_search"


For example :



This is the output of the dmesg command when a USB drive was plugged in and then unplugged.
Only part of the output is shown here, since the full output is very large; you can try the command on your own Linux terminal.

[ 6982.128179] usb 2-2: New USB device found, idVendor=0930, idProduct=6544
[ 6982.128185] usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 6982.128188] usb 2-2: Product: DataTraveler 2.0
[ 6982.128190] usb 2-2: Manufacturer: Kingston
[ 6982.128193] usb 2-2: SerialNumber: C86000886407C141DA1401A2
[ 6982.253866] usb-storage 2-2:1.0: USB Mass Storage device detected
[ 6982.254035] scsi host3: usb-storage 2-2:1.0
[ 6982.254716] usbcore: registered new interface driver usb-storage
[ 6982.265103] usbcore: registered new interface driver uas
[ 6983.556572] scsi 3:0:0:0: Direct-Access Kingston DataTraveler 2.0 1.00 PQ: 0 ANSI: 4
[ 6983.557750] sd 3:0:0:0: Attached scsi generic sg1 type 0
[ 6983.557863] sd 3:0:0:0: [sdb] 30310400 512-byte logical blocks: (15.5 GB/14.5 GiB)
[ 6983.558092] sd 3:0:0:0: [sdb] Write Protect is off
[ 6983.558095] sd 3:0:0:0: [sdb] Mode Sense: 45 00 00 00
[ 6983.558314] sd 3:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 6983.560061] sdb: sdb1
[ 6983.563403] sd 3:0:0:0: [sdb] Attached SCSI removable disk
[ 7045.431954] wlp2s0: disassociated from a0:55:4f:27:bd:01 (Reason: 1)
[ 7049.003277] wlp2s0: authenticate with a0:55:4f:27:bd:01
[ 7049.006680] wlp2s0: send auth to a0:55:4f:27:bd:01 (try 1/3)
[ 7049.015786] wlp2s0: authenticated
[ 7049.021441] wlp2s0: associate with a0:55:4f:27:bd:01 (try 1/3)
[ 7049.038590] wlp2s0: RX AssocResp from a0:55:4f:27:bd:01 (capab=0x431 status=0 aid=140)
[ 7049.043217] wlp2s0: associated
[ 7049.063811] wlp2s0: Limiting TX power to 30 (30 - 0) dBm as advertised by a0:55:4f:27:bd:01
[ 7129.257920] usb 2-2: USB disconnect, device number 3

Since the output is always large, it is advisable to use the dmesg command along with the grep command.

For example :

dmesg | grep "usb"

It gives output like:

[ 5944.925979] usb 2-1: new low-speed USB device number 2 using xhci_hcd
[ 5945.085658] usb 2-1: New USB device found, idVendor=04d9, idProduct=1702
[ 5945.085663] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 5945.085666] usb 2-1: Product: USB Keyboard
[ 5945.085669] usb 2-1: Manufacturer:
[ 5945.222536] input: USB Keyboard as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.0/0003:04D9:1702.0003/input/input19
[ 5945.282554] hid-generic 0003:04D9:1702.0003: input,hidraw2: USB HID v1.10 Keyboard [ USB Keyboard] on usb-0000:00:14.0-1/input0
[ 5945.284803] input: USB Keyboard as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.1/0003:04D9:1702.0004/input/input20
[ 5945.342340] hid-generic 0003:04D9:1702.0004: input,hidraw3: USB HID v1.10 Device [ USB Keyboard] on usb-0000:00:14.0-1/input1
[ 6981.985310] usb 2-2: new high-speed USB device number 3 using xhci_hcd
[ 6982.128179] usb 2-2: New USB device found, idVendor=0930, idProduct=6544
[ 6982.128185] usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 6982.128188] usb 2-2: Product: DataTraveler 2.0
[ 6982.128190] usb 2-2: Manufacturer: Kingston
[ 6982.128193] usb 2-2: SerialNumber: C86000886407C141DA1401A2
[ 6982.253866] usb-storage 2-2:1.0: USB Mass Storage device detected
[ 6982.254035] scsi host3: usb-storage 2-2:1.0
[ 6982.254716] usbcore: registered new interface driver usb-storage
[ 6982.265103] usbcore: registered new interface driver uas
[ 7129.257920] usb 2-2: USB disconnect, device number 3
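Note that grep is case-sensitive, so a plain "usb" pattern can miss lines that only mention "USB". Adding the -i flag makes the match case-insensitive (a sketch; the actual lines shown depend on your hardware):

```shell
# -i matches usb, USB, Usb, ... anywhere in the line
dmesg | grep -i "usb"
```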

Output with options :

For example :

dmesg -t

The -t option omits the timestamps from the output.

Output :

usb 2-2: new high-speed USB device number 3 using xhci_hcd
usb 2-2: New USB device found, idVendor=0930, idProduct=6544
usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-2: Product: DataTraveler 2.0
usb 2-2: Manufacturer: Kingston
usb 2-2: SerialNumber: C86000886407C141DA1401A2
usb-storage 2-2:1.0: USB Mass Storage device detected
scsi host3: usb-storage 2-2:1.0
usbcore: registered new interface driver usb-storage
usbcore: registered new interface driver uas
scsi 3:0:0:0: Direct-Access Kingston DataTraveler 2.0 1.00 PQ: 0 ANSI: 4
sd 3:0:0:0: Attached scsi generic sg1 type 0
sd 3:0:0:0: [sdb] 30310400 512-byte logical blocks: (15.5 GB/14.5 GiB)
sd 3:0:0:0: [sdb] Write Protect is off
sd 3:0:0:0: [sdb] Mode Sense: 45 00 00 00
sd 3:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sdb: sdb1
sd 3:0:0:0: [sdb] Attached SCSI removable disk
wlp2s0: disassociated from a0:55:4f:27:bd:01 (Reason: 1)
wlp2s0: authenticate with a0:55:4f:27:bd:01
wlp2s0: send auth to a0:55:4f:27:bd:01 (try 1/3)
wlp2s0: authenticated
wlp2s0: associate with a0:55:4f:27:bd:01 (try 1/3)
wlp2s0: RX AssocResp from a0:55:4f:27:bd:01 (capab=0x431 status=0 aid=140)
wlp2s0: associated
wlp2s0: Limiting TX power to 30 (30 - 0) dBm as advertised by a0:55:4f:27:bd:01
usb 2-2: USB disconnect, device number 3