Wednesday, 28 November 2018

Limitations and Pitfalls of Cloud Computing

Cloud computing companies have become commonplace. Business people recognize that cloud-based software and services make it possible to use computing resources more efficiently: large-capacity servers in massive server farms run applications and services with good performance. The cloud isn’t scary anymore, and nearly everyone uses it.


Even so, cloud services have their limitations and pitfalls.

For example, some applications, unless heavily modified, do not do well with high latency. Other applications may have huge network requirements which do not fit “cloud” models well. Applications where the computation is far distant from the data can have excessive communication costs and long latencies.

Consider the case of an application developed to use artificial intelligence to recognize people on a bus. A webcam placed at the front of the bus let the application track people entering and leaving, and from that it could calculate the number of empty seats.

The application was modeled two ways. The first had the entire application living in the cloud with data streaming from the webcam on the bus into the remote application. They built the second model using a small single-board computer running the program on the bus itself. While it communicated with the cloud, it only did so when there were no more seats available or when seats became available after someone left the bus.

Estimates of both approaches showed that in network savings alone, installing the small computer paid for itself in one day. In addition, interruptions in network traffic due to the roaming issues of the bus were not as frequent. Multiply this by a fleet of buses and you see real savings.

“Big Cloud” vendors are now reaching out with new IoT (Internet of Things) solutions, while avoiding discussions about the amount of Internet traffic, latencies, lack of control, and potential security problems that may come from this offering.

Even with common, established cloud applications, issues such as multi-country jurisdiction, the inability to guarantee privacy, the lack of servers in most of the world’s two-hundred-plus countries, and the lack of control over where data is stored and where processes run are all factors in deciding what data processing you allow into the cloud.

An alternative approach is to create private clouds to do the initial processing of IoT data in such a way as to limit transport and exposure.

Peer-to-Peer cloud software

Peer-to-Peer cloud software allows systems administrators to set up their own cloud among the different computers or servers processing the data and the “Things” supplying that data. If the Things have even the tiniest networking capability, they can become a legitimate part of the cloud, and any application that can authenticate to them can access them.

This would help keep the network traffic and processing local to the Things, thereby reducing the networking costs and often improving latency time in processing the raw data.

Ideally, this cloud software would be Open Source. Many pundits are anxious about Things being used to help spread viruses, aiding in denial of service attacks and other dastardly goings-on. Making sure the Thing software is Open Source means that the source code is available to fix the inevitable problems of rampant Things far into the future. Of course, all software and networking the Thing uses should also include good encryption and authorization.

Other uses of peer-to-peer cloud software

While peer-to-peer cloud software is exceptional for IoT, it is also useful in client/server cloud functions. By setting up clouds internal to your own organization or community, you make more efficient use of existing hardware. Using peer-to-peer clouds in conjunction with Big Cloud vendors can reduce the costs of the cloud software overall. This is called a hybrid cloud.

Now add to this the ability to actually buy, rent, and sell additional resources that you may have or need as the situation warrants, all done automatically once the criteria for exchange have been set up.

Cloud providers often have Service Level Agreements (SLAs) that state what level of performance you will get from the resource provider. These include, but are not limited to, the percentage of time your resources will be available (often measured in “nines”, e.g. 99.999%), the type of security provided, and whether and when data is backed up. These are all things that people look for when choosing a provider.

Likewise, accompanying contracts state how much the services cost and what happens if the SLA is not met.

Standardize these things, make the system electronic, and use some form of electronic currency, and the systems can automatically find the resources needed and balance for the best possible fit. This frees up the purchaser from having to constantly evaluate what supplier is going to provide the resources for their cloud needs.

Saturday, 24 November 2018

df Command in Linux with examples

There might come a situation while using Linux when you want to know how much space a particular file system consumes, or how much space is still available on it. Linux, being command friendly, provides a command-line utility for this: the df command, which displays the amount of disk space available on the file system containing each file name argument.


◈ If no file name is passed as an argument, df shows the space available on all currently mounted file systems. This is worth knowing, because df cannot report the space available on unmounted file systems: doing so on some systems would require very deep knowledge of file system structures.
◈ By default, df shows the disk space in 1K blocks.
◈ df displays values in units of the first available SIZE taken from the --block-size option or from the DF_BLOCK_SIZE, BLOCKSIZE and BLOCK_SIZE environment variables.
◈ Otherwise, units default to 1024 bytes (or 512 bytes if POSIXLY_CORRECT is set). SIZE is an integer with an optional unit, and the units are K, M, G, T, P, E, Z, Y (as K in kilo).

df Syntax :


df [OPTION]... [FILE]...
OPTION : any of the options supported by df
FILE : a specific file name, if you want the disk space
usage of the file system containing that file only.

Using df command


Suppose you have a file named kt.txt and you want to know the disk space used on the file system that contains it. You can use df in this case as:

// using df for a specific file //

$df kt.txt
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/the2        1957124      1512   1955612   1% /snap/core

/* the df only showed the disk usage
details of the file system that
contains file kt.txt */

Now, what if you don’t give any file name to df? In that case, df displays the disk usage information for all mounted file systems, as shown below:

//using df without any filename //

$df
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/loop0      18761008  15246876   2554440  86% /
none                   4         0         4   0% /sys/fs/cgroup
udev              493812         4    493808   1% /dev
tmpfs             100672      1364     99308   2% /run
none                5120         0      5120   0% /run/lock
none              503352      1764    501588   1% /run/shm
none              102400        20    102380   1% /run/user
/dev/the2        1957124      1512   1955612   1% /snap/core

/* in this case df displayed
the disk usage details of all
mounted file systems */

Options for df command


◈ -a, --all : Include dummy (zero-block-size) file systems in the output as well.
◈ -B, --block-size=SIZE : The option mentioned in the section above; it scales sizes by SIZE. For example, -BM prints sizes in units of 1,048,576 bytes (see the short sketch after this list).
◈ --total : Display a grand total row for the size, used and available columns.
◈ -h, --human-readable : Print sizes in human-readable format.
◈ -H, --si : Same as -h, but uses powers of 1000 instead of 1024.
◈ -i, --inodes : Display inode information instead of block usage.
◈ -k : Equivalent to --block-size=1K.
◈ -l, --local : Display the disk usage of local file systems only.
◈ -P, --portability : Use the POSIX output format.
◈ -t, --type=TYPE : Show only file systems of type TYPE.
◈ -T, --print-type : Print each file system's type in the output.
◈ -x, --exclude-type=TYPE : Exclude all file systems of type TYPE from the output.
◈ -v : Ignored; included for compatibility reasons.
◈ --no-sync : The default setting, i.e. do not invoke sync before getting usage info.
◈ --sync : Invoke a sync before getting usage info.
◈ --help : Display a help message and exit.
◈ --version : Display version information and exit.
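The block-size controls are not revisited in the examples below, so here is a minimal sketch; the mount point / and the unit M are only illustrative choices:

//a quick sketch of block-size control//

$df -BM /

$DF_BLOCK_SIZE=1M df /

/* both commands should report the sizes for /
in units of one megabyte; the environment
variable is consulted only when no
--block-size option is given */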

Examples of using df with options


1. Using -a : If you need to display all file systems, including those with zero block sizes, use the -a option with df.

//using df with -a//

$df -a
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/loop0      18761008  15246876   2554440  86% /
none                   4         0         4   0% /sys/fs/cgroup
udev              493812         4    493808   1% /dev
tmpfs             100672      1364     99308   2% /run
none                5120         0      5120   0% /run/lock
none              503352      1764    501588   1% /run/shm
none              102400        20    102380   1% /run/user
/dev/sda3      174766076 164417964  10348112  95% /host
systemd                0         0         0    - /sys/fs/cgroup

/* in this case
systemd file system
having zero block
size is also displayed */

2. Using -h : This makes the df command display the output in human-readable format.

//using -h with df//

$df -h kt.txt
Filesystem      Size  Used Avail Use% Mounted on
/dev/the2       1.9G  1.5M  1.9G   1% /snap/core

/*this output is
easily understandable by
the user and all
cause of -h option */

In the above output, G and M represent gigabytes and megabytes respectively. You can also run df -h on its own to produce human-readable output for all the mounted file systems rather than just the file system containing kt.txt.

3. Using -k : This displays the file system information and usage in 1 K blocks.

//using -k with df//

$df -k kt.txt
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/the2        1957124      1512   1955612   1% /snap/core

/* no change cause the
output was initially
shown in the usage
of 1K bytes only */

4. Using --total : This option adds a grand total row for the size, used and available columns to the output.

//using --total with df//

$df --total
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/loop0      18761008  15246876   2554440  86% /
none                   4         0         4   0% /sys/fs/cgroup
udev              493812         4    493808   1% /dev
tmpfs             100672      1364     99308   2% /run
none                5120         0      5120   0% /run/lock
none              503352      1764    501588   1% /run/shm
none              102400        20    102380   1% /run/user
/dev/the2        1957124      1512   1955612   1% /snap/core
total           21923492  15251540   5712260  92% -                     

/* the total row
is added in the
output */

5. Using -T : With the help of this option, you will be able to see the corresponding type of the file system as shown.

//using -T with df//

$df -T kt.txt
Filesystem     Type     1K-blocks      Used Available Use% Mounted on
/dev/the2      squashfs   1957124      1512   1955612   1% /snap/core

/* you can also use
-T with df alone
to display the type of
all the mounted
file systems */

6. Using -t : Use this when you want the disk usage information only for file systems of a particular type.

//using -t with df//

$df -t squashfs
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/loop0      18761008  15246876   2554440  86% /
/dev/the2        1957124      1512   1955612   1% /snap/core

/*so the df
displayed only the
info of the file
systems having type
squashfs */

7. Using -x : You can also tell df to display the disk usage info of all file systems except those of a particular type, with the help of the -x option.

//using -x with df//

$df -x squashfs
Filesystem     1K-blocks      Used Available Use% Mounted on
none                   4         0         4   0% /sys/fs/cgroup
udev              493812         4    493808   1% /dev
tmpfs             100672      1364     99308   2% /run
none                5120         0      5120   0% /run/lock
none              503352      1764    501588   1% /run/shm
none              102400        20    102380   1% /run/user

/* in this case info of
/dev/the2 and /dev/loop0
file systems aren't
displayed cause they are
of type squashfs */

8. Using -i : This option is used to display inode information in the output.

//using -i with df//

$df -i kt.txt
Filesystem     Inodes  IUsed    IFree IUse% Mounted on
/dev/the2      489281     48   489233    1% /snap/core

/*showing inode info
of file system
having file kt.txt */

When the -i option is used, the second, third, and fourth columns display inode-related figures instead of disk-related ones.

9. Using --sync : By default, the df command behaves as if --no-sync were given, i.e. it does not perform the sync system call before reporting usage information. The --sync option forces a sync first, so that the reported figures are fully up to date.

//using --sync option//

$df --sync
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/loop0      18761008  15246876   2554440  86% /
none                   4         0         4   0% /sys/fs/cgroup
udev              493812         4    493808   1% /dev
tmpfs             100672      1364     99308   2% /run
none                5120         0      5120   0% /run/lock
none              503352      1764    501588   1% /run/shm
none              102400        20    102380   1% /run/user
/dev/the2        1957124      1512   1955612   1% /snap/core

/* in this case
no change in the output
is observed cause it is
possible that there is no
update info present to
be reflected */

10. Using -l : By default, df also shows remotely mounted file systems, such as those from NFS or Samba servers. The -l option limits the output to local file systems only; its syntax is shown below.

$df -l
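A brief sketch of how -l combines with the options above (the ext4 type here is only an assumed example):

//combining -l with other options//

$df -lh

$df -l -t ext4

/* -lh lists local file systems with human-readable
sizes; adding -t narrows the listing further
to a single file system type */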

So, this is all about the df command.

Wednesday, 21 November 2018

help Command in Linux with examples

If you are new to the Linux operating system and have trouble dealing with the command-line utilities it provides, then the first thing to learn about is the help command, which, as its name says, helps you learn about any shell built-in command.


As mentioned, the help command displays information about shell built-in commands. Here’s the syntax for it:

// syntax for help command

$help [-dms] [pattern ...]

The pattern specified in the syntax above refers to the command you would like to know about. If it matches a shell built-in command, help gives details about it; if it matches nothing, help reports that no help topics match and suggests alternatives; and if you run help with no pattern at all, it prints the full list of help topics (see the short example below). The d, m and s here are options that you can use with the help command.
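A minimal sketch of that matching behaviour (the exact wording of the messages may vary between bash versions):

// an exact built-in name prints its help text
$help cd

// a pattern with no match produces an error along the lines of
// "no help topics match 'notabuiltin'"
$help notabuiltin

// help with no arguments prints the full list of topics
$help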

Using help command

To make you understand more easily about what help command does let’s try help command for finding out about help itself.

// using help

$help help
help: help [-dms] [pattern...]
    Display information about builtin commands.

    Displays brief summaries of builtin commands. If PATTERN is
    specified, gives detailed help on all commands matching PATTERN,
    otherwise the list of help topics is printed.

    Options:
      -d        output short description for each topic
      -m        display usage in pseudo-manpage format
      -s        output only a short usage synopsis for each topic matching
        PATTERN

    Arguments:
      PATTERN   Pattern specifying a help topic

    Exit Status:
    Returns success unless PATTERN is not found or an invalid option is given.

/* so that's what help command
does telling everything
about the command and 
helping you out */

Options for help command

◈ -d option : Use this when you just want an overview of a shell built-in command, i.e. it gives only a short description.
◈ -m option : It displays the usage in pseudo-manpage format.
◈ -s option : It displays only a short usage synopsis for each topic matching the pattern.

Using help with options

◈ Using -d : This option just lets you know what a command does, without giving you details about its options and other output.

// using help with -d

$help -d help
help - Display information about builtin commands.

◈ Using -s : Use this option when you just want to see the syntax of a command.

// using help with -s 

$help -s help
help: help [-dms] [pattern ...]

◈ Using -m : This is used to display information about a command in pseudo-manpage format.

// using help with -m

$help -m help
NAME
    help - Display information about builtin commands.
SYNOPSIS
    help [-dms] [pattern ...]
DESCRIPTION
    Display information about builtin commands.

    Displays brief summaries of builtin commands. If PATTERN is
    specified, gives detailed help on all commands matching PATTERN,
    otherwise the list of help topics is printed.

    Options:
      -d        output short description for each topic
      -m        display usage in pseudo-manpage format
      -s        output only a short usage synopsis for each topic matching
        PATTERN

    Arguments:
      PATTERN   Pattern specifying a help topic

    Exit Status:
    Returns success unless PATTERN is not found or an invalid option is given.

SEE ALSO
    bash(1)
IMPLEMENTATION
    GNU bash, version 4.3.11(1)-release (i686-pc-linux-gnu)
    Copyright (C) 2013 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later 

So that’s all about the help command.

Friday, 16 November 2018

ps command in Linux with Examples

As we all know, Linux is a multitasking, multi-user system, so it allows multiple processes to operate simultaneously without interfering with each other. The process is one of the important fundamental concepts of the Linux OS: a process is an executing instance of a program and carries out different tasks within the operating system.


Linux provides us a utility called ps for viewing information related to the processes on a system; the name is an abbreviation of “Process Status”. The ps command is used to list the currently running processes and their PIDs, along with some other information that depends on the options given. It reads the process information from the virtual files in the /proc file system. /proc contains virtual files, which is why it is referred to as a virtual file system.

ps provides numerous options for manipulating the output according to our need.

Syntax 


ps [options]

Options for ps Command :


1. Simple process selection : Shows the processes for the current shell –

[root@lpicentral ~]# ps
  PID TTY          TIME CMD
12330 pts/0    00:00:00 bash
21621 pts/0    00:00:00 ps

The result contains four columns of information, where:

PID – the unique process ID
TTY – terminal type that the user is logged into
TIME – the amount of CPU time, in minutes and seconds, that the process has used
CMD – name of the command that launched the process.

Note – Sometimes when we execute the ps command, it shows TIME as 00:00:00. This is the total accumulated CPU time for the process, and 00:00:00 indicates that the kernel has not charged any CPU time to it yet. In the example above, bash shows no CPU time because bash is mostly just the parent process for the commands that run under it and has itself used hardly any CPU so far.

2. View processes : To view all the running processes, use either of the following options with ps –
[root@lpicentral ~]# ps -A
[root@lpicentral ~]# ps -e

3. View terminal-attached processes : View all processes except session leaders and except processes not associated with a terminal.

[root@lpicentral ~]# ps -a
  PID TTY          TIME CMD
27011 pts/0    00:00:00 man
27016 pts/0    00:00:00 less
27499 pts/1    00:00:00 ps

Note – You may be wondering what a session leader is. Every process group belongs to a session, and the session leader is the process which kicks off the other processes in that session. The process ID of the first process of a session is the same as the session ID.

4. View all the processes except session leaders :

[root@lpicentral ~]# ps -d

5. View all processes except those that fulfill the specified conditions (negates the selection) :

Example – If you want to see only session leaders and processes not associated with a terminal, then run

[root@lpicentral ~]# ps -a -N
OR
[root@lpicentral ~]# ps -a --deselect

6. View all processes associated with this terminal :

[root@lpicentral ~]# ps -T

7. View only the processes that are currently in the running state :

[root@lpicentral ~]# ps -r

8. View all processes owned by you : processes with the same EUID as ps, i.e. owned by the user running the ps command – root in this case –

[root@lpicentral ~]# ps -x

Process selection by list

Here we will discuss how to get a specific list of processes with the help of the ps command. These options accept a single argument in the form of a blank-separated or comma-separated list, and they can be used multiple times.
For example: ps -p “1 2” -p 3,4

1. Select the process by command name. This selects the processes whose executable name is given in cmdlist. This is handy when you don’t know the process ID and want to search by name instead.

Syntax :
ps -C command_name

Example :
[root@lpicentral ~]# ps -C dhclient
  PID TTY          TIME CMD
19805 ?        00:00:00 dhclient

2. Select by group ID or name. The group ID identifies the group of the user who created the process.

Syntax :
ps -G group_name
ps --Group group_name

Example :
[root@lpicentral ~]# ps -G root

3. View by group id :

Syntax :
ps -g group_id
ps --group group_id

Example :
[root@lpicentral ~]# ps -g 1
  PID TTY          TIME CMD
    1 ?        00:00:13 systemd

4. View process by process ID.

Syntax :
ps p process_id
ps -p process_id
ps --pid process_id

Example :
[root@lpicentral ~]#  ps p 27223
  PID TTY      STAT   TIME COMMAND
27223 ?        Ss     0:01 sshd: root@pts/2

[root@lpicentral ~]#  ps -p 27223
  PID TTY          TIME CMD
27223 ?        00:00:01 sshd

[root@lpicentral ~]#  ps --pid 27223
  PID TTY          TIME CMD
27223 ?        00:00:01 sshd


You can view multiple processes by specifying multiple process IDs separated by blank or comma –

Example :

[root@lpicentral ~]#  ps -p 1 904 27223
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:13 /usr/lib/systemd/systemd --switched-root --system --d
  904 tty1     Ssl+   1:02 /usr/bin/X -core -noreset :0 -seat seat0 -auth /var/r
27223 ?        Ss     0:01 sshd: root@pts/2

Here, we mentioned three process IDs – 1, 904 and 27223 which are separated by blank.

5. Select by parent process ID. Using this option we can view all the child processes of a given parent process; the parent itself is not listed.

[root@lpicentral ~]# ps -p 766
  PID TTY          TIME CMD
  766 ?        00:00:06 NetworkManager

[root@lpicentral ~]# ps --ppid 766
  PID TTY          TIME CMD
19805 ?        00:00:00 dhclient

In the above example, process ID 766 is assigned to NetworkManager, which is the parent process of dhclient with process ID 19805.

6. View all the processes belonging to a given session ID.

Syntax :
ps -s session_id
ps --sid session_id

Example :
[root@lpicentral ~]# ps -s 1248
  PID TTY          TIME CMD
 1248 ?        00:00:00 dbus-daemon
 1276 ?        00:00:00 dconf-service
 1302 ?        00:00:00 gvfsd
 1310 ?        00:00:00 gvfsd-fuse
 1369 ?        00:00:00 gvfs-udisks2-vo
 1400 ?        00:00:00 gvfsd-trash
 1418 ?        00:00:00 gvfs-mtp-volume
 1432 ?        00:00:00 gvfs-gphoto2-vo
 1437 ?        00:00:00 gvfs-afc-volume
 1447 ?        00:00:00 wnck-applet
 1453 ?        00:00:00 notification-ar
 1454 ?        00:00:02 clock-applet

7. Select by tty. This selects the processes associated with the mentioned tty :

Syntax :
ps t tty
ps -t tty
ps --tty tty

Example :
[root@lpicentral ~]# ps -t pts/0
  PID TTY          TIME CMD
31199 pts/0    00:00:00 bash
31275 pts/0    00:00:00 man
31280 pts/0    00:00:00 less

8. Select by user ID or name: -u and --user select by effective user ID (EUID), while -U and --User select by real user ID. A short example follows the syntax list below.

Syntax :
ps U user_name/ID
ps -U user_name/ID
ps -u user_name/ID
ps --User user_name/ID
ps --user user_name/ID
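For instance, to list the processes running under a particular user (root here; any other user name or numeric UID could be substituted, 0 being root's UID):

[root@lpicentral ~]# ps -u root
[root@lpicentral ~]# ps -u 0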

Output Format control


These options are used to choose the information displayed by ps. There are multiple options to control the output format, and they can be combined with other options such as e, u, p, G, g, etc., depending on our needs.

1. Use -f to view full-format listing.

[tux@lpicentral ~]$ ps -af
tux      17327 17326  0 12:42 pts/0    00:00:00 -bash
tux      17918 17327  0 12:50 pts/0    00:00:00 ps -af

2. Use -F to view Extra full format.

[tux@lpicentral ~]$ ps -F
UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
tux      17327 17326  0 28848  2040   0 12:42 pts/0    00:00:00 -bash
tux      17942 17327  0 37766  1784   0 12:50 pts/0    00:00:00 ps -F

3. To view processes according to a user-defined format.

Syntax :
[root@lpicentral ~]#  ps --format column_name
[root@lpicentral ~]#  ps -o column_name
[root@lpicentral ~]#  ps o column_name

Example :
[root@lpicentral ~]#  ps -aN --format cmd,pid,user,ppid
CMD                           PID USER      PPID
/usr/lib/systemd/systemd --     1 root         0
[kthreadd]                      2 root         0
[ksoftirqd/0]                   3 root         2
[kworker/0:0H]                  5 root         2
[migration/0]                   7 root         2
[rcu_bh]                        8 root         2
[rcu_sched]                     9 root         2
[watchdog/0]                   10 root         2

In this example I wish to see command, process ID, username and parent process ID, so I pass the arguments cmd, pid, user and ppid respectively.

4. View in BSD job control format :

[root@lpicentral ~]# ps -j
  PID  PGID   SID TTY          TIME CMD
16373 16373 16373 pts/0    00:00:00 bash
19734 19734 16373 pts/0    00:00:00 ps

5. Display BSD long format :

[root@lpicentral ~]# ps l
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
4     0   904   826  20   0 306560 51456 ep_pol Ssl+ tty1       1:32 /usr/bin/X -core -noreset :0 -seat seat0 -auth /var/run/lightdm/root/:0 -noli
4     0 11692 11680  20   0 115524  2132 do_wai Ss   pts/2      0:00 -bash

6. Add a column of security data.

[root@lpicentral ~]# ps -aM
LABEL                                                  PID  TTY    TIME    CMD
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 19534 pts/2 00:00:00 man
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 19543 pts/2 00:00:00 less
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 20469 pts/0 00:00:00 ps

7. View command with signal format.

[root@lpicentral ~]# ps s 766

8. Display user-oriented format

[root@lpicentral ~]# ps u 1
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.6 128168  6844 ?        Ss   Apr08   0:16 /usr/lib/systemd/systemd --switched-root --system --deserialize 21

9. Display virtual memory format

[root@lpicentral ~]# ps v 1
  PID TTY      STAT   TIME  MAJFL   TRS   DRS   RSS %MEM COMMAND
    1 ?        Ss     0:16     62  1317 126850 6844  0.6 /usr/lib/systemd/systemd --switched-root --system --deserialize 21

10. If you want to see the environment of a process, add the e option –

[root@lpicentral ~]# ps ev 766
  PID TTY      STAT   TIME  MAJFL   TRS   DRS   RSS %MEM COMMAND
  766 ?        Ssl    0:08     47  2441 545694 10448  1.0 /usr/sbin/NetworkManager --no-daemon LANG=en_US.UTF-8 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin

11. View processes sorted by memory usage (highest first).

ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem
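A similar one-liner sorts by CPU usage instead of memory; piping into head (the -n 6 cutoff below is only an illustrative choice) keeps the top entries:

ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%cpu | head -n 6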

12. Print a process tree :

[root@lpicentral ~]# ps --forest -C sshd
  PID TTY          TIME CMD
  797 ?        00:00:00 sshd
11680 ?        00:00:03  \_ sshd
16361 ?        00:00:02  \_ sshd

13. List all threads for a particular process. Use either the -T or -L option to display the threads of a process.

[root@lpicentral ~]# ps -C sshd -L
  PID   LWP TTY          TIME CMD
  797   797 ?        00:00:00 sshd
11680 11680 ?        00:00:03 sshd
16361 16361 ?        00:00:02 sshd

Note – For an explanation of the different column contents, refer to the man page.

Thursday, 8 November 2018

wc command in Linux with examples

wc stands for word count. As the name implies, it is mainly used for counting purposes.


◈ It is used to find out the number of lines, words, characters and bytes in the files given as file arguments.
◈ By default it displays four-column output.
◈ The first column shows the number of lines in the file, the second column the number of words, the third column the number of bytes (which equals the number of characters for plain ASCII text), and the fourth column is the file name given as an argument.


Syntax:


wc [OPTION]... [FILE]...

Let us consider two files named state.txt and capital.txt, containing the names of 5 Indian states and their capitals respectively.

$ cat state.txt
Andhra Pradesh
Arunachal Pradesh
Assam
Bihar
Chhattisgarh

$ cat capital.txt
Hyderabad
Itanagar
Dispur
Patna
Raipur

Passing only one file name as an argument.

$ wc state.txt
 5  7 63 state.txt
       OR
$ wc capital.txt
 5  5 45 capital.txt

Passing more than one file name as arguments.

$ wc state.txt capital.txt
  5   7  63 state.txt
  5   5  45 capital.txt
 10  12 108 total

Note : When more than one file name is specified as an argument, the command displays the four-column output for each individual file, plus one extra row showing the total number of lines, words and bytes of all the files, followed by the keyword total.

Options:


1. -l: This option prints the number of lines in a file. With this option wc displays two-column output: the first column shows the number of lines in the file and the second is the file name.

With one file name
$ wc -l state.txt
5 state.txt

With more than one file name
$ wc -l state.txt capital.txt
  5 state.txt
  5 capital.txt
 10 total

2. -w: This option prints the number of words in a file. With this option wc displays two-column output: the first column shows the number of words in the file and the second is the file name.

With one file name
$ wc -w state.txt
7 state.txt

With more than one file name
$ wc -w state.txt capital.txt
  7 state.txt
  5 capital.txt
 12 total

3. -c: This option displays the count of bytes in a file. With this option wc displays two-column output: the first column shows the number of bytes in the file and the second is the file name.

With one file name
$ wc -c state.txt
63 state.txt

With more than one file name
$ wc -c state.txt capital.txt
 63 state.txt
 45 capital.txt
108 total

4. -m: With the -m option, wc displays the count of characters in a file.

With one file name
$ wc -m state.txt
63 state.txt

With more than one file name
$ wc -m state.txt capital.txt
 63 state.txt
 45 capital.txt
108 total

5. -L: The wc command also accepts the -L option, which prints the length (in characters) of the longest line in a file. So the longest line is Arunachal Pradesh in state.txt and Hyderabad in capital.txt. With this option, if more than one file name is specified, the last (extra) row does not display a total; instead it displays the maximum of the values shown in the first column for the individual files.

Note: A character is the smallest unit of information and includes spaces, tabs and newlines.

With one file name
$ wc -L state.txt
17 state.txt

With more than one file name
$ wc -L state.txt capital.txt
 17 state.txt
 10 capital.txt
 17 total

6. --version: This option displays the version of wc currently running on your system.

$ wc --version
wc (GNU coreutils) 8.26
Packaged by Cygwin (8.26-1)
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later .
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Applications of wc Command


1. To count all files and folders present in a directory: As we all know, the ls command in unix displays all the files and folders present in a directory. When its output is piped into wc with the -l option, the result is the count of all files and folders present in the current directory.

$ ls gfg
a.txt 
b.txt  
c.txt  
d.txt  
e.txt  
geeksforgeeks  
India

$ ls gfg | wc -l
7

2. Display only the word count of a file: We know this can be done with the -w option, wc -w file_name, but that command shows two-column output: the count of words and the file name.

$ wc -w  state.txt
7 state.txt

So to display only the first column, pipe (|) the output of the wc -w command into the cut command with the -c option, or use input redirection (<).

$ wc -w  state.txt | cut -c1
7
      OR
$ wc -w < state.txt
7
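One detail worth keeping in mind when counting bytes or characters: echo appends a trailing newline, and wc counts it too. A quick check:

$ echo "hello" | wc -c
6

Five letters plus the newline added by echo make 6; echo -n "hello" | wc -c would report 5.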

Sunday, 4 November 2018

Join Command in Linux

The join command in UNIX is a command line utility for joining lines of two files on a common field.



Suppose you have two files and there is a need to combine them in a way that the output makes even more sense. For example, one file could contain names and the other IDs, and the requirement is to combine both files in such a way that each name and its corresponding ID appear on the same line. The join command is the tool for it: it joins the two files based on a key field present in both files. The input files can be separated by white space or any delimiter.


Syntax:


$join [OPTION] FILE1 FILE2

Example : Let us assume there are two files file1.txt and file2.txt and we want to combine the contents of these two files.
// displaying the contents of first file //
$cat file1.txt
1 AAYUSH
2 APAAR
3 HEMANT
4 KARTIK

// displaying contents of second file //
$cat file2.txt
1 101
2 102
3 103
4 104

Now, in order to combine two files the files must have some common field. In this case, we have the numbering 1, 2... as the common field in both the files.

NOTE : When using join command, both the input files should be sorted on the KEY on which we are going to join the files.

//..using join command...//
$join file1.txt file2.txt
1 AAYUSH 101
2 APAAR 102
3 HEMANT 103
4 KARTIK 104

// by default join command takes the
first column as the key to join as
in the above case //

So, the output contains the key followed by all the matching columns from the first file file1.txt, followed by all the columns of second file file2.txt.

Now, if we wanted to create a new file with the joined contents, we could use the following command:

$join file1.txt file2.txt > newjoinfile.txt

//..this will direct the output of joined files
into a new file newjoinfile.txt
containing the same output as the example
above..//

Options for join command:


1. -a FILENUM : Also print unpairable lines from file FILENUM, where FILENUM is 1 or 2, corresponding to FILE1 or FILE2.
2. -e EMPTY : Replace missing input fields with EMPTY.
3. -i, --ignore-case : Ignore differences in case when comparing fields.
4. -j FIELD : Equivalent to "-1 FIELD -2 FIELD".
5. -o FORMAT : Obey FORMAT while constructing the output lines.
6. -t CHAR : Use CHAR as the input and output field separator.
7. -v FILENUM : Like -a FILENUM, but suppress joined output lines.
8. -1 FIELD : Join on this FIELD of file 1.
9. -2 FIELD : Join on this FIELD of file 2.
10. --check-order : Check that the input is correctly sorted, even if all input lines are pairable.
11. --nocheck-order : Do not check that the input is correctly sorted.
12. --help : Display a help message and exit.
13. --version : Display version information and exit.

Using join with options


1. using -a FILENUM option : Sometimes one of the files contains extra lines, and by default join only prints pairable lines. For example, even if file file1.txt contains an extra line, as long as the contents of file2.txt stay the same, the output produced by the join command is unchanged:

//displaying the contents of file1.txt//
$cat file1.txt
1 AAYUSH
2 APAAR
3 HEMANT
4 KARTIK
5 DEEPAK

//displaying contents of file2.txt//
$cat file2.txt
1 101
2 102
3 103
4 104

//using join command//
$join file1.txt file2.txt
1 AAYUSH 101
2 APAAR 102
3 HEMANT 103
4 KARTIK 104

// although file1.txt has an extra line, the
output is not affected because line 5 in
file1.txt was unpairable with anything in file2.txt//

What if such unpairable lines are important and must be visible after joining the files? In such cases we can use the -a option with the join command, which will display such unpairable lines as well. This option requires the user to pass a file number so that the tool knows which file you are talking about.

//using join with -a option//

//1 is used with -a to display the contents of
first file passed//

$join file1.txt file2.txt -a 1
1 AAYUSH 101
2 APAAR 102
3 HEMANT 103
4 KARTIK 104
5 DEEPAK

//line 5 of the first file is
also displayed with the help of the -a option,
although it is unpairable//
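The -e and -o options from the list above can fill in a placeholder for that missing field. The format string below requests the join field (0), the second field of file1 (1.2) and the second field of file2 (2.2); NA is just an arbitrary placeholder, and the expected output should look roughly like this:

//using -a together with -e and -o//

$join -a 1 -e 'NA' -o '0,1.2,2.2' file1.txt file2.txt
1 AAYUSH 101
2 APAAR 102
3 HEMANT 103
4 KARTIK 104
5 DEEPAK NA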

2. using -v option : Now, if you only want to print the unpairable lines, i.e. suppress the paired lines in the output, then the -v option is used with the join command.

This option works exactly the way -a works (the 1 used with -v in the example below again refers to the first file).

//using -v option with join//

$join file1.txt file2.txt -v 1
5 DEEPAK

//the output only prints unpairable lines found
in first file passed//

3. using -1, -2 and -j options : As we already know, join combines lines of files on a common field, which is the first field by default. However, the common key in the two files need not always be the first column, and the join command provides options for when it is not.

Now, if you want the second field of either file (or both) to be the common field for the join, you can do this by using the -1 and -2 command-line options. The -1 and -2 here represent the first and second file, and each option requires a numeric argument that refers to the joining field of the corresponding file. This will be easier to understand with the example below:

//displaying contents of first file//
$cat file1.txt
AAYUSH 1
APAAR 2
HEMANT 3
KARTIK 4

//displaying contents of second file//
$cat file2.txt
 101 1
 102 2
 103 3
 104 4

//now using join command //

$join -1 2 -2 2 file1.txt file2.txt
1 AAYUSH 101
2 APAAR 102
3 HEMANT 103
4 KARTIK 104

//here -1 2 means use the second column of
the first file as the common field, and -2 2
means use the second column of the second
file as the common field for joining//

So, this is how we can use a column other than the first as the common field for joining.
If the common field is in the same position in both files (other than the first), we can simply replace the -1[field] -2[field] part of the command with -j[field]. So, in the above case the command could be:

//using -j option with join//

$join -j2 file1.txt file2.txt
1 AAYUSH 101
2 APAAR 102
3 HEMANT 103
4 KARTIK 104

4. using -i option : Another thing to know about the join command is that, by default, it is case sensitive. For example, consider the following files:

//displaying contents of file1.txt//
$cat file1.txt
A AAYUSH
B APAAR
C HEMANT
D KARTIK

//displaying contents of file2.txt//
$cat file2.txt
a 101
b 102
c 103
d 104

Now, if you try joining these two files using the default (first) common field, nothing will be joined. That's because the case of the field values differs between the two files. To make join ignore this case difference, use the -i command-line option.

//using -i option with join//
$join -i file1.txt file2.txt
A AAYUSH 101
B APAAR 102
C HEMANT 103
D KARTIK 104

5. using --nocheck-order option : By default, the join command checks whether or not the supplied input is sorted, and reports it if not. To suppress this error/warning, we use the --nocheck-order option like:

//syntax of join with --nocheck-order option//

$join --nocheck-order file1 file2

6. using -t option : Most of the time, files use some delimiter to separate the columns. Let us update the files with a comma delimiter.

$cat file1.txt
1, AAYUSH
2, APAAR
3, HEMANT
4, KARTIK
5, DEEPAK

//displaying contents of file2.txt//
$cat file2.txt
1, 101
2, 102
3, 103
4, 104

Now, the -t option is the one we use to specify the delimiter in such cases.
Since comma is the delimiter here, we will specify it along with -t.

//using join with -t option//

$join -t, file1.txt file2.txt
1, AAYUSH, 101
2, APAAR, 102
3, HEMANT, 103
4, KARTIK, 104

Friday, 2 November 2018

Execute Mysql Command in Bash / Shell Script

Q) How to connect to a mysql database from a bash script in unix or linux and run sql queries?


Bash scripting helps in automating things. We can automate running sql queries by connecting to the mysql database through a shell script on a unix or linux system.


Here we will see how to run a small sql query in mysql database through a script. The bash script code is shown below:

#!/usr/bin/bash

#Script to run automated sql queries

#Declaring mysql DB connection 

MASTER_DB_USER='username'
MASTER_DB_PASSWD='password'
MASTER_DB_PORT=3160
MASTER_DB_HOST='mysql.hostname'
MASTER_DB_NAME='mysqlDbName'

#Prepare sql query

SQL_Query='select * from tablename limit 10'

#mysql command to connect to database

mysql -u$MASTER_DB_USER -p$MASTER_DB_PASSWD -P$MASTER_DB_PORT -h$MASTER_DB_HOST -D$MASTER_DB_NAME <<EOF
$SQL_Query
EOF
echo "End of script"

Here in the above script, the first part declares the mysql DB connection variables and assigns the DB details, the second part prepares the sql query, and the final part runs the mysql command, feeding it the query through a here-document.
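As an alternative sketch (assuming the same variables as in the script above), the query can also be passed with mysql's -e option instead of a here-document, which is often handier for one-line queries:

# same connection details, query passed via -e
mysql -u"$MASTER_DB_USER" -p"$MASTER_DB_PASSWD" -P"$MASTER_DB_PORT" \
      -h"$MASTER_DB_HOST" -D"$MASTER_DB_NAME" -e "$SQL_Query"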

Move / Rename files, Directory - MV Command in Unix / Linux


Q. How to rename a file or directory in unix (or linux) and how to move a file or directory from the current directory to another directory?


Unix provides a simple mv (move) command which can be used to rename or move files and directories. The syntax of mv command is

mv [options] oldname newname

The options of mv command are

-f : Do not prompt before overwriting a file.
-i : Prompt for user confirmation before overwriting a file.

If newname already exists, then the mv command overwrites that file. Let’s see some examples of how to use the mv command.

Unix mv command examples


1. Write a unix/linux command to rename a file?


Renaming a file is one of the basic uses of the mv command. To rename a file from "log.dat" to "bad.dat", use the mv command below:

> mv log.dat bad.dat

Note that if the "bad.dat" file already exists, then its contents will be overwritten by "log.dat". To avoid this, use the -i option, which prompts you before overwriting the file.

mv -i log.dat bad.dat
mv: overwrite `bad.dat'?

2. Write a unix/linux command to rename a directory?


Just as with renaming a file, you can use the mv command to rename a directory. To rename the directory from docs to documents, run the command below:

mv docs/ documents/

If the documents directory already exists, then the docs directory will be moved in to the documents directory.

3. Write a unix/linux command to move a file into another directory?


The mv command can also be used to move a file from one directory to another. The command below moves the sum.pl file in the current directory to the /var/tmp directory.

mv sum.pl /var/tmp/

If the sum.pl file already exists in the /var/tmp directory, then the contents of that file will be overwritten.

4. Write a unix/linux command to move a directory in to another directory?


Just as with moving a file, you can move a directory into another directory. The mv command below moves the documents directory into the tmp directory.

mv documents /tmp/

5. Write a unix/linux command to move all the files in the current directory to another directory?


You can use the shell wildcard pattern * to move all the files from one directory to another directory.

mv * /var/tmp/

The above command moves all the files and directories in the current directory to the /var/tmp/ directory.

6. mv *


What happens if you simply type mv * and then press enter?

It depends on the files you have in the directory. The * expands to all the files and directories. Three scenarios are possible.

◈ If the current directory contains only regular files and the pattern expands to more than two names, GNU mv will fail with a "target is not a directory" error; with exactly two files, the first file simply overwrites the second (the last name in the expansion).
◈ If the current directory contains only directories, then all the directories (except the last one in the expansion) will be moved into that last directory.
◈ If the current directory contains both files and directories, then it depends on the expansion of *: if the last name in the expansion is a directory, all the files and other directories will be moved into it; otherwise the mv command will fail.

Some Tips:

◈ Try to avoid running a bare mv *
◈ Avoid moving a large number of files at once (a slightly safer wildcard pattern is sketched below).
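When a wildcard move really is needed, the interactive and no-clobber modes offer a small safeguard (the .log pattern and the target directory below are only examples):

> mv -i *.log /var/tmp/
> mv -n *.log /var/tmp/

The first command asks before overwriting anything at the target; the second (GNU mv) silently refuses to overwrite an existing file.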