Wednesday, 31 January 2018

10 Essential Linux Commands For Navigating Your File System


Introduction


This guide lists 10 Linux commands that you need to know in order to be able to navigate around your file system using the Linux terminal.

It provides commands to find out which directory you are in, how to navigate to other folders, how to get back to your home folder, how to create files and folders, and how to create links.

01. Which Folder Are You In


When you open a terminal window the first thing you need to know is where you are in the file system.

Think of this like the "you are here" marker that you find on maps within shopping malls.

To find out which folder you are in you can use the following command:

pwd

The results returned by pwd may differ depending on whether you are using the shell's built-in pwd or the standalone binary installed in /bin or /usr/bin.

In general, it will print something along the lines of /home/username.
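A quick way to try this is in a scratch folder (the /tmp/pwd_demo path below is made up for the demo):

```shell
# Create a scratch folder, move into it, and confirm where we are
mkdir -p /tmp/pwd_demo
cd /tmp/pwd_demo
pwd    # prints /tmp/pwd_demo
```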

02. What Files And Folders Are Under The Current Directory


Now that you know which folder you are in, you can see which files and folders are under the current directory by using the ls command.

ls

On its own, the ls command will list all the files and folders in the directory except for those beginning with a period (.).

To see all the files including hidden files (those starting with a period) you can use the following switch:

ls -a

Some commands create backups of files whose names end with the tilde metacharacter (~).

If you don't want to see the backups when listing the files in a folder use the following switch:

ls -B

The most common use of the ls command is as follows:

ls -lt

This provides a long listing sorted by modification time, with the newest first.

Other sort options include by extension, size, and version number:

ls -lX
ls -lS
ls -lv
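You can try the switches above in a scratch folder (the file names here are invented for illustration; note that -B is a GNU ls option and may not exist on other systems):

```shell
cd "$(mktemp -d)"                    # scratch folder so we don't touch real files
touch report.txt notes.md .hidden report.txt~
ls        # report.txt, notes.md, report.txt~ (dotfiles hidden)
ls -a     # adds . .. and .hidden
ls -B     # hides the report.txt~ backup (GNU ls only)
ls -lt    # long listing, newest modification first
```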

The long listing format gives you the following information:

◈ permissions
◈ number of hard links to the file (see hard links)
◈ owner
◈ primary group
◈ file size
◈ last modification time
◈ file/folder/link name

03. How To Navigate To Other Folders


To move around the file system you can use the cd command.

The Linux file system is a tree structure. The top of the tree is denoted by a slash (/).

Under the root directory, you will find some or all of the following folders.

◈ bin
◈ boot
◈ cdrom
◈ dev
◈ etc
◈ home
◈ lib
◈ lib64
◈ lost+found
◈ media
◈ mnt
◈ opt
◈ proc
◈ root
◈ run
◈ sbin
◈ srv
◈ sys
◈ tmp
◈ var
◈ usr

The bin folder contains commands that can be run by any user such as the cd command, ls, mkdir etc.

The sbin contains system binaries. 

The usr folder is often said to stand for Unix System Resources (historically it simply meant "user") and also contains a bin and sbin folder. The /usr/bin folder has an extended set of commands which users can run. Similarly, the /usr/sbin folder contains an extended set of system commands.

The boot folder contains everything required by the boot process.

The cdrom folder is self-explanatory.

The dev folder contains details about all the devices on the system. 

The etc folder is generally where all the system configuration files are stored.

The home folder is generally where all the user folders are stored and for the average user is the only area they should be concerned about.

The lib and lib64 folders contain all the kernel and shared libraries.

The lost+found folder contains files recovered by the fsck command which no longer have a name.

The media folder is where mounted media such as USB drives are located.

The mnt folder is also used to mount temporary storage such as USB drives, other file systems, ISO images, etc.

The opt folder is used by some software packages as a place to store the binaries. Other packages use /usr/local.

The proc folder is a system folder used by the kernel. You don't really need to worry about this folder too much.

The root folder is the home directory for the root user.

The run folder is a system folder for storing system runtime information. 

The srv folder is where you would keep server data such as web folders, MySQL databases, and Subversion repositories.

The sys folder contains a folder structure to provide system information.

The tmp folder is a temporary folder.

The var folder contains a whole wealth of stuff specific to the system including game data, dynamic libraries, log files, process IDs, messages and cached application data.

To navigate to a particular folder use the cd command as follows:

cd /home/username/Documents
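Paths given to cd can be absolute (starting with /) or relative to the current folder, and cd - jumps back to wherever you were before (the demo/inner folder names below are made up):

```shell
cd /tmp                # absolute path: counted from the root
mkdir -p demo/inner
cd demo/inner          # relative path: counted from the current folder
pwd                    # /tmp/demo/inner
cd ..                  # up one level, to /tmp/demo
cd -                   # back to the previous directory (and prints it)
```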

04. How To Navigate Back To The Home Folder


You can get back to the home folder from anywhere else in the system using the following command:

cd ~

05. How To Create A New Folder


If you want to create a new folder you can use the following command:

mkdir foldername
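If the new folder needs parent folders that don't exist yet, the -p switch creates the whole chain in one go (the projects/2018 names are just examples):

```shell
cd "$(mktemp -d)"                 # scratch folder for the demo
mkdir projects                    # one new folder
mkdir -p projects/2018/january    # -p creates missing parent folders too
ls projects/2018                  # shows january
```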

06. How To Create Files


Linux provides an incredible number of ways for creating new files.

To create an empty file you can use the following command:

touch filename

The touch command is used to update the access and modification times of a file, but on a file that doesn't exist it has the effect of creating it.

You can also create a file with some initial content using the following command (type your text, then press CTRL+D to finish):

cat > filename
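In a script, a redirection with printf does the same job as the interactive cat > filename (the file names below are made up for the demo):

```shell
cd "$(mktemp -d)"                  # scratch folder for the demo
touch empty.txt                    # creates an empty file
printf 'first line\n' > note.txt   # redirection also creates the file
cat note.txt                       # first line
```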

07. How To Rename And Move Files Around The File System


There are a number of ways to rename files.

The simplest way to rename a file is to use the mv command.

mv oldfilename newfilename

You can use the mv command to move a file from one folder to another as well.

mv /path/of/original/file /path/of/target/folder

If you want to rename a lot of files which match a similar pattern you can use the rename command.

rename expression replacement filename(s)

For example:

rename "gary" "tom" *

This will rename every file in the folder whose name contains gary, replacing gary with tom. So a file called garycv will become tomcv.

Note that the rename command doesn't work on all systems, and its syntax differs between distributions (some ship a Perl version instead). The mv command is safer.
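Where rename isn't available, the same bulk rename can be done portably with mv in a loop (the gary/tom names follow the example above; the file names are invented):

```shell
cd "$(mktemp -d)"                  # scratch folder for the demo
touch garycv garynotes other.txt
for f in *gary*; do                # loop over matching files only
  mv "$f" "$(printf '%s\n' "$f" | sed 's/gary/tom/')"
done
ls                                 # other.txt tomcv tomnotes
```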

08. How To Copy Files


To copy a file using Linux you can use the cp command as follows.

cp filename1 filename2

The above command will copy filename1 and call the copy filename2.

You can use the cp command to copy files from one folder to another.

For example

cp /home/username/Documents/userdoc1 /home/username/Documents/UserDocs

The above command will copy the file userdoc1 from /home/username/Documents to /home/username/Documents/UserDocs
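The same commands can be tried in a scratch folder; the -r switch copies a whole folder tree (the Documents/userdoc1 names mirror the example above):

```shell
cd "$(mktemp -d)"                  # scratch folder for the demo
mkdir -p Documents/UserDocs
printf 'hello\n' > Documents/userdoc1
cp Documents/userdoc1 Documents/UserDocs   # copy into another folder
cp -r Documents DocumentsBackup            # -r copies the whole folder tree
ls DocumentsBackup                         # UserDocs userdoc1
```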

09. How To Delete Files And Folders


You can delete files and folders using the rm command:

rm filename

If you want to remove a folder you need to use the following switch:

rm -R foldername

The above command removes a folder and its contents including sub-folders.
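A safe way to try both forms is in a throwaway folder (the names below are made up; rm is permanent, so be careful with real files):

```shell
cd "$(mktemp -d)"          # scratch folder for the demo
mkdir -p olddata/sub
touch olddata/sub/file1 scratch.txt
rm scratch.txt             # remove a single file
rm -R olddata              # remove a folder and everything under it
ls                         # nothing left
```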

10. What Are Symbolic Links And Hard Links


 A symbolic link is a file that points to another file. A desktop shortcut is basically a symbolic link.

You might for example have the following file on your system.

◈ /home/username/documents/accounts/useraccounts.doc

Maybe you want to be able to access that document from the /home/username folder.

You can create a symbolic link using the following command:

ln -s /home/username/documents/accounts/useraccounts.doc /home/username/useraccounts.doc

You can edit the useraccounts.doc file from both places but when you edit the symbolic link you are actually editing the file in the /home/username/documents/accounts folder.

A symbolic link can be created on one filesystem and point to a file on another file system.

A symbolic link really just creates a file which has a pointer to the other file or folder.

A hard link however creates a direct link between the two files. Essentially they are the same file but with just another name.

A hard link provides a good way of categorising files without taking up further disk space.

You can create a hard link using the following syntax:

ln existingfile linkname

The syntax is similar to that of a symbolic link but it doesn't use the -s switch.
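The difference between the two link types is easy to see in a scratch folder (file names invented; ls -li shows inode numbers, and -ef tests whether two names are the same file):

```shell
cd "$(mktemp -d)"              # scratch folder for the demo
echo data > original.txt
ln -s original.txt soft.txt    # symbolic link: a pointer file
ln original.txt hard.txt       # hard link: the same file under a second name
cat soft.txt                   # data (read through the link)
ls -li                         # original.txt and hard.txt share an inode
```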

Saturday, 27 January 2018

ash - Linux Command - Unix Command


NAME

sh - command interpreter (shell)

SYNOPSIS

sh [-/+aCefnuvxIimqsVEbc] [-o longname] -words [target ...]

DESCRIPTION

Sh is the standard command interpreter for the system. The current version of sh is in the process of being changed to conform with the POSIX 1003.2 and 1003.2a specifications for the shell. This version has many features which make it appear similar in some respects to the Korn shell, but it is not a Korn shell clone (see ksh(1)).


Only features designated by POSIX plus a few Berkeley extensions, are being incorporated into this shell. We expect POSIX conformance by the time 4.4 BSD is released. This man page is not intended to be a tutorial or a complete specification of the shell.

Overview

The shell is a command that reads lines from either a file or the terminal, interprets them, and generally executes other commands. It is the program that is running when a user logs into the system (although a user can select a different shell with the chsh(1) command). The shell implements a language that has flow control constructs, a macro facility that provides a variety of features in addition to data storage, along with built in history and line editing capabilities. It incorporates many features to aid interactive use and has the advantage that the interpretative language is common to both interactive and non-interactive use (shell scripts).

That is, commands can be typed directly to the running shell or can be put into a file and the file can be executed directly by the shell.

Invocation

If no args are present and if the standard input of the shell is connected to a terminal (or if the -i flag is set), and the -c option is not present, the shell is considered an interactive shell.

An interactive shell generally prompts before each command and handles programming and command errors differently (as described below). When first starting, the shell inspects argument 0, and if it begins with a dash `-' the shell is also considered a login shell. This is normally done automatically by the system when the user first logs in. A login shell first reads commands from the files /etc/profile and .profile if they exist. If the environment variable ENV is set on entry to a shell, or is set in the .profile of a login shell, the shell next reads commands from the file named in ENV. Therefore, a user should place commands that are to be executed only at login time in the .profile file, and commands that are executed for every shell inside the ENV file. To set the ENV variable to some file, place the following line in the .profile of your home directory,

ENV=$HOME/.shinit; export ENV

substituting for ``.shinit'' any filename you wish. Since the ENV file is read for every invocation of the shell, including shell scripts and non-interactive shells, the following paradigm is useful for restricting commands in the ENV file to interactive invocations. Place commands within the ``case'' and ``esac'' below (these commands are described later):


case $- in *i*)

# commands for interactive use only

...

esac

If command line arguments besides the options have been specified, then the shell treats the first argument as the name of a file from which to read commands (a shell script), and the remaining arguments are set as the positional parameters of the shell ($1, $2, etc). Otherwise, the shell reads commands from its standard input. 

Argument List Processing

All of the single letter options have a corresponding name that can be used as an argument to the -o option. The set -o name is provided next to the single letter option in the description below.

Specifying a dash ``-'' turns the option on, while using a plus ``+'' disables the option. The following options can be set from the command line or with the set(1) builtin (described later).

-a allexport

Export all variables assigned to. (UNIMPLEMENTED for 4.4alpha)

-c

Read commands from the command line. No commands will be read from the standard input.

-C noclobber

Don't overwrite existing files with ``>'' (UNIMPLEMENTED for 4.4alpha)

-e errexit

If not interactive, exit immediately if any untested command fails. The exit status of a command is considered to be explicitly tested if the command is used to control an if, elif, while, or until, or if the command is the left hand operand of an ``&&'' or ``||'' operator.

-f noglob

Disable pathname expansion.

-n noexec

If not interactive, read commands but do not execute them. This is useful for checking the syntax of shell scripts.

-u nounset

Write a message to standard error when attempting to expand a variable that is not set, and if the shell is not interactive, exit immediately. (UNIMPLEMENTED for 4.4alpha)

-v verbose

The shell writes its input to standard error as it is read. Useful for debugging.

-x xtrace

Write each command to standard error (preceded by a `+ ') before it is executed. Useful for debugging.

-q quietprofile

If the -v or -x options have been set, do not apply them when reading initialization files, these being /etc/profile, .profile, and the file specified by the ENV environment variable.

-I ignoreeof

Ignore EOF's from input when interactive.

-i interactive

Force the shell to behave interactively.

-m monitor

Turn on job control (set automatically when interactive).

-s stdin

Read commands from standard input (set automatically if no file arguments are present). This option has no effect when set after the shell has already started running (i.e. with set(1)).

-V vi

Enable the built-in vi(1) command line editor (disables -E if it has been set).

-E emacs

Enable the built-in emacs(1) command line editor (disables -V if it has been set).

-b notify

Enable asynchronous notification of background job completion. (UNIMPLEMENTED for 4.4alpha)
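A few of the options above can be exercised directly with sh -c; a minimal sketch (the variable name and echoed strings are invented for the demo):

```shell
# -e (errexit): the shell exits as soon as an untested command fails
sh -ec 'false; echo unreachable' || echo "exited early"
# -u (nounset): expanding an unset variable is an error
sh -uc 'echo "$NO_SUCH_VAR"' 2>/dev/null || echo "unset is fatal"
# -n (noexec): syntax check only, nothing is executed
sh -nc 'echo never printed'
```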

Lexical Structure

The shell reads input in terms of lines from a file and breaks it up into words at whitespace (blanks and tabs), and at certain sequences of characters that are special to the shell called ``operators''. There are two types of operators: control operators and redirection operators (their meaning is discussed later). Following is a list of operators:

"Control operators:"

& && ( ) ; ;; | || <newline>

"Redirection operator:"

< > >| << >> <& >& <<- <>

Quoting

Quoting is used to remove the special meaning of certain characters or words to the shell, such as operators, whitespace, or keywords. There are three types of quoting: matched single quotes, matched double quotes, and backslash. 

Backslash

A backslash preserves the literal meaning of the following character, with the exception of <newline>. A backslash preceding a <newline> is treated as a line continuation.

Single Quotes

Enclosing characters in single quotes preserves the literal meaning of all the characters (except single quotes, making it impossible to put single-quotes in a single-quoted string).

Double Quotes

Enclosing characters within double quotes preserves the literal meaning of all characters except dollar sign ($), backquote (`), and backslash (\). The backslash inside double quotes is historically weird, and serves to quote only the following characters:

$ ` \ <newline>

Otherwise it remains literal.

Reserved Words

Reserved words are words that have special meaning to the shell and are recognized at the beginning of a line and after a control operator. The following are reserved words:

!      elif   fi     while  case

else   for    then   {      }

do     done   until  if     esac

Their meaning is discussed later. 

Aliases

An alias is a name and corresponding value set using the alias(1) builtin command. Whenever a reserved word may occur (see above), and after checking for reserved words, the shell checks the word to see if it matches an alias. If it does, it replaces it in the input stream with its value. For example, if there is an alias called ``lf'' with the value ``ls -F'' then the input:

lf foobar <return>

would become

ls -F foobar <return>

Aliases provide a convenient way for naive users to create shorthands for commands without having to learn how to create functions with arguments. They can also be used to create lexically obscure code. This use is discouraged. 

Commands

The shell interprets the words it reads according to a language, the specification of which is outside the scope of this man page (refer to the BNF in the POSIX 1003.2 document). Essentially though, a line is read and if the first word of the line (or after a control operator) is not a reserved word, then the shell has recognized a simple command. Otherwise, a complex command or some other special construct may have been recognized. 

Simple Commands

If a simple command has been recognized, the shell performs the following actions:

1. Leading words of the form ``name=value'' are stripped off and assigned to the environment of the simple command. Redirection operators and their arguments (as described below) are stripped off and saved for processing.

2. The remaining words are expanded as described in the section called ``Expansions'' and the first remaining word is considered the command name and the command is located. The remaining words are considered the arguments of the command. If no command name resulted, then the ``name=value'' variable assignments recognized in item 1 affect the current shell.

3. Redirections are performed as described in the next section.

Redirections

Redirections are used to change where a command reads its input or sends its output. In general, redirections open, close, or duplicate an existing reference to a file. The overall format used for redirection is:

[n] redir-op file

where redir-op is one of the redirection operators mentioned previously. Following is a list of the possible redirections. The [n] is an optional number, as in `3' (not `[3]'), that refers to a file descriptor.

[n] > file

Redirect standard output (or n) to file.

[n] >| file

Same, but override the -C option.

[n] >> file

Append standard output (or n) to file.

[n] < file

Redirect standard input (or n) from file.

[n1] <& n2

Duplicate standard input (or n1) from file descriptor n2.

[n] <&-

Close standard input (or n).

[n1] >& n2

Duplicate standard output (or n1) from n2.

[n] >&-

Close standard output (or n).

[n] <> file

Open file for reading and writing on standard input (or n).

The following redirection is often called a ``here-document''.

[n]<< delimiter

here-doc-text...

delimiter

All the text on successive lines up to the delimiter is saved away and made available to the command on standard input, or file descriptor n if it is specified. If the delimiter as specified on the initial line is quoted, then the here-doc-text is treated literally, otherwise the text is subjected to parameter expansion, command substitution, and arithmetic expansion (as described in the section on ``Expansions''). If the operator is ``<<-'' instead of ``<<'' then leading tabs in the here-doc-text are stripped.
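The quoted/unquoted delimiter distinction can be seen side by side (the name variable and text are invented for the demo):

```shell
name=world
# Unquoted delimiter: parameter expansion happens inside the text
cat <<EOF
hello $name
EOF
# Quoted delimiter: the text is taken literally
cat <<'EOF'
hello $name
EOF
```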

Search and Execution

There are three types of commands: shell functions, builtin commands, and normal programs -- and the command is searched for (by name) in that order. They each are executed in a different way.

When a shell function is executed, all of the shell positional parameters (except $0, which remains unchanged) are set to the arguments of the shell function. The variables which are explicitly placed in the environment of the command (by placing assignments to them before the function name) are made local to the function and are set to the values given. Then the command given in the function definition is executed. The positional parameters are restored to their original values when the command completes. This all occurs within the current shell.

Shell builtins are executed internally to the shell, without spawning a new process.

Otherwise, if the command name doesn't match a function or builtin, the command is searched for as a normal program in the filesystem (as described in the next section). When a normal program is executed, the shell runs the program, passing the arguments and the environment to the program. If the program is not a normal executable file (i.e., if it does not begin with the "magic number" whose ASCII representation is "#!", so execve(2) returns ENOEXEC), the shell will interpret the program in a subshell. The child shell will reinitialize itself in this case, so that the effect will be as if a new shell had been invoked to handle the ad-hoc shell script, except that the location of hashed commands located in the parent shell will be remembered by the child.

Note that previous versions of this document and the source code itself misleadingly and sporadically refer to a shell script without a magic number as a "shell procedure". 

Path Search

When locating a command, the shell first looks to see if it has a shell function by that name. Then it looks for a builtin command by that name. If a builtin command is not found, one of two things happen:

1. Command names containing a slash are simply executed without performing any searches.

2. The shell searches each entry in PATH in turn for the command. The value of the PATH variable should be a series of entries separated by colons. Each entry consists of a directory name. The current directory may be indicated implicitly by an empty directory name, or explicitly by a single period.

Command Exit Status

Each command has an exit status that can influence the behavior of other shell commands. The paradigm is that a command exits with zero for normal or success, and non-zero for failure, error, or a false indication. The man page for each command should indicate the various exit codes and what they mean. Additionally, the builtin commands return exit codes, as does an executed shell function. 

Complex Commands

Complex commands are combinations of simple commands with control operators or reserved words, together creating a larger complex command. More generally, a command is one of the following:

◈ simple command
◈ pipeline
◈ list or compound-list
◈ compound command
◈ function definition

Unless otherwise stated, the exit status of a command is that of the last simple command executed by the command.

Pipelines

A pipeline is a sequence of one or more commands separated by the control operator |. The standard output of all but the last command is connected to the standard input of the next command. The standard output of the last command is inherited from the shell, as usual.

The format for a pipeline is:

[!] command1 [ | command2 ...]

The standard output of command1 is connected to the standard input of command2. The standard input, standard output, or both of a command is considered to be assigned by the pipeline before any redirection specified by redirection operators that are part of the command.

If the pipeline is not in the background (discussed later), the shell waits for all commands to complete.

If the reserved word ! does not precede the pipeline, the exit status is the exit status of the last command specified in the pipeline. Otherwise, the exit status is the logical NOT of the exit status of the last command. That is, if the last command returns zero, the exit status is 1; if the last command returns greater than zero, the exit status is zero.

Because pipeline assignment of standard input or standard output or both takes place before redirection, it can be modified by redirection. For example:

$ command1 2>&1 | command2

sends both the standard output and standard error of command1 to the standard input of command2.

A ; or <newline> terminator causes the preceding AND-OR-list (described next) to be executed sequentially; a & causes asynchronous execution of the preceding AND-OR-list.

Note that unlike some other shells, each process in the pipeline is a child of the invoking shell (unless it is a shell builtin, in which case it executes in the current shell -- but any effect it has on the environment is wiped). 

Background Commands -- &

If a command is terminated by the control operator ampersand (&), the shell executes the command asynchronously -- that is, the shell does not wait for the command to finish before executing the next command.

The format for running a command in background is:

command1 & [command2 & ...]

If the shell is not interactive, the standard input of an asynchronous command is set to /dev/null.

Lists -- Generally Speaking

A list is a sequence of zero or more commands separated by newlines, semicolons, or ampersands, and optionally terminated by one of these three characters. The commands in a list are executed in the order they are written. If a command is followed by an ampersand, the shell starts the command and immediately proceeds to the next command; otherwise it waits for the command to terminate before proceeding to the next one.

Short-Circuit List Operators

``&&'' and ``||'' are AND-OR list operators. ``&&'' executes the first command, and then executes the second command iff the exit status of the first command is zero. ``||'' is similar, but executes the second command iff the exit status of the first command is nonzero. ``&&'' and ``||'' both have the same priority. 
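Both operators are easy to demonstrate (the paths and echoed strings below are invented for the demo):

```shell
mkdir -p /tmp/andor_demo && echo "created"     # second command runs on success
[ -d /no/such/dir ] || echo "fallback ran"     # second command runs on failure
true && false || echo "equal priority, evaluated left to right"
```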

Flow-Control Constructs -- if, while, for, case

The syntax of the if command is

if list
then list
[ elif list
then    list ] ...
[ else list ]
fi

The syntax of the while command is

while list
do   list
done

The two lists are executed repeatedly while the exit status of the first list is zero. The until command is similar, but has the word until in place of while, which causes it to repeat until the exit status of the first list is zero.

The syntax of the for command is

for variable in word...
do   list
done

The words are expanded, and then the list is executed repeatedly with the variable set to each word in turn. do and done may be replaced with ``{'' and ``}''

The syntax of the break and continue command is

break [ num ]
continue [ num ]

Break terminates the num innermost for or while loops. Continue continues with the next iteration of the innermost loop. These are implemented as builtin commands.

The syntax of the case command is

case word in
pattern) list ;;
...
esac

The pattern can actually be one or more patterns (see Shell Patterns described later), separated by ``|'' characters.
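A small sketch of the case syntax, including multiple patterns separated by ``|'' (the classify function and file names are invented for the demo):

```shell
classify() {
  case "$1" in
    *.txt)        echo text ;;
    *.sh|*.bash)  echo script ;;   # two patterns separated by |
    *)            echo other ;;    # catch-all pattern
  esac
}
classify notes.txt    # text
classify run.sh       # script
classify photo.png    # other
```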

Grouping Commands Together

Commands may be grouped by writing either

(list)

or

{ list; }

The first of these executes the commands in a subshell. Builtin commands grouped into a (list) will not affect the current shell. The second form does not fork another shell so is slightly more efficient. Grouping commands together this way allows you to redirect their output as though they were one program:

{ printf "hello " ; printf "world\n" ; } > greeting

Functions

The syntax of a function definition is

name ( ) command

A function definition is an executable statement; when executed it installs a function named name and returns an exit status of zero. The command is normally a list enclosed between ``{'' and ``}''

Variables may be declared to be local to a function by using a local command. This should appear as the first statement of a function, and the syntax is

local [ variable | - ] ...

Local is implemented as a builtin command.

When a variable is made local, it inherits the initial value and exported and readonly flags from the variable with the same name in the surrounding scope, if there is one. Otherwise, the variable is initially unset. The shell uses dynamic scoping, so that if you make the variable x local to function f, which then calls function g, references to the variable x made inside g will refer to the variable x declared inside f, not to the global variable named x.
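The dynamic-scoping behaviour described above can be sketched with two small functions (the names x, f, and g are invented for the demo):

```shell
x=global
g() { echo "$x"; }         # g has no x of its own
f() {
  local x                  # local shadows the global x for the duration of f
  x=inner
  g                        # dynamic scoping: g sees f's local x
}
f                          # prints inner
echo "$x"                  # prints global: f's change did not escape
```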

The only special parameter that can be made local is ``-''. Making ``-'' local causes any shell options that are changed via the set command inside the function to be restored to their original values when the function returns.

The syntax of the return command is

return [ exitstatus ]

It terminates the currently executing function. Return is implemented as a builtin command. 

Variables and Parameters

The shell maintains a set of parameters. A parameter denoted by a name is called a variable. When starting up, the shell turns all the environment variables into shell variables. New variables can be set using the form

name=value

Variables set by the user must have a name consisting solely of alphabetics, numerics, and underscores - the first of which must not be numeric. A parameter can also be denoted by a number or a special character as explained below. 

Positional Parameters

A positional parameter is a parameter denoted by a number (n > 0). The shell sets these initially to the values of its command line arguments that follow the name of the shell script. The set(1) builtin can also be used to set or reset them. 

Special Parameters

A special parameter is a parameter denoted by one of the following special characters. The value of the parameter is listed next to its character.

*

Expands to the positional parameters, starting from one. When the expansion occurs within a double-quoted string it expands to a single field with the value of each parameter separated by the first character of the IFS variable, or by a <space> if IFS is unset.

@

Expands to the positional parameters, starting from one. When the expansion occurs within double-quotes, each positional parameter expands as a separate argument. If there are no positional parameters, the expansion of @ generates zero arguments, even when @ is double-quoted. What this basically means, for example, is if $1 is ``abc'' and $2 is ``def ghi'' then ``$@'' expands to the two arguments:

abc   def ghi

#

Expands to the number of positional parameters.

?

Expands to the exit status of the most recent pipeline.

- (Hyphen.)

Expands to the current option flags (the single-letter option names concatenated into a string) as specified on invocation, by the set builtin command, or implicitly by the shell.

$

Expands to the process ID of the invoked shell. A subshell retains the same value of $ as its parent.

!

Expands to the process ID of the most recent background command executed from the current shell. For a pipeline, the process ID is that of the last command in the pipeline.

0 (Zero.)

Expands to the name of the shell or shell script.

Word Expansions

This clause describes the various expansions that are performed on words. Not all expansions are performed on every word, as explained later.

Tilde expansions, parameter expansions, command substitutions, arithmetic expansions, and quote removals that occur within a single word expand to a single field. It is only field splitting or pathname expansion that can create multiple fields from a single word. The single exception to this rule is the expansion of the special parameter @ within double-quotes, as was described above.

The order of word expansion is:

1. Tilde Expansion, Parameter Expansion, Command Substitution, Arithmetic Expansion (these all occur at the same time).

2. Field Splitting is performed on fields generated by step (1) unless the IFS variable is null.

3. Pathname Expansion (unless set -f is in effect).

4. Quote Removal.

The $ character is used to introduce parameter expansion, command substitution, or arithmetic evaluation. 

Tilde Expansion (substituting a user's home directory)

A word beginning with an unquoted tilde character (~) is subjected to tilde expansion. All the characters up to a slash (/) or the end of the word are treated as a username and are replaced with the user's home directory. If the username is missing (as in ~/foobar ) the tilde is replaced with the value of the HOME variable (the current user's home directory). 

Parameter Expansion

The format for parameter expansion is as follows:

${expression}

where expression consists of all characters until the matching ``}''. Any ``}'' escaped by a backslash or within a quoted string, and characters in embedded arithmetic expansions, command substitutions, and variable expansions, are not examined in determining the matching ``}''.

The simplest form for parameter expansion is:

${parameter}

The value, if any, of parameter is substituted.

The parameter name or symbol can be enclosed in braces, which are optional except for positional parameters with more than one digit or when parameter is followed by a character that could be interpreted as part of the name. If a parameter expansion occurs inside double-quotes:

1. Pathname expansion is not performed on the results of the expansion.
2. Field splitting is not performed on the results of the expansion, with the exception of @.

In addition, a parameter expansion can be modified by using one of the following formats.

${parameter:-word}

Use Default Values. If parameter is unset or null, the expansion of word is substituted; otherwise, the value of parameter is substituted.

${parameter:=word}

Assign Default Values. If parameter is unset or null, the expansion of word is assigned to parameter. In all cases, the final value of parameter is substituted. Only variables, not positional parameters or special parameters, can be assigned in this way.

${parameter:?[word]}

Indicate Error if Null or Unset. If parameter is unset or null, the expansion of word (or a message indicating it is unset if word is omitted) is written to standard error and the shell exits with a nonzero exit status. Otherwise, the value of parameter is substituted. An interactive shell need not exit.

${parameter:+word}

Use Alternative Value. If parameter is unset or null, null is substituted; otherwise, the expansion of word is substituted.

In the parameter expansions shown previously, use of the colon in the format results in a test for a parameter that is unset or null; omission of the colon results in a test for a parameter that is only unset.
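
A short sketch of these forms in action (the variable name is arbitrary):

```shell
unset name
echo "${name:-guest}"   # guest: name is unset, so the default is substituted
name=""
echo "${name:-guest}"   # guest: with the colon, a null value also triggers it
echo "${name-guest}"    # prints an empty line: without the colon, null does not
echo "${name:=guest}"   # guest: the default is also assigned to name
echo "${name:+active}"  # active: name is now set and non-null
```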

${#parameter}

String Length. The length in characters of the value of parameter.

The following four varieties of parameter expansion provide for substring processing. In each case, pattern matching notation (see Shell Patterns), rather than regular expression notation, is used to evaluate the patterns. If parameter is * or @, the result of the expansion is unspecified. Enclosing the full parameter expansion string in double-quotes does not cause the following four varieties of pattern characters to be quoted, whereas quoting characters within the braces has this effect.

${parameter%word}

Remove Smallest Suffix Pattern. The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the smallest portion of the suffix matched by the pattern deleted.

${parameter%%word}

Remove Largest Suffix Pattern. The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the largest portion of the suffix matched by the pattern deleted.

${parameter#word}

Remove Smallest Prefix Pattern. The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the smallest portion of the prefix matched by the pattern deleted.

${parameter##word}

Remove Largest Prefix Pattern. The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the largest portion of the prefix matched by the pattern deleted.
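
These operators are handy for slicing file names. A sketch, together with the string-length form (the path is made up):

```shell
path=/home/user/archive.tar.gz
echo "${#path}"      # 25: length of the value in characters
echo "${path%.*}"    # /home/user/archive.tar   (smallest suffix removed)
echo "${path%%.*}"   # /home/user/archive       (largest suffix removed)
echo "${path#*/}"    # home/user/archive.tar.gz (smallest prefix removed)
echo "${path##*/}"   # archive.tar.gz           (largest prefix: like basename)
```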

Command Substitution

Command substitution allows the output of a command to be substituted in place of the command name itself. Command substitution occurs when the command is enclosed as follows:

$(command)

or the ``backquoted'' version:

`command`

The shell expands the command substitution by executing command in a subshell environment and replacing the command substitution with the standard output of the command, removing sequences of one or more <newline>s at the end of the substitution. (Embedded <newline>s before the end of the output are not removed; however, during field splitting, they may be translated into <space>s, depending on the value of IFS and quoting that is in effect.) 
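
A brief sketch; note that the $() form nests without the escaping that backquotes would require:

```shell
inner=$(echo "today is $(echo Wednesday)")   # $() nests cleanly
echo "$inner"                                # today is Wednesday
legacy=`echo hello`                          # older backquote form
echo "$legacy"                               # hello
stripped=$(printf 'line\n\n\n')              # trailing newlines are removed
echo "$stripped"                             # line
```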

Arithmetic Expansion

Arithmetic expansion provides a mechanism for evaluating an arithmetic expression and substituting its value. The format for arithmetic expansion is as follows:

$((expression))

The expression is treated as if it were in double-quotes, except that a double-quote inside the expression is not treated specially. The shell expands all tokens in the expression for parameter expansion, command substitution, and quote removal.

Next, the shell treats this as an arithmetic expression and substitutes the value of the expression. 
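
For example:

```shell
x=7
echo $((x * 3 + 1))      # 22
echo $((x % 2))          # 1: remainder of integer division
echo $(( (x + 1) / 2 ))  # 4: arithmetic is integer-only
```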

White Space Splitting (Field Splitting)

After parameter expansion, command substitution, and arithmetic expansion, the shell scans the results of expansions and substitutions that did not occur in double-quotes for field splitting; multiple fields can result.

The shell treats each character of IFS as a delimiter and uses these delimiters to split the results of parameter expansion and command substitution into fields.
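
A sketch of IFS-driven splitting (remember to restore IFS afterwards):

```shell
record=one:two:three
IFS=:            # split unquoted expansions on colons
set -- $record   # the unquoted expansion is split into three fields
echo "$#"        # 3
echo "$2"        # two
unset IFS        # back to the default: space, tab, and newline
```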

Pathname Expansion (File Name Generation)

Unless the -f flag is set, file name generation is performed after word splitting is complete. Each word is viewed as a series of patterns, separated by slashes. The process of expansion replaces the word with the names of all existing files whose names can be formed by replacing each pattern with a string that matches the specified pattern. There are two restrictions on this: first, a pattern cannot match a string containing a slash, and second, a pattern cannot match a string starting with a period unless the first character of the pattern is a period. The next section describes the patterns used for both Pathname Expansion and the case(1) command. 

Shell Patterns

A pattern consists of normal characters, which match themselves, and meta-characters. The meta-characters are ``!'', ``*'', ``?'', and ``[''. These characters lose their special meanings if they are quoted. When command or variable substitution is performed and the dollar sign or back quotes are not double quoted, the value of the variable or the output of the command is scanned for these characters and they are turned into meta-characters.

An asterisk (``*'') matches any string of characters. A question mark matches any single character. A left bracket (``['') introduces a character class. The end of the character class is indicated by a ``]''; if the ``]'' is missing, then the ``['' matches a ``['' rather than introducing a character class. A character class matches any of the characters between the square brackets. A range of characters may be specified using a minus sign. The character class may be complemented by making an exclamation point the first character of the character class.

To include a ``]'' in a character class, make it the first character listed (after the ``!'', if any). To include a minus sign, make it the first or last character listed.
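
A sketch in a scratch directory (the file names are invented for the example):

```shell
dir=$(mktemp -d) && cd "$dir"
touch data.csv notes.txt .hidden
echo *           # data.csv notes.txt  (sorted; the leading-dot file is skipped)
echo *.csv       # data.csv
echo n?tes.txt   # notes.txt  (? matches exactly one character)
echo [!n]*       # data.csv   (the class matches any character except n)
```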

Builtins

This section lists the builtin commands which are builtin because they need to perform some operation that can't be performed by a separate process. In addition to these, there are several other commands that may be builtin for efficiency (e.g. echo(1)).

:

A null command that returns a 0 (true) exit value.

. file

The commands in the specified file are read and executed by the shell.

alias [name [=string ... ] ]

If name=string is specified, the shell defines the alias name with value string. If just name is specified, the value of the alias name is printed. With no arguments, the alias builtin prints the names and values of all defined aliases (see unalias).

bg [ job ] ...

Continue the specified jobs (or the current job if no jobs are given) in the background.

command command arg...

Execute the specified builtin command. (This is useful when you have a shell function with the same name as a builtin command.)

cd [directory ]

Switch to the specified directory (default $HOME). If an entry for CDPATH appears in the environment of the cd command or the shell variable CDPATH is set, and the directory name does not begin with a slash, then the directories listed in CDPATH will be searched for the specified directory. The format of CDPATH is the same as that of PATH. In an interactive shell, the cd command will print out the name of the directory that it actually switched to if this is different from the name that the user gave. These may be different either because the CDPATH mechanism was used or because a symbolic link was crossed.

eval string...

Concatenate all the arguments with spaces. Then re-parse and execute the command.
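
A minimal sketch of why eval is useful: assigning to a variable whose name is itself held in another variable.

```shell
key=greeting
eval "$key=hello"   # after re-parsing, the shell executes: greeting=hello
echo "$greeting"    # hello
```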

exec [command arg... ]

Unless command is omitted, the shell process is replaced with the specified program (which must be a real program, not a shell builtin or function). Any redirections on the exec command are marked as permanent, so that they are not undone when the exec command finishes.

exit [exitstatus ]

Terminate the shell process. If exitstatus is given it is used as the exit status of the shell; otherwise the exit status of the preceding command is used.

export name...

export -p

The specified names are exported so that they will appear in the environment of subsequent commands. The only way to un-export a variable is to unset it. The shell allows the value of a variable to be set at the same time it is exported by writing

export name=value

With no arguments the export command lists the names of all exported variables. With the -p option specified, the output will be formatted suitably for non-interactive use.

fc [-e editor ] [first [last ] ]

fc -l [-nr ] [first [last ] ]

fc -s [old=new ] [first ]

The fc builtin lists, or edits and re-executes, commands previously entered to an interactive shell.

-e editor

Use the editor named by editor to edit the commands. The editor string is a command name, subject to search via the PATH variable. The value in the FCEDIT variable is used as a default when -e is not specified. If FCEDIT is null or unset, the value of the EDITOR variable is used. If EDITOR is null or unset, ed(1) is used as the editor.

-l (ell)

List the commands rather than invoking an editor on them. The commands are written in the sequence indicated by the first and last operands, as affected by -r, with each command preceded by the command number.

-n

Suppress command numbers when listing with -l.

-r

Reverse the order of the commands listed (with -l) or edited (with neither -l nor -s).

-s

Re-execute the command without invoking an editor.

first

last

Select the commands to list or edit. The number of previous commands that can be accessed is determined by the value of the HISTSIZE variable. The value of first or last, or both, is one of the following:

[+]number

A positive number representing a command number; command numbers can be displayed with the -l option.

-number

A negative decimal number representing the command that was executed number of commands previously. For example, -1 is the immediately previous command.

string

A string indicating the most recently entered command that begins with that string. If the old=new operand is not also specified with -s, the string form of the first operand cannot contain an embedded equal sign.

The following environment variables affect the execution of fc:

FCEDIT

Name of the editor to use.

HISTSIZE

The number of previous commands that are accessible.

fg [job ]

Move the specified job or the current job to the foreground.

getopts optstring var

The POSIX getopts command, not to be confused with the Bell Labs-derived getopt(1).

The first argument should be a series of letters, each of which may be optionally followed by a colon to indicate that the option requires an argument. The variable specified is set to the parsed option.

The getopts command supersedes the older getopt(1) utility because of the latter's poor handling of arguments containing whitespace.

The getopts builtin may be used to obtain options and their arguments from a list of parameters. When invoked, getopts places the value of the next option from the option string in the list in the shell variable specified by var, and its index in the shell variable OPTIND. When the shell is invoked, OPTIND is initialized to 1. For each option that requires an argument, the getopts builtin will place it in the shell variable OPTARG. If an option is not allowed for in the optstring, then OPTARG will be unset.

optstring is a string of recognized option letters. If a letter is followed by a colon, the option is expected to have an argument which may or may not be separated from it by white space. If an option character is not found where expected, getopts will set the variable var to a ``?''; getopts will then unset OPTARG and write output to standard error. By specifying a colon as the first character of optstring, all errors will be ignored.

A nonzero value is returned when the last option is reached. If there are no remaining arguments, getopts will set var to the special option ``--''; otherwise, it will set var to ``?''.

The following code fragment shows how one might process the arguments for a command that can take the options [a] and [b] and the option [c] which requires an argument.

while getopts abc: f
do
        case $f in
        a | b)  flag=$f;;
        c)      carg=$OPTARG;;
        \?)     echo $USAGE; exit 1;;
        esac
done
shift `expr $OPTIND - 1`

This code will accept any of the following as equivalent:

cmd -acarg file file
cmd -a -c arg file file
cmd -carg -a file file
cmd -a -carg -- file file

hash -rv command...

The shell maintains a hash table which remembers the locations of commands. With no arguments whatsoever, the hash command prints out the contents of this table. Entries which have not been looked at since the last cd command are marked with an asterisk; it is possible for these entries to be invalid.

With arguments, the hash command removes the specified commands from the hash table (unless they are functions) and then locates them. With the -v option, hash prints the locations of the commands as it finds them. The -r option causes the hash command to delete all the entries in the hash table except for functions.

jobid [job ]

Print the process IDs of the processes in the job. If the job argument is omitted, the current job is used.

jobs

This command lists out all the background processes which are children of the current shell process.

pwd

Print the current directory. The builtin command may differ from the program of the same name because the builtin command remembers what the current directory is rather than recomputing it each time. This makes it faster. However, if the current directory is renamed, the builtin version of pwd will continue to print the old name for the directory.

read [-p prompt ] [-r ] variable...

The prompt is printed if the -p option is specified and the standard input is a terminal. Then a line is read from the standard input. The trailing newline is deleted from the line and the line is split as described in the section on word splitting above, and the pieces are assigned to the variables in order. At least one variable must be specified. If there are more pieces than variables, the remaining pieces (along with the characters in IFS that separated them) are assigned to the last variable. If there are more variables than pieces, the remaining variables are assigned the null string. The read builtin will indicate success unless EOF is encountered on input, in which case failure is returned.

By default, unless the -r option is specified, the backslash ``\'' acts as an escape character, causing the following character to be treated literally. If a backslash is followed by a newline, the backslash and the newline will be deleted.
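
A sketch of the splitting behavior, feeding read from a here-document:

```shell
read -r first rest <<EOF
alpha beta gamma
EOF
echo "$first"   # alpha
echo "$rest"    # beta gamma  (surplus fields collect in the last variable)
```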

readonly name...

readonly -p

The specified names are marked as read only, so that they cannot be subsequently modified or unset. The shell allows the value of a variable to be set at the same time it is marked read only by writing

readonly name=value

With no arguments the readonly command lists the names of all read only variables. With the -p option specified the output will be formatted suitably for non-interactive use.

set [{ -options | +options | -- }] arg...

The set command performs three different functions.

With no arguments, it lists the values of all shell variables.

If options are given, it sets the specified option flags or clears them, as described in the section called Argument List Processing.

The third use of the set command is to set the values of the shell's positional parameters to the specified args. To change the positional parameters without changing any options, use ``--'' as the first argument to set. If no args are present, the set command will clear all the positional parameters (equivalent to executing ``shift $#'').

setvar variable value

Assigns value to variable. (In general it is better to write variable=value rather than using setvar; setvar is intended to be used in functions that assign values to variables whose names are passed as parameters.)

shift [n ]

Shift the positional parameters n times. A shift sets the value of $1 to the value of $2, the value of $2 to the value of $3, and so on, decreasing the value of $# by one. If n is greater than the number of positional parameters, shift will issue an error message and exit with return status 2.
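
For example:

```shell
set -- one two three   # $1=one $2=two $3=three, $#=3
shift
echo "$1 $#"           # two 2
shift 2                # shifting by more than $# would be an error
echo "$#"              # 0
```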

times

Print the accumulated user and system times for the shell and for processes run from the shell. The return status is 0.

trap action signal...

Cause the shell to parse and execute action when any of the specified signals are received. The signals are specified by signal number. If signal is 0, the action is executed when the shell exits. The action may be null or ``-''; the former causes the specified signal to be ignored and the latter causes the default action to be taken. When the shell forks off a subshell, it resets trapped (but not ignored) signals to the default action. The trap command has no effect on signals that were ignored on entry to the shell.
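
A sketch using signal 0, run in a child shell so the exit-time action is easy to observe:

```shell
out=$(sh -c 'trap "echo cleanup" 0; echo working')
echo "$out"   # two lines: working, then cleanup printed as the shell exits
```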

type [name ... ]

Interpret each name as a command and print the resolution of the command search. Possible resolutions are: shell keyword, alias, shell builtin, command, tracked alias and not found. For aliases the alias expansion is printed; for commands and tracked aliases the complete pathname of the command is printed.

ulimit [-H | -S] [-a | -tfdscmlpn [value]]

Inquire about or set the hard or soft limits on processes or set new limits. The choice between hard limit (which no process is allowed to violate, and which may not be raised once it has been lowered) and soft limit (which causes processes to be signaled but not necessarily killed, and which may be raised) is made with these flags:

-H

set or inquire about hard limits

-S

set or inquire about soft limits. If neither -H nor -S is specified, the soft limit is displayed or both limits are set. If both are specified, the last one wins.

The limit to be interrogated or set, then, is chosen by specifying any one of these flags:

-a

show all the current limits

-t

show or set the limit on CPU time (in seconds)

-f

show or set the limit on the largest file that can be created (in 512-byte blocks)

-d

show or set the limit on the data segment size of a process (in kilobytes)

-s

show or set the limit on the stack size of a process (in kilobytes)

-c

show or set the limit on the largest core dump size that can be produced (in 512-byte blocks)

-m

show or set the limit on the total physical memory that can be in use by a process (in kilobytes)

-l

show or set the limit on how much memory a process can lock with mlock(2) (in kilobytes)

-p

show or set the limit on the number of processes this user can have at one time

-n

show or set the limit on the number of files a process can have open at once

If none of these is specified, it is the limit on file size that is shown or set. If value is specified, the limit is set to that number; otherwise the current limit is displayed.

Limits of an arbitrary process can be displayed or set using the sysctl(8) utility.

umask [mask ]

Set the value of umask (see umask(2)) to the specified octal value. If the argument is omitted, the umask value is printed.

unalias [-a ] [name ]

If name is specified, the shell removes that alias. If -a is specified, all aliases are removed.

unset name...

The specified variables and functions are unset and unexported. If a given name corresponds to both a variable and a function, both the variable and the function are unset.

wait [job ]

Wait for the specified job to complete and return the exit status of the last process in the job. If the argument is omitted, wait for all jobs to complete and return an exit status of zero.

Command Line Editing

When sh is being used interactively from a terminal, the current command and the command history (see fc in Builtins) can be edited using vi-mode command-line editing. This mode uses commands similar to a subset of those described in the vi man page. The command set -o vi enables vi-mode editing and places sh into vi insert mode. With vi-mode enabled, sh can be switched between insert mode and command mode. The editor is not described in full here, but will be in a later document. Typing <ESC> switches to vi command mode, and pressing <return> while in command mode passes the line to the shell.

Important: Use the man command (% man) to see how a command is used on your particular computer.

Friday, 26 January 2018

An introduction to GRUB2 configuration for your Linux machine


GRUB


GRUB stands for GRand Unified Bootloader. Its function is to take over from BIOS at boot time, load itself, load the Linux kernel into memory, and then turn over execution to the kernel. Once the kernel takes over, GRUB has done its job and it is no longer needed.

GRUB supports multiple Linux kernels and allows the user to select between them at boot time using a menu. I have found this to be a very useful tool because there have been many instances in which I encountered problems with an application or system service that fails with a particular kernel version. Many times, booting to an older kernel can circumvent such issues. By default, three kernels are kept (the newest and two previous) when yum or dnf is used to perform upgrades. The number of kernels to be kept before the package manager erases them is configurable in the /etc/dnf/dnf.conf or /etc/yum.conf files. I usually change the installonly_limit value to 9 to retain a total of nine kernels. This has come in handy on a couple of occasions when I had to revert to a kernel that was several versions down-level.
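
That change is a one-line edit. A sketch of the relevant part of /etc/dnf/dnf.conf (the surrounding keys vary from system to system):

```
[main]
installonly_limit=9
```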

GRUB menu


The function of the GRUB menu is to allow the user to select one of the installed kernels to boot in the case where the default kernel is not the desired one. Using the up and down arrow keys allows you to select the desired kernel and pressing the Enter key continues the boot process using the selected kernel.

The GRUB menu also provides a timeout so that, if the user does not make any other selection, GRUB will continue to boot with the default kernel without user intervention. Pressing any key on the keyboard except the Enter key terminates the countdown timer which is displayed on the console. Pressing the Enter key immediately continues the boot process with either the default kernel or an optionally selected one.

The GRUB menu also provides a "rescue" kernel, for use when troubleshooting or when the regular kernels fail to complete the boot process for some reason. Unfortunately, this rescue kernel does not boot to rescue mode. More on this later in this article.

The grub.cfg file


The grub.cfg file is the GRUB configuration file. It is generated by the grub2-mkconfig program using a set of primary configuration files and the grub default file as a source for user configuration specifications. The /boot/grub2/grub.cfg file is first generated during Linux installation and regenerated when a new kernel is installed.

The grub.cfg file contains Bash-like code and a list of installed kernels in an array ordered by sequence of installation. For example, if you have four installed kernels, the most recent kernel will be at index 0, the previous kernel will be at index 1, and the oldest will be at index 3. If you have access to a grub.cfg file, you should look at it to get a feel for what one looks like. The grub.cfg file is just too large to be included in this article.

GRUB configuration files


The main set of configuration files for grub.cfg is located in the /etc/grub.d directory. Each of the files in that directory contains GRUB code that is collected into the final grub.cfg file. The numbering scheme used in the names of these configuration files is designed to provide ordering so that the final grub.cfg file is assembled into the correct sequence. Each of these files has a comment to denote the beginning and end of the section, and those comments are also part of the final grub.cfg file so that it is possible to see from which file each section is generated. The delimiting comments look like this:

### BEGIN /etc/grub.d/10_linux ###

### END /etc/grub.d/10_linux ###

These files should not be modified unless you are a GRUB expert and understand what the changes will do. Even then you should always keep a backup copy of the original, working grub.cfg file. The files 40_custom and 41_custom, specifically, are intended for user modifications to the GRUB configuration. You should still be aware of the consequences of any changes you make to these files and maintain a backup of the original grub.cfg file.


You can also add your own files to the /etc/grub.d directory. One reason for doing that might be to add a menu line for a non-Linux operating system. Just be sure to follow the naming convention to ensure that the additional menu item is added either immediately before or after the 10_linux entry in the configuration file.

GRUB defaults file


Configuration of the original GRUB was fairly simple and straightforward. I would just modify /boot/grub/grub.conf and be good to go. I could still modify GRUB2 by changing /boot/grub2/grub.cfg, but the new version is considerably more complex than the original GRUB. In addition, grub.cfg may be overwritten when a new kernel is installed, so any changes may disappear. However, the GNU.org GRUB Manual does discuss direct creation and modification of /boot/grub2/grub.cfg.

Changing the configuration for GRUB2 is fairly easy once you actually figure out how to do it. I only discovered this while researching GRUB2 for a previous article. The secret formula is in the /etc/default directory, with a file called, naturally enough, grub, which is then used in concert with a simple terminal command. The /etc/default directory contains configuration files for a few programs such as Google Chrome, useradd, and grub.

The /etc/default/grub file is very simple. The grub defaults file has a number of valid key/value pairs listed already. You can simply change the values of existing keys or add other keys that are not already in the file. Listing 1, below, shows an unmodified /etc/default/grub file.

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora_fedora25vm/root rd.lvm.lv=fedora_fedora25vm/swap rd.lvm.lv=fedora_fedora25vm/usr rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

Listing 1: An original grub default file for Fedora 25.

Let's look at what each of these keys means as well as some that don't appear in the grub default file.

◉ GRUB_TIMEOUT The value of this key determines the length of time that the GRUB selection menu is displayed. GRUB offers the capability to keep multiple kernels installed simultaneously and choose between them at boot time using the GRUB menu. The default value for this key is 5 seconds, but I usually change that to 10 seconds to allow more time to view the choices and make a selection.

◉ GRUB_DISTRIBUTOR This key defines a sed expression that extracts the distribution release number from the /etc/system-release file. This information is used to generate the text names for each kernel release that appear in the GRUB menu, such as "Fedora". Due to variations in the structure of the data in the system-release file between distributions, this sed expression may be different on your system.

◉ GRUB_DEFAULT Determines which kernel is booted by default. The value "saved" means the saved default, which is normally the most recent kernel. The value can instead be a number representing an index into the list of kernels in grub.cfg; an index such as 3 always loads the fourth kernel in the list, so after a new kernel is installed the same index refers to a different kernel. The only way to ensure that a specific kernel release is booted is to set the value of GRUB_DEFAULT to the full name of the desired kernel, like 4.8.13-300.fc25.x86_64.

◉ GRUB_SAVEDEFAULT Normally, this option is not specified in the grub defaults file. In normal operation, when a different kernel is selected for boot, that kernel is booted only that one time and the default kernel is not changed. When this option is set to "true" and used with GRUB_DEFAULT=saved, selecting a different kernel for boot saves that kernel as the new default.

◉ GRUB_DISABLE_SUBMENU Some people may wish to create a hierarchical menu structure of kernels for the GRUB menu screen. This key, along with some additional configuration of the kernel stanzas in grub.cfg, allows creating such a hierarchy. For example, one might have a main menu with "production" and "test" sub-menus, where each sub-menu would contain the appropriate kernels. Setting this key to "false" enables the use of sub-menus.

◉ GRUB_TERMINAL_OUTPUT In some environments it may be desirable or necessary to redirect output to a different display console or terminal. The default is to send output to the default terminal, usually the "console" which equates to the standard display on an Intel class PC. Another useful option is to specify "serial" in a data center or lab environment in which serial terminals or Integrated Lights Out (ILO) terminal connections are in use.

◉ GRUB_TERMINAL_INPUT As with GRUB_TERMINAL_OUTPUT, it may be desirable or necessary to redirect input from a serial terminal or ILO device rather than the standard keyboard input.

◉ GRUB_CMDLINE_LINUX This key contains the command line arguments that will be passed to the kernel at boot time. Note that these arguments will be added to the kernel line of grub.cfg for all installed kernels. This means that all installed kernels will have the same arguments when booted. I usually remove the "rhgb" and "quiet" arguments so that I can see all of the very informative messages output by the kernel and systemd during the boot and startup.

◉ GRUB_DISABLE_RECOVERY When the value of this key is set to "false," a recovery entry is created in the GRUB menu for every installed kernel. When set to "true" no recovery entries are created. Regardless of this setting, the last kernel entry is always a "rescue" option. However, I encountered a problem with the rescue option, which I'll talk more about below.
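
Putting the suggestions above together, a minimal set of edits to /etc/default/grub might look like this (values are examples; leave the distributor and kernel command-line entries your distribution generated in place):

```
GRUB_TIMEOUT=10
GRUB_DISABLE_RECOVERY="false"
```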

Generate grub.cfg


After completing the desired configuration it is necessary to generate the /boot/grub2/grub.cfg file. This is accomplished with the following command.

grub2-mkconfig > /boot/grub2/grub.cfg

This command takes the configuration files located in /etc/grub.d in sequence to build the grub.cfg file, and uses the contents of the grub defaults file to modify the output to achieve the final desired configuration. The grub2-mkconfig command attempts to locate all of the installed kernels and creates an entry for each in the 10_linux section of the grub.cfg file. It also creates a "rescue" entry to provide a method for recovering from significant problems that prevent Linux from booting.

It is strongly recommended that you do not edit the grub.cfg file manually because any direct modifications to the file will be overwritten the next time a new kernel is installed or grub2-mkconfig is run manually.

Issues


I encountered one problem with GRUB2 that could have serious consequences if you are not aware of it. The rescue kernel does not boot; instead, one of the other kernels boots, which I found to be the kernel at index 1 in the list, i.e., the second kernel in the list. Additional testing showed that this problem occurred whether using the original grub.cfg configuration file or one that I generated. I have tried this on both virtual and real hardware and the problem is the same on each. I only tried this with Fedora 25 so it may not be an issue with other Fedora releases.

Note that the "recovery" kernel entry that is generated from the "rescue" kernel does work and boots to a maintenance mode login.

I recommend changing GRUB_DISABLE_RECOVERY to "false" in the grub defaults file, and generating your own grub.cfg. This will generate usable recovery entries in the GRUB menu for each of the installed kernels. These recovery configurations work as expected and boot to runlevel 1—according to the runlevel command—at a command-line prompt that requests a password to enter maintenance mode. You could also press Ctrl-D to continue a normal boot to the default runlevel.

Conclusions


GRUB is the first step after BIOS in the sequence of events that boot a Linux computer to a usable state. Understanding how to configure GRUB is important to be able to recover from or to circumvent various types of problems.

I have had to boot to recovery or rescue mode many times over the years to resolve many types of problems. Some of those problems were actual boot problems due to things like improper entries in /etc/fstab or other configuration files, and others were due to problems with application or system software that was incompatible with the newest kernel. Hardware compatibility issues might also prevent a specific kernel from booting.

Wednesday, 24 January 2018

Example Uses Of The Linux grep Command


Introduction


The Linux grep command is used as a method for filtering input.

grep stands for Global Regular Expression Print (from the ed command g/re/p) and therefore, in order to use it effectively, you should have some knowledge of regular expressions.

In this article, I am going to show you a number of examples which will help you understand the grep command.

01. How To Search For A String In A File Using GREP



Imagine you have a text file called books with the following children's book titles:

◈ Robin Hood
◈ Little Red Riding Hood
◈ Peter Pan
◈ Goldilocks And The Three Bears
◈ Snow White And The Seven Dwarfs
◈ Pinocchio
◈ The Cat In The Hat
◈ The Three Little Pigs
◈ The Gruffalo
◈ Charlie And The Chocolate Factory

To find all the books with the word "The" in the title you would use the following syntax:

grep The books

The following results will be returned:

◈ Goldilocks And The Three Bears
◈ Snow White And The Seven Dwarfs
◈ The Cat In The Hat
◈ The Three Little Pigs
◈ The Gruffalo
◈ Charlie And The Chocolate Factory

In each case, the word "The" will be highlighted.

Note that the search is case sensitive so if one of the titles had "the" instead of "The" then it would not have been returned.

To ignore the case you can add the following switch:

grep --ignore-case the books

You can also use the shorter -i switch as follows:

grep -i the books
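The case-sensitivity behaviour can be demonstrated with a self-contained sketch, using a shortened, hypothetical version of the books file:

```shell
# Create a throwaway copy of the books file in a temporary directory
cd "$(mktemp -d)"
printf '%s\n' "Robin Hood" "Peter Pan" "the Gruffalo" > books

# Case-sensitive search: no line contains "The" with a capital T
matches=$(grep -c The books)

# Case-insensitive search: "the Gruffalo" now matches
imatches=$(grep -ci the books)

echo "case-sensitive: $matches, ignore-case: $imatches"
```

Here the case-sensitive count is 0 and the case-insensitive count is 1.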

02. Search For A String In A File Using Wildcards


The grep command is very powerful. You can use a multitude of pattern matching techniques to filter results.

In this example, I will show you how to search for a string in a file using wildcards.

Imagine you have a file called places with the following Scottish place names:

aberdeen
aberystwyth
aberlour
inverurie
inverness
newburgh
new deer
new galloway
glasgow
edinburgh

If you want to find all the places with inver in the name use the following syntax:

grep inver places

Be careful with the asterisk (*): in a regular expression it is not a shell-style wildcard. It stands for zero or more of the preceding character, so a pattern such as inver* would match inve, inver and inverness alike, and it should be quoted ('inver*') to stop the shell expanding it as a filename pattern.

Another wildcard you can use is the period (.). You can use this to match a single letter.

grep 'inver.r' places

The above command would match names such as inverurie and inverary but wouldn't match invereerie because the single period allows only one character between the two r's.

The period wildcard is useful but it can cause problems if you have a period as part of the text you are searching for.

Imagine a file called domainnames containing a list of domain names. To find all the lines mentioning about you could just search using the following syntax:

grep about domainnames

The above command would, however, also match a name like the following:

◈ everydaylinuxuser.com/about.html

You could, therefore, try narrowing the pattern:

grep about.com domainnames

This would work ok unless there was a domain with the following name:

aboutycom.com

To really search for the term about.com you would need to escape the dot as follows:

grep 'about\.com' domainnames

The final wildcard to show you is the question mark, which stands for zero or one of the preceding character. Note that it is part of the extended regular expression syntax, so it requires the -E switch.

For example:

grep -E 'a?ber' places

The above command would return aberdeen and aberystwyth, and would also match a name such as berwick.
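The wildcard examples can be tried as a self-contained sketch, using a shortened, hypothetical places file. Note that the patterns are quoted so the shell does not expand them as filename wildcards, and that ? requires extended regular expressions (-E):

```shell
# Build a small places file in a temporary directory
cd "$(mktemp -d)"
printf '%s\n' aberdeen inverurie inverness berwick > places

# . matches exactly one character: only inverurie fits inver.r
dotmatch=$(grep 'inver.r' places)

# ? (zero or one of the preceding character) needs extended regexes: -E
# "a?ber" matches aberdeen (with the a) and berwick (without it)
optional=$(grep -Ec 'a?ber' places)

echo "$dotmatch / $optional"
```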

03. Search For Strings At The Beginning And End Of Line Using grep


The caret (^) and the dollar ($) symbol allow you to search for patterns at the beginning and end of lines.

Imagine you have a file called teams with the following team names:

◈ Blackpool
◈ Liverpool
◈ Manchester City
◈ Leicester City
◈ Manchester United
◈ Newcastle United
◈ FC United Of Manchester

If you wanted to find all the teams that began with Manchester you would use the following syntax:

grep ^Manchester teams

The above command would return Manchester City and Manchester United but not FC United Of Manchester.

Alternatively you can find all the teams ending with United using the following syntax:

grep United$ teams

The above command would return Manchester United and Newcastle United but not FC United Of Manchester.
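A self-contained run of the anchor examples, using a shortened version of the teams file:

```shell
# Build the sample teams file in a temporary directory
cd "$(mktemp -d)"
printf '%s\n' "Manchester City" "Manchester United" \
    "Newcastle United" "FC United Of Manchester" > teams

# ^ anchors the pattern to the start of the line
starts=$(grep -c '^Manchester' teams)

# $ anchors the pattern to the end of the line
ends=$(grep -c 'United$' teams)

echo "$starts $ends"
```

Both counts are 2; "FC United Of Manchester" matches neither anchor.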

04. Counting The Number Of Matches Using grep


If you don't want to return the actual lines that match a pattern using grep but you just want to know how many there are you can use the following syntax:

grep -c pattern inputfile

If the pattern was matched twice then the number 2 would be returned.
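As a minimal sketch using a hypothetical fruit file, note that -c counts matching lines rather than individual occurrences:

```shell
cd "$(mktemp -d)"
printf '%s\n' "red apple" "green pear" "red red cherry" > fruit

# -c counts matching lines, not matches:
# "red red cherry" still only counts once
count=$(grep -c red fruit)
echo "$count"
```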

05. Finding All The Terms That Don't Match using grep


Imagine you have a list of place names with the countries listed as follows:

◈ aberdeen scotland
◈ glasgow scotland
◈ liverpool england
◈ colwyn bay
◈ london england

You may have noticed that colwyn bay has no country associated with it.

To search for all the places with a country you could use the following syntax:

grep land$ places

The results returned would be all the places except for colwyn bay.

This obviously only works for places which end in land (hardly scientific). 

You can invert the selection using the following syntax:

grep -v land$ places

This would find all the places that didn't end with land.
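A minimal sketch of inverting a match with -v, using a shortened version of the places list:

```shell
cd "$(mktemp -d)"
printf '%s\n' "aberdeen scotland" "liverpool england" "colwyn bay" > places

# -v inverts the match: print the lines that do NOT end in land
nocountry=$(grep -v 'land$' places)
echo "$nocountry"
```

Only "colwyn bay" is printed.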

06. How To Find Empty Lines In Files Using grep


Imagine you have an input file used by a third-party application that stops reading the file when it finds an empty line, as follows:

◈ aberdeen scotland
◈ inverness scotland
◈ liverpool england
◈ colwyn bay wales

When the application gets to the line after liverpool it will stop reading, meaning colwyn bay is missed entirely.

You can use grep to search for blank lines with the following syntax:

grep ^$ places

Unfortunately this isn't particularly useful because it just returns the blank lines.

You could of course get a count of the number of blank lines as a check to see if the file is valid as follows:

grep -c ^$ places

It would however be more useful to know the line numbers that have a blank line so that you can replace them. You can do that with the following command:

grep -n ^$ places
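A self-contained sketch of the blank-line check, using a tiny hypothetical places file:

```shell
cd "$(mktemp -d)"
# Deliberately insert a blank line between two entries
printf 'aberdeen scotland\n\ncolwyn bay wales\n' > places

# -n prefixes each match with its line number; the blank line is line 2
blankat=$(grep -n '^$' places)
echo "$blankat"
```

The output is "2:", i.e., line 2 followed by the (empty) matched text.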

07. How To Search For Strings Of Uppercase Or Lowercase Characters Using grep


Using grep you can determine which lines in a file have uppercase characters using the following syntax:

grep '[A-Z]' filename

The square brackets [] let you determine the range of characters. In the above example it matches any character which is between A and Z.

Therefore to match lowercase characters you can use the following syntax:

grep '[a-z]' filename

If you want to match only letters and not numerics or other symbols you can use the following syntax:

grep '[a-zA-Z]' filename

You can do the same with numbers as follows:

grep '[0-9]' filename
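A runnable sketch of the ranges above. One caveat worth knowing: character ranges are locale-dependent, so the C locale is forced here to get the ASCII behaviour described:

```shell
# Ranges such as [A-Z] are locale-dependent; the C locale gives
# the plain ASCII behaviour described above
export LC_ALL=C

cd "$(mktemp -d)"
printf '%s\n' "hello" "WORLD" "42" > mixed

upper=$(grep -c '[A-Z]' mixed)    # lines with an uppercase letter
digits=$(grep -c '[0-9]' mixed)   # lines with a digit
echo "$upper $digits"
```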

08. Looking For Repeating Patterns Using grep

You can use curly brackets {} to search for a repeating pattern.

Imagine you have a file with phone numbers as follows:

◈ 055-1234
◈ 055-4567
◈ 555-1545
◈ 444-0167
◈ 444-0854
◈ 4549-2234
◈ x44-1234

You know the first part of the number needs to be three digits and you want to find the lines that do not match this pattern.

From the previous example you know that [0-9] returns all numbers in a file.

In this instance we want the lines that start with three numbers followed by a hyphen (-). You can do that with the following syntax:

grep "^[0-9][0-9][0-9]-" numbers

As we know from previous examples the caret (^) means that the line must begin with the following pattern.

The [0-9] will search for any number between 0 and 9. As this is included three times it matches 3 numbers. Finally there is a hyphen to denote that a hyphen must follow the three numbers.

By using the curly brackets you can make the search smaller as follows:

grep "^[0-9]\{3\}-" numbers

The backslash escapes the curly brackets so that they work as part of the basic regular expression, but in essence what this is saying is [0-9]{3}, which means any number between 0 and 9 three times.

The curly brackets can also be used as follows:

{5,10}

{5,}

The {5,10} means that the character being searched for must be repeated at least 5 times but no more than 10 whereas the {5,} means that the character must be repeated at least 5 times but it can be more than that.
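The repetition pattern can be tried as follows, using a shortened version of the numbers file:

```shell
cd "$(mktemp -d)"
printf '%s\n' 055-1234 4549-2234 x44-1234 > numbers

# Exactly three digits at the start of the line, then a hyphen
good=$(grep -c '^[0-9]\{3\}-' numbers)

# -v gives the lines that do not fit the pattern
bad=$(grep -vc '^[0-9]\{3\}-' numbers)

echo "$good $bad"
```

Only 055-1234 fits; 4549-2234 has a fourth digit where the hyphen should be, and x44-1234 does not start with a digit.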

09. Using The Output From Other Commands Using grep


Thus far we have looked at pattern matching within individual files but grep can use the output from other commands as the input for pattern matching.

A great example of this is using the ps command which lists active processes.

For example run the following command:

ps -ef

All of the running processes on your system will be displayed.

You can use grep to search for a particular running process as follows:

ps -ef | grep firefox
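Since ps output varies from system to system, the sketch below pipes printf output through grep instead, to show the same filtering idea in a reproducible way:

```shell
# grep filters any command's output; here printf stands in for ps
# so the example produces the same result everywhere
procs=$(printf '%s\n' "1 bash" "2 firefox" "3 sshd" | grep firefox)
echo "$procs"
```

Only the "2 firefox" line survives the filter, just as only the firefox process lines would survive when piping from ps -ef.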