Saturday 29 February 2020

du command in Linux with examples

The du command, short for "disk usage", is used to estimate file space usage.


The du command can be used to track down the files and directories which are consuming an excessive amount of space on the hard disk drive.

Syntax :


du [OPTION]... [FILE]...
du [OPTION]... --files0-from=F

Examples :


du /home/xyz/test

Output:


44    /home/xyz/test/data
2012    /home/xyz/test/system design
24    /home/xyz/test/table/sample_table/tree
28    /home/xyz/test/table/sample_table
32    /home/xyz/test/table
100104    /home/xyz/test


Options :


du command, LPI Certifications, LPI Guides, LPI Learning, LPI Tutorials and Materials
-0, --null : end each output line with NUL rather than newline
-a, --all : write counts for all files, not just directories
--apparent-size : print apparent sizes rather than disk usage
-B, --block-size=SIZE : scale sizes by SIZE before printing on the console
-c, --total : produce a grand total
-d, --max-depth=N : print the total for a directory only if it is N or fewer levels below the command line argument
-h, --human-readable : print sizes in human readable format
-S, --separate-dirs : for directories, don't include the size of subdirectories
-s, --summarize : display only a total for each directory
--time : show the time of the last modification of any file or directory
--exclude=PATTERN : exclude files that match PATTERN

Command usage examples with options :


1. If we want to print sizes in human readable format (K, M, G), use the -h option


du -h /home/xyz/test

Output:

44K    /home/xyz/test/data
2.0M    /home/xyz/test/system design
24K    /home/xyz/test/table/sample_table/tree
28K    /home/xyz/test/table/sample_table
32K    /home/xyz/test/table
98M    /home/xyz/test

2. Use the -a option to print all files as well as directories.


du -a -h /home/xyz/test

Output:

This is partial output of the above command.

4.0K    /home/xyz/test/blah1-new
4.0K    /home/xyz/test/fbtest.py
8.0K    /home/xyz/test/data/4.txt
4.0K    /home/xyz/test/data/7.txt
4.0K    /home/xyz/test/data/1.txt
4.0K    /home/xyz/test/data/3.txt
4.0K    /home/xyz/test/data/6.txt
4.0K    /home/xyz/test/data/2.txt
4.0K    /home/xyz/test/data/8.txt
8.0K    /home/xyz/test/data/5.txt
44K    /home/xyz/test/data
4.0K    /home/xyz/test/notifier.py

3. Use the -c option to print a grand total


du -c -h /home/xyz/test

Output:

44K    /home/xyz/test/data
2.0M    /home/xyz/test/system design
24K    /home/xyz/test/table/sample_table/tree
28K    /home/xyz/test/table/sample_table
32K    /home/xyz/test/table
98M    /home/xyz/test
98M    total

4. To print sizes only down to a particular depth, use the -d option with the depth number.


du -d 1 /home/xyz/test

Output:

44    /home/xyz/test/data
2012    /home/xyz/test/system design
32    /home/xyz/test/table
100104    /home/xyz/test

Now try depth 2; you will see some additional directories:

du -d 2 /home/xyz/test

Output:

44    /home/xyz/test/data
2012    /home/xyz/test/system design
28    /home/xyz/test/table/sample_table
32    /home/xyz/test/table
100104    /home/xyz/test

5. Get a summary of a directory tree using the -s option


du -s /home/xyz/test

Output:

100104    /home/xyz/test

6. Get the timestamp of the last modification using the --time option


du --time -h /home/xyz/test

Output:

44K    2018-01-14 22:22    /home/xyz/test/data
2.0M    2017-12-24 23:06    /home/xyz/test/system design
24K    2017-12-30 10:20    /home/xyz/test/table/sample_table/tree
28K    2017-12-30 10:20    /home/xyz/test/table/sample_table
32K    2017-12-30 10:20    /home/xyz/test/table
98M    2018-02-02 17:32    /home/xyz/test
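
These options can be combined. As a sketch (assuming GNU du and a sort that supports the -h option), the following one-liner lists the first-level directories sorted by size, which is a quick way to find what is eating your disk:

du -h -d 1 /home/xyz/test | sort -h

The smallest entries are printed first, with the grand total for /home/xyz/test at the bottom.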

Read More: LPI Certifications

Thursday 27 February 2020

LPIC-1 Using SED


Sed is extremely powerful, and the tasks it can accomplish are limited only by your imagination. This small introduction should whet your appetite for sed, but is not intended to be complete or extensive.

As with many of the text commands we have looked at so far, sed can work as a filter or take its input from a file. Output is to the standard output stream. Sed loads lines from the input into the pattern space, applies sed editing commands to the contents of the pattern space, and then writes the pattern space to standard output. Sed might combine several lines in the pattern space, and it might write to a file, write only selected output, or not write at all.

Sed uses regular expression syntax to search for and replace text selectively in the pattern space as well as to control which lines of text should be operated on by sets of editing commands. Regular expressions are covered more fully in the tutorial on searching text files using regular expressions. A hold buffer provides temporary storage for text. The hold buffer might replace the pattern space, be added to the pattern space, or be exchanged with the pattern space. Sed has a limited set of commands, but these combined with regular expression syntax and the hold buffer make for some amazing capabilities. A set of sed commands is usually called a sed script.
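
As a small taste of the hold buffer (a hedged sketch, not part of the numbered listings below): the classic one-liner to print a file in reverse line order. The h command copies the pattern space to the hold buffer, G appends the hold buffer to the pattern space, and $p prints the accumulated result only on the last line.

ian@Z61t-u14:~/lpi103-2$ sed -n '1!G;h;$p' text1
3 banana
2 pear
1 apple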

Listing 1 shows three simple sed scripts. In the first one, we use the s (substitute) command to substitute an uppercase 'A' for a lowercase 'a' on each line. This example replaces only the first 'a', so in the second example, we add the 'g' (for global) flag to cause sed to change all occurrences. In the third script, we introduce the d (delete) command to delete a line. In our example, we use an address of 2 to indicate that only line 2 should be deleted. We separate commands using a semi-colon (;) and use the same global substitution that we used in the second script to replace 'a' with 'A'.

Listing 1. Beginning sed scripts

ian@Z61t-u14:~/lpi103-2$ sed 's/a/A/' text1
1 Apple
2 peAr
3 bAnana
ian@Z61t-u14:~/lpi103-2$ sed 's/a/A/g' text1
1 Apple
2 peAr
3 bAnAnA
ian@Z61t-u14:~/lpi103-2$ sed '2d;$s/a/A/g' text1
1 apple
3 bAnAnA

In addition to operating on individual lines, sed can operate on a range of lines. The beginning and end of the range are separated by a comma (,) and can be specified as a line number, a regular expression, or a dollar sign ($) for the end of file. Given an address or a range of addresses, you can group several commands between curly braces, { and }, to have these commands operate only on lines selected by the range. Listing 2 illustrates two ways of having our global substitution applied to only the last two lines of our file. It also illustrates the use of the -e option to add multiple commands to the script.

Listing 2. Sed addresses

ian@Z61t-u14:~/lpi103-2$ sed -e '2,${' -e 's/a/A/g' -e '}' text1
1 apple
2 peAr
3 bAnAnA
ian@Z61t-u14:~/lpi103-2$ sed -e '/pear/,/bana/{' -e 's/a/A/g' -e '}' text1
1 apple
2 peAr
3 bAnAnA

Sed scripts can also be stored in files. In fact, you probably want to do this for frequently used scripts. Remember earlier we used the tr command to change blanks in text1 to tabs. Let's now do that with a sed script stored in a file. We use the echo command to create the file. The results are shown in Listing 3.

Listing 3. A sed one-liner

ian@Z61t-u14:~/lpi103-2$ echo -e "s/ /\t/g">sedtab
ian@Z61t-u14:~/lpi103-2$ cat sedtab
s/ /    /g
ian@Z61t-u14:~/lpi103-2$ sed -f sedtab text1
1   apple
2   pear
3   banana

There are many handy sed one-liners like the one in Listing 3.

Our final sed example uses the = command to print line numbers and then filter the resulting output through sed again to mimic the effect of the nl command to number lines. The = command in sed prints the current line number followed by a newline character, so the output contains two lines for each input line. Listing 4 uses = to print line numbers, then uses the N command to read a second input line into the pattern space, and finally removes the newline character (\n) between the two lines in the pattern space to merge the two lines into a single line.

Listing 4. Numbering lines with sed

ian@Z61t-u14:~/lpi103-2$ sed '=' text2
1
9   plum
2
3   banana
3
10  apple
ian@Z61t-u14:~/lpi103-2$ sed '=' text2|sed 'N;s/\n//'
19  plum
23  banana
310 apple

Not quite what we wanted! What we would really like is to have our numbers aligned in a column with some space before the lines from the file. In Listing 5, we enter several lines of commands (note the > secondary prompt). Study the example and refer to the explanation below.

Listing 5. Numbering lines with sed - round two

ian@Z61t-u14:~/lpi103-2$ cat text1 text2 text1 text2>text6
ian@Z61t-u14:~/lpi103-2$ ht=$(echo -en "\t")
ian@Z61t-u14:~/lpi103-2$ sed '=' text6|sed "N
> s/^/      /
> s/^.*\(......\)\n/\1$ht/"
     1  1 apple
     2  2 pear
     3  3 banana
     4  9   plum
     5  3   banana
     6  10  apple
     7  1 apple
     8  2 pear
     9  3 banana
    10  9   plum
    11  3   banana
    12  10  apple

Here are the steps that we took:
  1. We first used cat to create a 12-line file from two copies each of our text1 and text2 files. There's no fun in formatting numbers in columns if we don't have differing numbers of digits.
  2. The bash shell uses the tab key for command completion, so it can be handy to have a captive tab character that you can use when you want a real tab. We use the echo command to accomplish this and save the character in the shell variable 'ht'.
  3. We create a stream that contains line numbers followed by data lines as we did before and filter it through a second copy of sed.
  4. We read a second line into the pattern space.
  5. We prefix our line number at the start of the pattern space (denoted by ^) with six blanks.
  6. We then substitute all of the pattern space up to and including the first newline with the six characters immediately before the newline plus a tab character. This aligns our line numbers in the first six columns of the output line. The original line from the text6 file follows the tab character. Note that the left part of the 's' command uses '\(' and '\)' to mark the characters that we want to use in the right part. In the right part, we reference the first such marked set (and only such set in this example) as \1. Note that our command is contained between double quotation marks (") so that substitution occurs for $ht.

Tuesday 25 February 2020

apt-get command in Linux with Examples

apt-get is a command-line tool for handling packages in Linux. Its main task is to retrieve information and packages from authenticated sources for the installation, upgrade and removal of packages along with their dependencies. Here APT stands for Advanced Package Tool.

Syntax:

apt-get [options] command

or

apt-get [options] install|remove pkg1 [pkg2 ...]

or

apt-get [options] source pkg1 [pkg2 ...]

Most Used Commands: Unless the -h option is used, you need to provide one of the commands below.

◉ update : This command is used to synchronize the package index files from their sources again. You need to perform an update before you upgrade or dist-upgrade.

apt-get update

◉ upgrade : This command is used to install the latest versions of the packages currently installed on the user's system from the sources enumerated in /etc/apt/sources.list. Installed packages for which newer versions are available are retrieved and upgraded. You need to perform an update before the upgrade, so that apt-get knows that new versions of packages are available.

apt-get upgrade

◉ dselect-upgrade : This is used along with the Debian packaging tool, dselect. It follows the changes made by dselect to the Status field of available packages, and performs any actions necessary to realize that state.

apt-get dselect-upgrade

◉ dist-upgrade : This command performs the function of upgrade, and also handles changing dependencies with new versions of packages. If necessary, the apt-get command will try to upgrade important packages at the expense of less important ones. It may also remove some packages in this process.

apt-get dist-upgrade

◉ install : This command is used to install or upgrade packages. It is followed by one or more package names the user wishes to install. All the dependencies of the desired packages will also be retrieved and installed. The user can also select the desired version by following the package name with an ‘equals’ and the desired version number. Also, the user can select a specific distribution by following the package name with a forward slash and the version or the archive name (e.g. ‘stable’, ‘testing’ or ‘unstable’). Both of these version selection methods have the potential to downgrade the packages, so must be used with care.

apt-get install [...PACKAGES]
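
For example, a hedged sketch of the two version selection methods (the package name and version number are purely illustrative):

apt-get install nginx=1.14.0-1
apt-get install nginx/stable

The first form installs a specific version of the package; the second installs the package from the named distribution.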

◉ remove : This is similar to install, with the difference being that it removes the packages instead of installing. It does not remove any configuration files created by the package.

apt-get remove [...PACKAGES]

◉ purge : This command removes the packages, and also removes any configuration files related to the packages.

apt-get purge [...PACKAGES]

◉ check : This command is used to update the package cache and check for broken dependencies.

apt-get check

◉ download : This command is used to download the given binary package into the current directory.

apt-get download [...PACKAGES]

◉ clean : This command is used to clear out the local repository of retrieved package files. It removes everything except the lock file from /var/cache/apt/archives/partial/ and /var/cache/apt/archives/.

apt-get clean

◉ autoremove : Packages that were automatically installed to satisfy the dependencies of other packages may no longer be needed once those packages are removed. The autoremove command removes such packages.

apt-get autoremove

Options:

◉ --no-install-recommends : By passing this option, the user lets apt-get know not to consider recommended packages as a dependency to install.

apt-get --no-install-recommends [...COMMAND]

◉ --install-suggests : By passing this option, the user lets apt-get know that it should consider suggested packages as dependencies to install.

apt-get --install-suggests  [...COMMAND]

◉ -d or --download-only : By passing this option, the user specifies that apt-get should only retrieve the packages, and not unpack or install them.

apt-get -d [...COMMAND]

◉ -f or --fix-broken : By passing this option, the user specifies that apt-get should attempt to correct a system with broken dependencies in place.

apt-get -f [...COMMAND]

◉ -m or --ignore-missing or --fix-missing : By passing this option, the user specifies that apt-get should ignore missing packages (packages that cannot be retrieved or fail the integrity check) and handle the result.

apt-get -m [...COMMAND]

◉ --no-download : By passing this option, the user disables downloading for apt-get, meaning that it should only use the .debs it has already downloaded.

apt-get --no-download [...COMMAND]

◉ -q or --quiet : When this option is specified, apt-get produces output which is suitable for logging.

apt-get -q [...COMMAND]

◉ -s or --simulate or --just-print or --dry-run or --recon or --no-act : With this option, apt-get takes no action; instead, it performs a simulation of the events that would occur based on the current system state, without changing the system.

apt-get -s [...COMMAND]

◉ -y or --yes or --assume-yes : During execution, apt-get may sometimes prompt the user for a yes/no decision. With this option, apt-get assumes 'yes' for all prompts and runs without any interaction.

apt-get -y [...COMMAND]

◉ --assume-no : With this option, apt-get assumes 'no' for all prompts.

apt-get --assume-no [...COMMAND]

◉ --no-show-upgraded : With this option, apt-get will not show the list of all packages that are to be upgraded.

apt-get --no-show-upgraded [...COMMAND]

◉ -V or --verbose-versions : With this option, apt-get will show full versions for upgraded and installed packages.

apt-get -V [...COMMAND]

◉ --show-progress : With this option, apt-get will show user-friendly progress in the terminal window when packages are being installed, removed or upgraded.

apt-get --show-progress [...COMMAND]

◉ -b or --compile or --build : With this option, apt-get will compile/build the source packages it downloads.

apt-get -b [...COMMAND]

◉ --no-upgrade : With this option, apt-get prevents packages from being upgraded if they are already installed.

apt-get --no-upgrade [...COMMAND]

◉ --only-upgrade : With this option, apt-get will only upgrade the packages which are already installed, and not install new packages.

apt-get --only-upgrade [...COMMAND]

◉ --reinstall : With this option, apt-get reinstalls the packages that are already installed, at their latest versions.

apt-get --reinstall [...COMMAND]

◉ --auto-remove or --autoremove : When using apt-get with the install or remove command, this option acts like running the autoremove command.

apt-get install/remove --autoremove [...PACKAGES]

◉ -h or --help : With this option, apt-get displays a short usage summary.

apt-get -h


◉ -v or --version : With this option, apt-get displays its current version number.

apt-get -v

Output:

xyz@pq: ~ $ apt-get  --version

(apt-get prints its version information here)

xyz@pq: ~ $

Note: The apt-get command will return 0 for successful executions, and decimal 100 in case of errors.
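
This makes apt-get convenient to use in scripts. A minimal sketch (the package name is illustrative):

apt-get -s install curl
echo $?   # prints 0 on success, 100 on error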

Thursday 20 February 2020

Unix Sed Command to Delete Lines in File - Top 10 Examples


Sed Command to Delete Lines: The sed command can be used to delete or remove specific lines that match a given pattern or that are at a particular position in a file. Here we will see how to delete lines using the sed command, with various examples.

The following file contains sample data which is used as the input file in all the examples:

> cat file
linux
unix
fedora
debian
ubuntu

Sed Command to Delete Lines - Based on Position in File


In the following examples, the sed command removes lines that are at a particular position in a file.

1. Delete first line or header line

The d command in sed is used to delete a line. The syntax for deleting a line is:

> sed 'Nd' file

Here N indicates the Nth line in the file. In the following example, the sed command removes the first line of the file.

> sed '1d' file
unix
fedora
debian
ubuntu

2. Delete last line or footer line or trailer line

The following sed command is used to remove the footer line in a file. The $ indicates the last line of a file.

> sed '$d' file
linux
unix
fedora
debian

3. Delete particular line

This is similar to the first example. The below sed command removes the second line in a file.

> sed '2d' file
linux
fedora
debian
ubuntu

4. Delete range of lines

The sed command can be used to delete a range of lines. The syntax is shown below:

> sed 'm,nd' file

Here m and n are the starting and ending line numbers. The sed command removes the lines from m through n in the file. The following sed command deletes the lines ranging from 2 to 4:

> sed '2,4d' file
linux
ubuntu

5. Delete lines other than the first line or header line

Use the negation operator (!) with the d command. The following sed command removes all the lines except the header line.

> sed '1!d' file
linux

6. Delete lines other than last line or footer line

> sed '$!d' file
ubuntu

7. Delete lines other than the specified range

> sed '2,4!d' file
unix
fedora
debian

Here the sed command removes all lines other than the 2nd, 3rd and 4th.

8. Delete first and last line

You can specify the list of lines you want to remove in the sed command with a semicolon as a delimiter.

> sed '1d;$d' file
unix
fedora
debian

9. Delete empty lines or blank lines

> sed '/^$/d' file

The ^$ pattern tells the sed command to delete empty lines. However, this command does not remove lines that contain only spaces.
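
If you also want to remove lines that contain only spaces or tabs, a slightly broader pattern can be used (a sketch; [[:space:]] matches any whitespace character):

> sed '/^[[:space:]]*$/d' file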

Sed Command to Delete Lines - Based on Pattern Match


In the following examples, the sed command deletes the lines in the file which match the given pattern.

10. Delete lines that begin with specified character

> sed '/^u/d' file
linux
fedora
debian

^ specifies the start of the line. The above sed command removes all the lines that start with the character 'u'.
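
Similarly, $ specifies the end of the line. As an extra sketch beyond the ten examples above, the following command removes all the lines that end with the character 'x':

> sed '/x$/d' file
fedora
debian
ubuntu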

Tuesday 18 February 2020

Exam 701: DevOps Tools Engineer


Businesses across the globe are increasingly implementing DevOps practices to optimize daily systems administration and software development tasks. As a result, businesses across industries are hiring IT professionals that can effectively apply DevOps to reduce delivery time and improve quality in the development of new software products.

To meet this growing need for qualified professionals, LPI developed the Linux Professional Institute DevOps Tools Engineer certification which verifies the skills needed to use the tools that enhance collaboration in workflows throughout system administration and software development.

In developing the Linux Professional Institute DevOps Tools Engineer certification, LPI reviewed the DevOps tools landscape and defined a set of essential skills when applying DevOps. As such, the certification exam focuses on the practical skills required to work successfully in a DevOps environment – focusing on the skills needed to use the most prominent DevOps tools. The result is a certification that covers the intersection between development and operations, making it relevant for all IT professionals working in the field of DevOps.

Current Version: 1.0 (Exam code 701-100)

Objectives: 701-100

Prerequisites: There are no prerequisites for this certification.

Requirements: Pass the Linux Professional Institute DevOps Tools Engineer exam. The 90-minute exam consists of 60 multiple choice and fill-in-the-blank questions.

Validity Period: 5 years

Languages: English, Japanese

Exam 701 Objectives


Topic 701: Software Engineering


701.1 Modern Software Development (weight: 6) 

Weight: 6

Description: Candidates should be able to design software solutions suitable for modern runtime environments. Candidates should understand how services handle data persistence, sessions, status information, transactions, concurrency, security, performance, availability, scaling, load balancing, messaging, monitoring and APIs. Furthermore, candidates should understand the implications of agile and DevOps on software development.

Key Knowledge Areas:

◈ Understand and design service based applications
◈ Understand common API concepts and standards
◈ Understand aspects of data storage, service status and session handling
◈ Design software to be run in containers
◈ Design software to be deployed to cloud services
◈ Awareness of risks in the migration and integration of monolithic legacy software
◈ Understand common application security risks and ways to mitigate them
◈ Understand the concept of agile software development
◈ Understand the concept of DevOps and its implications to software developers and operators

The following is a partial list of the used files, terms and utilities:

◈ REST, JSON
◈ Service Orientated Architectures (SOA)
◈ Microservices
◈ Immutable servers
◈ Loose coupling
◈ Cross site scripting, SQL injections, verbose error reports, API authentication, consistent enforcement of transport encryption
◈ CORS headers and CSRF tokens
◈ ACID properties and CAP theorem

701.2 Standard Components and Platforms for Software (weight: 2)

Weight: 2

Description: Candidates should understand services offered by common cloud platforms. They should be able to include these services in their application architectures and deployment toolchains and understand the required service configurations. OpenStack service components are used as a reference implementation.

Key Knowledge Areas:

◈ Features and concepts of object storage
◈ Features and concepts of relational and NoSQL databases
◈ Features and concepts of message brokers and message queues
◈ Features and concepts of big data services
◈ Features and concepts of application runtimes / PaaS
◈ Features and concepts of content delivery networks

The following is a partial list of the used files, terms and utilities:

◈ OpenStack Swift
◈ OpenStack Trove
◈ OpenStack Zaqar
◈ CloudFoundry
◈ OpenShift

701.3 Source Code Management (weight: 5)

Weight: 5

Description: Candidates should be able to use Git to manage and share source code. This includes creating and contributing to a repository as well as the usage of tags, branches and remote repositories. Furthermore, the candidate should be able to merge files and resolve merging conflicts.

Key Knowledge Areas:

◈ Understand Git concepts and repository structure
◈ Manage files within a Git repository
◈ Manage branches and tags
◈ Work with remote repositories and branches as well as submodules
◈ Merge files and branches
◈ Awareness of SVN and CVS, including concepts of centralized and distributed SCM solutions

The following is a partial list of the used files, terms and utilities:

◈ git
◈ .gitignore

701.4 Continuous Integration and Continuous Delivery (weight: 5)

Weight: 5

Description: Candidates should understand the principles and components of a continuous integration and continuous delivery pipeline. Candidates should be able to implement a CI/CD pipeline using Jenkins, including triggering the CI/CD pipeline, running unit, integration and acceptance tests, packaging software and handling the deployment of tested software artifacts. This objective covers the feature set of Jenkins version 2.0 or later.

Key Knowledge Areas:

◈ Understand the concepts of Continuous Integration and Continuous Delivery
◈ Understand the components of a CI/CD pipeline, including builds, unit, integration and acceptance tests, artifact management, delivery and deployment
◈ Understand deployment best practices
◈ Understand the architecture and features of Jenkins, including Jenkins Plugins, Jenkins API, notifications and distributed builds
◈ Define and run jobs in Jenkins, including parameter handling
◈ Fingerprinting, artifacts and artifact repositories
◈ Understand how Jenkins models continuous delivery pipelines and implement a declarative continuous delivery pipeline in Jenkins
◈ Awareness of possible authentication and authorization models
◈ Understanding of the Pipeline Plugin
◈ Understand the features of important Jenkins modules such as Copy Artifact Plugin, Fingerprint Plugin, Docker Pipeline, Docker Build and Publish plugin, Git Plugin, Credentials Plugin
◈ Awareness of Artifactory and Nexus

The following is a partial list of the used files, terms and utilities:

◈ Step, Node, Stage
◈ Jenkins SDL
◈ Jenkinsfile
◈ Declarative Pipeline
◈ Blue-green and canary deployment

Topic 702: Container Management


702.1 Container Usage (weight: 7)

Weight: 7

Description: Candidates should be able to build, share and operate Docker containers. This includes creating Dockerfiles, using a Docker registry, creating and interacting with containers as well as connecting containers to networks and storage volumes. This objective covers the feature set of Docker version 17.06 or later.

Key Knowledge Areas:

◈ Understand the Docker architecture
◈ Use existing Docker images from a Docker registry
◈ Create Dockerfiles and build images from Dockerfiles
◈ Upload images to a Docker registry
◈ Operate and access Docker containers
◈ Connect container to Docker networks
◈ Use Docker volumes for shared and persistent container storage

The following is a partial list of the used files, terms and utilities:

◈ docker
◈ Dockerfile
◈ .dockerignore
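
For orientation, a typical build-and-run sequence looks like the following hedged sketch (image, container, and registry names are made up):

docker build -t myapp:latest .                       # build an image from the Dockerfile in the current directory
docker run -d --name myapp -p 8080:80 myapp:latest   # start a container from the image
docker tag myapp:latest registry.example.com/myapp:latest
docker push registry.example.com/myapp:latest        # upload the image to a registry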

702.2 Container Deployment and Orchestration (weight: 5)

Weight: 5

Description: Candidates should be able to run and manage multiple containers that work together to provide a service. This includes the orchestration of Docker containers using Docker Compose in conjunction with an existing Docker Swarm cluster as well as using an existing Kubernetes cluster. This objective covers the feature sets of Docker Compose version 1.14 or later, Docker Swarm included in Docker 17.06 or later and Kubernetes 1.6 or later.

Key Knowledge Areas:

◈ Understand the application model of Docker Compose
◈ Create and run Docker Compose Files (version 3 or later)
◈ Understand the architecture and functionality of Docker Swarm mode
◈ Run containers in a Docker Swarm, including the definition of services, stacks and the usage of secrets
◈ Understand the architecture and application model of Kubernetes
◈ Define and manage a container-based application for Kubernetes, including the definition of Deployments, Services, ReplicaSets and Pods

The following is a partial list of the used files, terms and utilities:

◈ docker-compose
◈ docker
◈ kubectl
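
As a rough illustration of these utilities (stack and file names are made up):

docker-compose up -d                                # start the services defined in docker-compose.yml
docker stack deploy -c docker-compose.yml mystack   # deploy the same Compose file to a Swarm
kubectl get pods                                    # list the Pods in the current Kubernetes namespace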

702.3 Container Infrastructure (weight: 4)

Weight: 4

Description: Candidates should be able to set up a runtime environment for containers. This includes running containers on a local workstation as well as setting up a dedicated container host. Furthermore, candidates should be aware of other container infrastructures, storage, networking and container specific security aspects. This objective covers the feature set of Docker version 17.06 or later and Docker Machine 0.12 or later.

Key Knowledge Areas:

◈ Use Docker Machine to set up a Docker host
◈ Understand Docker networking concepts, including overlay networks
◈ Create and manage Docker networks
◈ Understand Docker storage concepts
◈ Create and manage Docker volumes
◈ Awareness of Flocker and flannel
◈ Understand the concepts of service discovery
◈ Basic feature knowledge of CoreOS Container Linux, rkt and etcd
◈ Understand security risks of container virtualization and container images and how to mitigate them

The following is a partial list of the used files, terms and utilities:

◈ docker-machine

Topic 703: Machine Deployment


703.1 Virtual Machine Deployment (weight: 4)

Weight: 4

Description: Candidates should be able to automate the deployment of a virtual machine with an operating system and a specific set of configuration files and software.

Key Knowledge Areas:

◈ Understand Vagrant architecture and concepts, including storage and networking
◈ Retrieve and use boxes from Atlas
◈ Create and run Vagrantfiles
◈ Access Vagrant virtual machines
◈ Share and synchronize folders between a Vagrant virtual machine and the host system
◈ Understand Vagrant provisioning, including File, Shell, Ansible and Docker
◈ Understand multi-machine setup

The following is a partial list of the used files, terms and utilities:

◈ vagrant
◈ Vagrantfile
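
A typical Vagrant session might look like this sketch (the box name is only an example):

vagrant init ubuntu/bionic64   # create a Vagrantfile referencing the box
vagrant up                     # download the box if needed, then create and provision the VM
vagrant ssh                    # log in to the running virtual machine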

703.2 Cloud Deployment (weight: 2)

Weight: 2

Description: Candidates should be able to configure IaaS cloud instances and adjust them to match their available hardware resources, specifically disk space and volumes. Additionally, candidates should be able to configure instances to allow secure SSH logins and prepare the instances to be ready for a configuration management tool such as Ansible.

Key Knowledge Areas:

◈ Understanding the features and concepts of cloud-init, including user-data and initializing and configuring cloud-init
◈ Use cloud-init to create, resize and mount file systems, configure user accounts, including login credentials such as SSH keys and install software packages from the distribution’s repository
◈ Understand the features and implications of IaaS clouds and virtualization for a computing instance, such as snapshotting, pausing, cloning and resource limits.

703.3 System Image Creation (weight: 2)

Weight: 2

Description: Candidates should be able to create images for containers, virtual machines and IaaS cloud instances.

Key Knowledge Areas:

◈ Understand the functionality and features of Packer
◈ Create and maintain template files
◈ Build images from template files using different builders

The following is a partial list of the used files, terms and utilities:

◈ packer

Topic 704: Configuration Management


704.1 Ansible (weight: 8)

Weight: 8

Description: Candidates should be able to use Ansible to ensure a target server is in a specific state regarding its configuration and installed software. This objective covers the feature set of Ansible version 2.2 or later.

Key Knowledge Areas:

◈ Understand the principles of automated system configuration and software installation
◈ Create and maintain inventory files
◈ Understand how Ansible interacts with remote systems
◈ Manage SSH login credentials for Ansible, including using unprivileged login accounts
◈ Create, maintain and run Ansible playbooks, including tasks, handlers, conditionals, loops and registers
◈ Set and use variables
◈ Maintain secrets using Ansible vaults
◈ Write Jinja2 templates, including using common filters, loops and conditionals
◈ Understand and use Ansible roles and install Ansible roles from Ansible Galaxy
◈ Understand and use important Ansible tasks, including file, copy, template, ini_file, lineinfile, patch, replace, user, group, command, shell, service, systemd, cron, apt, debconf, yum, git, and debug
◈ Awareness of dynamic inventory
◈ Awareness of Ansible's features for non-Linux systems
◈ Awareness of Ansible containers

The following is a partial list of the used files, terms and utilities:

◈ ansible.cfg
◈ ansible-playbook
◈ ansible-vault
◈ ansible-galaxy
◈ ansible-doc
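
Typical invocations of these utilities look like the following sketch (inventory, playbook, and role names are made up):

ansible-playbook -i inventory site.yml     # run a playbook against the hosts in an inventory file
ansible-vault encrypt secrets.yml          # encrypt a file holding sensitive variables
ansible-galaxy install username.rolename   # install a role from Ansible Galaxy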

704.2 Other Configuration Management Tools (weight: 2)

Weight: 2

Description: Candidates should understand the main features and principles of important configuration management tools other than Ansible.

Key Knowledge Areas:

◈ Basic feature and architecture knowledge of Puppet.
◈ Basic feature and architecture knowledge of Chef.

The following is a partial list of the used files, terms and utilities:

◈ Manifest, Class, Recipe, Cookbook
◈ puppet
◈ chef
◈ chef-solo
◈ chef-client
◈ chef-server-ctl
◈ knife

Topic 705: Service Operations


705.1 IT Operations and Monitoring (weight: 4)

Weight: 4

Description: Candidates should understand how IT infrastructure is involved in delivering a service. This includes knowledge about the major goals of IT operations, understanding functional and nonfunctional properties of IT services and ways to monitor and measure them using Prometheus. Furthermore, candidates should understand major security risks in IT infrastructure. This objective covers the feature set of Prometheus 1.7 or later.

Key Knowledge Areas:

◈ Understand goals of IT operations and service provisioning, including nonfunctional properties such as availability, latency, responsiveness
◈ Understand and identify metrics and indicators to monitor and measure the technical functionality of a service
◈ Understand and identify metrics and indicators to monitor and measure the logical functionality of a service
◈ Understand the architecture of Prometheus, including Exporters, Pushgateway, Alertmanager and Grafana
◈ Monitor containers and microservices using Prometheus
◈ Understand the principles of IT attacks against IT infrastructure
◈ Understand the principles of the most important ways to protect IT infrastructure
◈ Understand core IT infrastructure components and their role in deployment

The following is a partial list of the used files, terms and utilities:

◈ Prometheus, Node exporter, Pushgateway, Alertmanager, Grafana
◈ Service exploits, brute force attacks, and denial of service attacks
◈ Security updates, packet filtering and application gateways
◈ Virtualization hosts, DNS and load balancers

705.2 Log Management and Analysis (weight: 4)

Weight: 4

Description: Candidates should understand the role of log files in operations and troubleshooting. They should be able to set up centralized logging infrastructure based on Logstash to collect and normalize log data. Furthermore, candidates should understand how Elasticsearch and Kibana help to store and access log data.

Key Knowledge Areas:

◈ Understand how application and system logging works
◈ Understand the architecture and functionality of Logstash, including the lifecycle of a log message and Logstash plugins
◈ Understand the architecture and functionality of Elasticsearch and Kibana in the context of log data management (Elastic Stack)
◈ Configure Logstash to collect, normalize, transform and store log data
◈ Configure syslog and Filebeat to send log data to Logstash
◈ Configure Logstash to send email alerts
◈ Understand application support for log management

The following is a partial list of the used files, terms and utilities:

◈ logstash
◈ input, filter, output
◈ grok filter
◈ Log files, metrics
◈ syslog.conf
◈ /etc/logstash/logstash.yml
◈ /etc/filebeat/filebeat.yml

Source: lpi.org

Sunday 16 February 2020

accept - Linux Command


NAME


accept - This command causes the print queue to accept printing job requests.

SYNOPSIS


accept [ -E ] [ -U username ] [ -h hostname[:port] ] destination(s)

DESCRIPTION


The accept command allows the queuing of print requests for the named destination(s). A destination can be either a printer or a class of printers. When a printer is accepting requests, a user is able to submit a print job to the printer, even if it is not enabled. This allows short maintenance tasks to be completed on the printer while still allowing jobs to be submitted.

This command only works when you are logged in as root, either by logging in directly or by switching to root using the su command.

If the printer is not accepting requests, a user submitting a job will receive an error. Accepting requests is independent of the printer being enabled: the administrator can disable a printer to change paper or toner, and the scheduler will still accept new requests for it.

This command allows printer names to contain any printable character except SPACE, TAB, "/", or "#". Also, printer and class names are not case-sensitive.

OPTIONS


Tag                 Description
-E                  Forces encryption when connecting to the server.
-U username         Sets the username that is sent when connecting to the server.
-h hostname[:port]  Uses the specified hostname and port to connect to a remote server.
-r "reason"         Sets the reason string that is shown for a printer that is rejecting jobs (used with the companion reject command).

EXAMPLES


Consider a printer named laserjetV attached to your PC running a Linux/Unix system. The following is a sequence of steps to print a document:

Step 1 - To enable the printer

$enable laserjetV

Step 2 - To check the status of the printer, run the lpstat command:

$lpstat -a -p laserjetV

Step 3 - The output shown will be as follows:

laserjetV not accepting requests since Jan 01 00:00
printer laserjetV is idle. enabled since Jan 01 00:00

At this point, the printer is enabled, but still not accepting requests. In order to have the printer accept requests, run the following accept command:

Step 4 - Start accepting print requests:

$accept  laserjetV

Now consider that our printer is available on a remote server whose IP address is 120.10.100.100 and whose port is 631; in that case we issue the following accept command:

$accept -h 120.10.100.100:631 laserjetV 

Now use the lpstat -a command again:

$lpstat -a -p laserjetV

This will produce the following result:

laserjetV accepting requests since Jan 01 00:00
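
The companion reject command does the opposite and accepts the -r option listed above; for example (the reason string is illustrative):

$reject -r "toner change" laserjetV

After this, lpstat would again report the printer as not accepting requests, along with the given reason.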

Saturday 15 February 2020

Internal and External Commands in Linux


The UNIX system is command-based, i.e. things happen because of the commands that you key in. UNIX commands are seldom more than four characters long.

They are grouped into two categories:


◈ Internal Commands : Commands which are built into the shell. For shell built-in commands, execution is fast because the shell does not have to search the directories given in the PATH variable for them, and no separate process needs to be spawned to execute them.
Examples: source, cd, fg etc.

◈ External Commands : Commands which aren't built into the shell. When an external command has to be executed, the shell looks for its path in the PATH variable, and a new process has to be spawned to execute it. External commands are usually located in /bin or /usr/bin. For example, when you execute the "cat" command, which usually lives in /usr/bin, the executable /usr/bin/cat gets executed.
Examples: ls, cat etc.

If you know about UNIX commands, you must have heard about the ls command. Since ls is a program or file having an independent existence in the /bin directory (or /usr/bin), it is branded as an external command. This means that the ls command is not built into the shell; it is an executable present in a separate file. In simple words, when you key in the ls command, the executable to be run is found in /bin. Most commands are external in nature, but there are some which are not really found anywhere, and some which are normally not executed even if they are in one of the directories specified by PATH. For instance, take the echo command:

$type echo
echo is a shell builtin

echo isn't an external command in the sense that, when you type echo, the shell won't look in its PATH to locate it (even if it is there in /bin). Rather, it will execute it from its own set of built-in commands that are not stored as separate files. These built-in commands, of which echo is a member, are known as internal commands.

You now might have noticed that it's the shell that actually does all this work. The shell starts running as soon as the user logs in, and dies when the user logs out. The shell is an external command with a difference: it possesses its own set of internal commands. So, if a command exists both as an internal command of the shell and as an external one (in /bin or /usr/bin), the shell will give top priority to its own internal command of the same name.

This is exactly the case with echo, which is also found in /bin but rarely ever executed, because the shell makes sure that the internal echo command takes precedence over the external one. Now, let's talk more about internal and external commands.

Getting the list of Internal Commands


If you are using the bash shell, you can get the list of shell built-in commands with the help command :

$help

// this will list all
the shell built-in commands //


How to find out whether a command is internal or external?


In addition to this, you can also find out whether a particular command is internal or external with the help of the type command :

$type cat
cat is /bin/cat

//specifying that cat is
external type//

$type cd
cd is a shell builtin

//specifying that cd is
internal type//
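
If a name exists both as a shell builtin and as an external file, you can list every form of it by passing the -a option to type. The output below is typical for bash:

$type -a echo
echo is a shell builtin
echo is /bin/echo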


Internal vs External


There is little point in asking when to use an internal command versus an external one, because a user picks whichever command solves the problem at hand. The only practical difference between internal and external commands is that internal commands work much faster, since the shell has to search the PATH and spawn a new process when it comes to the use of external commands.

There are some cases where you can avoid the use of an external command by using an internal one in its place; for example, if you need to add two numbers you can do it as:

//use of internal command let
for addition//

$let c=a+b

instead of using :

//use of external command expr
for addition//

$c=`expr $a + $b`

In such a case, using let is the better option: since it is a shell built-in command, it will work faster than expr, which is an external command.
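
You can verify the speed difference yourself with the shell's time keyword; a rough sketch (the actual timings will vary from system to system):

$time for i in {1..1000}; do let c=i+i; done
$time for i in {1..1000}; do c=`expr $i + $i`; done

The second loop spawns a separate expr process on every iteration and is therefore dramatically slower.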

Thursday 13 February 2020

Knowledge Freedom - The Linux Professional Institute trip to Cuba


From November 17th to 25th of 2019, the Linux Professional Institute (LPI) Latin America and Caribbean (LAC) team was reunited in Cuba for a series of activities involving Academia, Industry, Government, and, of course, the local Free Software Community. For Rafael Peregrino (Director of Partnerships), Eduardo Lima (Business Executive), and me, Cesar Brod (Director of Community Engagement), this was the first time on the beautiful island. Hernán Pachas (Alliance and Business Executive) had been there a couple months before to establish the first face-to-face contacts and to fine-tune the agenda for the November events.

Those visitors to Cuba who expect to see old cars, a mix of old Spanish colonial architecture, and French-style buildings of two to four floors will not be disappointed. The old portion of the capital city, Havana (La Habana Vieja), was designated a UNESCO World Heritage site. There is much more to Cuba, though. Some very modern buildings and brand new cars can be seen in a short walk along the boardwalk (el Malecón). What makes Cuba beautiful, though, is its people. The literacy index is practically 100%, and Cuban taxi drivers are well trained in the history of the country, which makes every taxi ride and city walk a lesson. Also, many of the drivers are mechanical engineers, which makes a lot of sense, since the ones who drive the old cars have to be extremely creative to keep them running. Furthermore, due to the embargo the United States imposes on Cuba, creativity is a must-have quality.

The Cuban government has been very concerned about the education of its people and that is why Cubans working with technology have developed knowledge and skills in Linux and open source software. From the UCI - Universidad de las Ciencias Informáticas (the Computer Science University), a team of engineers have been developing FLOSS projects. From the Linux community, people collaborate on OSS projects such as the ParrotOS distribution. The commercial software industry, meanwhile, has been concerned with developing OSS solutions which are implemented in many industrial sectors inside and outside Cuba.

The first days in Havana were dedicated to activities inside the UCI. I conducted a Linux Essentials review workshop and was quite impressed by the fact that the more than 20 people present were well versed in the contents. Besides having studied the LE topics prior to our arrival, they all have hands-on experience as systems administrators. I am pretty sure all of them were already able to take our LPIC-1 and LPIC-2 certifications. After the workshop everyone stayed and committed to working together, community members and the University scholars, towards the evolution of NOVA, the Cuban-made Linux distro already used by the UCI and several governmental offices. On the following day, 20 people took our online Linux Essentials exam, proctored by Hernán, Eduardo, and me. As the UCI became an LPI Academic Partner, two professors also helped proctor so they could learn about the process. Everyone passed the exam, three of them with a 100% score. Hernán gave a plush penguin to the oldest exam taker, 65 years old, who showed it is never too late to keep learning. At the end of the day, the team had a meeting with the vice rector and other high regents of the university to work on the details of the partnership.

On the 20th, Rafael arrived for meetings with the Ministry of Communications, who opened their doors to several possibilities, including introductions to ICT companies in Cuba that could be interested in becoming LPI's Hiring Partners. The next day, our team met with GEIC, the Association of ICT Cuban Companies, and talked about the possibility of them becoming a Channel Partner. Over the next few days, we met with several companies we were introduced to by GEIC and the Ministry of Communications, as well as others Hernán had contacted before our arrival. All of them were quite interested in how LPI could be part of their future, working with them to train Cuban professionals who will be able to export their services and knowledge throughout the world. Cubans, of course, already export their intelligence and culture through literature, music, and medical knowledge.

As this was also an opportunity to get all of our executives together face-to-face, all the time we were not meeting with the locals, we were working on our strategic planning for 2020 and beyond.

We also spent a lot of time with the Cuban FOSS community, learning from them how LPI can better help them. We will no doubt be seeing a lot of new educational materials being produced, translated, and reviewed by our new friends.

Source: lpi.org

Tuesday 11 February 2020

Run Levels in Linux


A run level is a state of init and the whole system that defines what system services are operating. Run levels are identified by numbers. Some system administrators use run levels to define which subsystems are working, e.g., whether X is running, whether the network is operational, and so on.

◉ Whenever a LINUX system boots, the init process is started first; it is responsible for running the other start scripts, which mainly involve initialization of your hardware, bringing up the network, and starting the graphical interface.

◉ Now, init first finds the default runlevel of the system so that it can run the start scripts corresponding to the default run level.

◉ A runlevel can simply be thought of as the state your system enters: if a system is in single-user mode it will have a runlevel of 1, while if the system is in multi-user mode it will have a runlevel of 5.

◉ A runlevel, in other words, can be defined as a preset single-digit integer defining the operating state of your LINUX or UNIX-based operating system. Each runlevel designates a different system configuration and allows access to a different combination of processes.

The important thing to note here is that there are differences in the runlevels according to the operating system. The standard LINUX kernel supports these seven different runlevels :

◉ 0 – System halt, i.e. the system can be safely powered off with no activity.

◉ 1 – Single user mode.

◉ 2 – Multiple user mode with no NFS(network file system).

◉ 3 – Multiple user mode under the command line interface and not under the graphical user interface.

◉ 4 – User-definable.

◉ 5 – Multiple user mode under GUI (graphical user interface) and this is the standard runlevel for most of the LINUX based systems.

◉ 6 – Reboot which is used to restart the system.

By default most LINUX-based systems boot to runlevel 3 or runlevel 5.

In addition to the standard runlevels, users can modify the preset runlevels or even create new ones according to their requirements. Runlevels 2 and 4 are used for user-defined runlevels, and runlevels 0 and 6 are used for halting and rebooting the system.

Obviously the start scripts for each run level will be different, performing different tasks. The start scripts corresponding to each run level can be found in special files present under the rc subdirectories.

In the /etc/rc.d directory there will be either a set of files named rc.0, rc.1, rc.2, rc.3, rc.4, rc.5 and rc.6, or a set of directories named rc0.d, rc1.d, rc2.d, rc3.d, rc4.d, rc5.d and rc6.d.
For example, run level 1 will have its start script either in the file /etc/rc.d/rc.1 or in the files in the directory /etc/rc.d/rc1.d.

Changing runlevel


init is the program responsible for altering the run level; it can be asked to change the run level using the telinit command.

For example, to change from runlevel 3 to runlevel 5, which will allow the GUI to be started in multi-user mode, the telinit command can be used as :

/*using telinit to change
runlevel from 3 to 5*/

telinit 5

NOTE : Changing runlevels is a task for the super user, not a normal user; that's why it is necessary to be logged in as the super user for the successful execution of the above telinit command. Alternatively, you can use the sudo command as :

// using sudo to execute telinit
sudo telinit 5

The default runlevel for a system is specified in the /etc/inittab file, which will have an entry id:5:initdefault: if the default runlevel is set to 5, or an entry id:3:initdefault: if the default runlevel is set to 3.
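
You can check the previous and current runlevel with the runlevel command:

// display the previous and current runlevel
runlevel

A typical output is "N 5", where N means there was no previous runlevel.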

Need for changing the runlevel


◉ There can be a situation when you find trouble logging in, either because you don't remember the password or because of a corrupted /etc/passwd file (which holds all the user names and passwords). In this case the problem can be solved by booting into single-user mode, i.e. runlevel 1.

◉ You can easily halt the system by changing the runlevel to 0 by using telinit 0.

Sunday 9 February 2020

Process Control Commands in Unix/Linux

Process control commands in Unix are:


bg - put suspended process into background
fg - bring process into foreground
jobs - list processes


1. bg Command: bg is a process control command that resumes suspended processes while keeping them running in the background. A user can run a job in the background by adding an "&" symbol at the end of the command.

Syntax :

bg [job]

Options

The character % introduces a job specification; we can use one of the following symbol combinations:

%Number  : Use the job number, such as %1 or %2.
%String  : Use a string matching the beginning of a suspended command's name, such as %commandNameHere or %ping.
%+ OR %% : Refers to the current job.
%-       : Refers to the previous job.

bg examples

Command

bg %1

Output:

The stopped job will resume operation, but remain in the background.
It will not receive any input from the terminal while it's in the background,
but it will keep running.
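
A typical sequence is to start a command, suspend it with Ctrl+Z, and then resume it in the background (a sketch; the job number may differ):

$ sleep 100
^Z
[1]+  Stopped                 sleep 100
$ bg %1
[1]+ sleep 100 &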

2. fg Command: The fg command moves a background job in the current shell environment into the foreground. Use the job ID parameter to indicate a specific job to be run in the foreground. If this parameter is not supplied, the fg command uses the job most recently suspended, placed in the background, or run as a background job.

Syntax :

fg [ %job]

Options

%job: Specifies the job that you want to run in the foreground.

fg examples

Command

$ fg

Output:

It will resume the most recently suspended or background job.

Command

$ fg 1

Output:

It brings the job with the id 1 into the foreground, resuming it if it was suspended.

3. jobs Command: The jobs command is used to list the jobs that you are running in the background and in the foreground. If the prompt is returned with no information, no jobs are present. Not all shells are capable of running this command; it is only available in the csh, bash, tcsh, and ksh shells.

Syntax : 

jobs  [JOB]

Options

JOB      Job name or number.
-l   Lists process IDs in addition to the normal information.
-n   List only processes that have changed status since the last notification.
-p   Lists process IDs only.
-r   Restrict output to running jobs.
-s   Restrict output to stopped jobs.

jobs command examples

To display the status of jobs in the current shell:

Command

$ jobs

Output:

[1]   7893 Running                 gpass &
[2]   7904 Running                 gnome-calculator &
[3]-  7955 Running                 gedit fetch-stock-prices.py &
[4]+  7958 Stopped                 ping cyberciti.biz

To display the process ID or job information for the job whose name begins with "p":

Command

$ jobs -p %p

OR

$ jobs %p

Output:

[4]-  Stopped                 ping cyberciti.biz

The character % introduces a job specification. In this example, you are using a string that matches the beginning of a suspended command's name, such as %ping.

Pass the -p option to the jobs command to display PIDs only:

Command

$ jobs -p

Output:

7895
7906
7910
7946

Pass the -r option to the jobs command to display running jobs only:

Command

$ jobs -r

Output:

[1]   Running                 gpass &
[2]   Running                 gnome-calculator &
[3]-  Running                 gedit fetch-stock-prices.py &

Saturday 8 February 2020

The DevOps Paradox

As the evolution of technology marches forward, more and more tools and knowledge become available. We actually have so many tools and so much knowledge at our disposal nowadays that it's sometimes difficult to choose which tools and which knowledge we use. Of course, all tool-inventors and knowledge-makers have their own opinions, mental models, and beliefs. The same goes for Linux and Open Source engineers and management (in the broadest sense of the word). With this in mind, how is DevOps ever going to work?


Modern scientists and a growing number of authors argue that we drastically need to change the way we are currently working. We need to alter our mindset, company goals and the way we achieve those goals. Some of the publications in the field form the scaffolding for DevOps. Think of “Drive” by Daniel H. Pink, “Mindset” by Carol S. Dweck, and “The Goal” by Eliyahu M. Goldratt. Of course the whole Agile/Scrum, Lean, and Continuous Delivery movements, cannot be forgotten in this regard.

All these words of wisdom say we need to collaborate, invest in culture, and stimulate personal growth. Great! The point is: if you encourage everyone to grow as an individual and endorse people having their own opinions, how do you make sure things still get done? What might be a good idea to person #0 may sound like the worst idea to person #1. (In IT we start counting at zero, right?) Since every engineer is stubborn by nature, things might not improve.

Then there is management. In a way, they are even worse and more stubborn than engineers. They have a lot of wild ideas and are very opportunistic, but often have no clue how to actually implement something. Sad, but true. Managers are often busy with change management on a corporate level, be it restructuring, cost saving, optimizing, R&D-ing, or other types of ing-ing. So in a way, they are always busy changing, in most cases with the intention to improve things. Since companies rely on IT a little more every day, decent knowledge of what IT is and what is going on there is no joke. Because this is an article about DevOps, let's keep it close to home. As a DevOps trainer, I sadly see very few managers, executives, or chiefs in my classrooms. Of course, there are several ways to upgrade your IT knowledge, but I find the lack of management in my classroom troubling.

Before things get uncomfortable for my team at "AT Computing" when they read this, I must confess I'm not an engineer myself. Yes, I try very hard to keep track of all the cool CLI stuff my team is doing, but they are simply far more talented in that area than I am. This sometimes causes a dent in my technical self-confidence, along with a lot of fun and laughter, but it does not relieve me from trying to stay in their slipstream. Neither should it for other managers or directors.


If you have no clue what the (Linux) engineers are doing, how will you be able to facilitate and support them? Even worse, how are you going to make the right decisions for the company?

All of the above results in a kind of stalemate. On one side are the engineers who say, "I want to change things, but the management doesn't allow it," or "I've changed, but they are still doing the same thing." On the other side are the managers saying things like, "My engineers are stuck in old habits; they do not want to change," or, "I've started a project to implement DevOps, but it failed because the engineers did not embrace it." As you can see, both sides want to change and are, on their own, trying to change, but because they are blaming each other, nothing really happens. That sounds like quite the paradox to me!

How can we battle this status quo? That’s really simple and really hard at the same time. First, we need to start talking. Not through email, Telegram, Slack, or a good old conference call, but all together, in the same room. Make sure there is coffee and some snacks, and make sure you leave all your assumptions and grudges at home. Be open. Be honest. Listen to understand, not to respond. Try to find mutual interests, common goals, or irritations, and just get to know each other a little better. We all have a personal life and some hobbies, right?

This meeting will not magically solve issues. After it's over, you cannot say, "we are DevOps now." What will this conversation do, then? It will make you level. It will make it clear that, within the company, you are all on the same ship, and you all have reasons to help that ship sail. This improved understanding provides a small but firm foundation for taking the next step. Maybe managers will be willing to attend a technical training and get out of their comfort zone. Maybe engineers will be interested in learning more about coaching, collaboration, and leadership. Most important is that the DevOps journey starts. Only in that way will the principle of flow originate. And, by pure coincidence, that happens to be "The First Way of DevOps".

Source: lpi.org

Thursday 6 February 2020

Tar Command Examples in Unix / Linux


In the Windows operating system, you might have used the WinZip and WinRAR software for extracting and archiving files. Similarly, in the Unix or Linux operating system, the tar command is used for creating archive files and also extracting files from archives.

With the tar command, you can also create compressed archive files. Unix and Linux operating systems also provide commands like gzip and gunzip for compressing and decompressing files. Here we will see the important tar command examples in Unix and Linux systems which are used frequently in our daily work.

The syntax of tar command is

tar [options] [Archive file] [files list]

The options of tar command are:

c : creates a tar file.
v : verbose; displays information about the files processed.
f : specifies the tar file name.
r : appends new files to an existing tar file.
x : extracts files from the archive (tar file).
t : views the contents of a tar file.
z : filters the archive through gzip (works for creating, listing, and extracting).
j : filters the archive through bzip2.

Tar Command Examples:

1. Creating a tar file


Let's see a sample example by archiving all the files in the current directory. The ls -l command displays the files and directories in the current directory.

> ls -l 
drwxr-xr-x 2 user group 4096 Aug  8 03:23 debian
-rw-r--r-- 1 user group  174 Aug  2 23:39 file
-rw-r--r-- 1 user group    0 Aug  8 03:22 linux_server.bat
-rw-r--r-- 1 user group   76 Aug  2 02:21 test.sh
-rw-r--r-- 1 user group    0 Aug  8 03:22 unix_distro

Next, we tar all these files using the -c option of the tar command. This is shown below:

> tar -cvf archive.tar *
debian/
file
linux_server.bat
test.sh
unix_distro

> ls
archive.tar  debian  file  linux_server.bat  test.sh  unix_distro

Observe the output of the ls command and see that the archive.tar file has been created.

2. Printing the contents of tar file


We have created the tar file, but we don't yet know whether it actually contains the files. To view the contents of the tar file, use the -t option:

> tar -tvf archive.tar
drwxr-xr-x user/group   0 2012-08-08 03:23:07 debian/
-rw-r--r-- user/group 174 2012-08-02 23:39:51 file
-rw-r--r-- user/group   0 2012-08-08 03:22:19 linux_server.bat
-rw-r--r-- user/group  76 2012-08-02 02:21:32 test.sh
-rw-r--r-- user/group   0 2012-08-08 03:22:09 unix_distro

3. Updating the tar file with new contents.


You can add new files to the existing archive (tar) file using the -r option.

> touch red-hat-linux.dat

> tar -rvf archive.tar red-hat-linux.dat
red-hat-linux.dat

> tar -tvf archive.tar
drwxr-xr-x user/group   0 2012-08-08 03:23:07 debian/
-rw-r--r-- user/group 174 2012-08-02 23:39:51 file
-rw-r--r-- user/group   0 2012-08-08 03:22:19 linux_server.bat
-rw-r--r-- user/group  76 2012-08-02 02:21:32 test.sh
-rw-r--r-- user/group   0 2012-08-08 03:22:09 unix_distro
-rw-r--r-- user/group   0 2012-08-08 04:00:00 red-hat-linux.dat

Here the touch command creates a new file. The first tar command adds the new file to the existing archive file. The second command displays the contents of the tar file.

4. Extracting the contents of tar file


In the first example, we created the archive file. Now we will see how to extract the set of files from the archive. To extract the contents of the tar file, use the -x option:

> tar -xvf archive.tar
debian/
file
linux_server.bat
test.sh
unix_distro

5. Creating compressed tar file


So far we have created uncompressed tar files in the above examples. We can create a compressed tar file using gzip or bzip2.

Compressing files using gzip

> tar -zcvf new_tar_file.tar.gz *

Compressing files using bzip2

> tar -jcvf new_tar_file.tar.bz2 *

To extract or to view the files in a compressed tar file, use the appropriate compression option (z or j).

To view files in a gzip compressed tar file
> tar -ztvf new_tar_file.tar.gz

To extract files from a gzip compressed tar file
> tar -zxvf new_tar_file.tar.gz

To view files in a bzip2 compressed tar file
> tar -jtvf new_tar_file.tar.bz2

To extract files from a bzip2 compressed tar file
> tar -jxvf new_tar_file.tar.bz2
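
Note that recent versions of GNU tar can usually detect the compression format automatically when reading an archive, so on a reasonably modern system the following often works without the z or j option:

> tar -xvf new_tar_file.tar.gz
> tar -tvf new_tar_file.tar.bz2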

6. Creating tar file with specified list of files


You can specify a list of files to be included in the newly created tar file.

> tar -cvf unix_files.tar unix_server.bat unix_system.dat

Here the tar command creates the unix_files.tar file, which contains only the files unix_server.bat and unix_system.dat.

7. Extracting specific files from the tar


You can extract a specific file or a set of files from the archived file.

To extract a specific file

> tar -xvf unix_files.tar unix_server.bat

To extract all files whose names start with unix

> tar -xvf unix_files.tar --wildcards "unix*"

8. Extracting files from multiple archive files.


To extract the files from multiple archive files, use the -M option with each -f option. This is shown below:

> tar -xv -Mf archive.tar -Mf unix_files.tar

Tuesday 4 February 2020

LPIC-2: Kernel Components


This section covers material for topic 2.201.1 for the Intermediate Level Administration (LPIC-2) exam 201. The topic has a weight of 1.

What makes up a kernel?


A Linux kernel is made up of the base kernel itself plus any number of kernel modules. In many or most cases, the base kernel and a large collection of kernel modules are compiled at the same time and installed or distributed together, based on the code created by Linus Torvalds or customized by Linux distributors. A base kernel is always loaded during system boot and stays loaded for as long as the system is up; kernel modules may or may not be loaded initially (though generally some are), and may be loaded or unloaded during runtime.

The kernel module system allows the inclusion of extra modules that are compiled after, or separately from, the base kernel. Extra modules may be created when you add hardware devices to a running Linux system, or they are sometimes distributed by third parties. Third parties sometimes distribute kernel modules in binary form, though doing so takes away your capability as a system administrator to customize a kernel module. In any case, once a kernel module is loaded, it becomes part of the running kernel for as long as it remains loaded. Contrary to some conceptions, a kernel module is not simply an API for talking with a base kernel, but becomes patched in as part of the running kernel itself.
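
On a running system, you can inspect and manage modules with the standard module utilities. A minimal sketch (e1000 is just an illustrative module name; substitute one that exists on your system):

$ lsmod                   # list modules currently loaded into the kernel
$ modinfo e1000           # show details for a module (illustrative name)
$ sudo modprobe e1000     # load a module into the running kernel
$ sudo modprobe -r e1000  # unload it again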

Kernel naming conventions


Linux kernels follow a naming/numbering convention that quickly tells you significant information about the kernel you are running. The convention used indicates a major number, minor number, revision, and, in some cases, vendor/customization string. This same convention applies to several types of files, including the kernel source archive, patches, and perhaps multiple base kernels (if you run several).

As well as the basic dot-separated sequence, Linux kernels follow a convention to distinguish stable from experimental branches. Stable branches use an even minor number, whereas experimental branches use an odd minor number. Revisions are simply sequential numbers that represent bug fixes and backward-compatible improvements. Customization strings often describe a vendor or specific feature. For example:

◉ linux-2.4.37-foo.tar.gz: Indicates a stable 2.4 kernel source archive from the vendor "Foo Industries"

◉ /boot/bzImage-2.7.5-smp: Indicates a compiled experimental 2.7 base kernel with SMP support
enabled

◉ patch-2.6.21.bz2: Indicates a patch to update an earlier 2.6 stable kernel to revision 21
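
To check which kernel version a system is actually running, you can query it directly; for example (the output shown is just an illustration):

$ uname -r
2.6.10-5-386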


Kernel files


The Linux base kernel comes in two formats: zImage, which is limited to about 508 KB, and bzImage for larger kernels (up to about 2.5 MB). Generally, modern Linux distributions use the bzImage kernel format to allow the inclusion of more features. You might expect that since the "z" in zImage indicates gzip compression, the "bz" in bzImage means bzip2 compression is used there. However, the "b" simply stands for "big"; gzip compression is still used. In either case, as installed in the /boot/ directory, the base kernel is often renamed vmlinuz. Generally, the file /vmlinuz is a link to a version-named file such as /boot/vmlinuz-2.6.10-5-386.

There are a few other files in the /boot/ directory associated with a base kernel that you should be aware of (sometimes you will find these at the file system root instead). System.map is a table showing the addresses for kernel symbols. initrd.img is sometimes used by the base kernel to create a simple file system in a ramdisk prior to mounting the full file system.
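
You can see these files side by side on most systems; for example (file names and versions will vary by distribution):

$ ls /boot
System.map-2.6.10-5-386  config-2.6.10-5-386  initrd.img-2.6.10-5-386  vmlinuz-2.6.10-5-386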

Kernel modules


Kernel modules contain extra kernel code that may be loaded after the base kernel. Modules typically provide one of the following functions:

◉ Device drivers: Support a specific type of hardware

◉ File system drivers: Provide the optional capability to read and/or write a particular file system

◉ System calls: Most are supported in the base kernel, but kernel modules can add or modify system services

◉ Network drivers: Implement a particular network protocol

◉ Executable loaders: Parse and load additional executable formats
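
A quick way to browse the modules shipped for each of these categories is to look under the kernel's module tree (the paths below assume a standard module installation):

$ ls /lib/modules/$(uname -r)/kernel/fs       # file system drivers
$ ls /lib/modules/$(uname -r)/kernel/drivers  # device drivers
$ ls /lib/modules/$(uname -r)/kernel/net      # network-related modules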