Saturday 31 August 2019

Linux Distributions (Distros)


Other operating systems, such as Microsoft Windows, combine every bit of code internally and release it as a single package. You have to choose from one of the versions they offer.

But Linux is different from them. Different parts of Linux are developed by different organizations.

These parts include the kernel, shell utilities, the X server, the system environment, graphical programs, and so on. If you want, you can access the source code of all these parts and assemble them yourself, but it is not an easy task: it takes a lot of time, and all the parts have to be assembled correctly in order to work properly.

This is where distributions (also called distros) come into the picture. They assemble all these parts for us and give us a compiled Linux operating system ready to install and use.

Linux Distributions List


There are around six hundred Linux distributions providing different features. Here, we'll discuss some of the most popular Linux distros in use today.

1) Ubuntu

It came into existence in 2004, developed by Canonical, and quickly became popular. Canonical wants Ubuntu to be usable as an easy graphical Linux desktop without the need for the command line. It is the most well-known Linux distribution. Ubuntu is a derivative of Debian and is easy to use for newcomers. It comes with a lot of pre-installed apps and easy-to-use software repositories.

Earlier, Ubuntu used the GNOME 2 desktop environment, but it later developed its own Unity desktop environment. It releases a new version every six months, and the project has been working to expand Ubuntu to run on tablets and smartphones.

2) Linux Mint

Mint is based on Ubuntu and uses Ubuntu's software repositories, so some packages are common to both.

Earlier it was an alternative to Ubuntu because media codecs and proprietary software were included in Mint but absent from Ubuntu. Now it has a popularity of its own, and it uses the Cinnamon and MATE desktops instead of Ubuntu's Unity desktop environment.

3) Debian

Debian has existed since 1993 and releases its versions much more slowly than Ubuntu and Mint.

This makes it one of the most stable Linux distributions.

Ubuntu is based on Debian and was founded to improve Debian's core components more quickly and make them more user-friendly. Every Debian release is named after a character from the movie Toy Story.

4) Red Hat Enterprise / CentOS

Red Hat is a commercial Linux distributor. Its products are Red Hat Enterprise Linux (RHEL) and Fedora, the latter of which is freely available. RHEL is well tested before release and supported for about seven years after release, whereas Fedora provides faster updates but without long-term support.

Red Hat uses trademark law to prevent its software from being redistributed under the Red Hat name. CentOS is a community project that uses the Red Hat Enterprise Linux code but removes all its trademarks and makes it freely available. In other words, it is a free version of RHEL that provides a stable platform for a long time.

5) Fedora

Fedora is a project that mainly focuses on free software and provides the latest versions of software. It doesn't make its own desktop environment but uses 'upstream' software; by default it ships the GNOME 3 desktop environment. It is less stable than the enterprise distributions but always provides the latest stuff.

Choosing a Linux Distro


Distribution  Why To Use
Ubuntu  It works like macOS and is easy to use.
Linux Mint  It works like Windows and is a good choice for newcomers.
Debian  It provides stability but is not recommended for a new user.
Fedora  If you want to use Red Hat technology and the latest software.
Red Hat Enterprise Linux  To be used commercially.
CentOS  If you want to use Red Hat but without its trademarks.
openSUSE  It works much the same as Fedora but is slightly older and more stable.
Arch Linux  It is not for beginners, because every package has to be installed by yourself.

Thursday 29 August 2019

What is Linux System Administration?


Linux is an operating system, or more precisely a kernel, created by Linus Torvalds along with other contributors. It was first released on September 17, 1991. The main advantage of Linux is that it is distributed under an open-source license, which means programmers can use the Linux kernel to design their own custom operating systems.


Some of the most popular operating systems that use Linux as their kernel are Debian, Knoppix, Ubuntu, and Fedora. Nevertheless, the list does not end here, as there are thousands of operating systems based on Linux which offer a variety of functions to their users.

Introduction to Linux System Administration: Linux is a major force in computing technology. Most web servers, mobile phones, personal computers, supercomputers, and cloud servers are powered by Linux. The job of a Linux systems administrator is to manage the operations of a computer system: maintaining and enhancing it, creating user accounts and reports, and taking backups using Linux tools and command-line interfaces. Most computing devices are powered by Linux because of its high stability, high security, and open-source environment. These are some of the things that a Linux system administrator should know and understand (a few example commands follow the list below):

◈ Linux File Systems
◈ The File System Hierarchy
◈ Managing the Root/Superuser Account
◈ Basic Bash Commands
◈ Handling Files, Directories and Users
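
As a small, hedged illustration of the day-to-day basics listed above (the user names, paths and file names here are placeholders, not part of the original article):

pwd                                   # show the current working directory
ls -l /etc                            # list files with permissions and owners
useradd -m alice                      # create a user with a home directory (run as root)
passwd alice                          # set that user's password
chown alice:alice /srv/data           # change ownership of a file or directory
chmod 640 /srv/data/report.txt        # adjust permissions
df -h                                 # report filesystem usage
tar -czf /backup/home-alice.tar.gz /home/alice   # take a simple backup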

Duties of a Linux Administrator: System administration has become a firm requirement for any organization or institution that needs a solid IT foundation, so efficient Linux administrators are in constant demand. The job profile may change from one organization to another, since responsibilities and duties may be added to the role. Below are some duties of a Linux administrator:

◈ Maintaining all internet services, including DNS, RADIUS, Apache, MySQL and PHP.
◈ Taking regular backups of data, creating new stored procedures, and keeping backup listings.
◈ Analyzing all error logs and fixing the underlying problems, along with providing excellent customer support to web hosting, ISP and LAN customers on escalated support issues.
◈ Communicating with staff, vendors, and customers in a cultivated, professional manner at all times.
◈ Enhancing, maintaining and creating tools for the Linux environment and its users.
◈ Detecting and solving service problems ranging from disaster recovery to login problems.
◈ Installing the necessary systems and security tools. Working with the Data Network Engineer and other personnel/departments to analyze hardware requirements and make acquisition recommendations.
◈ Troubleshooting problems when they occur on the server.

Steps to Start a Career as a Linux System Admin:


◈ Install and learn to use a Linux environment

◈ Get certified in Linux administration

◈ Learn to write documentation

◈ Join a local Linux Users Group or community for support and help

In short, the main role of the Linux systems administrator is to manage operations such as installing and monitoring software and hardware systems and taking backups, along with being able to demonstrate an in-depth understanding of the technology. Even entry-level professionals have good prospects for the position of system administrator, with a yearly median salary of around INR 3 lakh; salary increases with job experience. To gain that experience, keep up with the latest skills and learning in the Linux community.

Tuesday 27 August 2019

Linux Professional Institute (LPI) Certifications


Started back in 1999, the Linux Professional Institute's certifications have today become important for any Linux professional. The program is available at three distinct levels, which are:

LPIC- 1: Linux Administrator


It is a junior-level Linux certification with no prerequisites. The candidate needs to pass two exams, which cover basic Linux skills, including installing and configuring Linux on a workstation, performing maintenance tasks, making LAN or internet connections, and more. You can obtain the CompTIA Linux+ powered by LPI credential first, which will qualify you for both the Linux+ and LPIC-1 credentials.

Also Read: 101-500: Linux Administrator - 101 (LPIC-1 101)
                     102-500: Linux Administrator - 102 (LPIC-1 102)

LPIC- 2: Linux Engineer


This is an advanced-level Linux certification, which requires an active LPIC-1 certification. It has two exams: the first covers the file system and devices, the kernel, system startup, network configuration, system maintenance, storage administration, and capacity planning; the second covers email services, network client management, domain name servers, system security and troubleshooting, and similar tasks.

Also Read: 201-450: Linux Engineer - 201 (LPIC-2 201)
                     202-450: Linux Engineer - 202 (LPIC-2 202)

LPIC- 3: Linux Enterprise Professional Certification


It is a senior-level Linux certification, which requires an active LPIC-2 certification plus a pass in any single exam of the 300 series. The certification includes the following exams:

300: Mixed Environment
303: Security
304: Virtualization and High Availability

300: Mixed Environment covers Samba, working with Linux and Windows clients, and OpenLDAP.

303: Security covers operations security, application security and network security, as well as cryptography and access controls.

304: Virtualization and High Availability covers virtualization as well as high-availability cluster storage and cluster management.

Latest Certification in LPIC


LPI’s latest certification is the LPIC-OT DevOps Tools Engineer, which validates the skills Linux professionals need to use collaboration tools during software and system development. The exam has 60 questions and lasts about 90 minutes.

Linux professionals thus have a whole new set of Linux certifications to attain in 2019. We hope this guide helps Linux professionals find some of the best Linux certifications and, from there, a new path in the world of Linux.

Thursday 22 August 2019

Using the find Command Function for Linux and Unix


The Linux and Unix command find executes a search for files in a directory hierarchy.

Syntax for find command:


find [path...] [expression]

Description


This manual page documents the GNU version of find. The find command searches the directory tree rooted at each given file name by evaluating the given expression from left to right, according to the rules of precedence (see the section on Operators below), until the outcome is known (the left-hand side is false for and operations, true for or), at which point find moves on to the next file name.


The first argument that begins with:

◈ -
◈ ( or )
◈ , (comma)
◈ !

is taken to be the beginning of the expression; any arguments before it are paths to search, and any arguments after it are the rest of the expression. If no paths are given, the current directory is used. If no expression is given, the expression -print is used.

The find command exits with status 0 if all files are processed successfully, greater than 0 if errors occur.

Expressions


The expression is made up of options (which affect overall operation rather than the processing of a specific file, and always return true), tests (which return a true or false value), and actions (which have side effects and return a true or false value), all separated by operators. The expression -and is assumed where the operator is omitted. If the expression contains no actions other than -prune, then -print is performed on all files for which the expression is true.

Options


All options always return true. They always take effect, rather than being processed only when their place in the expression is reached. Therefore, for clarity, it is best to place them at the beginning of the expression.

-daystart Measure times (for -amin, -atime, -cmin, -ctime, -mmin, and -mtime) from the beginning of today rather than from 24 hours ago.
-depth  Process each directory's contents before the directory itself. 
-follow  Dereference symbolic links. Implies -noleaf. 
-help or --help  Print a summary of the command-line usage of find and exit. 
-maxdepth [number]  Descend at most number of levels (a non-negative integer) of directories below the command line arguments. The expression -maxdepth 0 means only apply the tests and actions to the command line arguments. 
-mindepth [number]  Do not apply any tests or actions at levels less than the number (a non-negative integer). The expression -mindepth 1 means process all files except the command line arguments. 
-mount  Don't descend directories on other filesystems. An alternate name for -xdev, for compatibility with some other versions of find.
-noleaf  Do not optimize by assuming that directories contain 2 fewer subdirectories than their hard link count.*
-version or --version  Print the find version number and exit.
-xdev  Don't descend directories on other filesystems. 

This option is needed when searching filesystems that do not follow the Unix directory-link convention, such as CD-ROM or MS-DOS filesystems or AFS volume mount points. Each directory on a normal Unix filesystem has at least 2 hard links: its name and its . (period) entry. Additionally, its subdirectories (if any) each have a .. entry linked to that directory.

When find is examining a directory, after it has statted two fewer subdirectories than the directory's link count, it knows that the rest of the entries in the directory are non-directories (leaf files in the directory tree). If only the files' names need to be examined, there is no need to stat them; this gives a significant increase in search speed.

Tests


Numeric arguments can be specified as:

+n  For greater than n.
-n  For less than n.
n  For exactly n.
-amin n  File was last accessed n minutes ago. 
-anewer [file]  File was last accessed more recently than file was modified. -anewer is affected by -follow only if -follow comes before -anewer on the command line. 
-atime n  File was last accessed n*24 hours ago. 
-cmin n  File's status was last changed n minutes ago. 
-cnewer [file]  File's status was last changed more recently than file was modified.
-cnewer is affected by -follow only if -follow comes before -cnewer on the command line. 
-ctime n  File's status was last changed n*24 hours ago. 
-empty  File is empty and is either a regular file or a directory. 
-false  Always false. 
-fstype [type]  File is on a filesystem of specified type. The valid filesystem types vary among different versions of Unix; an incomplete list of filesystem types that are accepted on some version of Unix or another is: ufs, 4.2, 4.3, nfs, tmp, mfs, S51K, S52K. You can use -printf with the %F directive to see the types of your filesystems. 
-gid n  File's numeric group ID is n. 
-group [gname]  File belongs to group gname (numeric group ID allowed). 
-ilname [pattern]  Like -lname, but the match is case insensitive. 
-iname [pattern]  Like -name, but the match is case insensitive. For example, the patterns fo* and F?? match the file names Foo, FOO, foo, fOo, etc. 
-inum n  File has inode number n. 
-ipath [pattern]  Like -path, but the match is case insensitive. 
-iregex [pattern]  Like -regex, but the match is case insensitive. 
-links n  File has n links. 
-lname [pattern]  File is a symbolic link whose contents match shell pattern. The metacharacters do not treat / or . specially. 
-mmin n  File's data was last modified n minutes ago. 
-mtime n  File's data was last modified n*24 hours ago. 
-name [pattern]  Base of file name (the path with the leading directories removed) matches shell pattern. The metacharacters (*, ?, and []) do not match a . at the start of the base name. To ignore a directory and the files under it, use -prune; see an example in the description of -path. 
-newer [file]  File was modified more recently than file. The expression -newer is affected by -follow only if -follow comes before -newer on the command line. 
-nouser  No user corresponds to file's numeric user ID. 
-nogroup  No group corresponds to file's numeric group ID. 
-path [pattern]  File name matches shell pattern pattern. The metacharacters do not treat / or . specially; so, for example, find . -path './sr*sc' will print an entry for a directory called ./src/misc (if one exists). To ignore a whole directory tree, use -prune rather than checking every file in the tree. For example, to skip the directory src/emacs and all files and directories under it, and print the names of the other files found, do something like this: find . -path './src/emacs' -prune -o -print 
-perm [mode]  File's permission bits are exactly [mode] (octal or symbolic). Symbolic modes use mode 0 as a point of departure. 
-perm -[mode]  All of the permission bits [mode] are set for the file. 
-perm +[mode]  Any of the permission bits [mode] are set for the file. (Newer versions of GNU find spell this test -perm /[mode], as used in the examples below.) 
-regex [pattern]  File name matches regular expression pattern. This is a match on the whole path, not a search. For example, to match a file named ./fubar3, you can use the regular expression .*bar. or .*b.*3, but not b.*r3. 
-size n[bckw]  File uses n units of space. The units are 512-byte blocks by default or if b follows n, bytes if c follows n, kilobytes if k follows n, or 2-byte words if w follows n. The size does not count indirect blocks, but it does count blocks in sparse files that are not actually allocated. 
-true  Always true. 
-type c  File is of type c: 
b  Block (buffered) special 
c  Character (unbuffered) special 
d  Directory 
p  Named pipe (FIFO) 
f  Regular file 
l  Symbolic link 
s  Socket 
D  Door (Solaris) 
-uid n  File's numeric user ID is n. 
-used n  File was last accessed n days after its status was last changed. 
-user uname  File is owned by user uname (numeric user ID allowed). 
-xtype c  The same as -type unless the file is a symbolic link. For symbolic links: if -follow has not been given, true if the file is a link to a file of type c; if -follow has been given, true if c is l. In other words, for symbolic links,
-xtype checks the type of the file that -type does not check. 

Actions


-exec command ;

Execute  command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of `;' is encountered. The string `{}' is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find. Both of these constructions might need to be escaped (with a `\') or quoted to protect them from expansion by the shell. The command is executed in the starting directory.

-fls file

True; like -ls but write to file like -fprint.

-fprint file

True; print the full file name into file file. If file does not exist when find is run, it is created; if it does exist, it is truncated. The file names ``/dev/stdout'' and ``/dev/stderr'' are handled specially; they refer to the standard output and standard error output, respectively.

-fprint0 file

True; like -print0 but write to file like -fprint.

-fprintf file format

True; like -printf but write to file like -fprint.

-ok command ;

Like -exec but ask the user first (on the standard input); if the response does not start with `y' or `Y', do not run the command, and return false.

-print

True; print the full file name on the standard output, followed by a newline.

-print0

True; print the full file name on the standard output, followed by a null character. This allows file names that contain newlines to be correctly interpreted by programs that process the find output.

-printf format

True; print format on the standard output, interpreting `\' escapes and `%' directives. Field widths and precisions can be specified as with the `printf' C function. Unlike -print, -printf does not add a newline at the end of the string. The escapes and directives are:

\a

Alarm bell.

\b

Backspace.

\c

Stop printing from this format immediately and flush the output.

\f

Form feed.

\n

Newline.

\r

Carriage return.

\t

Horizontal tab.

\v

Vertical tab.

\\

A literal backslash (`\').

\NNN

The character whose ASCII code is NNN (octal).

A `\' character followed by any other character is treated as an ordinary character, so they both are printed.

%%

A literal percent sign.

%a

File's last access time in the format returned by the C `ctime' function.

%Ak

File's last access time in the format specified by k, which is either `@' or a directive for the C `strftime' function. The possible values for k are listed below; some of them might not be available on all systems, due to differences in `strftime' between systems.

@

seconds since Jan. 1, 1970, 00:00 GMT.

Time fields:

H

hour (00..23)

I

hour (01..12)

k

hour ( 0..23)

l

hour ( 1..12)

M

minute (00..59)

p

locale's AM or PM

r

time, 12-hour (hh:mm:ss [AP]M)

S

second (00..61)

T

time, 24-hour (hh:mm:ss)

X

locale's time representation (H:M:S)

Z

time zone (e.g., EDT), or nothing if no time zone is determinable

Date fields:

a

locale's abbreviated weekday name (Sun..Sat)

A

locale's full weekday name, variable length (Sunday..Saturday)

b

locale's abbreviated month name (Jan..Dec)

B

locale's full month name, variable length (January..December)

c

locale's date and time (Sat Nov 04 12:02:33 EST 1989)

d

day of month (01..31)

D

date (mm/dd/yy)

h

same as b

j

day of year (001..366)

m

month (01..12)

U

week number of year with Sunday as first day of week (00..53)

w

day of week (0..6)

W

week number of year with Monday as first day of week (00..53)

x

locale's date representation (mm/dd/yy)

y

last two digits of year (00..99)

Y

year (1970...)

%b

File's size in 512-byte blocks (rounded up).

%c

File's last status change time in the format returned by the C `ctime' function.

%Ck

File's last status change time in the format specified by k, which is the same as for %A.

%d

File's depth in the directory tree; 0 means the file is a command line argument.

%f

File's name with any leading directories removed (only the last element).

%F

Type of the filesystem the file is on; this value can be used for -fstype.

%g

File's group name, or numeric group ID if the group has no name.

%G

File's numeric group ID.

%h

Leading directories of file's name (all but the last element).

%H

Command line argument under which file was found.

%i

File's inode number (in decimal).

%k

File's size in 1K blocks (rounded up).

%l

Object of symbolic link (empty string if file is not a symbolic link).

%m

File's permission bits (in octal).

%n

Number of hard links to file.

%p

File's name.

%P

File's name with the name of the command line argument under which it was found removed.

%s

File's size in bytes.

%t

File's last modification time in the format returned by the C `ctime' function.

%Tk

File's last modification time in the format specified by k, which is the same as for %A.

%u

File's user name, or numeric user ID if the user has no name.

%U

File's numeric user ID.

A `%' character followed by any other character is discarded (but the other character is printed).

-prune

If -depth is not given, true; do not descend the current directory.
If -depth is given, false; no effect.

-ls

True; list current file in `ls -dils' format on standard output. The block counts are of 1K blocks, unless the environment variable POSIXLY_CORRECT is set, in which case 512-byte blocks are used.

Operators


Listed in order of decreasing precedence:

( expr )

Force precedence.

! expr

True if expr is false.

-not expr

Same as ! expr.

expr1 expr2

And (implied); expr2 is not evaluated if expr1 is false.

expr1 -a expr2

Same as expr1 expr2.

expr1 -and expr2

Same as expr1 expr2.

expr1 -o expr2

Or; expr2 is not evaluated if expr1 is true.

expr1 -or expr2

Same as expr1 -o expr2.

expr1 , expr2

List; both expr1 and expr2 are always evaluated. The value of expr1 is discarded; the value of the list is the value of expr2.
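
As a hedged illustration of why precedence matters (the file name patterns are placeholders): because the implied -and binds more tightly than -o, the first command below prints only the .md files, while the parentheses in the second make -print apply to both patterns.

find . -name '*.txt' -o -name '*.md' -print
find . \( -name '*.txt' -o -name '*.md' \) -print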

Examples


find /home -user joe

Find every file under the directory /home owned by the user joe.

find /usr -name '*stat'

Find every file under the directory /usr whose name ends in "stat".

find /var/spool -mtime +60

Find every file under the directory /var/spool that was modified more than 60 days ago.

find /tmp -name core -type f -print | xargs /bin/rm -f

Find files named core in or below the directory /tmp and delete them. Note that this will work incorrectly if there are any filenames containing newlines, single or double quotes, or spaces.

find /tmp -name core -type f -print0 | xargs -0 /bin/rm -f

Find files named core in or below the directory /tmp and delete them, processing filenames in such a way that file or directory names containing single or double quotes, spaces or newlines are correctly handled. The -name test comes before the -type test in order to avoid having to call stat(2) on every file.

find . -type f -exec file '{}' \;

Runs `file' on every file in or below the current directory. Notice that the braces are enclosed in single quote marks to protect them from interpretation as shell script punctuation. The semicolon is similarly protected by the use of a backslash, though ';' could have been used in that case also.

find / \( -perm -4000 -fprintf /root/suid.txt '%#m %u %p\n' \) , \
 \( -size +100M -fprintf /root/big.txt '%-10s %p\n' \)

Traverse the filesystem just once, listing setuid files and directories into /root/suid.txt and large files into /root/big.txt.

find $HOME -mtime 0

Search for files in your home directory which have been modified in the last twenty-four hours. This command works this way because the time since each file was last modified is divided by 24 hours and any remainder is discarded. That means that to match -mtime 0, a file will have to have been modified less than 24 hours ago.

find . -perm 664

Search for files which have read and write permission for their owner, and group, but which other users can read but not write to. Files which meet these criteria but have other permissions bits set (for example if someone can execute the file) will not be matched.

find . -perm -664

Search for files which have read and write permission for their owner and group, and which other users can read, without regard to the presence of any extra permission bits (for example the executable bit). This will match a file which has mode 0777, for example.

find . -perm /222

Search for files which are writable by somebody (their owner, or their group, or anybody else).

find . -perm /220
find . -perm /u+w,g+w
find . -perm /u=w,g=w

All three of these commands do the same thing, but the first one uses the octal representation of the file mode, and the other two use the symbolic form. These commands all search for files which are writable by either their owner or their group. The files don't have to be writable by both the owner and group to be matched; either will do.

find . -perm -220
find . -perm -g+w,u+w

Both these commands do the same thing; search for files which are writable by both their owner and their group.

find . -perm -444 -perm /222 ! -perm /111
find . -perm -a+r -perm /a+w ! -perm /a+x

These two commands both search for files that are readable by everybody (-perm -444 or -perm -a+r), have at least one write bit set (-perm /222 or -perm /a+w), but are not executable by anybody (! -perm /111 and ! -perm /a+x respectively).

Tuesday 20 August 2019

halt command in Linux with examples


This command in Linux is used to instruct the hardware to stop all CPU functions. Basically, it halts or reboots the system. If the system is in runlevel 0 or 6, or the command is used with the --force option, the halt or reboot is performed directly; otherwise, a shutdown is invoked.

Syntax:


halt [OPTION]...


Options:


OPTION DESCRIPTION 
-f, --force  Does not invoke shutdown; forces the halt or reboot.
-w, --wtmp-only  Does not call shutdown or the reboot system call, but writes the shutdown record to the /var/log/wtmp file. 
-p, --poweroff  Behaves as poweroff, i.e. switches the machine off. 
--verbose  Gives verbose messages when rebooting, which helps in debugging problems with shutdown. 


Files:


◈ /var/log/wtmp : Contains a new runlevel record for the shutdown time.
◈ /var/run/utmp : Gets updated with a shutdown-time record when the current runlevel is read.

Example 1: To cease all CPU function on the system.

$halt

Output:

Broadcast message from ubuntu@ubuntu
root@ubuntu:/var/log# (/dev/pts/0) at 10:15...
The system is going down for halt NOW.

Example 2: To power off the system using halt command.

$halt -p

Output:

Broadcast message from ubuntu@ubuntu
(/dev/pts/0) at 10:16...
The system is going down for power off NOW.

Example 3: halt command with -w option to write shutdown record.

$halt -w

Note: For this, there will be no output on the screen.

Saturday 17 August 2019

All About Linux and Linux+ (2019 Refresh)

What Is Linux?


In short, Linux is an open-source, UNIX-like operating system created by Linus Torvalds that runs a plethora of different devices today. When you do your online banking or use Google, Facebook or Twitter, you’re talking to Linux servers in the cloud. In fact, nearly all supercomputers and cloud servers run Linux, as does your Android smartphone and many other devices around your home and workplace, such as firewalls and routers. Even my touch-screen refrigerator, home media center, smart thermostat and in-car GPS run Linux.

Open source has been the key to Linux’s success. Software released under an open-source license gives other software developers access to modify the original source code that was used to create the software. This, in turn, allows software developers worldwide to quickly identify and fix bugs and security loopholes, as well as make feature improvements to the software. Consequently, open-source software evolves rapidly, and this is what has transformed Linux into the world’s most flexible and powerful operating system since its conception more than 25 years ago.

Linus Torvalds and his team still develop the core operating system kernel and libraries. However, software developers worldwide develop the additional open-source libraries and software packages used with the Linux kernel. You may obtain different distributions (or distros) of Linux as a result. All Linux distros share the same kernel and libraries, yet have different software packaged with the kernel. There are hundreds of Linux distributions available – some common ones include Red Hat, Fedora, SuSE, Debian, Ubuntu and CentOS. And don’t forget Android!

It’s also important to note that Linux is functionally an open-source UNIX operating system – nearly all of the concepts, commands and files are identical between UNIX and Linux. If you use a Mac computer or iPhone, you are using a flavor of UNIX (macOS and iOS are both UNIX-based operating systems), and many embedded systems and large servers still run UNIX today as well (e.g., BSD UNIX, Solaris, AIX, QNX). As a result, those who administer Linux systems often administer UNIX systems, and vice versa.

Why Should I Get a Linux Certification?


For the past two decades, employers have used certification as a skills benchmark for hiring and advancement in the IT industry. Today, Linux certification provides an important skills benchmark for a wide range of different industries and job roles, as illustrated below. And as these industries and job roles continue to grow, so does the need for skilled Linux users, administrators and developers.


What Is CompTIA Linux+?


Until recently, CompTIA Linux+ comprised two exams covering the same content as the two exams for the Linux Professional Institute's (LPI) LPIC-1 (LPI Level 1 - Linux Administrator).

However, the latest version of CompTIA Linux+ (XK0-004) is no longer reciprocal with LPIC-1. Instead, it's a single exam that tests the fundamental usage and administrative tasks that are common to nearly all Linux distributions and UNIX flavors, but with an added focus on security, troubleshooting, server configuration and cloud technologies to match current industry needs.

Read more about the new LPI

101-500: Linux Administrator - 101 (LPIC-1 101)

102-500: Linux Administrator - 102 (LPIC-1 102)

201-450: Linux Engineer - 201 (LPIC-2 201)

202-450: Linux Engineer - 202 (LPIC-2 202)

Why Should I Get CompTIA Linux+?


1. You get the industry brand recognition that comes with CompTIA. Many IT managers and human resources departments are very familiar with CompTIA certifications – they know that if the certification ends with a + symbol, it’s a good skills benchmark.

2. The added focus on security, troubleshooting, server configuration and cloud computing better aligns to the job roles that require proficiency in those areas, compared to other, similar Linux certifications on the market. 

3. For most jobs involving Linux and/or UNIX, CompTIA Linux+ is the only Linux certification that you will need, as it covers the general administration tasks that most organizations seek when hiring for Linux/UNIX administration positions. Advanced topic areas not tested on Linux+ often involve specialized configuration that is specific to a particular organization and Linux distribution or UNIX flavor. Those who have a working knowledge of the general administration concepts tested on CompTIA Linux+ can easily research and perform these advanced configuration tasks as necessary.

Tuesday 13 August 2019

The Vikings, eBooks, and Open Source


If you were hoping to read a rollicking tale of those Norse seafaring explorers of old, heading out in their dragon-headed ships to explore and conquer distant lands, launching enthusiastically into battle with sword held high above their horned helmets, you may be disappointed. If, however, a story about exploring the cosmos and open document formats are your thing, then you've come to the right place.

In 1976, Viking I and Viking II became the first spacecraft from Earth to successfully land on the planet Mars. Over the next few days and weeks, humanity was gifted with the first high-resolution pictures from the surface of the red planet, a time that I remember with an ongoing combination of awe and nostalgia. Most amazing, and most interesting to the 16 year old me, was that the Viking landers had experiments on board to look for and detect Martian life. Okay, so they were looking for microbial life, as opposed to something more epic like the warring peoples of John Carter's Barsoom, but it was still awesome.

As it turns out, the results of those experiments were inconclusive. Decades passed, and eventually, in the early part of the 21st century, somebody decided to go back and take a look at the data from those original probes, to see if there was something they might have missed in the "Life on Mars" department.

This wasn't a simple task; nothing as easy as opening a document or searching with Google. The problem is that we were looking at really old technology. The data from those missions was captured on microfilm, and I'm going to guess that many of you will have no idea what I'm talking about. As far as anybody was concerned at the time, however, this was the recording technology of the future. In essence, they took the vast collection of paper pages that were printed out, took high-resolution pictures of them on very tiny film, and wound them onto spools that would then be, at some future time, loaded into a microfilm viewer. Now, microfilm viewers are a little hard to come by these days, as you might imagine.

The reason that I mention the Viking landers as my example is because it was a big, audacious, and hugely expensive project where keeping the results of the information you gathered in a way that future generations (or even just the people 40 years down the road) could access it, was probably extremely important.

You may have heard stories of people who wrote documents in the original versions of Microsoft Word, who would then try to open them up, years later, with the newer versions of Microsoft Word, only to find that they were unable to do so. Curiously enough, and of interest to the Open Source enthusiasts among you, products like OpenOffice could read the Word documents that Microsoft could not. But I digress . . . You see, the beautiful thing about microfilm, is that it is a fairly easy technology to reverse-engineer, and scientists and researchers were able to take those microfilm images and digitize them, so that the data I spoke of could then be accessed in our modern, digital age.

Spoiler alert: as it turns out, the data from Viking 1 and Viking 2 is still inconclusive. We may, or may not, have discovered life on Mars, but we can't be one hundred percent sure.

Fast-forward to today. A few months ago, Microsoft announced that they were getting out of the eBook business starting in July; meaning now. If you didn't know that Microsoft had been in the eBook business, you can be forgiven. It also goes a long way to explaining why they are getting out of the eBook business. Unfortunately, for those who bought eBooks from Microsoft, a painful lesson is coming home to roost. Those books will stop working.

To be fair to Microsoft, they are refunding customers for the books they never really owned, even though they paid for them. You might still have a copy of a book, but you can no longer read it. The reason the books stop working is DRM, or Digital Rights Management. DRM allows companies like Microsoft to put digital locks on the things you thought you bought and thought you owned (like those eBooks), with the intention of making them impossible to copy, share, and/or pirate. In fact, it's just hard, not impossible, but that's a story for another time.

We could spend a lot of time here talking about the evils of DRM, of whether we actually own anything we 'buy' electronically, but what I want to talk about is the future. Somewhere, in that future, you may want to read a book you bought, or listen to a song, or watch a movie, or go back to archived data from a spacecraft that landed on another planet decades ago.

Imagine if classic works like "Hamlet" or "Frankenstein" were written in a format that could not be read by modern technology and you start to get the idea. If for no other reason than to "future-proof" those books, music, or data, we need to make sure that no media ever gets recorded in a closed or DRM'ed format.

Open Source licenses, properly executed, work to ensure that the code from which applications are built, is not only freely available in a 'share and share alike' fashion, but also that programs can be extended, modified, or maintained long after the developers have moved on. Open document and image formats, properly executed, work to ensure that future generations, or applications, will be able to read them. Open music formats work to ensure that you and your loved ones can still play your favourite songs in your golden years.

We are often reminded that Open Source is big business these days, a fact that I don't deny, but it's important that we never lose sight of that word, "Open". We, as customers and technologists, must demand that our information, in whatever form it takes, remains open and unencumbered by digital locks.

One parting thought. I am strangely ambivalent on the subject of subscription content. I don't have an issue with Netflix or Spotify, to name two examples, because there's no suggestion of ownership. You don't buy an eBook or a movie from subscription services, and typically customers of these companies go in with their eyes wide open. More or less. I still don't believe that content should be encumbered with DRM, for the reasons I have already mentioned, but when I watch Lucifer on Netflix, no one is suggesting that I now own a copy.

Saturday 10 August 2019

202-450: Linux Engineer - 202 (LPIC-2 202)


LPIC-2 is the second certification in LPI’s multi-level professional certification program. The LPIC-2 validates the candidate's ability to administer small to medium-sized mixed networks. The candidate must have an active LPIC-1 certification to receive LPIC-2 certification, but the LPIC-1 and LPIC-2 exams may be taken in any order.

Current Version: 4.5 (Exam codes 201-450 and 202-450)

Objectives: 201-450, 202-450

Prerequisites: The candidate must have an active LPIC-1 certification to receive LPIC-2 certification, but the LPIC-1 and LPIC-2 exams may be taken in any order

Requirements: Passing exams 201 and 202

Validity Period: 5 years

Languages: English, German, Japanese


To become LPIC-2 certified the candidate must be able to:

◈ perform advanced system administration, including common tasks regarding the Linux kernel, system startup and maintenance;
◈ perform advanced Management of block storage and file systems as well as advanced networking and authentication and system security, including firewall and VPN;
◈ install and configure fundamental network services, including DHCP, DNS,  SSH, Web servers, file servers using FTP, NFS and Samba, email delivery; and
◈ supervise assistants and advise management on automation and purchases.


LPIC-2 Exam 202


Exam Objectives Version: 4.5 (Exam code 202-450).

About Objective Weights: Each objective is assigned a weighting value. The weights indicate the relative importance of each objective on the exam. Objectives with higher weights will be covered in the exam with more questions.

Topic 207: Domain Name Server


207.1 Basic DNS server configuration

Weight: 3

Description: Candidates should be able to configure BIND to function as a caching-only DNS server. This objective includes the ability to manage a running server and configuring logging.

Key Knowledge Areas:

◈ BIND 9.x configuration files, terms and utilities
◈ Defining the location of the BIND zone files in BIND configuration files
◈ Reloading modified configuration and zone files
◈ Awareness of dnsmasq, djbdns and PowerDNS as alternate name servers

The following is a partial list of the used files, terms and utilities:

◈ /etc/named.conf
◈ /var/named/
◈ /usr/sbin/rndc
◈ kill
◈ host
◈ dig
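
As a rough, illustrative sketch only (server names and networks are placeholders, not LPI material): a caching-only BIND server mainly needs recursion enabled and queries restricted, after which the configuration can be checked and reloaded.

options {
    directory "/var/named";
    recursion yes;
    allow-query { localhost; 192.168.1.0/24; };
};

named-checkconf /etc/named.conf     # verify the configuration syntax
rndc reload                         # reload a running server
dig @127.0.0.1 www.example.com A    # test resolution through the cache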

207.2 Create and maintain DNS zones

Weight: 3

Description: Candidates should be able to create a zone file for a forward or reverse zone and hints for root level servers. This objective includes setting appropriate values for records, adding hosts in zones and adding zones to the DNS. A candidate should also be able to delegate zones to another DNS server.

Key Knowledge Areas:

◈ BIND 9 configuration files, terms and utilities
◈ Utilities to request information from the DNS server
◈ Layout, content and file location of the BIND zone files
◈ Various methods to add a new host in the zone files, including reverse zones

Terms and Utilities:

◈ /var/named/
◈ zone file syntax
◈ resource record formats
◈ named-checkzone
◈ named-compilezone
◈ masterfile-format
◈ dig
◈ nslookup
◈ host
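
A minimal forward zone file, assuming the zone example.com and placeholder addresses, might look roughly like this; named-checkzone verifies it before the zone is loaded:

$TTL 86400
@   IN SOA ns1.example.com. hostmaster.example.com. (
        2019081001 ; serial
        3600       ; refresh
        900        ; retry
        604800     ; expire
        86400 )    ; negative caching TTL
    IN NS ns1.example.com.
ns1 IN A  192.0.2.53
www IN A  192.0.2.80

named-checkzone example.com /var/named/example.com.zone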

207.3 Securing a DNS server

Weight: 2

Description: Candidates should be able to configure a DNS server to run as a non-root user and run in a chroot jail. This objective includes secure exchange of data between DNS servers.

Key Knowledge Areas:

◈ BIND 9 configuration files
◈ Configuring BIND to run in a chroot jail
◈ Split configuration of BIND using the forwarders statement
◈ Configuring and using transaction signatures (TSIG)
◈ Awareness of DNSSEC and basic tools
◈ Awareness of DANE and related records

Terms and Utilities:

◈ /etc/named.conf
◈ /etc/passwd
◈ DNSSEC
◈ dnssec-keygen
◈ dnssec-signzone

Topic 208: Web Services


208.1 Implementing a web server

Weight: 4

Description: Candidates should be able to install and configure a web server. This objective includes monitoring the server’s load and performance, restricting client user access, configuring support for scripting languages as modules and setting up client user authentication. Also included is configuring server options to restrict usage of resources. Candidates should be able to configure a web server to use virtual hosts and customize file access.

Key Knowledge Areas:

◈ Apache 2.4 configuration files, terms and utilities
◈ Apache log files configuration and content
◈ Access restriction methods and files
◈ mod_perl and PHP configuration
◈ Client user authentication files and utilities
◈ Configuration of maximum requests, minimum and maximum servers and clients
◈ Apache 2.4 virtual host implementation (with and without dedicated IP addresses)
◈ Using redirect statements in Apache’s configuration files to customize file access

Terms and Utilities:

◈ access logs and error logs
◈ .htaccess
◈ httpd.conf
◈ mod_auth_basic, mod_authz_host and mod_access_compat
◈ htpasswd
◈ AuthUserFile, AuthGroupFile
◈ apachectl, apache2ctl
◈ httpd, apache2
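
A hedged Apache 2.4 sketch (host names, paths and user names are placeholders) showing a name-based virtual host with basic authentication on one directory:

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
    <Directory /var/www/example/private>
        AuthType Basic
        AuthName "Restricted"
        AuthUserFile /etc/httpd/passwords
        Require valid-user
    </Directory>
</VirtualHost>

htpasswd -c /etc/httpd/passwords alice   # create the password file and the first user
apachectl configtest && apachectl graceful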

208.2 Apache configuration for HTTPS

Weight: 3

Description: Candidates should be able to configure a web server to provide HTTPS.

Key Knowledge Areas:

◈ SSL configuration files, tools and utilities
◈ Generate a server private key and CSR for a commercial CA
◈ Generate a self-signed Certificate
◈ Install the key and certificate, including intermediate CAs
◈ Configure Virtual Hosting using SNI
◈ Awareness of the issues with Virtual Hosting and use of SSL
◈ Security issues in SSL use, disable insecure protocols and ciphers

Terms and Utilities:

◈ Apache2 configuration files
◈ /etc/ssl/, /etc/pki/
◈ openssl, CA.pl
◈ SSLEngine, SSLCertificateKeyFile, SSLCertificateFile
◈ SSLCACertificateFile, SSLCACertificatePath
◈ SSLProtocol, SSLCipherSuite, ServerTokens, ServerSignature, TraceEnable
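
An illustrative sketch only (file locations are placeholders): generate a key and CSR (or a self-signed certificate) with openssl, then point the virtual host at them.

openssl genrsa -out /etc/ssl/private/example.key 2048
openssl req -new -key /etc/ssl/private/example.key -out /etc/ssl/example.csr
openssl req -new -x509 -key /etc/ssl/private/example.key -days 365 -out /etc/ssl/certs/example.crt   # self-signed alternative

<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key
    SSLProtocol all -SSLv3
</VirtualHost>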

208.3 Implementing a proxy server

Weight: 2

Description: Candidates should be able to install and configure a proxy server, including access policies, authentication and resource usage.

Key Knowledge Areas:

◈ Squid 3.x configuration files, terms and utilities
◈ Access restriction methods
◈ Client user authentication methods
◈ Layout and content of ACL in the Squid configuration files

Terms and Utilities:

◈ squid.conf
◈ acl
◈ http_access
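
A hedged squid.conf fragment (the network range is a placeholder) restricting the proxy to one local network:

http_port 3128
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all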

208.4 Implementing Nginx as a web server and a reverse proxy

Weight: 2

Description: Candidates should be able to install and configure a reverse proxy server, Nginx. Basic configuration of Nginx as an HTTP server is included.

Key Knowledge Areas:

◈ Nginx
◈ Reverse Proxy
◈ Basic Web Server

Terms and Utilities:

◈ /etc/nginx/
◈ nginx
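
A minimal, illustrative Nginx server block acting as a reverse proxy for a backend on port 8080 (names and addresses are placeholders):

server {
    listen 80;
    server_name www.example.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}

nginx -t && nginx -s reload   # test the configuration, then reload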

Topic 209: File Sharing


209.1 SAMBA Server Configuration

Weight: 5

Description: Candidates should be able to set up a Samba server for various clients. This objective includes setting up Samba as a standalone server as well as integrating Samba as a member in an Active Directory. Furthermore, the configuration of simple CIFS and printer shares is covered. Also covered is configuring a Linux client to use a Samba server. Troubleshooting installations is also tested.

Key Knowledge Areas:

◈ Samba 4 documentation
◈ Samba 4 configuration files
◈ Samba 4 tools and utilities and daemons
◈ Mounting CIFS shares on Linux
◈ Mapping Windows user names to Linux user names
◈ User-Level, Share-Level and AD security

Terms and Utilities:

◈ smbd, nmbd, winbindd
◈ smbcontrol, smbstatus, testparm, smbpasswd, nmblookup
◈ samba-tool
◈ net
◈ smbclient
◈ mount.cifs
◈ /etc/samba/
◈ /var/log/samba/
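
A hedged smb.conf share definition plus the matching client-side commands (the share name, group, host and paths are placeholders):

[projects]
    path = /srv/samba/projects
    read only = no
    valid users = @staff

testparm                                     # check smb.conf syntax
smbclient //fileserver/projects -U alice     # browse the share interactively
mount -t cifs //fileserver/projects /mnt/projects -o username=alice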

209.2 NFS Server Configuration

Weight: 3

Description: Candidates should be able to export filesystems using NFS. This objective includes access restrictions, mounting an NFS filesystem on a client and securing NFS.

Key Knowledge Areas:

◈ NFS version 3 configuration files
◈ NFS tools and utilities
◈ Access restrictions to certain hosts and/or subnets
◈ Mount options on server and client
◈ TCP Wrappers
◈ Awareness of NFSv4

Terms and Utilities:

◈ /etc/exports
◈ exportfs
◈ showmount
◈ nfsstat
◈ /proc/mounts
◈ /etc/fstab
◈ rpcinfo
◈ mountd
◈ portmapper
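
An illustrative /etc/exports entry and the matching commands (the export path, host name and subnet are placeholders):

/srv/export 192.168.1.0/24(rw,sync,no_subtree_check)

exportfs -ra                        # (re)export everything in /etc/exports
showmount -e nfsserver              # list exports from a client
mount -t nfs nfsserver:/srv/export /mnt/export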

Topic 210: Network Client Management


210.1 DHCP configuration

Weight: 2

Description: Candidates should be able to configure a DHCP server. This objective includes setting default and per client options, adding static hosts and BOOTP hosts. Also included is configuring a DHCP relay agent and maintaining the DHCP server.

Key Knowledge Areas:

◈ DHCP configuration files, terms and utilities
◈ Subnet and dynamically-allocated range setup
◈ Awareness of DHCPv6 and IPv6 Router Advertisements

Terms and Utilities:

◈ dhcpd.conf
◈ dhcpd.leases
◈ DHCP Log messages in syslog or systemd journal
◈ arp
◈ dhcpd
◈ radvd
◈ radvd.conf
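
A hedged dhcpd.conf fragment (addresses and the MAC address are placeholders) with a dynamic range and one static host:

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    option routers 192.168.1.1;
    option domain-name-servers 192.168.1.53;
    default-lease-time 600;
    max-lease-time 7200;
}

host printer {
    hardware ethernet 08:00:27:aa:bb:cc;
    fixed-address 192.168.1.10;
}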

210.2 PAM authentication

Weight: 3

Description: The candidate should be able to configure PAM to support authentication using various available methods. This includes basic SSSD functionality.

Key Knowledge Areas:

◈ PAM configuration files, terms and utilities
◈ passwd and shadow passwords
◈ Use sssd for LDAP authentication

Terms and Utilities:

◈ /etc/pam.d/
◈ pam.conf
◈ nsswitch.conf
◈ pam_unix, pam_cracklib, pam_limits, pam_listfile, pam_sss
◈ sssd.conf
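
A hedged sketch of a PAM auth stack that tries local accounts first and then SSSD/LDAP (module options and file names vary by distribution):

auth    sufficient    pam_unix.so try_first_pass
auth    sufficient    pam_sss.so  use_first_pass
auth    required      pam_deny.so

# and in nsswitch.conf:
passwd: files sss
group:  files sss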

210.3 LDAP client usage

Weight: 2

Description: Candidates should be able to perform queries and updates to an LDAP server. Also included is importing and adding items, as well as adding and managing users.

Key Knowledge Areas:

◈ LDAP utilities for data management and queries
◈ Change user passwords
◈ Querying the LDAP directory

Terms and Utilities:

◈ ldapsearch
◈ ldappasswd
◈ ldapadd
◈ ldapdelete
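
Illustrative client commands only (the base DN, admin DN, server and user are placeholders):

ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" "(uid=alice)"
ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f newuser.ldif
ldappasswd -x -D "cn=admin,dc=example,dc=com" -W -S "uid=alice,ou=people,dc=example,dc=com"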

210.4 Configuring an OpenLDAP server

Weight: 4

Description: Candidates should be able to configure a basic OpenLDAP server including knowledge of LDIF format and essential access controls.

Key Knowledge Areas:

◈ OpenLDAP
◈ Directory based configuration
◈ Access Control
◈ Distinguished Names
◈ Changetype Operations
◈ Schemas and Whitepages
◈ Directories
◈ Object IDs, Attributes and Classes

Terms and Utilities:

◈ slapd
◈ slapd-config
◈ LDIF
◈ slapadd
◈ slapcat
◈ slapindex
◈ /var/lib/ldap/
◈ loglevel
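
A hedged LDIF entry (the DN and attribute values are placeholders); slapcat can dump the directory back to LDIF for backup:

dn: uid=alice,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
cn: Alice Example
sn: Example
uid: alice

slapcat -l backup.ldif   # dump the directory contents to an LDIF file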

Topic 211: E-Mail Services


211.1 Using e-mail servers

Weight: 4

Description: Candidates should be able to manage an e-mail server, including the configuration of e-mail aliases, e-mail quotas and virtual e-mail domains. This objective includes configuring internal e-mail relays and monitoring e-mail servers.

Key Knowledge Areas:

◈ Configuration files for postfix
◈ Basic TLS configuration for postfix
◈ Basic knowledge of the SMTP protocol
◈ Awareness of sendmail and exim

Terms and Utilities:

◈ Configuration files and commands for postfix
◈ /etc/postfix/
◈ /var/spool/postfix/
◈ sendmail emulation layer commands
◈ /etc/aliases
◈ mail-related logs in /var/log/
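
A hedged Postfix sketch (host names and certificate paths are placeholders) using postconf to set the basics and reload the service:

postconf -e "myhostname = mail.example.com"
postconf -e "mydestination = example.com, localhost"
postconf -e "relayhost = [smtp.provider.example]"
postconf -e "smtpd_tls_cert_file = /etc/ssl/certs/mail.crt"
postconf -e "smtpd_tls_key_file = /etc/ssl/private/mail.key"
newaliases                      # rebuild the alias database after editing /etc/aliases
systemctl reload postfix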

211.2 Managing E-Mail Delivery

Weight: 2

Description: Candidates should be able to implement client e-mail management software to filter, sort and monitor incoming user e-mail.

Key Knowledge Areas:

◈ Understanding of Sieve functionality, syntax and operators
◈ Use Sieve to filter and sort mail with respect to sender, recipient(s), headers and size
◈ Awareness of procmail

Terms and Utilities:

◈ Conditions and comparison operators
◈ keep, fileinto, redirect, reject, discard, stop
◈ Dovecot vacation extension
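
An illustrative Sieve script (the folder name and subject tag are placeholders) that files, rejects and keeps mail based on headers and size:

require ["fileinto", "reject"];

if header :contains "subject" "[urgent]" {
    fileinto "Urgent";
} elsif size :over 5M {
    reject "Message too large, please use a file share.";
} else {
    keep;
}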

211.3 Managing Remote E-Mail Delivery

Weight: 2

Description: Candidates should be able to install and configure POP and IMAP daemons.

Key Knowledge Areas:

◈ Dovecot IMAP and POP3 configuration and administration
◈ Basic TLS configuration for Dovecot
◈ Awareness of Courier

Terms and Utilities:

◈ /etc/dovecot/
◈ dovecot.conf
◈ doveconf
◈ doveadm
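
A hedged fragment of a Dovecot configuration (certificate paths are placeholders), verified and inspected with its own tools:

protocols = imap pop3
ssl = required
ssl_cert = </etc/ssl/certs/mail.crt
ssl_key = </etc/ssl/private/mail.key

doveconf -n      # show the non-default settings actually in effect
doveadm who      # list currently logged-in users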

Topic 212: System Security


212.1 Configuring a router

Weight: 3

Description: Candidates should be able to configure a system to forward IP packets and perform network address translation (NAT, IP masquerading) and state its significance in protecting a network. This objective includes configuring port redirection, managing filter rules and averting attacks.

Key Knowledge Areas:

◈ iptables and ip6tables configuration files, tools and utilities
◈ Tools, commands and utilities to manage routing tables.
◈ Private address ranges (IPv4) and Unique Local Addresses as well as Link Local Addresses (IPv6)
◈ Port redirection and IP forwarding
◈ List and write filtering rules that accept or block IP packets based on source or destination protocol, port and address
◈ Save and reload filtering configurations

Terms and Utilities:

◈ /proc/sys/net/ipv4/
◈ /proc/sys/net/ipv6/
◈ /etc/services
◈ iptables
◈ ip6tables
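
An illustrative sketch of NAT/masquerading for a small LAN (interface names and the subnet are placeholders):

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables-save > /etc/iptables.rules      # save the rules so they can be reloaded at boot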

212.2 Securing FTP servers

Weight: 2

Description: Candidates should be able to configure an FTP server for anonymous downloads and uploads. This objective includes precautions to be taken if anonymous uploads are permitted and configuring user access.

Key Knowledge Areas:

◈ Configuration files, tools and utilities for Pure-FTPd and vsftpd
◈ Awareness of ProFTPd
◈ Understanding of passive vs. active FTP connections

Terms and Utilities:

◈ vsftpd.conf
◈ important Pure-FTPd command line options
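
A hedged set of vsftpd.conf options (values are placeholders) allowing anonymous downloads but not uploads, with local users chrooted and a fixed passive port range:

anonymous_enable=YES
anon_upload_enable=NO
local_enable=YES
write_enable=YES
chroot_local_user=YES
pasv_min_port=40000
pasv_max_port=40100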

212.3 Secure shell (SSH)

Weight: 4

Description: Candidates should be able to configure and secure an SSH daemon. This objective includes managing keys and configuring SSH for users. Candidates should also be able to forward an application protocol over SSH and manage the SSH login.

Key Knowledge Areas:

◈ OpenSSH configuration files, tools and utilities
◈ Login restrictions for the superuser and the normal users
◈ Managing and using server and client keys to login with and without password
◈ Usage of multiple connections from multiple hosts to guard against loss of connection to remote host following configuration changes

Terms and Utilities:

◈ ssh
◈ sshd
◈ /etc/ssh/sshd_config
◈ /etc/ssh/
◈ Private and public key files
◈ PermitRootLogin, PubKeyAuthentication, AllowUsers, PasswordAuthentication, Protocol
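
An illustrative hardening sketch for /etc/ssh/sshd_config plus key-based login (user and host names are placeholders):

PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers alice bob

ssh-keygen -t ed25519                      # generate a client key pair
ssh-copy-id alice@server.example.com       # install the public key on the server
ssh -L 8080:localhost:80 alice@server.example.com   # forward a local port over SSH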

212.4 Security tasks

Weight: 3

Description: Candidates should be able to receive security alerts from various sources, install, configure and run intrusion detection systems and apply security patches and bugfixes.

Key Knowledge Areas:

◈ Tools and utilities to scan and test ports on a server
◈ Locations and organizations that report security alerts as Bugtraq, CERT or other sources
◈ Tools and utilities to implement an intrusion detection system (IDS)
◈ Awareness of OpenVAS and Snort

Terms and Utilities:

◈ telnet
◈ nmap
◈ fail2ban
◈ nc
◈ iptables
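
Illustrative checks only (host names are placeholders, and the fail2ban jail name assumes one has been configured):

nmap -sT -p 1-1024 server.example.com    # TCP connect scan of the privileged ports
nc -zv server.example.com 22             # quick check of a single port
fail2ban-client status sshd              # show the status of the sshd jail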

212.5 OpenVPN

Weight: 2

Description: Candidates should be able to configure a VPN (Virtual Private Network) and create secure point-to-point or site-to-site connections.

Key Knowledge Areas:

◈ OpenVPN

Terms and Utilities:

◈ /etc/openvpn/
◈ openvpn
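
A hedged sketch of the classic static-key point-to-point setup described in the OpenVPN documentation (host names, addresses and key paths are placeholders; newer releases favour certificate-based setups):

openvpn --genkey --secret /etc/openvpn/static.key    # generate the shared key once, copy it to both ends
# on the server:
openvpn --dev tun --ifconfig 10.8.0.1 10.8.0.2 --secret /etc/openvpn/static.key
# on the client:
openvpn --remote vpn.example.com --dev tun --ifconfig 10.8.0.2 10.8.0.1 --secret /etc/openvpn/static.key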