Thursday 31 October 2019

Difference between Linux and Unix – Linux vs Unix


Before Windows and Linux arrived, Unix had a heavy influence on the computing world. In fact, the Linux operating system is a clone of the Unix operating system. With the latest Linux trends, the growth of, and demand for, the Linux operating system have increased considerably. So, in this article, we will see how Linux differs from Unix, i.e. Linux vs Unix. We will compare Linux and Unix on various grounds.

It all began in 1969, when AT&T developed UNIX, the first portable operating system. Rewritten in the C language, UNIX could be modified quickly and ported to several platforms. The project ran successfully under the leadership of Ken Thompson, and UNIX became the most widely used OS of its time. Most of today's UNIX variants are licensed versions, each with its own unique features; Sun's Solaris and IBM's AIX are among the most widely used variants of UNIX.

Linux was built by Linus Torvalds in 1991 to provide a free alternative to UNIX. The GNU project, started in 1983, aimed to provide a completely free OS, and Linux filled that gap. Unlike UNIX, Linux can run on many platforms that UNIX could not. The two share a common foundation but have different tools and utilities.

A Comparison of Linux and Unix – Linux vs Unix


People are often confused about Linux and Unix, and a number of questions hover in their minds: Are Linux and Unix the same? Is Linux different from Unix? Is Linux like Unix? In short, Linux is a Unix-like operating system with some modifications to the Unix design. There are a number of differences between Unix and Linux, so let's dive deep into the features of both to understand the difference between Linux and Unix.

Before moving forward, you can have a quick look at the difference table below:

Differences between Linux and Unix


Category | Linux | Unix
Cost | Freely available, as it is open-source software. | Priced differently for each Unix OS, depending on the vendor.
Development | Open source; several communities of developers collaborate on a single platform to develop Linux. | Developed by AT&T and some other commercial developers.
Users | A wide range of users, from end users to developers. | Used mainly on servers and large workstations.
Text Mode Interface | The default shell is Bash; multiple command interpreters are also supported. | The default shell is the Bourne shell; it does not support all software, though compatibility is improving over time.
GUI | Two main GUIs, KDE and GNOME, with alternatives such as MATE and LXDE. | The Common Desktop Environment, and also GNOME.
Usage | Can be installed on computers, mobile phones, and tablets. | Runs mainly on internet servers, large workstations, and some personal computers.
Portability | Linux is portable. | Unix is not portable.
Versions | Some common Linux distributions are Ubuntu, Red Hat, Debian, and openSUSE. | Some common Unix versions are AIX, HP-UX, and BSD.
Source Code | Available to the general public. | Not available to the public.
Threat Detection | Threat detection and resolution are fast, as Linux is mainly community-driven. | Users have to wait longer for proper bug fixes.

1. Linux vs Unix – User Interface


Linux has a more interactive user interface, so developers used to working on Linux will find it difficult to work on commercial UNIX systems. It is easy to install a sound card, flash player, and other utilities in Linux. Apple's OS X is an example of UNIX where it is difficult to install third-party applications. Both are multi-user operating systems; UNIX was based on the command-line interface and Linux on the graphical user interface, but all modern UNIX systems now support a GUI as well as a CLI.

2. Linux vs Unix – Usage & Operations


Linux is used for small to medium-sized operations, an area where UNIX was previously the only option. Most software vendors have moved to Linux because it is open software, freely distributed, and preferred for web services and office operations.

In most places Linux is used, but there are times when UNIX has the advantage: in enterprises that run massive symmetric multiprocessor systems, UNIX is a great choice for handling operations. Times have changed, though; since 2011, Linux has powered about 90% of the top 500 supercomputers.

3. Linux vs Unix – Basic Features


Linux is a kernel, while Unix is a standard. The two operating systems differ in a number of features, some of which are listed below.

UNIX Features:

◈ It is a multi-user and multi-tasking OS.
◈ On servers and workstations, UNIX is used as the master control program.
◈ Since it was built first, there are several commercial applications for it on the market.

Linux Features:

◈ It is a multi-tasking OS and also supports multi-user programs.
◈ One program can have more than one process, and each process can have more than one thread.
◈ You can install Linux alongside another OS on the same machine, and both will run smoothly.
◈ Each user has an authorized account, so individual accounts are kept secure.

4. Linux vs Unix – Security


No OS is fully secure, but comparing Unix and Linux, we see that Linux is far more responsive in dealing with bugs and threats. Both have the same characteristics, like proper segmentation of domains in a multi-user environment and a password system with encryption. Open source software has the advantage that, being freely available, it gets many eyes on it, which makes it more resilient against bugs: when a developer sees a bug in the software, they can report it to the developers' forum. In the case of Unix, the system is not open software, so it has limitations here and is far more exposed to threats.

5. Linux vs Unix – Hardware Architecture


If we look at the commercial versions of Unix, most of them support their own individual hardware. For example, HP-UX supports only PA-RISC and Itanium machines, while Solaris runs on SPARC and x86 processors. These constraints are also the reason Unix vendors have an advantage: they can optimize their code and drivers for the specific hardware.

In the case of Linux, this is not so. Linux has been written to support the maximum number of machines; many platforms and machines can run Linux, with support for a wide range of I/O devices. Here the developers do not know in advance which system the software will be installed on, so they cannot optimize the code for specific hardware.

6. Linux vs Unix – Kernel


The processes involved in patching and compiling differ between Linux and Unix. In Linux, a patch can be released in a public forum, and end users have the ability to install it on their machines; the patch can even be edited and modified by the end user. As many environments support Linux applications, developers can depend on many eyes to spot errors and threats.

Commercial Unix vendors release their kernels only in binary form. If an update has to be installed, the administrator has to wait for the vendor to release the patch in binary form.

7. Linux vs Unix – File System Support


There is a plethora of filesystems supported by Linux, whereas Unix supports a smaller number. Below are some of the filesystems supported by each OS.

Linux – JFS, XFS, Btrfs, ext2, ext3, ext4, FAT, FAT32, NTFS, devpts, etc.

Unix – UFS, XFS, ZFS, JFS, HFS+, HFS, etc.
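On a Linux machine you can see exactly which filesystem types the running kernel currently knows about by reading /proc/filesystems; the sample output below is only illustrative and will vary from system to system.

cat /proc/filesystems
nodev   sysfs
nodev   proc
nodev   tmpfs
        ext3
        ext4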

8. Linux vs Unix – Availability of Applications


As mentioned above, Linux is a clone of Unix, so many applications are the same on both operating systems; some common commands are cp, ls, vi, and cc. Linux ships the GNU versions of these tools, whereas Unix is based on the original tools, but do not be confused: several Unix vendors use GNU tools in their installations as well. Most vendors supply these tools as pre-compiled packages that are installed by default or come as an optional component.

All Linux distributions come with a set of open source applications, and several others are freely available to developers and end users. Many of these applications have also been ported to Unix and are available on commercial versions of Unix.

9. Linux vs Unix – Limitations


While discussing the difference between Unix and Linux, it is important to cover the limitations of both operating systems. Here are the limitations of Linux and Unix.

Limitations of Linux

◈ Driver support is patchier, and a faulty driver can cause the whole system to malfunction.
◈ It is not as easy to use as the Windows OS.
◈ Several programs that run on Windows will only run on Linux through a complicated emulator.
◈ Linux is used mostly in the corporate world; home users can certainly use it, but comparatively few do.

Limitations of Unix

◈ It has a complicated user interface.
◈ Unix requires a high-performance machine; it runs slowly on an ordinary machine or PC.
◈ The shell interface is unforgiving, as a simple typing error can leave the whole program unresponsive.
◈ It does not support real-time response.

10. Linux vs Unix – Support


All Unix versions are paid, while Linux versions are free to use. This also gives Unix one extra feature: anyone who purchases Unix gets commercial support. In the case of Linux, we have several open forums where users can ask questions and work out a better solution. Linux is arguably more responsive here; several end users have claimed that the forums respond faster than Unix's commercial technical support.

Final Words


There is a huge market for Linux, and it is expected to keep growing thanks to Linux's rich feature set. According to International Data Corp. (IDC), there are more than 25 million Linux machines, compared with only 5.5 million Unix machines. The comparison above should be helpful for understanding the difference between the Unix and Linux operating systems.


Linux is popular among developers and end users due to its embedded technology and open user interface. Unix is also competing with Linux, and Unix vendors such as HP, IBM, and Sun have come up with graphical, user-friendly interfaces that are also compatible with Linux.

Thursday 24 October 2019

Top 8 LPI Certification Books for the Linux Professional


If you would like to be confident and prepared for anything in your career, check out the valuable books in this list of the top 8 LPI certification books for the Linux professional.

You won’t miss anything with any of these winner guide books in hand or under your pillow!

1. LPIC-1/CompTIA Linux+ Certification All-in-One Exam Guide (Exams LPIC-1/LX0-101 & LX0-102) by Robb Tracy


One of the best-written books on information technology you will have read in a while, this LPI certification book is awesome for anyone who is new to Linux administration. It also works as a refresher for advanced users on information they might have forgotten. It's easy to navigate, and the author also makes it very easy to use for those open-book tests.

2. CompTIA Linux+ Complete Study Guide Authorized Courseware: Exams LX0-101 and LX0-102 by Roderick W. Smith


This LPI certification book covers all the needed materials for passing the exams. The information discussed is accurate and covers all the topics nicely. For optimum test results, use every command line tool indicated in this book more than once and explore their command line options prior to taking the exam.

3. LPIC-2 Linux Professional Institute Certification Study Guide: Exams 201 and 202 by Roderick W. Smith


After completing this LPI certification book, you'll end up wanting to thank the author. Not at all boring, it remains useful and interesting throughout all chapters. It's well divided into two parts, sufficient to pass the exams, covering LPIC 201 and LPIC 202. The questions in the book are also very similar to those in the exam.

4. LPI Linux Certification in a Nutshell by Adam Haeder, Stephen Addison Schneiter, Bruno Gomes Pessanha and James Stanger


This LPI certification book is really concise and helpful. If you use Linux in your profession, this will help sharpen your skills. This book will completely prepare you for the certification test. It is also a great tool for system administration and to use as a reference for anyone desiring to become a system administrator.

5. LPIC-1: Linux Professional Institute Certification Study Guide: (Exams 101 and 102) by Roderick W. Smith


Only a true master such as Roderick Smith can dominate all the topics included in the exam and explain them with perfect clarity, using real-world scenarios and tips. The practice exams at the end of each chapter are also excellent for testing your knowledge. The bonus exams are definitely a must for passing the actual exams.

6. LPIC-1 In Depth by Michael Jang


This book showcases material in a good, in-depth fashion. Michael Jang’s LPI certification book explores the LPIC-1 exams and provides almost 500 practice questions to offer the latest test prep guide available for both LPI Level 1 exams 101 and 102. It’s a must for anyone taking the exams, with key terms, chapter summaries, review questions and others for multiple learning paths.

7. LPIC I Exam Cram 2: Linux Professional Institute Certification Exams 101 and 102 by Ross Brunson


A review of this particular LPI certification book cannot reflect how superbly it prepares you for the LPIC exam. The beauty of it is its conciseness and simplicity. Without getting excessively drawn out and wordy, it tells you exactly what you need to know through crystal-clear explanations presented superbly.

8. LPIC-1: Linux Professional Institute Certification Study Guide (Level 1 Exams 101 and 102) by Roderick W. Smith


This is the best LPI certification book to prepare for LPIC-1. It follows LPIC-1 objectives closely, is well-written, and manages to keep your interest. It is also a great reference for basic Linux administration. The author not only includes the objectives of the exam, but gives enough background for you to fill the holes.

Tuesday 22 October 2019

LPI 1: File Management Part 2


There will come times when you need to search for files in a directory, knowing that their names contain specific characters you can use to find them easily. Linux provides wildcards for such situations so that you can match the files easily and quickly. For example, suppose you know there is a file in the current directory whose name ends with the word 'fix'; to find that file fast, wildcards can be of huge benefit to you. Linux provides the following wildcards:

Asterisk (*)

The asterisk matches any number of characters, including none. As an example, l*k will match look, luck, lawl345essk, and many more file names beginning with l and ending with k.

Question mark (?)

While the asterisk matches many characters, the question mark matches exactly one character in its position. For example, l??k matches look, luck, and any other four-letter name beginning with l and ending with k. It is worth mentioning that the two middle characters need not be lowercase, so even lBGk can be matched using the question marks.

Brackets

Then there is the bracketed group, which is just as interesting. Characters can be enclosed in square brackets [ ], and the pattern will match any one of the characters in that set. For example, l[oc]k matches lok and lck. Bracketed groups can also be combined, as in l[uk]g[oi]p, which matches lugop, lugip, lkgop, and lkgip. This group also allows you to specify a range, for example a range of letters in the alphabet. For instance, l[a-z]k matches lak, lbk, lck, ldk, and any other three-letter name whose second letter is a lowercase letter.

What we have learnt above can be used at the command prompt, for example to list only the files matching the letters you specify.

A good example is searching for files beginning with l, or files ending with the .docx extension.

You just do:

# ls l*
# ls *.docx

Other examples are

# ls l??k 

The above will bring up files like look, luck and others if they exist in the directory being probed
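To try the bracketed set in the same way, here is a quick sketch; the file names are invented purely for illustration:

# touch lck lok lugop lkgip
# ls l[oc]k
lck  lok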

Sunday 20 October 2019

LPI 1: File Management Part 1

File Management


When all is said and done, Linux is a collection of files stored on your hard disk. For this reason, it is very important for anyone striving to be a Linux system administrator to know how to manage these files. Being an operating system that can be used by many users, Linux has tools that enable you to control who may access which files in the system. Let us cruise through this file management study, shall we?


Commands used to manage files


The administrator of a Linux system must know how to create, delete, rename, move, share, and archive files, and perform other manipulations on them. Before all that, there are rules that govern how files should be named, so we shall spend a small section next demystifying the quirks of file naming.

How to name the Files


You can name files in Linux using uppercase letters, lowercase letters, control characters, numbers, and punctuation. Unlike Windows, Linux is case-sensitive, so files such as admin.txt, Admin.txt, and ADMIN.TXT are three different files; if you have used Windows, you will know that there those file names would all refer to the same file. For better manageability, it is advised to stick to the following non-alphanumeric characters in your file names: the dot (.), the underscore (_), and the dash (-). In the same breath, there are some characters that should not be used in file names because they have special meanings to the Linux system. Files can have them, but it is not good practice. These include the following:

The Asterisk (*)
The backslash (\)
The forward slash (/)
The question mark (?)
And the quotation mark (“)

The file name extension convention is similar to other OSes', following a single dot, though Linux file names can have any number of dots. In your exploration, you will discover that there are files whose names begin with a dot. These files have a special name, and you can probably guess it. Dot files, bingo! You got it, admin. Dot files have a unique characteristic: they are usually hidden unless explicitly made visible with a special command, as shown below. They typically store configuration files in the home folder.
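For example, you can reveal the dot files in a directory by passing the -a (all) option to ls; the listing below is only an illustrative sample, as your home folder will contain different files:

# ls -a ~
.  ..  .bashrc  .profile  Documents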

There are two file names that we cannot afford to forget mentioning: the name consisting of a single dot (.) and the one with two dots (..). The former refers to the directory you are currently in, while the latter refers to the parent of the directory you are in. To describe it well, let us use an example.
If you are in the /home/admin directory, a single dot . refers to /home/admin, while two dots .. refer to /home. So if you are at /usr/share/fonts and you wish to be at /usr, you just do

# cd ../..

Saturday 19 October 2019

LPI 1 - Introduction to LPIC-1 and Linux


In our very first article, we had a little dive into the Linux world by making a few introductions that are imperative for every new user or enthusiast. In this second article, we are going to learn some cool stuff that will let you get a hands-on feel for your distribution.

Tricks on the shell

Command-completion

If you are one of those, like me, who find it tedious to type a whole command into the shell, then command completion is here to rescue you from the tedium. It is pretty simple: once you have typed the first few letters of a command, just hit the Tab key and a number of options will be offered for you to accept; if exactly one command matches, it will be completed for you automatically. Command completion comes in especially handy when you are trying to get into a specific directory or reach a file with a long name.
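As a rough sketch of the behaviour (the path is only an example), typing a partial name and pressing Tab expands it for you:

# cd /usr/sh<Tab>
(the shell completes this to: cd /usr/share/)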

Getting into the basics of commands


Internal and external commands

Internal commands


Internal commands are the ones inherent to, or built into, the shell. The shells discussed usually offer similar internal commands, though, as might be expected, there are a few differences here and there. Most of the internal commands enable you to perform common activities within the shell, such as the following examples:

1. Displaying some text

# echo text

2. Changing from one directory to another

# cd /home/

3. Opening an application

# exec vlc

4. Closing the shell

# exit

or

# logout

5. Timing an operation

# time pwd

6. Displaying the directory you are currently in

# pwd

External commands


These are commands that are not built into the shell but can be executed by the shell. To find the program to run, the shell searches the directories listed in the PATH environment variable, which holds the list of directories where commands can be located. Most commands are external commands, and their documentation is usually provided by a man page. You just type:

# man command

And the documentation of the command is provided.
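If you are unsure whether a particular command is internal or external, the shell's built-in type command will tell you; the exact path shown for an external command may differ on your system:

# type cd
cd is a shell builtin
# type man
man is /usr/bin/man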

Thursday 17 October 2019

Best LPIC-1 and LPIC-2 certification study books 2019


Becoming proficient in administering Linux boxes opens up a plethora of other opportunities, such as DevOps, cloud computing, and system administration, to mention a few, but a lot of work needs to be put into mastering Linux together with other tools. In the quest to master this niche, this article generously provides resources, in the form of books, that will propel you in the right direction. You will not only be able to pass your LPI exams, but you will be left with something much more: the power to administer systems. If that sounds great, please stick with me all through.

About LPIC-1 and LPIC-2 Certification


Linux Professional Institute (LPI) is the global certification standard and career support organization for open source professionals. LPIC-1 is the first certification in LPI’s multi-level Linux professional certification program. The LPIC-1 will validate the candidate’s ability to perform maintenance tasks on the command line, install and configure a computer running Linux and configure basic networking.

On the other hand, LPIC-2 is the second certification in LPI’s multi-level professional certification program. The LPIC-2 will validate the candidate’s ability to administer small to medium–sized mixed networks (lpi.org, 2019).

Best LPIC-1 Certification Study Books


Below is a list of the best LPIC-1 certification study books.

1. LPIC-1: Linux Professional Institute Certification Study Guide: Exams 101 and 102 3rd Edition by Roderick W. Smith


Drawing on the author's extensive, in-depth experience, this practical book covers key Linux administration topics and all exam objectives, and includes real-world examples and review questions to help you practice your skills. In addition, you'll gain access to a full set of online study tools, including bonus practice exams, electronic flashcards, and more. This book is quite suitable for you because it:

◈ Prepares candidates to take the Linux Professional Institute exams 101 and 102 and achieve their LPIC-1 certification

◈ Covers all exam objectives and features expanded coverage on key topics in the exam

◈ Includes real-world scenarios, and challenging review questions

◈ Gives you online access to bonus practice exams, electronic flashcards, and a searchable glossary

◈ Topics include system architecture, installation, GNU and Unix commands, Linux filesystems, essential system services, networking fundamentals, security, and more

To get your LPIC-1 exam right and have a good understanding of Linux, get your copy by clicking on the link below:

LPIC-1: Linux Professional Institute Certification Study Guide: Exams 101 and 102

2. Linux Command Line and Shell Scripting Bible, 3rd Edition


This book by Richard Blum serves as a basic and very essential Linux resource that guides you with plenty of examples. Linux Command Line and Shell Scripting Bible goes right into the fundamentals of the command line, introduces you to bash scripting, which will be very important in your day-to-day Linux administration, and goes the extra mile by providing detailed examples. As the latest release, the third edition has updated content and examples aligned with the latest Linux features, which will help you fulfill the LPIC-1 objectives.

What is attractive about this resource is how the author has gone out of his way to provide sound tutorials that you can easily follow and actually understand. The examples are apt and relevant. Take it away from Amazon by clicking on the link below:

Linux Command Line and Shell Scripting Bible, 3rd Edition

3. Linux Essentials, Second Edition


Authored by experts Christine Bresnahan and Richard Blum, Linux Essentials takes a professional approach that aims at preparing you for the Linux administration profession as well as passing the Linux Essentials exam, and it can serve as a lasting foundation for Linux and LPIC-1 at the same time. It has hands-on tutorials and a learning-by-doing style that equips you with a solid foundation while giving you the confidence to pass the Linux exam. For beginners with a keen interest in joining the IT industry as professionals, this book is highly recommended. You can check the reviews at Amazon below:

Linux Essentials, Second Edition

4. Linux Bible 9th Edition


Brought to you by veteran bestselling author Christopher Negus and Christine Bresnahan (contributor), Linux Bible is a complete tutorial packed with major updates, revisions, and hands-on exercises so that you can confidently start using Linux today. Exercises abound, aimed at making your learning interesting and hence making the book a better learning tool. Moreover, Linux Bible places an emphasis on the Linux command-line tools and can be used with all distributions and versions of Linux.

Check it out on:

Linux Bible 9th Edition


5. Linux: The Complete Reference, Sixth Edition


Richard Petersen, a Linux expert, has once again released a book that gives the reader in-depth coverage of all Linux features. As a beginner, you will have the advantage of thorough coverage of all aspects of Linux distributions, ranging from shells, desktops, and server deployment to application management, security, and a good grounding in basic network administration.

Linux: The Complete Reference is the ultimate guide where you will have the chance to learn how to:

◆ Administer any Linux distribution by installing and configuring it

◆ Administer and manipulate files and directories from the BASH, TCSH, and Z shells

◆ Understand and use various desktop environments such as the GNOME and KDE desktops, X Windows, and display managers

◆ Install and manage essential applications such as office suites and databases, connect to the Internet, and manage multimedia applications

◆ Get good coverage of security by learning SELinux, netfilter, SSH, and Kerberos

◆ Get a good grounding in encryption, such as encrypting network transmissions with GPG, LUKS, and IPsec

◆ Acquire the skills to deploy FTP, Web, mail, proxy, print, news, and database servers, and many more.

Check out the book at: Linux: The Complete Reference, Sixth Edition

Check Book: LPI Books

Best LPIC-2 Certification Study Books


After you have earned your LPIC-1 certification and had the opportunity to practice your skills administering systems, a time will come when you need to upgrade your certification level to win the trust of employers to handle more responsibilities in your wonderful career. This section gives you a place to select the resources that will help you climb the ladder stress-free.

1. LPIC-2: Linux Professional Institute Certification Study Guide: Exam 201 and Exam 202, 2nd Edition by Christine Bresnahan


The LPI-level 2 certification confirms your advanced Linux skill set, and the demand for qualified professionals continues to grow.

Christine once again goes over the objectives that LPIC-2 demands and has produced a study guide that covers them all, 100 percent. This book provides clear and concise coverage of the Linux administration topics you'll need to know for exams 201 and 202. Crafted to benefit you beyond merely passing the exams, the examples provided highlight the real-world applications of important concepts, and together, the author team provides insights based on almost fifty years in the IT industry.

This brand new second edition has been completely revamped to align with the latest versions of the exams, with authoritative coverage of the Linux kernel, system startup, advanced storage, network configuration, system maintenance, web services, security, troubleshooting, and more.

You will be ahead by learning a lot for example:

◈ Understand all of the material for both LPIC-2 exams
◈ Gain insight into real-world applications
◈ Test your knowledge with chapter tests and practice exams
◈ Access online study aids for more thorough preparation

All this and more awaits you when you purchase your copy on the link below:

LPIC-2: Linux Professional Institute Certification Study Guide: Exam 201 and Exam 202, 2nd Edition

2. LPIC-2 Linux Professional Institute Certification Study Guide: Exams 201 and 202 1st Edition by Roderick W. Smith


Compiled from Roderick's long exposure to Linux and his confident experience, this book picks up from your LPIC-1 journey and takes you, a step at a time, through mastering the advanced stuff hidden in Linux. You will be satisfied by the clarity of the explanations and the language. This study guide provides unparalleled coverage of the LPIC-2 objectives for exams 201 and 202. Clear and concise coverage examines all Linux administration topics, while practical, real-world examples enhance your learning process. What is more, the book:

◈ Prepares you for exams 201 and 202 of the Linux Professional Institute Certification

◈ Offers clear, concise coverage on exam topics such as the Linux kernel, system startup, networking configuration, system maintenance, domain name server, file sharing, and more

◈ Addresses additional key topics for the exams including network client management, e-mail services, system security, and troubleshooting

There is no better place than Amazon to look for a copy of this book. Click on the link below and you will be taken there:

LPIC-2 Linux Professional Institute Certification Study Guide: Exams 201 and 202 1st Edition

3. LPIC-2 Cert Guide: (201-400 and 202-400 exams) (Certification Guide) 1st Edition by William Rothwell


There is so much that one can learn from a veteran with deep experience in a given area of expertise, especially when they go out of their way to put it all on paper. Expert Linux/Unix instructor William "Bo" Rothwell does just that, sharing preparation hints and test-taking tips, helping students identify areas of weakness and improve both conceptual knowledge and hands-on skills. Material is presented in a concise manner, focusing on increasing understanding and retention of exam topics such as:

◈ Capacity planning
◈ Managing the kernel
◈ Managing system startup
◈ Managing filesystems and devices
◈ Administering advanced storage devices
◈ Configuring the network
◈ Performing system maintenance
◈ Administering Domain Name Server (DNS)
◈ Configuring web services
◈ Administering file sharing
◈ Managing network clients
◈ Administering e-mail services
◈ Administering system security

This can be of real value to your preparation process. If you are inspired, head over to Amazon and take a look at it below:

LPIC-2 Cert Guide: (201-400 and 202-400 exams) (Certification Guide)

Tuesday 15 October 2019

How To Read and Set Environmental and Shell Variables on a Linux VPS

Introduction


When interacting with your server through a shell session, there are many pieces of information that your shell compiles to determine its behavior and access to resources. Some of these settings are contained within configuration files and others are determined by user input.


One way that the shell keeps track of all of these settings and details is through an area it maintains called the environment. The environment is an area that the shell builds every time that it starts a session that contains variables that define system properties.

In this guide, we will discuss how to interact with the environment and read or set environmental and shell variables interactively and through configuration files. We will be using an Ubuntu 12.04 VPS as an example, but these details should be relevant on any Linux system.

How the Environment and Environmental Variables Work


Every time a shell session spawns, a process takes place to gather and compile information that should be available to the shell process and its child processes. It obtains the data for these settings from a variety of different files and settings on the system.

Basically the environment provides a medium through which the shell process can get or set settings and, in turn, pass these on to its child processes.

The environment is implemented as strings that represent key-value pairs. If multiple values are passed, they are typically separated by colon (:) characters. Each pair will generally look something like this:

KEY=value1:value2:...

If the value contains significant white-space, quotations are used:

KEY="value with spaces"

The keys in these scenarios are variables. They can be one of two types, environmental variables or shell variables.

Environmental variables are variables that are defined for the current shell and are inherited by any child shells or processes. Environmental variables are used to pass information into processes that are spawned from the shell.

Shell variables are variables that are contained exclusively within the shell in which they were set or defined. They are often used to keep track of ephemeral data, like the current working directory.

By convention, these types of variables are usually defined using all capital letters. This helps users distinguish environmental variables within other contexts.

Printing Shell and Environmental Variables


Each shell session keeps track of its own shell and environmental variables. We can access these in a few different ways.

We can see a list of all of our environmental variables by using the env or printenv commands. In their default state, they should function exactly the same:

printenv

SHELL=/bin/bash
TERM=xterm
USER=demouser
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca:...
MAIL=/var/mail/demouser
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
PWD=/home/demouser
LANG=en_US.UTF-8
SHLVL=1
HOME=/home/demouser
LOGNAME=demouser
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/printenv

This is fairly typical of the output of both printenv and env. The difference between the two commands is only apparent in their more specific functionality. For instance, with printenv, you can request the values of individual variables:

printenv SHELL

/bin/bash

On the other hand, env lets you modify the environment that programs run in by passing a set of variable definitions into a command like this:

env VAR1="blahblah" command_to_run command_options

Since, as we learned above, child processes typically inherit the environmental variables of the parent process, this gives you the opportunity to override values or add additional variables for the child.
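As a small illustration of this, the following runs a single command with an overridden variable while leaving your current session untouched; the variable chosen here is just an example:

env TZ="UTC" date

The date command inherits the modified TZ value and prints the current time in UTC, but your shell's own environment is unchanged.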

As you can see from the output of our printenv command, there are quite a few environmental variables set up through our system files and processes without our input.

These show the environmental variables, but how do we see shell variables?

The set command can be used for this. If we type set without any additional parameters, we will get a list of all shell variables, environmental variables, local variables, and shell functions:

set

BASH=/bin/bash
BASHOPTS=checkwinsize:cmdhist:expand_aliases:extglob:extquote:force_fignore:histappend:interactive_comments:login_shell:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
. . .

This is usually a huge list. You probably want to pipe it into a pager program to deal with the amount of output easily:

set | less

The amount of additional information that we receive back is a bit overwhelming. We probably do not need to know all of the bash functions that are defined, for instance.

We can clean up the output by specifying that set should operate in POSIX mode, which won’t print the shell functions. We can execute this in a sub-shell so that it does not change our current environment:

(set -o posix; set)

This will list all of the environmental and shell variables that are defined.

We can attempt to compare this output with the output of the env or printenv commands to try to get a list of only shell variables, but this will be imperfect due to the different ways that these commands output information:

comm -23 <(set -o posix; set | sort) <(env | sort)

This will likely still include a few environmental variables, due to the fact that the set command outputs quoted values, while the printenv and env commands do not quote the values of strings.

This should still give you a good idea of the environmental and shell variables that are set in your session.

These variables are used for all sorts of things. They provide an alternative way of setting persistent values for the session between processes, without writing changes to a file.

Common Environmental and Shell Variables

Some environmental and shell variables are very useful and are referenced fairly often.
Here are some common environmental variables that you will come across:

◈ SHELL: This describes the shell that will be interpreting any commands you type in. In most cases, this will be bash by default, but other values can be set if you prefer other options.
◈ TERM: This specifies the type of terminal to emulate when running the shell. Different hardware terminals can be emulated for different operating requirements. You usually won’t need to worry about this though.
◈ USER: The current logged in user.
◈ PWD: The current working directory.
◈ OLDPWD: The previous working directory. This is kept by the shell in order to switch back to your previous directory by running cd -.
◈ LS_COLORS: This defines color codes that are used to optionally add colored output to the ls command. This is used to distinguish different file types and provide more info to the user at a glance.
◈ MAIL: The path to the current user’s mailbox.
◈ PATH: A list of directories that the system will check when looking for commands. When a user types in a command, the system will check directories in this order for the executable.
◈ LANG: The current language and localization settings, including character encoding.
◈ HOME: The current user’s home directory.
◈ _: The most recent previously executed command.
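You can print any of these directly to see their values on your own system; the output below simply mirrors the example session used earlier and will differ on your machine:

echo $HOME
/home/demouser
echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games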

In addition to these environmental variables, some shell variables that you’ll often see are:

◈ BASHOPTS: The list of options that were used when bash was executed. This can be useful for finding out if the shell environment will operate in the way you want it to.
◈ BASH_VERSION: The version of bash being executed, in human-readable form.
◈ BASH_VERSINFO: The version of bash, in machine-readable output.
◈ COLUMNS: The number of columns wide that are being used to draw output on the screen.
◈ DIRSTACK: The stack of directories that are available with the pushd and popd commands.
◈ HISTFILESIZE: Number of lines of command history stored to a file.
◈ HISTSIZE: Number of lines of command history allowed in memory.
◈ HOSTNAME: The hostname of the computer at this time.
◈ IFS: The internal field separator used to separate input on the command line. By default, this is a space, a tab, and a newline.
◈ PS1: The primary command prompt definition. This is used to define what your prompt looks like when you start a shell session. The PS2 is used to declare secondary prompts for when a command spans multiple lines.
◈ SHELLOPTS: Shell options that can be set with the set command.
◈ UID: The UID of the current user.
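These can be inspected the same way; for instance, on an Ubuntu 12.04 machine the bash version string might look something like this:

echo $BASH_VERSION
4.2.25(1)-release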

Setting Shell and Environmental Variables


To better understand the difference between shell and environmental variables, and to introduce the syntax for setting these variables, we will do a small demonstration.

Creating Shell Variables

We will begin by defining a shell variable within our current session. This is easy to accomplish; we only need to specify a name and a value. We’ll adhere to the convention of keeping all caps for the variable name, and set it to a simple string.

TEST_VAR='Hello World!'

Here, we’ve used quotations since the value of our variable contains a space. Furthermore, we’ve used single quotes because the exclamation point is a special character in the bash shell that normally expands to the bash history if it is not escaped or put into single quotes.
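As a quick illustration of why the single quotes matter, trying the same string with double quotes in an interactive session with history expansion enabled typically fails; the exact error text may vary:

echo "Hello World!"
bash: !": event not found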

We now have a shell variable. This variable is available in our current session, but will not be passed down to child processes.

We can see this by grepping for our new variable within the set output:

set | grep TEST_VAR
TEST_VAR='Hello World!'

We can verify that this is not an environmental variable by trying the same thing with printenv:

printenv | grep TEST_VAR

No output should be returned.

Let’s take this as an opportunity to demonstrate a way of accessing the value of any shell or environmental variable.

echo $TEST_VAR
Hello World!

As you can see, you reference the value of a variable by preceding its name with a $ sign. The shell takes this to mean that it should substitute the value of the variable when it comes across this.

So now we have a shell variable. It shouldn’t be passed on to any child processes. We can spawn a new bash shell from within our current one to demonstrate:

bash
echo $TEST_VAR

If we type bash to spawn a child shell, and then try to access the contents of the variable, nothing will be returned. This is what we expected.

Get back to our original shell by typing exit:

exit

Creating Environmental Variables

Now, let’s turn our shell variable into an environmental variable. We can do this by exporting the variable. The command to do so is appropriately named:

export TEST_VAR

This will change our variable into an environmental variable. We can check this by checking our environmental listing again:

printenv | grep TEST_VAR

TEST_VAR=Hello World!

This time, our variable shows up. Let’s try our experiment with our child shell again:

bash
echo $TEST_VAR

Hello World!

Great! Our child shell has received the variable set by its parent. Before we exit this child shell, let’s try to export another variable. We can set environmental variables in a single step like this:

export NEW_VAR="Testing export"

Test that it’s exported as an environmental variable:

printenv | grep NEW_VAR

NEW_VAR=Testing export

Now, let’s exit back into our original shell:

exit

Let’s see if our new variable is available:

echo $NEW_VAR

Nothing is returned.

This is because environmental variables are only passed to child processes. There isn’t a built-in way of setting environmental variables of the parent shell. This is good in most cases and prevents programs from affecting the operating environment from which they were called.

The NEW_VAR variable was set as an environmental variable in our child shell. This variable would be available to itself and any of its child shells and processes. When we exited back into our main shell, that environment was destroyed.

Demoting and Unsetting Variables


We still have our TEST_VAR variable defined as an environmental variable. We can change it back into a shell variable by typing:

export -n TEST_VAR

It is no longer an environmental variable:

printenv | grep TEST_VAR

However, it is still a shell variable:

set | grep TEST_VAR

TEST_VAR='Hello World!'

If we want to completely unset a variable, either shell or environmental, we can do so with the unset command:

unset TEST_VAR

We can verify that it is no longer set:

echo $TEST_VAR

Nothing is returned because the variable has been unset.

Setting Environmental Variables at Login


We’ve already mentioned that many programs use environmental variables to decide the specifics of how to operate. We do not want to have to set important variables up every time we start a new shell session, and we have already seen how many variables are already set upon login, so how do we make and define variables automatically?

This is actually a more complex problem than it initially seems, due to the numerous configuration files that the bash shell reads depending on how it is started.

The Difference between Login, Non-Login, Interactive, and Non-Interactive Shell Sessions

The bash shell reads different configuration files depending on how the session is started.

One distinction between different sessions is whether the shell is being spawned as a “login” or “non-login” session.

A login shell is a shell session that begins by authenticating the user. If you are signing into a terminal session or through SSH and authenticate, your shell session will be set as a “login” shell.

If you start a new shell session from within your authenticated session, like we did by calling the bash command from the terminal, a non-login shell session is started. You were not asked for your authentication details when you started your child shell.

Another distinction that can be made is whether a shell session is interactive, or non-interactive.

An interactive shell session is a shell session that is attached to a terminal. A non-interactive shell session is one that is not attached to a terminal session.

So each shell session is classified as either login or non-login and interactive or non-interactive.

A normal session that begins with SSH is usually an interactive login shell. A script run from the command line is usually run in a non-interactive, non-login shell. A terminal session can be any combination of these two properties.

Whether a shell session is classified as a login or non-login shell has implications on which files are read to initialize the shell session.

A session started as a login session will read configuration details from the /etc/profile file first. It will then look for the first login shell configuration file in the user’s home directory to get user-specific configuration details.

It reads the first file that it can find out of ~/.bash_profile, ~/.bash_login, and ~/.profile and does not read any further files.

In contrast, a session defined as a non-login shell will read /etc/bash.bashrc and then the user-specific ~/.bashrc file to build its environment.

Non-interactive shells read the environmental variable called BASH_ENV and source the file it specifies to define the new environment.
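Here is a minimal sketch of that behavior; the file name and variable are invented for the demonstration. We write an export statement into a file, then point BASH_ENV at it while starting a non-interactive shell:

echo 'export GREETING="hello from BASH_ENV"' > ~/.test_env
BASH_ENV=~/.test_env bash -c 'echo $GREETING'

hello from BASH_ENV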

Implementing Environmental Variables


As you can see, there are a variety of different files that we would usually need to look at for placing our settings.

This provides a lot of flexibility that can help in specific situations where we want certain settings in a login shell, and other settings in a non-login shell. However, most of the time we will want the same settings in both situations.

Fortunately, most Linux distributions configure the login configuration files to source the non-login configuration files. This means that you can define the environmental variables that you want in both kinds of session inside the non-login configuration files. They will then be read in both scenarios.

We will usually be setting user-specific environmental variables, and we usually will want our settings to be available in both login and non-login shells. This means that the place to define these variables is in the ~/.bashrc file.

Open this file now:

nano ~/.bashrc

This will most likely contain quite a bit of data already. Most of the definitions here are for setting bash options, which are unrelated to environmental variables. You can set environmental variables just like you would from the command line:

export VARNAME=value

Any new environmental variables can be added anywhere in the ~/.bashrc file, as long as they aren't placed in the middle of another command or a for loop. We can then save and close the file. The next time you start a shell session, your environmental variable declaration will be read and passed on to the shell environment. You can force your current session to read the file now by typing:

source ~/.bashrc

If you need to set system-wide variables, you may want to think about adding them to /etc/profile, /etc/bash.bashrc, or /etc/environment.
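One caveat if you choose /etc/environment: it is not a shell script. On most distributions it is read by the PAM login machinery and expects plain KEY=value lines, with no export keyword and no variable expansion. A hypothetical entry might look like this:

JAVA_HOME=/usr/lib/jvm/default-java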

Sunday 13 October 2019

bison command in Linux with Examples

The bison command is a replacement for yacc. It is basically a parser generator similar to yacc. Input files should follow the yacc convention of ending in .y. As with yacc, the generated files do not have fixed names; instead they use the prefix of the input file. Moreover, if you need to put C++ code in the input file, you can give it a C++-style extension such as .ypp or .y++, and bison will follow your extension when naming the output file (.cpp or .c++).

Syntax:


bison [OPTION]... FILE

Operation modes:


◈ -h, --help : It displays this help and exits.
◈ -V, --version : It displays version information and exits.
◈ --print-localedir : It displays the directory containing locale-dependent data.
◈ --print-datadir : It displays the directory containing skeletons and XSLT.
◈ -y, --yacc : It emulates POSIX Yacc.
◈ -W, --warnings[=CATEGORY] : It reports the warnings falling in CATEGORY.
◈ -f, --feature[=FEATURE] : It activates miscellaneous features.

Parser:


◈ -L, --language=LANGUAGE : It specifies the output programming language.
◈ -S, --skeleton=FILE : It specifies the skeleton to use.
◈ -t, --debug : It instruments the parser for tracing; same as '-Dparse.trace'.
◈ --locations : It enables location support.
◈ -D, --define=NAME[=VALUE] : It is similar to '%define NAME "VALUE"'.
◈ -F, --force-define=NAME[=VALUE] : It overrides '%define NAME "VALUE"'.
◈ -p, --name-prefix=PREFIX : It prepends PREFIX to the external symbols; deprecated in favor of '-Dapi.prefix=PREFIX'.
◈ -l, --no-lines : It doesn't generate '#line' directives.
◈ -k, --token-table : It includes a table of token names.

Output:


◈ --defines[=FILE] : It also produces a header file.
◈ -d : Likewise, but it cannot specify FILE (for POSIX Yacc).
◈ -r, --report=THINGS : It also produces details on the automaton.
◈ --report-file=FILE : It writes the report to FILE.
◈ -v, --verbose : It is the same as '--report=state'.
◈ -b, --file-prefix=PREFIX : It specifies a PREFIX for output files.
◈ -o, --output=FILE : It leaves the output in FILE.
◈ -g, --graph[=FILE] : It also outputs a graph of the automaton.
◈ -x, --xml[=FILE] : It also outputs an XML report of the automaton (the XML schema is experimental).

Example:


◈ bison: For example, take a bison grammar file called file.y. By default, bison will create the output file with the same name as the input file, with .tab appended before the extension (file.tab.c).
bison file.y


◈ -y: It emulates POSIX Yacc, i.e. it creates the output file using the typical yacc naming convention (y.tab.c) instead of bison's default (file1.tab.cpp here).
bison -y file1.ypp


◈ --defines: It is used to generate a header file along with the .c (or .cpp) file.
bison --defines file.y


◈ -p: It is used to add your own custom prefix to the external symbols.
bison -p PREFIX file.y


◈ -r: It is used to generate a report file. The default report file name is file.output
bison -r all file.y


◈ -V: It displays the bison version information.
bison -V


◈ --print-localedir: It displays the directory containing locale-dependent data.
bison --print-localedir


◈ --print-datadir: It displays the directory containing skeletons and XSLT.
bison --print-datadir


◈ -x: It displays an XML report of the automaton (the XML schema is experimental).
bison -x file.y


◈ -g: It displays a graph of the automaton.
bison -g file.y
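If you want a grammar file to experiment with, the following is roughly the smallest valid bison input; the file name and rule are invented, and the grammar simply accepts empty input:

/* file.y -- a deliberately tiny grammar for testing bison invocations */
%%
start: ;    /* the start symbol matches the empty input */
%%

Running bison file.y on this should produce file.tab.c, which you can inspect to see the generated parser.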


Note:

◈ To check the manual page of the bison command, use the following command:
man bison

◈ To check the help page of the bison command, use the following command:
bison --help 

Saturday 12 October 2019

Methods to Use Uniq Command in Linux with Examples


If you are a Linux user whose work involves manipulating text files and strings, then you should already be familiar with the uniq command, as it is most commonly used in that area.

For those who are not familiar with it, uniq is a command-line tool used to report or omit repeated lines. It basically filters adjacent matching lines from INPUT (or standard input) and writes to OUTPUT (or standard output). With no options, matching lines are merged to the first occurrence.

Below are a few examples of the uniq command in use.

1) Omit duplicates


Executing the uniq command without specifying any parameters simply omits duplicates and displays a unique string output.

fluser@fvm:~/Documents/files$cat file1
Hello
Hello
How are you?
How are you?
Thank you
Thank you
fluser@fvm:~/Documents/files$ uniq file1
Hello
How are you?
Thank you

2) Display number of repeated lines


With the -c parameter, it is possible to view the duplicate line count in a file

fluser@fvm:~/Documents/files$ cat file1
Hello
Hello
How are you?
How are you?
Thank you
Thank you
fluser@fvm:~/Documents/files$ uniq -c file1
      2 Hello
      2 How are you?
      2 Thank you

3) Print only the duplicates


By using the -d parameter, we can select only the lines which have been duplicated inside a file.

fluser@fvm:~/Documents/files$ cat file1
Hello
Hello
Good morning
How are you?
How are you?
Thank you
Thank you
Bye
fluser@fvm:~/Documents/files$ uniq -d file1
Hello
How are you?
Thank you

4) Ignore case when comparing


Normally, the uniq command takes the case of letters into consideration when comparing. If you want to ignore the case, you can use the -i parameter.

fluser@fvm:~/Documents/files$ cat file1
Hello
hello
How are you?
How are you?
Thank you
thank you
fluser@fvm:~/Documents/files$ uniq file1
Hello
hello
How are you?
Thank you
thank you
fluser@fvm:~/Documents/files$ uniq -i file1
Hello
How are you?
Thank you

5) Only print unique lines


If you only want to see the unique lines in a file, you can use the -u parameter.

fluser@fvm:~/Documents/files$ cat file1
Hello
Hello
Good morning
How are you?
How are you?
Thank you
Thank you
Bye
fluser@fvm:~/Documents/files$ uniq -u file1
Good morning
Bye

6) Sort and find duplicates


Sometimes duplicate entries may appear in different places in a file. In that case, simply using the uniq command will not detect these non-adjacent duplicate entries. We first need to sort the file, and then we can find the duplicates.

fluser@fvm:~/Documents/files$ cat file1
Adam
Sara
Frank
John
Ann
Matt
Harry
Ann
Frank
John
fluser@fvm:~/Documents/files$ sort file1 | uniq -c
      1 Adam
      2 Ann
      2 Frank
      1 Harry
      2 John
      1 Matt
      1 Sara
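As a side note, when you need only the de-duplicated lines and not the counts, sort can do the merging by itself; the following produces the same lines as sort file1 | uniq:

sort -u file1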

7) Save the output in another file


The output of our uniq command can simply be saved to another file, as below.

fluser@fvm:~/Documents/files$ cat file1
Hello
Hello
How are you?
Good morning
Good morning
Thank you
fluser@fvm:~/Documents/files$ uniq -u file1
How are you?
Thank you
fluser@fvm:~/Documents/files$ uniq -u file1 output
fluser@fvm:~/Documents/files$ cat output
How are you?
Thank you
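Here the second argument to uniq names the output file. Ordinary shell redirection achieves the same result:

uniq -u file1 > output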


8) Ignore characters


In order to ignore a few characters at the beginning of each line, you can use the -s parameter; you need to specify the number of characters to ignore.

fluser@fvm:~/Documents/files$ cat file1
1apple
2apple
3pears
4banana
5banana
fluser@fvm:~/Documents/files$ uniq -s 1 file1
1apple
3pears
4banana
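
As a closing tip, sort and uniq are often combined to produce a frequency list; a small sketch using the name list from example 6, with the counts sorted in descending order:

fluser@fvm:~/Documents/files$ sort file1 | uniq -c | sort -nr
      2 John
      2 Frank
      2 Ann
      1 Sara
      1 Matt
      1 Harry
      1 Adam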

Thursday 10 October 2019

Open Source, Artificial Intelligence, and LPI

I'm going to lead with the punchline on this one. I believe that LPI should invest in providing a certification path for some kind of machine learning, specifically geared to open source development in artificial intelligence.


Whatever you may think about automation and artificial intelligence from the perspective of what it will eventually mean for humanity, there's no question that some form of artificial intelligence is present in every aspect of our lives. Those of us who own one or more Google Home or Alexa speakers know full well how much AI touches our lives. For us, it's an ever-present companion.

Smart systems like Google's Assistant are built using TensorFlow, an open source programming library that has become a kind of go-to set of tools for anyone building machine learning, deep learning, natural language processing (as in your smart speaker), or neural-network-based applications. TensorFlow-based applications are programmed using Python, another free and open source development platform.

Speaking of Python, there's also PyTorch, a set of deep learning Python libraries built on Torch, yet another machine learning toolkit, developed this time by Facebook. Its primary purposes are computer vision, facial recognition, and natural language processing.

Keep in mind that there are already plenty of AI and ML tools out there, built with and distributed as open source. We also have organizations that are dedicated to AI and ML being entirely open. For instance...

H2O.ai at https://www.h2o.ai/

AI.Google at https://ai.google

OpenAI at https://openai.com

While I understand that the focus for LPI has been to champion Open Source and to help build the futures and careers of Linux systems administrators, including DevOps, machine learning and artificial intelligence tools are making their way into every aspect of these professions. In fact, the smart sysadmin has always sought to use the tools at their disposal to automate as much of the administration process as possible with the available technology.

As systems get more complex and distributed across both the physical and virtual world, a simple hands-on approach is no longer practical. Automation is key to keeping things running smoothly. Even so, simply relying on these automated systems to spit out interpreted logs doesn't help if there isn't someone there to respond should something catastrophic happen. That's why we've been automating a variety of responses based on selected events. We can tell our systems, "Only call me if it's really important. Only tell me about those things that actually require my intervention."

Trouble is, those complex distributed systems I was talking about are getting more complex, and more distributed. At some point, human intervention, by even the best and most highly trained human, becomes a bottleneck.

Have you heard of DeepMind? This machine learning startup was bought by Google (technically Alphabet, but I still think of the whole thing as Google) in 2014. In 2015, its AlphaGo program beat Fan Hui, the European Go champion, 5 games to zero, demonstrating that a machine learning system could learn to win a game so complex, with so many combinations and permutations, that it was deemed nigh impossible for a computer to win.

AlphaGo continued to flex its machine learning muscles until, in 2017, it beat Ke Jie, the reigning world champion of Go.

Later that same year, a next-generation system, AlphaGo Zero, taught itself to play Go in less than three days and then went on to beat AlphaGo 100 games to zero.

Fast forward to 2018. Alphabet (who I'll probably just keep thinking of as Google) turned DeepMind loose on its monolithic data centres, giving the algorithm complete control over the cooling of those data centres, providing Alphabet with a 40% savings on ventilation, air conditioning, and whatever other cooling measures might be used. No humans are involved or required. This is data centre infrastructure management, fully automated.

It is, in fact, the logical end goal of every sysadmin.

So, am I suggesting that LPI should get behind and provide certification for a technology that will, if all goes well, do away with the need for systems and network administrators? In a word, yes. The next logical question is why?

Since full automation is the logical end game for what we've come to think of as systems administration, and since pretty much all of this smart technology runs on Linux servers and is built on open source software and tools, we must embrace the technology and direct it, making sure that intelligent machines have our collective human best interests at heart. I don't know how long it will be before the last sysadmin retires, but that day is coming whether we are a part of it or not. It behooves us to make sure that when fully autonomous systems take over, we have done everything we can to ensure that they operate on safe and ethical principles.

Furthermore, as the need for classic administration fades into history, it is those people with the skills to tackle these marvellous new technologies who will benefit from a slightly longer career. For as long as that might last, this will be valuable knowledge indeed.

Needless to say, there must be conflicting opinions on this subject and this is where I turn it over to you. Am I right? Should LPI follow a path to Artificial Intelligence and Machine Learning Certification? The first one could be AIML-1 in the spirit of past course naming conventions. Perhaps I've read the tea leaves wrong and the age of human admins is far from over. Either way, I open the floor to you and look forward to your comments.

Saturday 5 October 2019

30 Best DevOps Tools & Technologies (2019 List)


DevOps is a software development and delivery process that emphasizes communication and collaboration between product management, software development, and operations professionals.

Read More: LPIC-OT 701: DevOps Tools Engineer

Following is a curated list of the top DevOps tools, along with their key features.

1) QuerySurge



QuerySurge is a smart data testing solution, the first of its kind to offer a full DevOps solution for continuous data testing.

Key Features

◈ Robust API with 60+ calls
◈ Seamlessly integrates into the DevOps pipeline for continuous testing
◈ Verifies large amounts of data quickly
◈ Validates difficult transformation rules between multiple source and target systems
◈ Detects requirements and code changes, updates tests accordingly and alerts team members of said changes
◈ Provides detailed data intelligence & data analytics

2) Buddy



Buddy is a smart CI/CD tool for web developers designed to lower the entry threshold to DevOps. It uses delivery pipelines to build, test and deploy software. The pipelines are created with over 100 ready-to-use actions that can be arranged in any way – just like you build a house of bricks.

◈ 15-minute configuration in clear & telling UI/UX
◈ Lightning-fast deployments based on changesets
◈ Builds are run in isolated containers with cached dependencies
◈ Supports all popular languages, frameworks & task managers
◈ Dedicated roster of Docker/Kubernetes actions
◈ Integrates with AWS, Google, DigitalOcean, Azure, Shopify, WordPress & more
◈ Supports parallelism & YAML configuration

3) Jenkins



Jenkins is a DevOps tool for monitoring the execution of repeated tasks. It helps to integrate project changes more easily by quickly finding issues.

Features:

◈ It increases the scale of automation
◈ Jenkins requires little maintenance and has a built-in GUI tool for easy updates
◈ It offers 400 plugins to support building and testing virtually any project
◈ It is a Java-based program ready to run with operating systems like Windows, Mac OS X, and UNIX
◈ It supports continuous integration and continuous delivery
◈ It can easily be set up and configured via a web interface
◈ It can distribute tasks across multiple machines, thereby increasing concurrency
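
As a quick way to try Jenkins, the project also publishes an official Docker image; a minimal sketch, assuming Docker is installed (the container name and ports are illustrative):

$ docker run -d --name jenkins -p 8080:8080 jenkins/jenkins:lts
$ docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword

The second command prints the initial admin password requested on the first visit to http://localhost:8080.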

4) Vagrant



Vagrant is a DevOps tool. It allows building and managing virtual machine environments in a single workflow. It offers easy-to-use workflow and focuses on automation. Vagrant lowers development environment setup time and increases production parity.

Features:

◈ Vagrant integrates with existing configuration management tools like Chef, Puppet, Ansible, and Salt
◈ Vagrant works seamlessly on Mac, Linux, and Windows
◈ Create a single file for projects to describe the type of machine and software users want to install
◈ It helps DevOps team members to have an ideal development environment
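
A minimal sketch of the basic Vagrant workflow, assuming VirtualBox is installed (the box name is only an example):

$ vagrant init ubuntu/bionic64    # writes a Vagrantfile describing the machine
$ vagrant up                      # downloads the box and boots the VM
$ vagrant ssh                     # logs into the running machine
$ vagrant destroy                 # tears the environment down again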

5) PagerDuty:



PagerDuty is a DevOps tool that helps businesses to enhance their brand reputation. It is an incident management solution supporting continuous delivery strategy. It also allows DevOps teams to deliver high-performing apps.

Key Features:

◈ Provide Real-time alerts
◈ Reliable & Rich Alerting facility
◈ Event Grouping & Enrichment
◈ Gain visibility into critical systems and applications
◈ Easily detect and resolve incidents from development through production
◈ It offers Real-Time Collaboration System & User Reporting
◈ It supports Platform Extensibility
◈ It allows scheduling & automated Escalations
◈ Full-stack visibility across development and production environments
◈ Event intelligence for actionable insights

6) Prometheus:



Prometheus is a 100% open source, free-to-use service monitoring system. It offers support for more than ten languages.

Key Features:

◈ Flexible query language for slicing collected time series data to generate tables, graphs, and alerts
◈ Stores time series: streams of timestamped values belonging to the same metric and the same set of labeled dimensions
◈ Stores time series in memory and also on local disk
◈ It has easy-to-implement custom libraries
◈ Alert manager handles notifications and silencing
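
As an illustration of the query interface, Prometheus can be started from its official Docker image and queried over HTTP; a minimal sketch using the built-in up metric, which reports the state of scrape targets:

$ docker run -d -p 9090:9090 prom/prometheus
$ curl 'http://localhost:9090/api/v1/query?query=up'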

7) Ganglia:


The Ganglia DevOps tool offers teams cluster and grid monitoring capabilities. It is designed for high-performance computing systems like clusters and grids.

Key Features:

◈ Free and open source tool
◈ Scalable monitoring system based on a hierarchical design
◈ Achieves low per-node overheads for high concurrency
◈ It can handle clusters with 2,000 nodes

8) Snort:



Snort is a very powerful open-source DevOps tool that helps in the detection of intruders. It also highlights malicious attacks against the system. It allows real-time traffic analysis and packet logging.

Key Features:

◈ Performs protocol analysis and content searching
◈ It allows signature-based detection of attacks by analyzing packets
◈ It offers real-time traffic analysis and packet logging
◈ Detects buffer overflows, stealth port scans, and OS fingerprinting attempts, etc.

9) Splunk:



Splunk is a tool to make machine data accessible, usable, and valuable to everyone. It delivers operational intelligence to DevOps teams. It helps companies to be more productive, competitive, and secure.

Key Features:

◈ Data-driven analytics with actionable insights
◈ Next-generation monitoring and analytics solution
◈ Delivers a single, unified view of different IT services
◈ Extends the Splunk platform with purpose-built solutions for security

10) Nagios



Nagios is another useful tool for DevOps. It helps DevOps teams to find and correct problems with the network & infrastructure.

Key Features:

◈ Nagios XI helps to monitor components like applications, services, OS, and network protocols
◈ It provides complete monitoring of desktop and server operating systems
◈ It provides complete monitoring of Java Management Extensions
◈ It allows monitoring of all mission-critical infrastructure components on any operating system
◈ Its log management tool is industry leading.
◈ Network Analyzer helps identify bottlenecks and optimize bandwidth utilization.
◈ This tool simplifies the process of searching log data

11) Chef:


Chef is a useful DevOps tool for achieving speed, scale, and consistency. It is a cloud-based system that can be used to ease complex tasks and perform automation.

Features:

◈ Accelerate cloud adoption
◈ Effectively manage data centers
◈ It can manage multiple cloud environments
◈ It maintains high availability

12) Sumo Logic:



Sumo Logic helps organizations to analyze and make sense of log data. It combines security analytics with integrated threat intelligence for advanced security analytics.

Key Features:

◈ Build, run, and secure Azure Hybrid applications
◈ Cloud-native, machine data analytics service for log management and time series metrics
◈ Monitor, secure, troubleshoot cloud applications, and infrastructures
◈ It has the power of the elastic cloud to scale infinitely
◈ Drive business value, growth and competitive advantage
◈ One platform for continuous real-time integration
◈ Remove friction from the application lifecycle

13) OverOps:



OverOps is a DevOps tool that gives the root cause of a bug and informs the team about server crashes. It quickly identifies when and why code breaks in production.

Key Features:

◈ Detects production code breaks and delivers the source code
◈ Improve staff efficiency by reducing time wasted sifting through logs
◈ Offers the complete source code and variable state to fix any error
◈ Proactively detects when deployment processes face errors
◈ It helps DevOps team to spend more time in delivering great features

14) Consul:



Consul is a DevOps tool widely used for discovering and configuring services in any infrastructure. Popular with the DevOps community, it is a perfect tool for modern, elastic infrastructures.

Key Features:

◈ It provides a robust API
◈ Applications can easily find the services they should depend upon using DNS or HTTP
◈ Makes use of the hierarchical key/value store for dynamic configuration
◈ Provides support for multiple data centers
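
For example, a local development agent can be started and services resolved through Consul's DNS interface; a minimal sketch (8600 is Consul's default DNS port):

$ consul agent -dev &
$ dig @127.0.0.1 -p 8600 consul.service.consul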

15) Docker:



Docker is a DevOps technology suite. It allows DevOps teams to build, ship, and run distributed applications. This tool allows users to assemble apps from components and work collaboratively.

Key Features:

◈ CaaS-ready platform with built-in orchestration
◈ Flexible image management with a private registry to store, manage images and configure image caches
◈ Isolates apps in containers to eliminate conflicts for enhancing security
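
A minimal sketch of the build-ship-run cycle described above (the image name, port, and registry namespace are illustrative; pushing assumes you are logged in to a registry):

$ docker build -t example/myapp:1.0 .            # assemble an image from the Dockerfile in the current directory
$ docker run -d -p 8000:8000 example/myapp:1.0   # run the app in an isolated container
$ docker push example/myapp:1.0                  # ship the image to a registry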

16) Stackify Retrace:



Stackify Retrace is a lightweight DevOps tool. It shows real-time logs, error queries, and more directly in the workstation. It is an ideal solution for intelligent orchestration of the software-defined data center.

Key Features:

◈ Detailed traces of all types of web requests
◈ Eliminates messy configuration or code changes
◈ Provides instant feedback to check what .NET or Java web apps are doing
◈ Allows finding and fixing bugs before production
◈ Integrated container management of all app resources and users with Docker Datacenter, in a unified web admin UI
◈ Flexible image management with a private registry to store and manage images
◈ It provides secure access and configures image caches
◈ Secure multi-tenancy with granular Role-Based Access Control
◈ Complete security with automatic TLS, integrated secrets management, security scanning, and deployment policy
◈ Docker Certified plugins and containers provide tested, certified, and supported solutions

17) CFEngine:



CFEngine is a DevOps tool for IT automation. It is an ideal tool for configuration management. It helps teams to automate large-scale complex infrastructure.

Key Features:

◈ Provides rapid solutions, with execution times of less than one second
◈ An open source configuration solution with an unmatched security record
◈ It has conducted billions of compliance checks in large-scale production environments
◈ It allows deploying a model-based configuration change across 50,000 servers in a few minutes

18) Artifactory:



Artifactory is an enterprise-ready repository manager. It provides an end-to-end, automated solution for tracking artifacts from development to production.

Features:

◈ It supports software packages created using any technology or language
◈ Supports secure, clustered, high-availability Docker registries
◈ Remote artifacts are cached locally for reuse, which eliminates the need to download them repeatedly

19) Capistrano:



Capistrano is another useful remote server automation tool for DevOps teams. This tool supports scripting and executing arbitrary tasks.

Features:

◈ Allows deploying a web application to any number of machines
◈ Helps to automate common tasks in software teams
◈ Interchangeable output formatters
◈ Allows to script arbitrary workflows over SSH
◈ Easy to add support for many source control management software
◈ Host and Role filters for partial deploys or cluster maintenance
◈ Recipes for database integration and Rails asset pipelines
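
A minimal sketch of the typical Capistrano workflow inside a project, assuming the capistrano gem is in the project's Gemfile (the stage name is illustrative):

$ bundle exec cap install             # generates the Capfile and config/deploy files
$ bundle exec cap production deploy   # runs the deploy flow over SSH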

20) Monit:


Monit is an open source DevOps tool designed for managing and monitoring UNIX systems. It conducts automatic maintenance and repair and executes meaningful actions in error situations.

Features:

◈ Executes meaningful causal actions in error situations
◈ Monit helps to monitor daemon processes or similar programs running on localhost
◈ It helps to monitor files, directories, and file systems on localhost
◈ This DevOps tool allows network connections to various servers

21) Supervisor:



Supervisor is a useful DevOps tool. It allows teams to monitor and control processes on UNIX operating systems. It provides users a single place to start, stop, and monitor all the processes.

Features:

◈ Supervisor is configured using a simple INI-style config file which is easy to learn
◈ This tool provides users a single place to start, stop, and monitor all the processes
◈ It uses simple event notification to monitor programs written in any language
◈ It is tested and supported on Linux, Mac OS X, FreeBSD, Solaris, etc.
◈ It does not need a compiler because it is written entirely in Python
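
A minimal sketch of a program section in Supervisor's INI-style config, followed by the supervisorctl commands that load and inspect it (the program name and command are illustrative):

[program:myworker]
command=/usr/bin/python3 /opt/app/worker.py
autostart=true
autorestart=true

$ sudo supervisorctl reread   # re-read the configuration files
$ sudo supervisorctl update   # apply the changes
$ sudo supervisorctl status   # show the state of all managed processes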

22) Ansible:



Ansible is a leading DevOps tool. It offers a simple way to automate IT across the entire application lifecycle, making it easier for DevOps teams to scale automation and speed up productivity.

Key Features:

◈ It is an easy-to-use open source tool for deploying apps
◈ It helps to avoid complexity in the software development process
◈ IT automation eliminates repetitive tasks, allowing teams to do more strategic work
◈ It is an ideal tool to manage complex deployments and speed up the development process
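
A minimal sketch of this agentless automation model: an ad-hoc ping of every host in an inventory file over SSH (the host names are illustrative):

$ cat inventory
web1.example.com
web2.example.com
$ ansible all -i inventory -m ping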

23) Code Climate:



Code Climate is a DevOps tool that monitors the health of the code, from the command line to the cloud. It helps users to fix issues easily and allows the team to produce better code.

Features:

◈ It can easily integrate into any workflow
◈ It helps to identify fixes and improves the team's skills to produce maintainable code
◈ With Code Climate, it is easy to increase code quality
◈ Allows tracking progress instantly

24) Icinga



Icinga is a DevOps tool that consists of two branches developed in parallel: Icinga 1 and Icinga 2. It allows DevOps engineers to select the branch that best suits their project.

Key Features:

◈ Monitor network services, host resources, and server components
◈ Notifies through email, SMS, or phone call when any issue occurs
◈ With the RESTful API of Icinga 2, it is easy to update configurations
◈ Applies rules to hosts and services to create a continuous monitoring environment
◈ Reports with chart graphs, measures SLAs, and helps to identify trends

25) New Relic APM:



New Relic APM is a useful DevOps tool. It gives end-to-end visibility across the customer experience and dynamic infrastructure. It allows the DevOps team to reduce the time spent monitoring applications.

Features:

◈ Monitor performance of External Services
◈ It allows full-stack alerting
◈ Organize, visualize, evaluate with in-depth analytics
◈ Provide a precise picture of dynamically changing systems.
◈ The external service's dashboard offers charts with response time
◈ Create customized queries on metric data and names
◈ Key Transactions monitor feature to manage and track all the important business transactions

26) Juju:



Juju is an open source application modeling DevOps tool. It deploys, configures, scales, and operates software on public & private clouds. With Juju, it is possible to automate cloud infrastructure and deploy application architectures.

Key Features:

◈ DevOps engineers can easily handle configuration, management, maintenance, deployment, and scalability.
◈ It offers a powerful GUI and command-line interface
◈ Deploy services to targeted cloud in seconds
◈ Provide detailed logs to resolve issues quickly

27) ProductionMap:



ProductionMap is an integrated visual platform for DevOps engineers. It helps to make automation development fast and easy. This orchestration platform is dedicated to IT professionals.

Features:

◈ Allows users to plan the automation process
◈ JavaScript editor backed by a full object model
◈ Each execution is automatically documented
◈ The Admin can control map execution
◈ User can trigger an execution of a map from remote events

28) Scalyr:



Scalyr is a DevOps platform for high-speed server monitoring and log management. Its log aggregator module collects all application, web, process, and system logs.

Features:

◈ Start monitoring and collecting data without the need to worry about infrastructure
◈ Drop the Scalyr Agent on any server
◈ It allows importing logs from Heroku, Amazon RDS, Amazon CloudWatch, etc.
◈ Graphs allow visualizing log data and metrics to show breakdowns and percentiles
◈ Centralized log management and server monitoring
◈ Watch all the new events arrive in near real-time
◈ Search hundreds of GBs/sec across all the servers
◈ Just need to click once to switch between logs and graphs
◈ Turn complex log data into simple, clear, and highly interactive reports

29) Rudder:



Rudder is a DevOps solution for continuous configuration and auditing. It is an easy-to-use, web-driven solution for IT automation.

Key Features:

◈ The workflow offers options for various users, such as non-expert users, expert users, and managers
◈ Automate common system administration tasks such as installation and configuration
◈ Enforce configuration over time
◈ Provide Inventory of all managed nodes
◈ Web interface for configuring and managing nodes
◈ Compliance reporting by configuration or by node

30) Puppet Enterprise:



Puppet Enterprise is a DevOps tool that allows managing the entire infrastructure as code without expanding the size of the team.

Features:

◈ The Puppet Enterprise tool eliminates manual work from the software delivery process and helps developers to deliver great software rapidly
◈ Model and manage entire environment
◈ Intelligent orchestration and visual workflows
◈ Real-time context-aware reporting
◈ Define and continually enforce infrastructure
◈ It inspects and reports on packages running across infrastructure
◈ Desired state conflict detection and remediation
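
As a small taste of the infrastructure-as-code model, a one-line manifest can be applied locally with puppet apply (the file resource here is purely illustrative):

$ puppet apply -e 'file { "/tmp/motd": content => "managed by puppet\n" }'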

31) Graylog:



Graylog is a powerful log management and DevOps tool. It has many use cases, such as monitoring SSH logins and unusual activities. Its basic version is free and open source.

Features:

◈ Automatically archives the data so that users don't need to do it manually
◈ Graylog Enterprise also offers Audit Log capabilities.
◈ It records and stores actions taken by a user or administrator that make changes in the system
◈ Receive enterprise-grade support by allowing support requests directly from the engineers

32) UpGuard:



UpGuard helps DevOps teams around the world to gain visibility into their technology. It integrates seamlessly with popular automation platforms such as Puppet, Chef, and Ansible.

Features:

◈ UpGuard helps businesses around the world to gain visibility into their technology
◈ This DevOps tool allows increasing the speed of software delivery, accomplished through the automation of a number of processes and technologies
◈ It allows users to trust a third-party with sensitive data
◈ The procedures used to govern assets are as important as the configurations themselves