Thursday, 22 March 2018

Objective: Linux Essentials Exam 010


Topic 1: The Linux Community and a Career in Open Source (weight: 7)


1.1 Linux Evolution and Popular Operating Systems

Weight: 2

Description: Knowledge of Linux development and major distributions.

Key Knowledge Areas:

◈ Open Source Philosophy
◈ Distributions
◈ Embedded Systems

The following is a partial list of the used files, terms and utilities:

◈ Android
◈ Debian, Ubuntu (LTS)
◈ CentOS, openSUSE, Red Hat
◈ Linux Mint, Scientific Linux

1.2 Major Open Source Applications

Weight: 2

Description: Awareness of major applications as well as their uses and development.

Key Knowledge Areas:

◈ Desktop Applications
◈ Server Applications
◈ Development Languages
◈ Package Management Tools and repositories

Terms and Utilities:

◈ OpenOffice.org, LibreOffice, Thunderbird, Firefox, GIMP
◈ Apache HTTPD, NGINX, MySQL, NFS, Samba
◈ C, Java, Perl, shell, Python
◈ dpkg, apt-get, rpm, yum

1.3 Understanding Open Source Software and Licensing

Weight: 1

Description: Open communities and licensing open source Software for business.

Key Knowledge Areas:

◈ Licensing
◈ Free Software Foundation (FSF), Open Source Initiative (OSI)

Terms and Utilities:

◈ GPL, BSD, Creative Commons
◈ Free Software, Open Source Software, FOSS, FLOSS
◈ Open Source business models

1.4 ICT Skills and Working in Linux

Weight: 2

Description: Basic Information and Communication Technology (ICT) skills and working in Linux.

Key Knowledge Areas:

◈ Desktop Skills
◈ Getting to the Command Line
◈ Industry uses of Linux, Cloud Computing and Virtualization

Terms and Utilities:

◈ Using a browser, privacy concerns, configuration options, searching the web and saving content
◈ Terminal and Console
◈ Password issues
◈ Privacy issues and tools
◈ Use of common open source applications in presentations and projects

Topic 2: Finding Your Way on a Linux System (weight: 9)


2.1 Command Line Basics

Weight: 3

Description: Basics of using the Linux command line.

Key Knowledge Areas:

◈ Basic shell
◈ Command line syntax
◈ Variables
◈ Globbing
◈ Quoting

Terms and Utilities:

◈ Bash
◈ echo
◈ history
◈ PATH env variable
◈ export
◈ type
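
For quick review, here is an illustrative set of commands covering the items above (variable and file names are only examples):

echo "Hello $USER"      # variable expansion inside double quotes
echo 'Hello $USER'      # single quotes suppress expansion
MYVAR=value             # set a shell variable
export MYVAR            # make it available to child processes
echo $PATH              # display the colon-separated command search path
type ls                 # show how the shell resolves a command
history | tail -5       # the last five commands entered
ls *.txt                # globbing: every .txt file in the current directory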

2.2 Using the Command Line to Get Help

Weight: 2

Description: Running help commands and navigation of the various help systems.

Key Knowledge Areas:

◈ Man
◈ Info

Terms and Utilities:

◈ man
◈ info
◈ Man pages
◈ /usr/share/doc/
◈ locate
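
A few illustrative uses of the help facilities listed above (the search terms are arbitrary):

man ls                  # manual page for ls
man -k copy             # search manual page descriptions (same as apropos)
info coreutils          # browse the Info documentation
ls /usr/share/doc/      # additional documentation shipped with packages
locate crontab          # find files by name using the locate database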

2.3 Using Directories and Listing Files

Weight: 2

Description: Navigation of home and system directories and listing files in various locations.

Key Knowledge Areas:

◈ Files, directories
◈ Hidden files and directories
◈ Home
◈ Absolute and relative paths

Terms and Utilities:

◈ Common options for ls
◈ Recursive listings
◈ cd
◈ . and ..
◈ home and ~
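
For example (directory names are arbitrary):

pwd                     # print the absolute path of the current directory
ls -a                   # include hidden files (names starting with .)
ls -lR /etc             # long, recursive listing
cd /var/log             # change directory using an absolute path
cd ..                   # relative path: move to the parent directory
cd ~                    # return to the home directory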

2.4 Creating, Moving and Deleting Files

Weight: 2

Description: Create, move and delete files and directories under the home directory.

Key Knowledge Areas:

◈ Files and directories
◈ Case sensitivity
◈ Simple globbing and quoting

Terms and Utilities:

◈ mv, cp, rm, touch
◈ mkdir, rmdir
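
An illustrative sequence using example names:

mkdir -p projects/demo                  # create a directory and its parent
touch projects/demo/notes.txt           # create an empty file
cp projects/demo/notes.txt notes.bak    # copy a file
mv notes.bak "old notes.bak"            # rename; quoting protects the space
rm "old notes.bak"                      # delete a file
rmdir projects/demo                     # remove an empty directory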

Topic 3: The Power of the Command Line (weight: 9)


3.1 Archiving Files on the Command Line

Weight: 2

Description: Archiving files in the user home directory.

Key Knowledge Areas:

◈ Files, directories
◈ Archives, compression

Terms and Utilities:

◈ tar
◈ Common tar options
◈ gzip, bzip2
◈ zip, unzip
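
Typical invocations (archive and directory names are only examples):

tar czf backup.tar.gz ~/docs    # create a gzip-compressed tar archive
tar tzf backup.tar.gz           # list its contents
tar xzf backup.tar.gz           # extract it
bzip2 largefile                 # compress a single file to largefile.bz2
zip -r backup.zip ~/docs        # create a zip archive
unzip backup.zip                # extract a zip archive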

3.2 Searching and Extracting Data from Files

Weight: 3

Description: Search and extract data from files in the home directory.

Key Knowledge Areas:

◈ Command line pipes
◈ I/O re-direction
◈ Basic Regular Expressions using ., [ ], *, ?

Terms and Utilities:

◈ grep
◈ less
◈ cat, head, tail
◈ sort
◈ cut
◈ wc
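
A few illustrative pipelines and redirections (file names are examples):

grep error /var/log/syslog                 # lines containing "error"
grep '^root' /etc/passwd                   # basic regular expression anchored to the line start
cut -d: -f1 /etc/passwd | sort | wc -l     # pipeline: count the user names
head -20 file.txt > first20.txt            # redirect output to a new file
tail -f /var/log/syslog                    # follow a growing log file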

3.3 Turning Commands into a Script

Weight: 4

Description: Turning repetitive commands into simple scripts.

Key Knowledge Areas:

◈ Basic shell scripting
◈ Awareness of common text editors

Terms and Utilities:

◈ #! (shebang)
◈ /bin/bash
◈ Variables
◈ Arguments
◈ for loops
◈ echo
◈ Exit status
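
A minimal sketch of such a script (the file name, target directory and arguments are only examples):

#!/bin/bash
# backup.sh - archive every directory passed as an argument
TARGET=/tmp                  # a variable
for DIR in "$@"; do          # loop over the script's arguments
    tar czf "$TARGET/$(basename "$DIR").tar.gz" "$DIR"
    echo "archived $DIR"
done
exit 0                       # explicit exit status

It could be run as: bash backup.sh ~/docs ~/music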

Topic 4: The Linux Operating System (weight: 8)


4.1 Choosing an Operating System

Weight: 1

Description: Knowledge of major operating systems and Linux distributions.

Key Knowledge Areas:

◈ Windows, Mac, Linux differences
◈ Distribution life cycle management

Terms and Utilities:

◈ GUI versus command line, desktop configuration
◈ Maintenance cycles, Beta and Stable

4.2 Understanding Computer Hardware

Weight: 2

Description: Familiarity with the components that go into building desktop and server computers.

Key Knowledge Areas:

◈ Hardware

Terms and Utilities:

◈ Motherboards, processors, power supplies, optical drives, peripherals
◈ Hard drives and partitions, /dev/sd*
◈ Drivers

4.3 Where Data is Stored

Weight: 3

Description: Where various types of information are stored on a Linux system.

Key Knowledge Areas:

◈ Programs and configuration, packages and package databases
◈ Processes, memory addresses, system messaging and logging

Terms and Utilities:

◈ ps, top, free
◈ syslog, dmesg
◈ /etc/, /var/log/
◈ /boot/, /proc/, /dev/, /sys/
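
For example:

ps aux | head           # running processes
top -b -n 1 | head      # one batch-mode snapshot of system activity
free -m                 # memory usage in megabytes
dmesg | tail            # recent kernel messages
ls /var/log/            # log files; configuration lives under /etc/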

4.4 Your Computer on the Network

Weight: 2

Description: Querying vital networking configuration and determining the basic requirements for a computer on a Local Area Network (LAN).

Key Knowledge Areas:

◈ Internet, network, routers
◈ Querying DNS client configuration
◈ Querying Network configuration

Terms and Utilities:

◈ route, ip route show
◈ ifconfig, ip addr show
◈ netstat, ip route show
◈ /etc/resolv.conf, /etc/hosts
◈ IPv4, IPv6
◈ ping
◈ host

Topic 5: Security and File Permissions (weight: 7)


5.1 Basic Security and Identifying User Types

Weight: 2

Description: Various types of users on a Linux system.

Key Knowledge Areas:

◈ Root and Standard Users
◈ System users

Terms and Utilities:

◈ /etc/passwd, /etc/group
◈ id, who, w
◈ sudo, su
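
For example (the command run via sudo is arbitrary):

id                          # UID, GID and groups of the current user
who                         # who is logged in
w                           # logged-in users and what they are running
grep '^root:' /etc/passwd   # the superuser's account entry
sudo ls /root               # run a single command as root
su -                        # switch to a root login shell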

5.2 Creating Users and Groups

Weight: 2

Description: Creating users and groups on a Linux system.

Key Knowledge Areas:

◈ User and group commands
◈ User IDs

Terms and Utilities:

◈ /etc/passwd, /etc/shadow, /etc/group, /etc/skel/
◈ id, last
◈ useradd, groupadd
◈ passwd
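
An illustrative sequence (the user and group names are examples):

sudo groupadd developers                # create a group
sudo useradd -m -G developers alice     # create a user with a home directory
sudo passwd alice                       # set the new account's password
id alice                                # verify UID, GID and group membership
last                                    # recent logins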

5.3 Managing File Permissions and Ownership

Weight: 2

Description: Understanding and manipulating file permissions and ownership settings.

Key Knowledge Areas:

◈ File/directory permissions and owners

Terms and Utilities:

◈ ls -l, ls -a
◈ chmod, chown
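
For example (file, user and group names are placeholders):

ls -l report.txt                            # show owner, group and permission bits
chmod 640 report.txt                        # rw for the owner, r for the group, none for others
chmod u+x script.sh                         # symbolic form: add execute for the owner
sudo chown alice:developers report.txt      # change owner and group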

5.4 Special Directories and Files

Weight: 1

Description: Special directories and files on a Linux system including special permissions.

Key Knowledge Areas:

◈ Using temporary files and directories
◈ Symbolic links

Terms and Utilities:

◈ /tmp/, /var/tmp/ and Sticky Bit
◈ ls -d
◈ ln -s
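
For example (the link target assumes a Debian-style /var/log/syslog):

ls -ld /tmp                         # drwxrwxrwt - the trailing t is the sticky bit
ln -s /var/log/syslog ~/syslog      # create a symbolic link
ls -l ~/syslog                      # the link target is shown after ->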

Tuesday, 20 March 2018

Differentiating UNIX and Linux


1. Introduction


The history of UNIX® dates back to 1969. Through the years, it has developed and evolved through a number of different versions and environments. Most modern UNIX variants known today are licensed versions of one of the original UNIX editions. Sun's Solaris, Hewlett-Packard's HP-UX, and IBM's AIX® are all flavors of UNIX that have their own unique elements and foundations. For example, Sun's Solaris is UNIX, but incorporates many tools and extensions designed to get the best out of Sun's own workstation and server hardware.

Linux® was born out of the desire to create a free software alternative to the commercial UNIX environments. Its history dates back to 1991, or further back to 1983, when the GNU project, whose original aim was to provide a free alternative to UNIX, was introduced. Linux runs on a much wider range of platforms, such as the Intel®/AMD-led x86 platform, than most UNIX environments. Most UNIX variants run on just one architecture.

Because of this history and the heritage of the two products, Linux and UNIX have a common foundation, but are also very different. Many of the tools, utilities, and free software products that are standard under Linux were originally developed as free alternatives to the versions available on UNIX. Linux often provides support for many different options and applications, picking the best (or most popular) functionality from the UNIX and free software environment.

An administrator or developer who supports Linux systems might find it uncomfortable to move to a commercial UNIX system. On the whole, the foundations of any UNIX-like operating system (tools, filesystem layout, programming APIs) are fairly standardized. However, some details of the systems show significant differences. The remainder of this article covers the details of these differences.

2. Technical differences


The developers of commercial editions of UNIX have a specific target audience and platform for their operating system. They also have a pretty good idea of what applications they want to support and optimize. Commercial UNIX vendors do everything they can to maintain consistency between different versions. They have published standards that they follow for their customers.

The development of GNU/Linux, on the other hand, is more diverse. Developers come from many different backgrounds, and therefore have different experiences and opinions. There has not been as strict of a standard set of tools, environments, and functionality within the Linux community. The Linux Standards Base (LSB) project was formed in an attempt to alleviate this problem, but it has not provided as much help as hoped.

This lack of standards results in noticeable inconsistencies within Linux. It might seem a benefit to some for developers to have the freedom to emulate the best parts of other operating systems, but it can be very confusing when certain elements of Linux emulate different UNIX variants. For example, device names within Linux might emulate AIX, while the filesystem tools seem more like the tools supplied with HP-UX. These sorts of inconsistencies even exist between different Linux distributions. For example, Gentoo and Red Hat have different methods for keeping their systems current with the latest patches and software releases.

By comparison, within the UNIX space each new release of an operating system comes with a well-documented range of new features and changes. Commands, tools, and other elements are rarely changed, and often the same command-line arguments and interfaces remain over many editions of the software. Where there are significant changes, a commercial UNIX vendor often provides a compatibility layer, or the ability to run the older version of the tool.

This consistency means that tools and applications can be used on new editions of the operating system without a large body of testing. It is much easier for a UNIX user or administrator to update their skills on what is otherwise an unchanged UNIX operating system than the migration or adaptation of skills that might be required between Linux distributions.

3. Hardware architecture


Most commercial versions of UNIX are coded for a single hardware architecture, or possibly a small handful of them. HP-UX is available on PA-RISC and Itanium machines. Solaris is available on SPARC and x86. AIX runs only on POWER processors, and so forth.

Because of these limitations, the UNIX vendors can optimize their code for these architectures. They can take advantage of every feature. Since they know their supported devices, their drivers can be better optimized as well. They are also not restricted by the weak BIOS limitations of most PCs.

Linux, on the other hand, has historically been designed to be as compatible as possible. Not only is Linux available for dozens of architectures, but the number of I/O and other external devices that might be used is almost limitless. The developers cannot assume that a specific hardware feature will be present, so they often cannot optimize as thoroughly. One example is memory management on Linux. Since Linux was originally developed on x86 hardware, it used the segmented memory model. It adapted to use paged-mode memory over time, but still retains some segmented memory requirements. This has caused problems for architectures that do not support segmented memory. This is not an issue for UNIX vendors. They know exactly which hardware features they have available.

4. The kernel


The kernel is the core of any operating system. The source code is not freely available for any of the commercial versions of UNIX. Quite the opposite exists for Linux. As such, procedures for compiling and patching kernels and drivers are vastly different. With Linux and other open source operating systems, a patch can be released in source code form and end users can install it, or even verify and modify it if desired. These patches tend to be far less tested than patches from UNIX vendors. Since there is not a complete list of applications and environments that need to be tested on Linux, the Linux developers have to depend on the many eyes of end users and other developers to catch bugs.

Commercial UNIX vendors only release their kernels in binary form. Some release the kernel as a single monolithic package, while others are able to dismantle the kernel and upgrade just a single module. Either way, it is still in binary form. If an update is required, the administrator has to wait for the vendor to release the patch in binary form, but they can be more secure knowing that the vendor has performed sufficient regression testing.

All commercial versions of UNIX have evolved to support some sort of module-based kernel. Drivers and certain features are available as separate components and can be loaded or unloaded from the kernel as needed. None is quite as open and flexible as the module architecture in Linux. However, with the flexibility and adaptability of Linux comes constant change. The Linux code base is constantly changing and the API can change at the whim of a developer. When a module or driver is written for a commercial version of UNIX, that code works far longer than the same driver written for Linux.

Filesystem support

One of the reasons Linux has become such a powerful tool is its immense compatibility with other operating systems. One of the most obvious features is the plethora of filesystems that are available. Most commercial versions of UNIX support two, or possibly three, different local filesystem types. Linux, however, supports almost all of the filesystems that are currently available on any operating system. Table 1 shows which filesystems are supported under which version of UNIX. You can mount each of these filesystems under Linux, although not all of them allow full read-write support.

Table 1. Filesystems that come standard with UNIX versions

System     Filesystems
AIX        jfs, gpfs
HP-UX      hfs, vxfs
Solaris    ufs, zfs
Irix       xfs

Most commercial UNIX versions have at least some sort of journaling filesystem available. For instance, HP-UX uses hfs as its standard filesystem, but it also supports the journaling vxfs filesystems. Solaris is similar with ufs and zfs. Journaling filesystems are a critical component of any enterprise server environment. Linux was relatively late to support journaling filesystems, but now there are several options ranging from ports of commercial filesystems (xfs, jfs) to native Linux-only filesystems (ext3, reiserfs).

Other filesystem features include quota support, file access control lists, mirroring, snapshots, and resizing. These are supported in some form or another on some of the Linux filesystems. Most of these features are not standardized on Linux. They might work one way on one filesystem, but another method is required on another filesystem. Some of these features are just not available on some Linux filesystems, and some require additional tools to be installed, such as a certain version of LVM or a software RAID package. Linux historically had difficulty reaching consensus on programming interfaces and standard tools, since there are so many filesystems that present these features so differently.

Since commercial versions of UNIX have a limited number of filesystems to support, their tools and methods are more standardized. For instance, since there is only one main filesystem on Irix, there is only one method used to set access control lists. This makes it much simpler for the end user as well as for vendor support.

5. Application availability


Most of the core applications are the same between UNIX and Linux. For instance, cp, ls, vi, and cc are commands that are available in both UNIX and Linux, and are very similar, if not identical. The Linux versions tend to be based on the GNU versions of these tools, whereas the current UNIX versions are based on the original UNIX tools. The tools on UNIX have had a very stable history and rarely change anymore.

This is not to say that commercial versions of UNIX cannot use the GNU tools. In fact, many commercial UNIX vendors include many of the GNU tools in their installations, or as free options. They are just not the standard tools in the customary locations. Certain free programs, such as emacs or Perl, do not have non-free counterparts. Most vendors offer these as pre-compiled packages that are either automatically installed or available as an optional component.

Free and open source applications are almost always assumed to be available and function on all Linux distributions. There are huge amounts of free software available for Linux. Many of these applications have been ported and are available in some form on commercial versions of UNIX.

When it comes to non-free and/or closed-source applications (CAD, financial, graphics design), however, Linux comes up short. While some software vendors have released versions of their programs for Linux, the majority seem to be delaying their releases until Linux adoption reaches a critical mass.

On the other hand, commercial versions of UNIX have historically had large amounts of support for enterprise-level applications, such as Oracle or SAP. Linux falls short in this regard because it is difficult for larger applications to be certified against it, whereas commercial versions of UNIX do not change very much from release to release. Linux can change wildly not only between different distributions, but sometimes between releases of the same distribution. This makes it very difficult for application vendors to understand the exact environment in which their tool will be used.

6. System administration


Although some Linux distributions come with a standard system management tool, such as SUSE's YaST, there is not a Linux-wide standard on tools for system management. Text files and command-line tools are available, but these can be cumbersome and sometimes difficult to remember. Each commercial version of UNIX has its own separate management interface. From this interface, aspects of the entire system can be manipulated and altered. One example of this is the System Administration Manager (SAM) on HP-UX.

From within SAM, there are modules where:

◈ Users or groups can be managed.
◈ Kernel parameters can be modified.
◈ Networking is configured.
◈ Disks are configured and initialized.
◈ X server configuration can be changed.

This tool is well-written and incorporates well with the back-end text files. There is no such tool for Linux. Even SUSE's YaST is not nearly as complete, or compatible.

One aspect of UNIX and Linux that appears to be different for almost every version of UNIX and Linux is the location of system initialization scripts. Luckily /sbin/init and /etc/inittab are in standard locations. But beyond that, all the startup scripts are in different locations. Table 2 lists the location of system initialization scripts for various UNIX and Linux distributions.

Table 2. Location of system initialization scripts on various UNIX versions

System      Location
HP-UX       /sbin/init.d
AIX         /etc/rc.d/init.d
Irix        /etc/init.d
Solaris     /etc/init.d
Red Hat     /etc/rc.d/init.d
SUSE        /etc/rc.d/init.d
Debian      /etc/init.d
Slackware   /etc/rc.d

Because of the many different distributions of Linux and the almost infinite number of application and version differences, package management on Linux has always been a little tricky. There is a range of different package management tools available. The correct tool depends on which Linux distribution you are using. Further confusion results from different distributions using the Red Hat Package Manager (RPM) file format while their packages remain mutually incompatible. This fragmentation has led to a myriad of different options, and it is not always obvious which system is in use within a particular environment.

On the other hand, UNIX vendors use standard package managers. Even though applications and formats differ among the commercial UNIX variants, within a specific variant the packaging environment is consistent and stable. For instance, Solaris has used the same package management tools since its inception. The same tools have been, and will most likely always be, used to identify, add, or remove packages on Solaris.

Recalling that commercial UNIX vendors supply the hardware that accompanies their operating systems, they are able to introduce hardware features that are much harder for Linux to include. For instance, recent Linux versions have attempted to support hot-swap components in hardware (with varied success). Commercial UNIX versions have had these features for many years. There is also more advanced hardware monitoring on commercial UNIX versions. The vendors can write drivers and hooks into their operating system that can monitor hardware health, such as ECC memory failures or power supply parameters, or any other hardware component. This sort of support on Linux is still quite immature.

Commercial UNIX hardware also has far more advanced initial boot options. Before the operating system boots, there are many options to decide how to boot, check system health, or set hardware parameters. The BIOS that is standard in PCs has few, if any, of these features.

7. Support


One of the most obvious differences between Linux and UNIX is the cost perspective. Commercial UNIX vendors charge significant amounts of money to acquire and use their versions of UNIX on their optimized hardware. Linux distributions, on the other hand, charge relatively little, if anything, for their operating system.

If a commercial version of UNIX is purchased, the vendors usually provide technical support to make sure the system works as expected. Most Linux users do not have the luxury of a company to stand behind their systems. They depend on the support of email lists, forums, and various Linux users groups. These support channels are not restricted to Linux. Many administrators and users of commercial versions of UNIX participate in these free support groups to find and provide help. Many people even claim the free support groups are more responsive than the commercial vendor's support systems.

Saturday, 17 March 2018

An Introduction to the Linux Terminal


Introduction


This tutorial, which is the first in a series that teaches Linux basics to get new users on their feet, covers getting started with the terminal, the Linux command line, and executing commands. If you are new to Linux, you will want to familiarize yourself with the terminal, as it is the standard way to interact with a Linux server. Using the command line may seem like a daunting task but it is actually very easy if you start with the basics, and build your skills from there.

Let's get started by going over what a terminal emulator is.

Terminal Emulator


A terminal emulator is a program that allows the use of the terminal in a graphical environment. As most people use an OS with a graphical user interface (GUI) for their day-to-day computer needs, the use of a terminal emulator is a necessity for most Linux server users.

Here are some free, commonly-used terminal emulators by operating system:

◈ Mac OS X: Terminal (default), iTerm 2
◈ Windows: PuTTY
◈ Linux: Terminal, KDE Konsole, XTerm

Each terminal emulator has its own set of features, but all of the listed ones work great and are easy to use.

The Shell


In a Linux system, the shell is a command-line interface that interprets a user's commands and script files, and tells the server's operating system what to do with them. There are several shells that are widely used, such as Bourne shell (sh) and C shell (csh). Each shell has its own feature set and intricacies, regarding how commands are interpreted, but they all feature input and output redirection, variables, and condition-testing, among other things.

This tutorial was written using the Bourne-Again shell, usually referred to as bash, which is the default shell for most Linux distributions, including Ubuntu, CentOS, and Red Hat.

The Command Prompt


When you first log in to a server, you will typically be greeted by the Message of the Day (MOTD), an informational message that includes miscellaneous information such as the version of the Linux distribution that the server is running. After the MOTD, you will be dropped into the command prompt, or shell prompt, which is where you can issue commands to the server.

The information that is presented at the command prompt can be customized by the user, but here is an example of the default Ubuntu 14.04 command prompt:

sammy@webapp:~$

Here is a breakdown of the composition of the command prompt:

◈ sammy: The username of the current user
◈ webapp: The hostname of the server
◈ ~: The current directory. In bash, which is the default shell, the ~, or tilde, is a special character that expands to the path of the current user's home directory; in this case, it represents /home/sammy
◈ $: The prompt symbol. This denotes the end of the command prompt, after which the user's keyboard input will appear

Here is an example of what the command prompt might look like, if logged in as root and in the /var/log directory:

root@webapp:/var/log#

Note that the symbol that ends the command prompt is a #, which is the standard prompt symbol for root. In Linux, the root user is the superuser account, which is a special user account that can perform system-wide administrative functions--it is an unrestricted user that has permission to perform any task on a server.

Executing Commands


Commands can be issued at the command prompt by specifying the name of an executable file, which can be a binary program or a script. There are many standard Linux commands and utilities installed with the OS that allow you to navigate the file system, install software packages, and configure the system and applications.

An instance of a running command is known as a process. When a command is executed in the foreground, which is the default way that commands are executed, the user must wait for the process to finish before being returned to the command prompt, at which point they can continue issuing more commands.

It is important to note that almost everything in Linux is case-sensitive, including file and directory names, commands, arguments, and options. If something is not working as expected, double-check the spelling and case of your commands!

We will run through a few examples that will cover the basics of executing commands.

Without Arguments or Options


To execute a command without any arguments or options, simply type in the name of the command and hit RETURN.

If you run a command like this, it will exhibit its default behavior, which varies from command to command. For example, if you run the cd command without any arguments, you will be returned to your current user's home directory. The ls command will print a listing of the current directory's files and directories. The ip command without any arguments will print a message that shows you how to use the ip command.

Try running the ls command with no arguments to list the files and directories in your current directory (there may be none):

ls

With Arguments


Many commands accept arguments, or parameters, which can affect the behavior of a command. For example, the most common way to use the cd command is to pass it a single argument that specifies which directory to change to. For example, to change to the /usr/bin directory, where many standard commands are installed, you would issue this command:

cd /usr/bin

The cd component is the command, and the first argument /usr/bin follows the command. Note how your command prompt's current path has updated.

If you would like, try running the ls command to see the files that are in your new current directory.

ls

With Options


Most commands accept options, also known as flags or switches, that modify the behavior of the command. As they are special arguments, options follow a command, and are indicated by a single - character followed by one or more options, which are represented by individual upper- or lower-case letters. Additionally, some options start with --, followed by a single, multi-character (usually a descriptive word) option.

For a basic example of how options work, let's look at the ls command. Here are a couple of common options that come in handy when using ls:

◈ -l: print a "long listing", which includes extra details such as permissions, ownership, file sizes, and timestamps
◈ -a: list all of a directory's files, including hidden ones (that start with .)

To use the -l flag with ls, use this command:

ls -l

Note that the listing includes the same files as before, but with additional information about each file.

As mentioned earlier, options can often be grouped together. If you want to use the -l and -a option together, you could run ls -l -a, or just combine them like in this command:

ls -la

Note that the listing includes the hidden . and .. directories in the listing, because of the -a option.

With Options and Arguments


Options and arguments can almost always be combined, when running commands.

For example, you could check the contents of /home, regardless of your current directory, by running this ls command:

ls -la /home

ls is the command, -la are the options, and /home is the argument that indicates which file or directory to list. This should print a detailed listing of the /home directory, which should contain the home directories of all of the normal users on the server.

Environment Variables


Environment variables are named values that are used to change how commands and processes are executed. When you first log in to a server, several environment variables will be set according to a few configuration files by default.

View All Environment Variables


To view all of the environment variables that are set for a particular terminal session, run the env command:

env

There will likely be a lot of output, but look for the PATH entry:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

The PATH environment variable is a colon-delimited list of directories where the shell will look for executable programs or scripts when a command is issued. For example, the env command is located in /usr/bin, and we are able to execute it without specifying its fully-qualified location because its path is in the PATH environment variable.

View the Value of a Variable


The value of an environment variable can be retrieved by prefixing the variable name with a $. Doing so will expand the referenced variable to its value.

For example, to print out the value of the PATH variable, you may use the echo command:

echo $PATH

Or you could use the HOME variable, which is set to your user's home directory by default, to change to your home directory like this:

cd $HOME

If you try to access an environment variable that hasn't been set, it will be expanded to nothing: an empty string.

Setting Environment Variables


Now that you know how to view your environment variables, you should learn how to set them.

To set an environment variable, all you need to do is start with a variable name, followed immediately by an = sign, followed immediately by its desired value:

VAR=value

Note that if you set an existing variable, the original value will be overwritten. If the variable did not exist in the first place, it will be created.

Bash includes a command called export which exports a variable so it will be inherited by child processes. In simple terms, this allows you to use scripts that reference an exported environment variable from your current session. If you're still unclear on what this means, don't worry about it for now.

You can also reference existing variables when setting a variable. For example, if you installed an application to /opt/app/bin, you could add that directory to the end of your PATH environment variable with this command:

export PATH=$PATH:/opt/app/bin

Now verify that /opt/app/bin has been added to the end of your PATH variable with echo:

echo $PATH

Keep in mind that setting environment variables in this way only sets them for your current session. This means if you log out or otherwise change to another session, the changes you made to the environment will not be preserved. There is a way to permanently change environment variables, but this will be covered in a later tutorial.

Thursday, 15 March 2018

LPI Certifications


LPI Linux Essentials

◈ What: Ability to use a basic console line editor and demonstrate an understanding of processes, programs and components of the Linux Operating System.

◈ How: Pass the LPI 010 exam; 40 multiple-choice questions in 60 minutes.

◈ Cost: $110 USD (1 exam, certificate does not expire). Price may vary per region.

LPIC-OT 701: DevOps Tools Engineer

◈ What: Have a working knowledge of DevOps-related domains such as Software Engineering and Architecture, Container and Machine Deployment, Configuration Management and Monitoring.

◈ How: Pass LPI 701 exam; 60 multiple-choice and fill-in-the-blank questions in 90 minutes.

◈ Cost: $200 USD (1 exam, certification valid for 5 years). Price may vary per region.

LPIC-1 Certified Linux Administrator

◈ What: Ability to perform maintenance tasks with the command line, install and configure a computer running Linux and be able to configure basic networking.

◈ How: Pass LPI 101 and 102 exams; each exam is 60 multiple-choice and fill-in-the-blank questions in 90 minutes.

◈ Cost: $200 USD per exam (2 exams, certification valid for 5 years). Price may vary per region.

LPIC-2 Certified Linux Engineer

◈ What: Ability to administer small to medium–sized mixed networks.

◈ How: Pass LPI 201 and 202 exams; each exam is 60 multiple-choice and fill-in-the-blank questions in 90 minutes. Must also have active LPIC-1 certification.

◈ Cost: $200 USD per exam (2 exams, certification valid for 5 years). Price may vary per region.

LPIC-3 300: Linux Enterprise Professional Mixed Environment

◈ What: Ability to integrate Linux services in an enterprise-wide mixed environment.

◈ How: Pass LPI 300 exam; 60 multiple-choice and fill-in-the-blank questions in 90 minutes. Must also have active LPIC-2 certification.

◈ Cost: $200 USD (1 exam, certification valid for 5 years). Price may vary per region.

LPIC-3 303: Linux Enterprise Professional Security

◈ What: Ability to secure and harden Linux-based servers, services and networks enterprise-wide.

◈ How: Pass LPI 303 exam; 60 multiple-choice and fill-in-the-blank questions in 90 minutes. Must also have active LPIC-2 certification.

◈ Cost: $200 USD (1 exam, certification valid for 5 years). Price may vary per region.

LPIC-3 304: Linux Enterprise Professional Virtualization and High Availability

◈ What: Ability to plan and implement enterprise-wide virtualization and high availability setups using Linux-based technologies.

◈ How: Pass LPI 304 exam; 60 multiple-choice and fill-in-the-blank questions in 90 minutes. Must also have active LPIC-2 certification.

◈ Cost: $200 USD (1 exam, certification valid for 5 years). Price may vary per region.

Tuesday, 13 March 2018

LPIC-3 304: Virtualization and High Availability


The LPIC-3 certification is the culmination of LPI’s multi-level professional certification program. LPIC-3 is designed for the enterprise-level Linux professional and represents the highest level of professional, distribution-neutral Linux certification within the industry. Three separate LPIC-3 specialty certifications are available. Passing any one of the three exams will grant the LPIC-3 certification for that specialty.

The LPIC-3 304: Virtualization and High Availability certification covers the administration of Linux systems enterprise-wide with an emphasis on Virtualization & High Availability.

Current Version: 2.0 (Exam code 304-200)

Prerequisites: The candidate must have an active LPIC-2 certification to receive LPIC-3 certification, but the LPIC-2 and LPIC-3 exams may be taken in any order

Requirements: Passing the 304 exam

Validity Period: 5 years

Languages: English, Japanese


LPIC-3 Exam 304: Virtualization and High Availability


Exam Objectives Version: Version 2.0

Exam Code: 304-200

About Objective Weights: Each objective is assigned a weighting value. The weights indicate the relative importance of each objective on the exam. Objectives with higher weights will be covered in the exam with more questions.

Topic 330: Virtualization


330.1 Virtualization Concepts and Theory

Weight: 8

Description: Candidates should know and understand the general concepts, theory and terminology of Virtualization. This includes Xen, KVM and libvirt terminology.

Key Knowledge Areas:

◈ Terminology
◈ Pros and Cons of Virtualization
◈ Variations of Virtual Machine Monitors
◈ Migration of Physical to Virtual Machines
◈ Migration of Virtual Machines between Host systems
◈ Cloud Computing

The following is a partial list of the used files, terms and utilities:

◈ Hypervisor
◈ Hardware Virtual Machine (HVM)
◈ Paravirtualization (PV)
◈ Container Virtualization
◈ Emulation and Simulation
◈ CPU flags
◈ /proc/cpuinfo
◈ Migration (P2V, V2V)
◈ IaaS, PaaS, SaaS

330.2 Xen

Weight: 9

Description: Candidates should be able to install, configure, maintain, migrate and troubleshoot Xen installations. The focus is on Xen version 4.x.

Key Knowledge Areas:

◈ Xen architecture, networking and storage
◈ Xen configuration
◈ Xen utilities
◈ Troubleshooting Xen installations
◈ Basic knowledge of XAPI
◈ Awareness of XenStore
◈ Awareness of Xen Boot Parameters
◈ Awareness of the xm utility

Terms and Utilities:

◈ Domain0 (Dom0), DomainU (DomU)
◈ PV-DomU, HVM-DomU
◈ /etc/xen/
◈ xl
◈ xl.cfg
◈ xl.conf
◈ xe
◈ xentop
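
A few typical xl invocations on a Xen host (the domain name and configuration file are only examples):

xl list                         # running domains, including Domain-0
xl create /etc/xen/web01.cfg    # start a DomU from its configuration file
xl console web01                # attach to the guest's console
xl shutdown web01               # ask the guest to shut down cleanly
xentop                          # live per-domain resource usage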

330.3 KVM

Weight: 9

Description: Candidates should be able to install, configure, maintain, migrate and troubleshoot KVM installations.

Key Knowledge Areas:

◈ KVM architecture, networking and storage
◈ KVM configuration
◈ KVM utilities
◈ Troubleshooting KVM installations

Terms and Utilities:

◈ Kernel modules: kvm, kvm-intel and kvm-amd
◈ /etc/kvm/
◈ /dev/kvm
◈ kvm
◈ KVM monitor
◈ qemu
◈ qemu-img
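
An illustrative sketch of preparing and booting a KVM guest (image and ISO names are placeholders):

qemu-img create -f qcow2 guest01.qcow2 20G      # create a 20 GB qcow2 disk image
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=guest01.qcow2,format=qcow2 \
    -cdrom install.iso -boot d                  # boot an installer with KVM acceleration
lsmod | grep kvm                                # confirm kvm and kvm_intel/kvm_amd are loaded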

330.4 Other Virtualization Solutions

Weight: 3

Description: Candidates should have some basic knowledge and experience with alternatives to Xen and KVM.

Key Knowledge Areas:

◈ Basic knowledge of OpenVZ and LXC
◈ Awareness of other virtualization technologies
◈ Basic knowledge of virtualization provisioning tools

Terms and Utilities:

◈ OpenVZ
◈ VirtualBox
◈ LXC
◈ docker
◈ packer
◈ vagrant

330.5 Libvirt and Related Tools

Weight: 5

Description: Candidates should have basic knowledge and experience with the libvirt library and commonly available tools.

Key Knowledge Areas:

◈ libvirt architecture, networking and storage
◈ Basic technical knowledge of libvirt and virsh
◈ Awareness of oVirt

Terms and Utilities:

◈ libvirtd
◈ /etc/libvirt/
◈ virsh
◈ oVirt
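
Common virsh operations, using an example domain name:

virsh list --all            # all defined domains, running or not
virsh start web01           # start a domain
virsh dominfo web01         # state, memory and vCPU details
virsh shutdown web01        # request a graceful shutdown
ls /etc/libvirt/qemu/       # XML definitions managed by libvirtd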

330.6 Cloud Management Tools

Weight: 2

Description: Candidates should have basic feature knowledge of commonly available cloud management tools.

Key Knowledge Areas:

◈ Basic feature knowledge of OpenStack and CloudStack
◈ Awareness of Eucalyptus and OpenNebula

Terms and Utilities:

◈ OpenStack
◈ CloudStack
◈ Eucalyptus
◈ OpenNebula

Topic 334: High Availability Cluster Management


334.1 High Availability Concepts and Theory

Weight: 5

Description: Candidates should understand the properties and design approaches of high availability clusters.

Key Knowledge Areas:

◈ Understand the most important cluster architectures
◈ Understand recovery and cluster reorganization mechanisms
◈ Design an appropriate cluster architecture for a given purpose
◈ Application aspects of high availability
◈ Operational considerations of high availability

Terms and Utilities:

◈ Active/Passive Cluster, Active/Active Cluster
◈ Failover Cluster, Load Balanced Cluster
◈ Shared-Nothing Cluster, Shared-Disk Cluster
◈ Cluster resources
◈ Cluster services
◈ Quorum
◈ Fencing
◈ Split brain
◈ Redundancy
◈ Mean Time Before Failure (MTBF)
◈ Mean Time To Repair (MTTR)
◈ Service Level Agreement (SLA)
◈ Disaster Recovery
◈ Replication
◈ Session handling

334.2 Load Balanced Clusters

Weight: 6

Description: Candidates should know how to install, configure, maintain and troubleshoot LVS. This includes the configuration and use of keepalived and ldirectord. Candidates should further be able to install, configure, maintain and troubleshoot HAProxy.

Key Knowledge Areas:

◈ Understanding of LVS / IPVS
◈ Basic knowledge of VRRP
◈ Configuration of keepalived
◈ Configuration of ldirectord
◈ Backend server network configuration
◈ Understanding of HAProxy
◈ Configuration of HAProxy

Terms and Utilities:

◈ ipvsadm
◈ syncd
◈ LVS Forwarding (NAT, Direct Routing, Tunneling, Local Node)
◈ connection scheduling algorithms
◈ keepalived configuration file
◈ ldirectord configuration file
◈ genhash
◈ HAProxy configuration file
◈ load balancing algorithms
◈ ACLs
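
A minimal ipvsadm sketch for a NAT-based virtual HTTP service (all addresses are examples):

ipvsadm -A -t 192.0.2.10:80 -s rr                   # define the virtual service, round-robin scheduling
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.11:80 -m      # add a real server, masquerading (NAT) forwarding
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.12:80 -m
ipvsadm -L -n                                       # list the current IPVS table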

334.3 Failover Clusters

Weight: 6

Description: Candidates should have experience in the installation, configuration, maintenance and troubleshooting of a Pacemaker cluster. This includes the use of Corosync. The focus is on Pacemaker 1.1 for Corosync 2.x.

Key Knowledge Areas:

◈ Pacemaker architecture and components (CIB, CRMd, PEngine, LRMd, DC, STONITHd)
◈ Pacemaker cluster configuration
◈ Resource classes (OCF, LSB, Systemd, Upstart, Service, STONITH, Nagios)
◈ Resource rules and constraints (location, order, colocation)
◈ Advanced resource features (templates, groups, clone resources, multi-state resources)
◈ Pacemaker management using pcs
◈ Pacemaker management using crmsh
◈ Configuration and Management of corosync in conjunction with Pacemaker
◈ Awareness of other cluster engines (OpenAIS, Heartbeat, CMAN)

Terms and Utilities:

◈ pcs
◈ crm
◈ crm_mon
◈ crm_verify
◈ crm_simulate
◈ crm_shadow
◈ crm_resource
◈ crm_attribute
◈ crm_node
◈ crm_standby
◈ cibadmin
◈ corosync.conf
◈ authkey
◈ corosync-cfgtool
◈ corosync-cmapctl
◈ corosync-quorumtool
◈ stonith_admin
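
A brief, illustrative Pacemaker session (the resource name and address are examples):

pcs status                                      # cluster, node and resource overview
pcs resource create vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.20 cidr_netmask=24 \
    op monitor interval=30s                     # define a floating IP resource
crm_mon -1                                      # one-shot monitoring output
crm_verify -L                                   # check the live CIB for errors
corosync-quorumtool -s                          # current quorum state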

334.4 High Availability in Enterprise Linux Distributions

Weight: 1

Description: Candidates should be aware of how enterprise Linux distributions integrate High Availability technologies.

Key Knowledge Areas:

◈ Basic knowledge of Red Hat Enterprise Linux High Availability Add-On
◈ Basic knowledge of SUSE Linux Enterprise High Availability Extension

Terms and Utilities:

◈ Distribution specific configuration tools
◈ Integration of cluster engines, load balancers, storage technology, cluster filesystems, etc.

Topic 335: High Availability Cluster Storage


335.1 DRBD / cLVM

Weight: 3

Description: Candidates are expected to have the experience and knowledge to install, configure, maintain and troubleshoot DRBD devices. This includes integration with Pacemaker. DRBD configuration of version 8.4.x is covered. Candidates are further expected to be able to manage LVM configuration within a shared storage cluster.

Key Knowledge Areas:

◈ Understanding of DRBD resources, states and replication modes
◈ Configuration of DRBD resources, networking, disks and devices
◈ Configuration of DRBD automatic recovery and error handling
◈ Management of DRBD using drbdadm
◈ Basic knowledge of drbdsetup and drbdmeta
◈ Integration of DRBD with Pacemaker
◈ cLVM
◈ Integration of cLVM with Pacemaker

Terms and Utilities:

◈ Protocol A, B and C
◈ Primary, Secondary
◈ Three-way replication
◈ drbd kernel module
◈ drbdadm
◈ drbdsetup
◈ drbdmeta
◈ /etc/drbd.conf
◈ /proc/drbd
◈ LVM2
◈ clvmd
◈ vgchange, vgs
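
An illustrative DRBD bring-up on one node (the resource and volume group names are examples):

drbdadm create-md r0            # initialize metadata for resource r0
drbdadm up r0                   # attach the backing disk and connect to the peer
drbdadm primary --force r0      # promote this node (initial synchronization only)
cat /proc/drbd                  # connection state and sync progress
vgchange -cy vgdata             # mark a volume group as clustered for clvmd
vgs                             # the 'c' attribute flags clustered volume groups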

335.2 Clustered File Systems

Weight: 3

Description: Candidates should know how to install, maintain and troubleshoot installations using GFS2 and OCFS2. This includes integration with Pacemaker as well as awareness of other clustered filesystems available in a Linux environment.

Key Knowledge Areas:

◈ Understand the principles of cluster file systems
◈ Create, maintain and troubleshoot GFS2 file systems in a cluster
◈ Create, maintain and troubleshoot OCFS2 file systems in a cluster
◈ Integration of GFS2 and OCFS2 with Pacemaker
◈ Awareness of the O2CB cluster stack
◈ Awareness of other commonly used clustered file systems

Terms and Utilities:

◈ Distributed Lock Manager (DLM)
◈ mkfs.gfs2
◈ mount.gfs2
◈ fsck.gfs2
◈ gfs2_grow
◈ gfs2_edit
◈ gfs2_jadd
◈ mkfs.ocfs2
◈ mount.ocfs2
◈ fsck.ocfs2
◈ tunefs.ocfs2
◈ mounted.ocfs2
◈ o2info
◈ o2image
◈ CephFS
◈ GlusterFS
◈ AFS

Saturday, 10 March 2018

LPIC-3 303: Security


The LPIC-3 certification is the culmination of LPI’s multi-level professional certification program. LPIC-3 is designed for the enterprise-level Linux professional and represents the highest level of professional, distribution-neutral Linux certification within the industry. Three separate LPIC-3 specialty certifications are available. Passing any one of the three exams will grant the LPIC-3 certification for that specialty.

The LPIC-3 303: Security certification covers the administration of Linux systems enterprise-wide with an emphasis on security.

Current Version: 2.0 (Exam code 303-200)

Prerequisites: The candidate must have an active LPIC-2 certification to receive LPIC-3 certification, but the LPIC-2 and LPIC-3 exams may be taken in any order

Requirements: Passing the 303 exam

Validity Period: 5 years

Languages: English, Japanese


Exam Objectives Version: Version 2.0

Exam Code: 303-200

About Objective Weights: Each objective is assigned a weighting value. The weights indicate the relative importance of each objective on the exam. Objectives with higher weights will be covered in the exam with more questions.

LPIC-3 Exam 303: Security


Topic 325: Cryptography


325.1 X.509 Certificates and Public Key Infrastructures

Weight: 5

Description: Candidates should understand X.509 certificates and public key infrastructures. They should know how to configure and use OpenSSL to implement certification authorities and issue SSL certificates for various purposes.

Key Knowledge Areas:

◈ Understand X.509 certificates, X.509 certificate lifecycle, X.509 certificate fields and X.509v3 certificate extensions
◈ Understand trust chains and public key infrastructures
◈ Generate and manage public and private keys
◈ Create, operate and secure a certification authority
◈ Request, sign and manage server and client certificates
◈ Revoke certificates and certification authorities

The following is a partial list of the used files, terms and utilities:

◈ openssl, including relevant subcommands
◈ OpenSSL configuration
◈ PEM, DER, PKCS
◈ CSR
◈ CRL
◈ OCSP
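
A minimal OpenSSL sketch of a private CA signing a server certificate (all file names are examples):

openssl genrsa -out ca.key 4096                                 # CA private key
openssl req -new -x509 -key ca.key -days 3650 -out ca.crt       # self-signed CA certificate
openssl genrsa -out server.key 2048                             # server key
openssl req -new -key server.key -out server.csr                # certificate signing request (CSR)
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out server.crt                   # sign the CSR with the CA
openssl x509 -in server.crt -noout -text                        # inspect the resulting certificate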

325.2 X.509 Certificates for Encryption, Signing and Authentication

Weight: 4

Description: Candidates should know how to use X.509 certificates for both server and client authentication. Candidates should be able to implement user and server authentication for Apache HTTPD. The version of Apache HTTPD covered is 2.4 or higher.

Key Knowledge Areas:

◈ Understand SSL, TLS and protocol versions
◈ Understand common transport layer security threats, for example Man-in-the-Middle
◈ Configure Apache HTTPD with mod_ssl to provide HTTPS service, including SNI and HSTS
◈ Configure Apache HTTPD with mod_ssl to authenticate users using certificates
◈ Configure Apache HTTPD with mod_ssl to provide OCSP stapling
◈ Use OpenSSL for SSL/TLS client and server tests

Terms and Utilities:

◈ Intermediate certification authorities
◈ Cipher configuration (no cipher-specific knowledge)
◈ httpd.conf
◈ mod_ssl
◈ openssl

325.3 Encrypted File Systems

Weight: 3

Description: Candidates should be able to setup and configure encrypted file systems.

Key Knowledge Areas:

◈ Understand block device and file system encryption
◈ Use dm-crypt with LUKS to encrypt block devices
◈ Use eCryptfs to encrypt file systems, including home directories and PAM integration
◈ Be aware of plain dm-crypt and EncFS

Terms and Utilities:

◈ cryptsetup
◈ cryptmount
◈ /etc/crypttab
◈ ecryptfsd
◈ ecryptfs-* commands
◈ mount.ecryptfs, umount.ecryptfs
◈ pam_ecryptfs
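
A short dm-crypt/LUKS example (the device and mount point are placeholders; luksFormat destroys existing data):

cryptsetup luksFormat /dev/sdb1             # initialize a LUKS container
cryptsetup luksOpen /dev/sdb1 securedata    # map it to /dev/mapper/securedata
mkfs.ext4 /dev/mapper/securedata            # create a filesystem inside it
mount /dev/mapper/securedata /mnt/secure    # use it like any block device
umount /mnt/secure && cryptsetup luksClose securedata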

325.4 DNS and Cryptography

Weight: 5

Description: Candidates should have experience and knowledge of cryptography in the context of DNS and its implementation using BIND. The version of BIND covered is 9.7 or higher.

Key Knowledge Areas:

◈ Understanding of DNSSEC and DANE
◈ Configure and troubleshoot BIND as an authoritative name server serving DNSSEC secured zones
◈ Configure BIND as a recursive name server that performs DNSSEC validation on behalf of its clients
◈ Key Signing Key, Zone Signing Key, Key Tag
◈ Key generation, key storage, key management and key rollover
◈ Maintenance and re-signing of zones
◈ Use DANE to publish X.509 certificate information in DNS
◈ Use TSIG for secure communication with BIND

Terms and Utilities:

◈ DNS, EDNS, Zones, Resource Records
◈ DNS resource records: DS, DNSKEY, RRSIG, NSEC, NSEC3, NSEC3PARAM, TLSA
◈ DO-Bit, AD-Bit
◈ TSIG
◈ named.conf
◈ dnssec-keygen
◈ dnssec-signzone
◈ dnssec-settime
◈ dnssec-dsfromkey
◈ rndc
◈ dig
◈ delv
◈ openssl
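
An illustrative DNSSEC signing workflow with the BIND tools (zone, file and server names are examples; keys are assumed to sit in the working directory):

dnssec-keygen -a RSASHA256 -b 2048 -n ZONE example.com              # zone signing key (ZSK)
dnssec-keygen -a RSASHA256 -b 4096 -n ZONE -f KSK example.com       # key signing key (KSK)
dnssec-signzone -o example.com -S db.example.com                    # sign the zone file using smart signing
dig +dnssec SOA example.com                                         # query with the DO bit; check for the AD flag
delv @192.0.2.53 example.com A                                      # validate a response with delv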

Topic 326: Host Security


326.1 Host Hardening

Weight: 3

Description: Candidates should be able to secure computers running Linux against common threats. This includes kernel and software configuration.

Key Knowledge Areas:

◈ Configure BIOS and boot loader (GRUB 2) security
◈ Disable useless software and services
◈ Use sysctl for security related kernel configuration, particularly ASLR, Exec-Shield and IP / ICMP configuration
◈ Limit resource usage
◈ Work with chroot environments
◈ Drop unnecessary capabilities
◈ Be aware of the security advantages of virtualization

Terms and Utilities:

◈ grub.cfg
◈ chkconfig, systemctl
◈ ulimit
◈ /etc/security/limits.conf
◈ pam_limits.so
◈ chroot
◈ sysctl
◈ /etc/sysctl.conf
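
A few illustrative hardening steps (the sysctl keys are real, the service name is only an example):

sysctl kernel.randomize_va_space                                    # current ASLR setting
sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=1                    # ignore broadcast pings
echo 'net.ipv4.conf.all.accept_redirects = 0' >> /etc/sysctl.conf   # persist a setting
systemctl disable --now telnet.socket                               # remove an unnecessary service
ulimit -u 100                                                       # limit processes for the current shell
chroot /srv/jail /bin/bash                                          # work inside a chroot environment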

326.2 Host Intrusion Detection

Weight: 4

Description: Candidates should be familiar with the use and configuration of common host intrusion detection software. This includes updates and maintenance as well as automated host scans.

Key Knowledge Areas:

◈ Use and configure the Linux Audit system
◈ Use chkrootkit
◈ Use and configure rkhunter, including updates
◈ Use Linux Malware Detect
◈ Automate host scans using cron
◈ Configure and use AIDE, including rule management
◈ Be aware of OpenSCAP

Terms and Utilities:

◈ auditd
◈ auditctl
◈ ausearch, aureport
◈ auditd.conf
◈ auditd.rules
◈ pam_tty_audit.so
◈ chkrootkit
◈ rkhunter
◈ /etc/rkhunter.conf
◈ maldet
◈ conf.maldet
◈ aide
◈ /etc/aide/aide.conf

326.3 User Management and Authentication

Weight: 5

Description: Candidates should be familiar with management and authentication of user accounts. This includes configuration and use of NSS, PAM, SSSD and Kerberos for both local and remote directories and authentication mechanisms as well as enforcing a password policy.

Key Knowledge Areas:

◈ Understand and configure NSS
◈ Understand and configure PAM
◈ Enforce password complexity policies and periodic password changes
◈ Lock accounts automatically after failed login attempts
◈ Configure and use SSSD
◈ Configure NSS and PAM for use with SSSD
◈ Configure SSSD authentication against Active Directory, IPA, LDAP, Kerberos and local domains
◈ Obtain and manage Kerberos tickets

Terms and Utilities:

◈ nsswitch.conf
◈ /etc/login.defs
◈ pam_cracklib.so
◈ chage
◈ pam_tally.so, pam_tally2.so
◈ faillog
◈ pam_sss.so
◈ sssd
◈ sssd.conf
◈ sss_* commands
◈ krb5.conf
◈ kinit, klist, kdestroy

326.4 FreeIPA Installation and Samba Integration

Weight: 4

Description: Candidates should be familiar with FreeIPA v4.x. This includes installation and maintenance of a server instance with a FreeIPA domain as well as integration of FreeIPA with Active Directory.

Key Knowledge Areas:

◈ Understand FreeIPA, including its architecture and components
◈ Understand system and configuration prerequisites for installing FreeIPA
◈ Install and manage a FreeIPA server and domain
◈ Understand and configure Active Directory replication and Kerberos cross-realm trusts
◈ Be aware of sudo, autofs, SSH and SELinux integration in FreeIPA

Terms and Utilities:

◈ 389 Directory Server, MIT Kerberos, Dogtag Certificate System, NTP, DNS, SSSD, certmonger
◈ ipa, including relevant subcommands
◈ ipa-server-install, ipa-client-install, ipa-replica-install
◈ ipa-replica-prepare, ipa-replica-manage

Topic 327: Access Control


327.1 Discretionary Access Control

Weight: 3

Description: Candidates are required to understand Discretionary Access Control and know how to implement it using Access Control Lists. Additionally, candidates are required to understand and know how to use Extended Attributes.

Key Knowledge Areas:

◈ Understand and manage file ownership and permissions, including SUID and SGID
◈ Understand and manage access control lists
◈ Understand and manage extended attributes and attribute classes

Terms and Utilities:

◈ getfacl
◈ setfacl
◈ getfattr
◈ setfattr
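
For example (the user, attribute and file names are placeholders):

setfacl -m u:alice:rw report.txt                    # grant an additional user read/write access
getfacl report.txt                                  # display the access control list
setfacl -x u:alice report.txt                       # remove that entry again
setfattr -n user.origin -v "scanner" report.txt     # set an extended attribute in the user namespace
getfattr -d report.txt                              # dump user-namespace attributes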

327.2 Mandatory Access Control

Weight: 4

Description: Candidates should be familiar with Mandatory Access Control systems for Linux. Specifically, candidates should have a thorough knowledge of SELinux. Also, candidates should be aware of other Mandatory Access Control systems for Linux. This includes major features of these systems but not configuration and use.

Key Knowledge Areas:

◈ Understand the concepts of TE, RBAC, MAC and DAC
◈ Configure, manage and use SELinux
◈ Be aware of AppArmor and Smack

Terms and Utilities:

◈ getenforce, setenforce, selinuxenabled
◈ getsebool, setsebool, togglesebool
◈ fixfiles, restorecon, setfiles
◈ newrole, runcon
◈ semanage
◈ sestatus, seinfo
◈ apol
◈ seaudit, seaudit-report, audit2why, audit2allow
◈ /etc/selinux/*
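
A few typical SELinux operations (the boolean shown is a common one; paths are examples):

getenforce                                      # Enforcing, Permissive or Disabled
setenforce 0                                    # switch to permissive mode temporarily
getsebool -a | grep httpd                       # list booleans related to httpd
setsebool -P httpd_can_network_connect on       # change a boolean persistently
restorecon -Rv /var/www                         # restore default file contexts
audit2why < /var/log/audit/audit.log            # explain recent AVC denials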

327.3 Network File Systems

Weight: 3

Description: Candidates should have experience and knowledge of security issues in use and configuration of NFSv4 clients and servers as well as CIFS client services. Earlier versions of NFS are not required knowledge.

Key Knowledge Areas:

◈ Understand NFSv4 security issues and improvements
◈ Configure NFSv4 server and clients
◈ Understand and configure NFSv4 authentication mechanisms (LIPKEY, SPKM, Kerberos)
◈ Understand and use NFSv4 pseudo file system
◈ Understand and use NFSv4 ACLs
◈ Configure CIFS clients
◈ Understand and use CIFS Unix Extensions
◈ Understand and configure CIFS security modes (NTLM, Kerberos)
◈ Understand and manage mapping and handling of CIFS ACLs and SIDs in a Linux system

Terms and Utilities:

◈ /etc/exports
◈ /etc/idmap.conf
◈ nfs4acl
◈ mount.cifs parameters related to ownership, permissions and security modes
◈ winbind
◈ getcifsacl, setcifsacl

Topic 328: Network Security


328.1 Network Hardening

Weight: 4

Description: Candidates should be able to secure networks against common threats. This includes verification of the effectiveness of security measures.

Key Knowledge Areas:

◈ Configure FreeRADIUS to authenticate network nodes
◈ Use nmap to scan networks and hosts, including different scan methods
◈ Use Wireshark to analyze network traffic, including filters and statistics
◈ Identify and deal with rogue router advertisements and DHCP messages

Terms and Utilities:

◈ radiusd
◈ radmin
◈ radtest, radclient
◈ radlast, radwho
◈ radiusd.conf
◈ /etc/raddb/*
◈ nmap
◈ wireshark
◈ tshark
◈ tcpdump
◈ ndpmon

328.2 Network Intrusion Detection

Weight: 4

Description: Candidates should be familiar with the use and configuration of network security scanning, network monitoring and network intrusion detection software. This includes updating and maintaining the security scanners.

Key Knowledge Areas:

◈ Implement bandwidth usage monitoring
◈ Configure and use Snort, including rule management
◈ Configure and use OpenVAS, including NASL

Terms and Utilities:

◈ ntop
◈ Cacti
◈ snort
◈ snort-stat
◈ /etc/snort/*
◈ openvas-adduser, openvas-rmuser
◈ openvas-nvt-sync
◈ openvassd
◈ openvas-mkcert
◈ /etc/openvas/*

328.3 Packet Filtering

Weight: 5

Description: Candidates should be familiar with the use and configuration of packet filters. This includes netfilter, iptables and ip6tables as well as basic knowledge of nftables, nft and ebtables.

Key Knowledge Areas:

◈ Understand common firewall architectures, including DMZ
◈ Understand and use netfilter, iptables and ip6tables, including standard modules, tests and targets
◈ Implement packet filtering for both IPv4 and IPv6
◈ Implement connection tracking and network address translation
◈ Define IP sets and use them in netfilter rules
◈ Have basic knowledge of nftables and nft
◈ Have basic knowledge of ebtables
◈ Be aware of conntrackd

Terms and Utilities:

◈ iptables
◈ ip6tables
◈ iptables-save, iptables-restore
◈ ip6tables-save, ip6tables-restore
◈ ipset
◈ nft
◈ ebtables
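
A minimal iptables/ipset sketch (addresses and the save location are examples):

iptables -P INPUT DROP                                              # default-deny input policy
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT    # keep existing connections working
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                       # allow SSH
ipset create trusted hash:ip && ipset add trusted 192.0.2.15        # define and populate an IP set
iptables -A INPUT -m set --match-set trusted src -j ACCEPT          # use the set in a rule
iptables-save > /etc/iptables.rules                                 # dump the ruleset for later restore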

328.4 Virtual Private Networks

Weight: 4

Description: Candidates should be familiar with the use of OpenVPN and IPsec.

Key Knowledge Areas:

◈ Configure and operate OpenVPN server and clients for both bridged and routed VPN networks
◈ Configure and operate IPsec server and clients for routed VPN networks using IPsec-Tools / racoon
◈ Awareness of L2TP

Terms and Utilities:

◈ /etc/openvpn/*
◈ openvpn server and client
◈ setkey
◈ /etc/ipsec-tools.conf
◈ /etc/racoon/racoon.conf