Wednesday 28 February 2018

LPIC-1: System Administrator Exam-101

LPIC-1 Certifications, LPIC-1: System Administrator, LPI Guides

The world’s largest and most recognized Linux Certification


LPIC-1 is the first certification in LPI’s multi-level Linux professional certification program. The LPIC-1 will validate the candidate's ability to perform maintenance tasks on the command line, install and configure a computer running Linux and configure basic networking.

The LPIC-1 is designed to reflect current research and validate a candidate's proficiency in real world system administration. The objectives are tied to real-world job skills, which we determine through job task analysis surveying during exam development.

Current Version: 4.0

Prerequisites: There are no prerequisites for this certification

Requirements: Passing exams 101 and 102

Validity Period: 5 years

Languages: English, German, Japanese, Portuguese, Chinese (Simplified) and Chinese (Traditional). Exams in the following languages will be released in 2019: Italian, Spanish and French.

To become LPIC-1 certified the candidate must be able to:

◈ understand the architecture of a Linux system;
◈ install and maintain a Linux workstation, including X11, and set it up as a network client;
◈ work at the Linux command line, including common GNU and Unix commands;
◈ handle files and access permissions as well as system security; and
◈ perform easy maintenance tasks: help users, add users to a larger system, backup and restore, shutdown and reboot.


LPIC-1 Exam 101


Exam Objectives Version: Version 4.0

Exam Codes: 101-400 or LX0-103 (these exams are identical; passing either exam will count as the 101 exam toward your LPIC-1)

About Objective Weights: Each objective is assigned a weighting value. The weights indicate the relative importance of each objective on the exam.

Topic 101: System Architecture


101.1 Determine and configure hardware settings

Weight: 2

Description: Candidates should be able to determine and configure fundamental system hardware.

Key Knowledge Areas:

◈ Tools and utilities to list various hardware information (e.g. lsusb, lspci, etc.)
◈ Tools and utilities to manipulate USB devices
◈ Conceptual understanding of sysfs, udev, dbus

The following is a partial list of the used files, terms and utilities:

◈ /sys/
◈ /proc/
◈ /dev/
◈ modprobe
◈ lsmod
◈ lspci
◈ lsusb
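
The utilities above read information that the kernel exposes under /proc and /sys. A minimal sketch, assuming a Linux system (lspci and lsmod come from the pciutils and kmod packages and may be absent on minimal installs):

```shell
#!/bin/sh
# Sketch: reading hardware information the kernel exposes.
cat /proc/version                  # running kernel version
ls /sys/class | head -n 5          # device classes exported through sysfs
# Enumerate PCI devices and loaded kernel modules, if the tools are present:
command -v lspci >/dev/null && lspci | head -n 3 || true
command -v lsmod >/dev/null && lsmod | head -n 3 || true
```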

101.2 Boot the system

Weight: 3

Description: Candidates should be able to guide the system through the booting process.

Key Knowledge Areas:

◈ Provide common commands to the boot loader and options to the kernel at boot time
◈ Demonstrate knowledge of the boot sequence from BIOS to boot completion
◈ Understanding of SysVinit and systemd
◈ Awareness of Upstart
◈ Check boot events in the log files

Terms and Utilities:

◈ dmesg
◈ BIOS
◈ bootloader
◈ kernel
◈ initramfs
◈ init
◈ SysVinit
◈ systemd

101.3 Change runlevels / boot targets and shutdown or reboot system

Weight: 3

Description: Candidates should be able to manage the SysVinit runlevel or systemd boot target of the system. This objective includes changing to single user mode, shutdown or rebooting the system. Candidates should be able to alert users before switching runlevels / boot targets and properly terminate processes. This objective also includes setting the default SysVinit runlevel or systemd boot target. It also includes awareness of Upstart as an alternative to SysVinit or systemd.

Key Knowledge Areas:

◈ Set the default runlevel or boot target
◈ Change between runlevels / boot targets including single user mode
◈ Shutdown and reboot from the command line
◈ Alert users before switching runlevels / boot targets or other major system events
◈ Properly terminate processes

Terms and Utilities:

◈ /etc/inittab
◈ shutdown
◈ init
◈ /etc/init.d/
◈ telinit
◈ systemd
◈ systemctl
◈ /etc/systemd/
◈ /usr/lib/systemd/
◈ wall
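
To tie the terms above together, here is an illustrative configuration fragment and the equivalent systemd commands (runlevel number and timings are examples only):

```
# /etc/inittab (SysVinit) — set the default runlevel to 3 (multi-user, no X):
id:3:initdefault:

# systemd equivalents (run as root):
#   systemctl set-default multi-user.target   # set the default boot target
#   systemctl isolate rescue.target           # switch to single-user mode now
#   wall "System going down for maintenance"  # warn logged-in users
#   shutdown -r +5                            # reboot in five minutes
```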

Topic 102: Linux Installation and Package Management


102.1 Design hard disk layout

Weight: 2

Description: Candidates should be able to design a disk partitioning scheme for a Linux system.

Key Knowledge Areas:

◈ Allocate filesystems and swap space to separate partitions or disks
◈ Tailor the design to the intended use of the system
◈ Ensure the /boot partition conforms to the hardware architecture requirements for booting
◈ Knowledge of basic features of LVM

Terms and Utilities:

◈ / (root) filesystem
◈ /var filesystem
◈ /home filesystem
◈ /boot filesystem
◈ swap space
◈ mount points
◈ partitions
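
As a concrete (purely illustrative) example, a general-purpose server disk might be laid out like this; all sizes and device names are hypothetical:

```
/dev/sda1   /boot   500 MB   # kernel and boot loader files
/dev/sda2   swap      2 GB   # swap space (roughly RAM-sized on small systems)
/dev/sda3   /        20 GB   # root filesystem
/dev/sda4   /home   rest     # user data on its own partition
# With LVM, /dev/sda3 and /dev/sda4 would instead be logical volumes,
# which can be resized after installation.
```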

102.2 Install a boot manager

Weight: 2

Description: Candidates should be able to select, install and configure a boot manager.

Key Knowledge Areas:

◈ Providing alternative boot locations and backup boot options
◈ Install and configure a boot loader such as GRUB Legacy
◈ Perform basic configuration changes for GRUB 2
◈ Interact with the boot loader

The following is a partial list of the used files, terms and utilities:

◈ menu.lst, grub.cfg and grub.conf
◈ grub-install
◈ grub-mkconfig
◈ MBR

102.3 Manage shared libraries

Weight: 1

Description: Candidates should be able to determine the shared libraries that executable programs depend on and install them when necessary.

Key Knowledge Areas:

◈ Identify shared libraries
◈ Identify the typical locations of system libraries
◈ Load shared libraries

Terms and Utilities:

◈ ldd
◈ ldconfig
◈ /etc/ld.so.conf
◈ LD_LIBRARY_PATH
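
A short sketch of the tools above, assuming a glibc-based system (the /opt/mylibs directory is hypothetical):

```shell
#!/bin/sh
# Sketch: inspecting shared-library dependencies.
ldd /bin/ls | head -n 5            # libraries the binary is linked against
# Extra directories can be searched at run time via LD_LIBRARY_PATH:
LD_LIBRARY_PATH=/opt/mylibs ldd /bin/ls >/dev/null
# ldconfig (run as root) rebuilds the linker cache from /etc/ld.so.conf
```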

102.4 Use Debian package management

Weight: 3

Description: Candidates should be able to perform package management using the Debian package tools.

Key Knowledge Areas:

◈ Install, upgrade and uninstall Debian binary packages
◈ Find packages containing specific files or libraries which may or may not be installed
◈ Obtain package information like version, content, dependencies, package integrity and installation status (whether or not the package is installed)

Terms and Utilities:

◈ /etc/apt/sources.list
◈ dpkg
◈ dpkg-reconfigure
◈ apt-get
◈ apt-cache
◈ aptitude
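
A sketch of common queries with the tools above, assuming a Debian-based system with coreutils installed; install and remove operations require root:

```shell
#!/bin/sh
# Sketch: querying the Debian package database.
dpkg -s coreutils | head -n 3      # installation status and version
dpkg -L coreutils | head -n 5      # files the package installed
# apt-get update && apt-get install <pkg>   # install from sources.list repos
# apt-cache search <keyword>                # find packages by keyword
```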

102.5 Use RPM and YUM package management

Weight: 3

Description: Candidates should be able to perform package management using RPM and YUM tools.

Key Knowledge Areas:

◈ Install, re-install, upgrade and remove packages using RPM and YUM
◈ Obtain information on RPM packages such as version, status, dependencies, integrity and signatures
◈ Determine what files a package provides, as well as find which package a specific file comes from

Terms and Utilities:

◈ rpm
◈ rpm2cpio
◈ /etc/yum.conf
◈ /etc/yum.repos.d/
◈ yum
◈ yumdownloader

Topic 103: GNU and Unix Commands


103.1 Work on the command line

Weight: 4

Description: Candidates should be able to interact with shells and commands using the command line. The objective assumes the Bash shell.

Key Knowledge Areas:

◈ Use single shell commands and one line command sequences to perform basic tasks on the command line
◈ Use and modify the shell environment including defining, referencing and exporting environment variables
◈ Use and edit command history
◈ Invoke commands inside and outside the defined path

Terms and Utilities:

◈ bash
◈ echo
◈ env
◈ export
◈ pwd
◈ set
◈ unset
◈ man
◈ uname
◈ history
◈ .bash_history
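
A minimal sketch of defining, exporting and inspecting environment variables with the utilities above:

```shell
#!/bin/sh
# Sketch: shell variables, the environment and basic commands.
GREETING="hello"                    # shell variable, local to this shell
sh -c 'echo "${GREETING:-unset}"'   # child shell does not see it yet
export GREETING                     # place it in the environment
sh -c 'echo "$GREETING"'            # child shell now prints: hello
pwd                                 # current working directory
uname -s                            # kernel name
```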

103.2 Process text streams using filters

Weight: 3

Description: Candidates should be able to apply filters to text streams.

Key Knowledge Areas:

◈ Send text files and output streams through text utility filters to modify the output using standard UNIX commands found in the GNU textutils package

Terms and Utilities:

◈ cat
◈ cut
◈ expand
◈ fmt
◈ head
◈ join
◈ less
◈ nl
◈ od
◈ paste
◈ pr
◈ sed
◈ sort
◈ split
◈ tail
◈ tr
◈ unexpand
◈ uniq
◈ wc
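
A few of the filters above chained on inline sample data, as a minimal sketch:

```shell
#!/bin/sh
# Sketch: chaining text filters on a small sample (no files needed).
printf 'banana\napple\napple\ncherry\n' | sort | uniq -c   # count duplicates
printf 'one:two:three\n' | cut -d: -f2                     # prints: two
printf 'hello\n' | tr 'a-z' 'A-Z'                          # prints: HELLO
printf 'a b c\n' | wc -w                                   # prints: 3
printf 'foo\nbar\n' | sed 's/foo/baz/'                     # stream edit
```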

103.3 Perform basic file management

Weight: 4

Description: Candidates should be able to use the basic Linux commands to manage files and directories.

Key Knowledge Areas:

◈ Copy, move and remove files and directories individually
◈ Copy multiple files and directories recursively
◈ Remove files and directories recursively
◈ Use simple and advanced wildcard specifications in commands
◈ Using find to locate and act on files based on type, size, or time
◈ Usage of tar, cpio and dd

Terms and Utilities:

◈ cp
◈ find
◈ mkdir
◈ mv
◈ ls
◈ rm
◈ rmdir
◈ touch
◈ tar
◈ cpio
◈ dd
◈ file
◈ gzip
◈ gunzip
◈ bzip2
◈ xz
◈ file globbing
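
The file-management tasks above can be sketched in a scratch directory so nothing outside it is touched:

```shell
#!/bin/sh
# Sketch: copying, archiving, finding and removing files.
set -e
d=$(mktemp -d)
mkdir -p "$d/src"
touch "$d/src/a.txt" "$d/src/b.txt"
cp -r "$d/src" "$d/backup"                 # recursive copy
tar -czf "$d/src.tar.gz" -C "$d" src       # create a gzip-compressed archive
tar -tzf "$d/src.tar.gz"                   # list the archive contents
find "$d" -name '*.txt'                    # locate files by wildcard
rm -r "$d"                                 # recursive removal
```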

103.4 Use streams, pipes and redirects

Weight: 4

Description: Candidates should be able to redirect streams and connect them in order to efficiently process textual data. Tasks include redirecting standard input, standard output and standard error, piping the output of one command to the input of another command, using the output of one command as arguments to another command and sending output to both stdout and a file.

Key Knowledge Areas:

◈ Redirecting standard input, standard output and standard error
◈ Pipe the output of one command to the input of another command
◈ Use the output of one command as arguments to another command
◈ Send output to both stdout and a file

Terms and Utilities:

◈ tee
◈ xargs
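
The four tasks listed above, sketched in a scratch directory:

```shell
#!/bin/sh
# Sketch: redirection, tee and xargs.
set -e
d=$(mktemp -d)
echo "to screen and file" | tee "$d/log.txt"        # duplicate stdout into a file
printf 'a.txt\nb.txt\n' | xargs -I{} touch "$d/{}"  # use output as arguments
ls "$d"
sh -c 'echo oops >&2' 2> "$d/err.txt"               # redirect stderr to a file
cat "$d/err.txt"                                    # prints: oops
rm -r "$d"
```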

103.5 Create, monitor and kill processes

Weight: 4

Description: Candidates should be able to perform basic process management.

Key Knowledge Areas:

◈ Run jobs in the foreground and background
◈ Signal a program to continue running after logout
◈ Monitor active processes
◈ Select and sort processes for display
◈ Send signals to processes

Terms and Utilities:

◈ &
◈ bg
◈ fg
◈ jobs
◈ kill
◈ nohup
◈ ps
◈ top
◈ free
◈ uptime
◈ pgrep
◈ pkill
◈ killall
◈ screen
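
A minimal sketch of background jobs and signals (the long-task command in the comment is hypothetical):

```shell
#!/bin/sh
# Sketch: background jobs and signals.
sleep 30 &                        # start a job in the background
pid=$!
ps -p "$pid"                      # the process is running
kill "$pid"                       # send SIGTERM
wait "$pid" 2>/dev/null || true   # reap it; wait returns non-zero after a signal
# nohup ./long-task &             # hypothetical: keep running after logout
```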

103.6 Modify process execution priorities

Weight: 2

Description: Candidates should be able to manage process execution priorities.

Key Knowledge Areas:

◈ Know the default priority of a job that is created
◈ Run a program with higher or lower priority than the default
◈ Change the priority of a running process

Terms and Utilities:

◈ nice
◈ ps
◈ renice
◈ top
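
A short sketch: the default nice value of a new job is 0, and only root may assign negative (higher-priority) values. The PID in the comment is a placeholder:

```shell
#!/bin/sh
# Sketch: starting a command at lower priority.
nice -n 10 sh -c 'echo running at nice value 10'
# renice -n 5 -p 1234    # change the priority of a running process (PID is hypothetical)
```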

103.7 Search text files using regular expressions

Weight: 2

Description: Candidates should be able to manipulate files and text data using regular expressions. This objective includes creating simple regular expressions containing several notational elements. It also includes using regular expression tools to perform searches through a filesystem or file content.

Key Knowledge Areas:

◈ Create simple regular expressions containing several notational elements
◈ Use regular expression tools to perform searches through a filesystem or file content

Terms and Utilities:

◈ grep
◈ egrep
◈ fgrep
◈ sed
◈ regex(7)
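
A minimal sketch of basic and extended regular expressions with grep and sed (GNU versions assumed for the -o and -E options):

```shell
#!/bin/sh
# Sketch: anchors, character classes and backreferences.
printf 'error: disk full\nok\nError: timeout\n' | grep -i '^error'  # anchored, case-insensitive
printf 'cat cot cut\n' | grep -oE 'c[aou]t'                         # one match per output line
printf '2018-02-28\n' | sed -E 's/([0-9]{4})-([0-9]{2}).*/\1/'      # prints: 2018
```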

103.8 Perform basic file editing operations using vi

Weight: 3

Description: Candidates should be able to edit text files using vi. This objective includes vi navigation, basic vi modes, inserting, editing, deleting, copying and finding text.

Key Knowledge Areas:

◈ Navigate a document using vi
◈ Use basic vi modes
◈ Insert, edit, delete, copy and find text

Terms and Utilities:

◈ vi
◈ /, ?
◈ h,j,k,l
◈ i, o, a
◈ c, d, p, y, dd, yy
◈ ZZ, :w!, :q!, :e!

Topic 104: Devices, Linux Filesystems, Filesystem Hierarchy Standard


104.1 Create partitions and filesystems

Weight: 2

Description: Candidates should be able to configure disk partitions and then create filesystems on media such as hard disks. This includes the handling of swap partitions.

Key Knowledge Areas:

◈ Manage MBR partition tables
◈ Use various mkfs commands to create various filesystems such as:
◈ ext2/ext3/ext4
◈ XFS
◈ VFAT
◈ Awareness of ReiserFS and Btrfs
◈ Basic knowledge of gdisk and parted with GPT

Terms and Utilities:

◈ fdisk
◈ gdisk
◈ parted
◈ mkfs
◈ mkswap

104.2 Maintain the integrity of filesystems

Weight: 2

Description: Candidates should be able to maintain a standard filesystem, as well as the extra data associated with a journaling filesystem.

Key Knowledge Areas:

◈ Verify the integrity of filesystems
◈ Monitor free space and inodes
◈ Repair simple filesystem problems

Terms and Utilities:

◈ du
◈ df
◈ fsck
◈ e2fsck
◈ mke2fs
◈ debugfs
◈ dumpe2fs
◈ tune2fs
◈ XFS tools (such as xfs_metadump and xfs_info)
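
A sketch of routine integrity monitoring; the device names in the comments are hypothetical, and fsck must only be run on unmounted filesystems:

```shell
#!/bin/sh
# Sketch: monitoring space and inode usage.
df -h /                  # free space on the root filesystem
df -i /                  # inode usage
du -sh /tmp              # total size of a directory tree
# fsck /dev/sdb1         # check an *unmounted* filesystem, as root
# tune2fs -l /dev/sdb1   # dump ext2/3/4 superblock parameters, as root
```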

104.3 Control mounting and unmounting of filesystems

Weight: 3

Description: Candidates should be able to configure the mounting of a filesystem.

Key Knowledge Areas:

◈ Manually mount and unmount filesystems
◈ Configure filesystem mounting on bootup
◈ Configure user mountable removable filesystems

Terms and Utilities:

◈ /etc/fstab
◈ /media/
◈ mount
◈ umount
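
An illustrative /etc/fstab fragment covering boot-time and user-mountable entries (all device names are hypothetical):

```
# <device>   <mount point>  <type>  <options>    <dump>  <pass>
/dev/sdb1    /data          ext4    defaults     0       2
/dev/sdc1    /media/usb     vfat    user,noauto  0       0
```

With these entries, `mount /data` picks up its options from fstab, and the `user` option lets non-root users mount the removable device; `noauto` keeps it from being mounted at boot.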

104.4 Manage disk quotas

Weight: 1

Description: Candidates should be able to manage disk quotas for users.

Key Knowledge Areas:

◈ Set up a disk quota for a filesystem
◈ Edit, check and generate user quota reports

Terms and Utilities:

◈ quota
◈ edquota
◈ repquota
◈ quotaon

104.5 Manage file permissions and ownership

Weight: 3

Description: Candidates should be able to control file access through the proper use of permissions and ownerships.

Key Knowledge Areas:

◈ Manage access permissions on regular and special files as well as directories
◈ Use access modes such as suid, sgid and the sticky bit to maintain security
◈ Know how to change the file creation mask
◈ Use the group field to grant file access to group members

Terms and Utilities:

◈ chmod
◈ umask
◈ chown
◈ chgrp
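
A sketch of modes and the file-creation mask (stat -c is GNU-specific; the user and group names in the comment are hypothetical):

```shell
#!/bin/sh
# Sketch: permissions and umask in a scratch directory.
set -e
d=$(mktemp -d)
touch "$d/script.sh"
chmod 750 "$d/script.sh"       # rwxr-x--- : owner full, group read/execute
stat -c %a "$d/script.sh"      # prints: 750
umask 027                      # new files are now created mode 640
touch "$d/new.txt"
stat -c %a "$d/new.txt"        # prints: 640
# chown alice:devs file        # change owner and group, as root
rm -r "$d"
```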

104.6 Create and change hard and symbolic links

Weight: 2

Description: Candidates should be able to create and manage hard and symbolic links to a file.

Key Knowledge Areas:

◈ Create links
◈ Identify hard and/or soft links
◈ Copying versus linking files
◈ Use links to support system administration tasks

Terms and Utilities:

◈ ln
◈ ls
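
The key distinction above: a hard link is a second name for the same inode, while a symbolic link stores a path. A minimal sketch:

```shell
#!/bin/sh
# Sketch: hard versus symbolic links.
set -e
d=$(mktemp -d)
echo data > "$d/original"
ln "$d/original" "$d/hardlink"     # same inode, same data
ln -s original "$d/symlink"        # points at the name, not the inode
ls -li "$d"                        # -i shows the matching inode numbers
readlink "$d/symlink"              # prints: original
rm -r "$d"
```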

104.7 Find system files and place files in the correct location

Weight: 2

Description: Candidates should be thoroughly familiar with the Filesystem Hierarchy Standard (FHS), including typical file locations and directory classifications.

Key Knowledge Areas:

◈ Understand the correct locations of files under the FHS
◈ Find files and commands on a Linux system
◈ Know the location and purpose of important files and directories as defined in the FHS

Terms and Utilities:

◈ find
◈ locate
◈ updatedb
◈ whereis
◈ which
◈ type
◈ /etc/updatedb.conf
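
A sketch of the lookup tools above; locate only works after updatedb has built its database, so it is left commented out:

```shell
#!/bin/sh
# Sketch: finding commands and files.
which ls                                       # full path of an external command
type cd                                        # reports a shell builtin
command -v whereis >/dev/null && whereis ls    # binary, source and man page, if available
find /usr/bin -maxdepth 1 -name 'ls*' 2>/dev/null
# locate passwd        # fast lookup via the updatedb database
```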

Saturday 24 February 2018

LPIC-OT Exam 701: DevOps Tools Engineer

LPIC Tutorials and Materials, LPIC Certifications, LPI Guides, DevOps Tools Engineer, LPIC-OT

Businesses across the globe are increasingly implementing DevOps practices to optimize daily systems administration and software development tasks. As a result, they are hiring IT professionals across industries who can effectively apply DevOps to reduce delivery time and improve quality in the development of new software products.

To meet this growing need for qualified professionals, LPI developed the Linux Professional Institute DevOps Tools Engineer certification which verifies the skills needed to use the tools that enhance collaboration in workflows throughout system administration and software development.

In developing the Linux Professional Institute DevOps Tools Engineer certification, LPI reviewed the DevOps tools landscape and defined a set of essential skills when applying DevOps. As such, the certification exam focuses on the practical skills required to work successfully in a DevOps environment, with emphasis on the most prominent DevOps tools. The result is a certification that covers the intersection between development and operations, making it relevant for all IT professionals working in the field of DevOps.

Current Version: 1.0 (Exam code 701-100)

Objectives: 701-100

Prerequisites: There are no prerequisites for this certification.

Requirements: Pass the Linux Professional Institute DevOps Tools Engineer exam. The 90-minute exam consists of 60 multiple choice and fill-in-the-blank questions.

Validity Period: 5 years

Languages: English, Japanese

To receive the LPIC-OT DevOps Tools Engineer Certification the candidate must:

◈ Have a working knowledge of DevOps-related domains such as Software Engineering and Architecture, Container and Machine Deployment, Configuration Management and Monitoring.

◈ Have proficiency in prominent free and open source utilities such as Docker, Vagrant, Ansible, Puppet, Git, and Jenkins.

Exam Objectives Version: Version 1.0

Exam Code: 701-100

About Objective Weights: Each objective is assigned a weighting value. The weights indicate the relative importance of each objective on the exam. Objectives with higher weights will be covered in the exam with more questions.


Topic 701: Software Engineering


701.1 Modern Software Development (weight: 6) 

Weight: 6

Description: Candidates should be able to design software solutions suitable for modern runtime environments. Candidates should understand how services handle data persistence, sessions, status information, transactions, concurrency, security, performance, availability, scaling, load balancing, messaging, monitoring and APIs. Furthermore, candidates should understand the implications of agile and DevOps on software development.

Key Knowledge Areas:

◈ Understand and design service based applications
◈ Understand common API concepts and standards
◈ Understand aspects of data storage, service status and session handling
◈ Design software to be run in containers
◈ Design software to be deployed to cloud services
◈ Awareness of risks in the migration and integration of monolithic legacy software
◈ Understand common application security risks and ways to mitigate them
◈ Understand the concept of agile software development
◈ Understand the concept of DevOps and its implications to software developers and operators

The following is a partial list of the used files, terms and utilities:

◈ REST, JSON
◈ Service Oriented Architecture (SOA)
◈ Microservices
◈ Immutable servers
◈ Loose coupling
◈ Cross site scripting, SQL injections, verbose error reports, API authentication, consistent enforcement of transport encryption
◈ CORS headers and CSRF tokens
◈ ACID properties and CAP theorem

701.2 Standard Components and Platforms for Software (weight: 2)

Weight: 2

Description: Candidates should understand services offered by common cloud platforms. They should be able to include these services in their application architectures and deployment toolchains and understand the required service configurations. OpenStack service components are used as a reference implementation.

Key Knowledge Areas:

◈ Features and concepts of object storage
◈ Features and concepts of relational and NoSQL databases
◈ Features and concepts of message brokers and message queues
◈ Features and concepts of big data services
◈ Features and concepts of application runtimes / PaaS
◈ Features and concepts of content delivery networks

The following is a partial list of the used files, terms and utilities:

◈ OpenStack Swift
◈ OpenStack Trove
◈ OpenStack Zaqar
◈ CloudFoundry
◈ OpenShift

701.3 Source Code Management (weight: 5)

Weight: 5

Description: Candidates should be able to use Git to manage and share source code. This includes creating and contributing to a repository as well as the usage of tags, branches and remote repositories. Furthermore, the candidate should be able to merge files and resolve merging conflicts.

Key Knowledge Areas:

◈ Understand Git concepts and repository structure
◈ Manage files within a Git repository
◈ Manage branches and tags
◈ Work with remote repositories and branches as well as submodules
◈ Merge files and branches
◈ Awareness of SVN and CVS, including concepts of centralized and distributed SCM solutions

The following is a partial list of the used files, terms and utilities:

◈ git
◈ .gitignore
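
A minimal sketch of the workflow described above, assuming git is installed; the repository, user name and email are placeholders:

```shell
#!/bin/sh
# Sketch: create a repository, commit, tag and branch.
set -e
d=$(mktemp -d); cd "$d"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
echo "hello" > README
printf '*.log\n' > .gitignore          # patterns Git should not track
git add README .gitignore
git commit -qm "initial commit"
git tag v0.1                           # lightweight tag on the current commit
git checkout -qb feature               # create and switch to a branch
git branch                             # lists master/main and feature
```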

701.4 Continuous Integration and Continuous Delivery (weight: 5)

Weight: 5

Description: Candidates should understand the principles and components of a continuous integration and continuous delivery pipeline. Candidates should be able to implement a CI/CD pipeline using Jenkins, including triggering the CI/CD pipeline, running unit, integration and acceptance tests, packaging software and handling the deployment of tested software artifacts. This objective covers the feature set of Jenkins version 2.0 or later.

Key Knowledge Areas:

◈ Understand the concepts of Continuous Integration and Continuous Delivery
◈ Understand the components of a CI/CD pipeline, including builds, unit, integration and acceptance tests, artifact management, delivery and deployment
◈ Understand deployment best practices
◈ Understand the architecture and features of Jenkins, including Jenkins Plugins, Jenkins API, notifications and distributed builds
◈ Define and run jobs in Jenkins, including parameter handling
◈ Fingerprinting, artifacts and artifact repositories
◈ Understand how Jenkins models continuous delivery pipelines and implement a declarative continuous delivery pipeline in Jenkins
◈ Awareness of possible authentication and authorization models
◈ Understanding of the Pipeline Plugin
◈ Understand the features of important Jenkins modules such as Copy Artifact Plugin, Fingerprint Plugin, Docker Pipeline, Docker Build and Publish plugin, Git Plugin, Credentials Plugin
◈ Awareness of Artifactory and Nexus

The following is a partial list of the used files, terms and utilities:

◈ Step, Node, Stage
◈ Jenkins DSL
◈ Jenkinsfile
◈ Declarative Pipeline
◈ Blue-green and canary deployment
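
The Step/Node/Stage terms above appear directly in a declarative Jenkinsfile. A minimal sketch; the build and deploy commands inside the stages are hypothetical:

```groovy
// Jenkinsfile — a minimal declarative pipeline
pipeline {
    agent any                          // run on any available node
    stages {
        stage('Build') {
            steps { sh 'make' }        // hypothetical build command
        }
        stage('Test') {
            steps { sh 'make test' }
        }
        stage('Deploy') {
            steps { sh './deploy.sh' } // hypothetical deployment script
        }
    }
}
```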

Topic 702: Container Management


702.1 Container Usage (weight: 7)

Weight: 7

Description: Candidates should be able to build, share and operate Docker containers. This includes creating Dockerfiles, using a Docker registry, creating and interacting with containers as well as connecting containers to networks and storage volumes. This objective covers the feature set of Docker version 17.06 or later.

Key Knowledge Areas:

◈ Understand the Docker architecture
◈ Use existing Docker images from a Docker registry
◈ Create Dockerfiles and build images from Dockerfiles
◈ Upload images to a Docker registry
◈ Operate and access Docker containers
◈ Connect container to Docker networks
◈ Use Docker volumes for shared and persistent container storage

The following is a partial list of the used files, terms and utilities:

◈ docker
◈ Dockerfile
◈ .dockerignore
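
An illustrative Dockerfile showing the build-and-run flow described above; the base image tag and the application script are placeholders:

```dockerfile
# Dockerfile — minimal image build
FROM debian:stable-slim
COPY app.sh /usr/local/bin/app.sh     # hypothetical application script
RUN chmod +x /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]
```

A `.dockerignore` file next to the Dockerfile keeps build-context clutter (for example `.git/` or log files) out of the image build.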

702.2 Container Deployment and Orchestration (weight: 5)

Weight: 5

Description: Candidates should be able to run and manage multiple containers that work together to provide a service. This includes the orchestration of Docker containers using Docker Compose in conjunction with an existing Docker Swarm cluster as well as using an existing Kubernetes cluster. This objective covers the feature sets of Docker Compose version 1.14 or later, Docker Swarm included in Docker 17.06 or later and Kubernetes 1.6 or later.

Key Knowledge Areas:

◈ Understand the application model of Docker Compose
◈ Create and run Docker Compose Files (version 3 or later)
◈ Understand the architecture and functionality of Docker Swarm mode
◈ Run containers in a Docker Swarm, including the definition of services, stacks and the usage of secrets
◈ Understand the architecture and application model of Kubernetes
◈ Define and manage a container-based application for Kubernetes, including the definition of Deployments, Services, ReplicaSets and Pods

The following is a partial list of the used files, terms and utilities:

◈ docker-compose
◈ docker
◈ kubectl
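
As an illustration of the Compose application model above, a version 3 file defining two cooperating services; the service names and images are examples:

```yaml
# docker-compose.yml
version: "3"
services:
  web:
    image: nginx:stable
    ports:
      - "8080:80"            # host port 8080 -> container port 80
    volumes:
      - webdata:/usr/share/nginx/html
  cache:
    image: redis:alpine
volumes:
  webdata:                   # named volume for persistent storage
```

`docker-compose up -d` starts both services; `docker stack deploy` can run the same file against a Swarm cluster.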

702.3 Container Infrastructure (weight: 4)

Weight: 4

Description: Candidates should be able to set up a runtime environment for containers. This includes running containers on a local workstation as well as setting up a dedicated container host. Furthermore, candidates should be aware of other container infrastructures, storage, networking and container specific security aspects. This objective covers the feature set of Docker version 17.06 or later and Docker Machine 0.12 or later.

Key Knowledge Areas:

◈ Use Docker Machine to set up a Docker host
◈ Understand Docker networking concepts, including overlay networks
◈ Create and manage Docker networks
◈ Understand Docker storage concepts
◈ Create and manage Docker volumes
◈ Awareness of Flocker and flannel
◈ Understand the concepts of service discovery
◈ Basic feature knowledge of CoreOS Container Linux, rkt and etcd
◈ Understand security risks of container virtualization and container images and how to mitigate them

The following is a partial list of the used files, terms and utilities:

◈ docker-machine

Topic 703: Machine Deployment


703.1 Virtual Machine Deployment (weight: 4)

Weight: 4

Description: Candidates should be able to automate the deployment of a virtual machine with an operating system and a specific set of configuration files and software.

Key Knowledge Areas:

◈ Understand Vagrant architecture and concepts, including storage and networking
◈ Retrieve and use boxes from Atlas
◈ Create and run Vagrantfiles
◈ Access Vagrant virtual machines
◈ Share and synchronize folders between a Vagrant virtual machine and the host system
◈ Understand Vagrant provisioning, including File, Shell, Ansible and Docker
◈ Understand multi-machine setup

The following is a partial list of the used files, terms and utilities:

◈ vagrant
◈ Vagrantfile
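
A Vagrantfile sketching the networking, synced-folder and provisioning concepts above; the box name, IP address and paths are illustrative:

```ruby
# Vagrantfile — one VM with a synced folder and shell provisioning
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"                    # box retrieved from the box catalog
  config.vm.network "private_network", ip: "192.168.50.10"
  config.vm.synced_folder "./src", "/srv/app"          # host dir shared into the VM
  config.vm.provision "shell", inline: "apt-get update"
end
```

`vagrant up` builds and boots the machine, and `vagrant ssh` opens a shell inside it.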

703.2 Cloud Deployment (weight: 2)

Weight: 2

Description: Candidates should be able to configure IaaS cloud instances and adjust them to match their available hardware resources, specifically disk space and volumes. Additionally, candidates should be able to configure instances to allow secure SSH logins and prepare the instances to be ready for a configuration management tool such as Ansible.

Key Knowledge Areas:

◈ Understanding the features and concepts of cloud-init, including user-data and initializing and configuring cloud-init
◈ Use cloud-init to create, resize and mount file systems, configure user accounts, including login credentials such as SSH keys and install software packages from the distribution’s repository
◈ Understand the features and implications of IaaS clouds and virtualization for a computing instance, such as snapshotting, pausing, cloning and resource limits.
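
A minimal cloud-init user-data sketch covering the login and package tasks above; the user name, key and package are placeholders:

```yaml
#cloud-config
# user-data: create a login user and install packages
users:
  - name: deploy
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAAexample deploy@workstation   # placeholder public key
packages:
  - python3        # so a tool like Ansible can manage the instance
```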

703.3 System Image Creation (weight: 2)

Weight: 2

Description: Candidates should be able to create images for containers, virtual machines and IaaS cloud instances.

Key Knowledge Areas:

◈ Understand the functionality and features of Packer
◈ Create and maintain template files
◈ Build images from template files using different builders

The following is a partial list of the used files, terms and utilities:

◈ packer

Topic 704: Configuration Management


704.1 Ansible (weight: 8)

Weight: 8

Description: Candidates should be able to use Ansible to ensure a target server is in a specific state regarding its configuration and installed software. This objective covers the feature set of Ansible version 2.2 or later.

Key Knowledge Areas:

◈ Understand the principles of automated system configuration and software installation
◈ Create and maintain inventory files
◈ Understand how Ansible interacts with remote systems
◈ Manage SSH login credentials for Ansible, including using unprivileged login accounts
◈ Create, maintain and run Ansible playbooks, including tasks, handlers, conditionals, loops and registers
◈ Set and use variables
◈ Maintain secrets using Ansible vaults
◈ Write Jinja2 templates, including using common filters, loops and conditionals
◈ Understand and use Ansible roles and install Ansible roles from Ansible Galaxy
◈ Understand and use important Ansible tasks, including file, copy, template, ini_file, lineinfile, patch, replace, user, group, command, shell, service, systemd, cron, apt, debconf, yum, git, and debug
◈ Awareness of dynamic inventory
◈ Awareness of Ansible's features for non-Linux systems
◈ Awareness of Ansible containers

The following is a partial list of the used files, terms and utilities:

◈ ansible.cfg
◈ ansible-playbook
◈ ansible-vault
◈ ansible-galaxy
◈ ansible-doc
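
A small playbook sketching tasks, variables and privilege escalation as described above; the host group and package name are illustrative:

```yaml
# playbook.yml — ensure a package is installed and its service running
- hosts: webservers          # group defined in the inventory file
  become: true               # escalate from an unprivileged login account
  vars:
    pkg_name: nginx
  tasks:
    - name: Install the web server
      apt:
        name: "{{ pkg_name }}"
        state: present
    - name: Ensure the service is running
      service:
        name: "{{ pkg_name }}"
        state: started
        enabled: true
```

`ansible-playbook -i inventory playbook.yml` applies it; `ansible-vault` would encrypt any secrets the play needs.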

704.2 Other Configuration Management Tools (weight: 2)

Weight: 2

Description: Candidates should understand the main features and principles of important configuration management tools other than Ansible.

Key Knowledge Areas:

◈ Basic feature and architecture knowledge of Puppet.
◈ Basic feature and architecture knowledge of Chef.

The following is a partial list of the used files, terms and utilities:

◈ Manifest, Class, Recipe, Cookbook
◈ puppet
◈ chef
◈ chef-solo
◈ chef-client
◈ chef-server-ctl
◈ knife

Topic 705: Service Operations


705.1 IT Operations and Monitoring (weight: 4)

Weight: 4

Description: Candidates should understand how IT infrastructure is involved in delivering a service. This includes knowledge about the major goals of IT operations, understanding functional and nonfunctional properties of IT services and ways to monitor and measure them using Prometheus. Furthermore, candidates should understand major security risks in IT infrastructure. This objective covers the feature set of Prometheus 1.7 or later.

Key Knowledge Areas:

◈ Understand goals of IT operations and service provisioning, including nonfunctional properties such as availability, latency, responsiveness
◈ Understand and identify metrics and indicators to monitor and measure the technical functionality of a service
◈ Understand and identify metrics and indicators to monitor and measure the logical functionality of a service
◈ Understand the architecture of Prometheus, including Exporters, Pushgateway, Alertmanager and Grafana
◈ Monitor containers and microservices using Prometheus
◈ Understand the principles of IT attacks against IT infrastructure
◈ Understand the principles of the most important ways to protect IT infrastructure
◈ Understand core IT infrastructure components and their role in deployment

The following is a partial list of the used files, terms and utilities:

◈ Prometheus, Node exporter, Pushgateway, Alertmanager, Grafana
◈ Service exploits, brute force attacks, and denial of service attacks
◈ Security updates, packet filtering and application gateways
◈ Virtualization hosts, DNS and load balancers
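
A minimal Prometheus scrape configuration tying together the components above; the target address assumes a node exporter on the same host and is illustrative:

```yaml
# prometheus.yml — scrape a node exporter every 15 seconds
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']   # node exporter default port
```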

705.2 Log Management and Analysis (weight: 4)

Weight: 4

Description: Candidates should understand the role of log files in operations and troubleshooting. They should be able to set up centralized logging infrastructure based on Logstash to collect and normalize log data. Furthermore, candidates should understand how Elasticsearch and Kibana help to store and access log data.

Key Knowledge Areas:

◈ Understand how application and system logging works
◈ Understand the architecture and functionality of Logstash, including the lifecycle of a log message and Logstash plugins
◈ Understand the architecture and functionality of Elasticsearch and Kibana in the context of log data management (Elastic Stack)
◈ Configure Logstash to collect, normalize, transform and store log data
◈ Configure syslog and Filebeat to send log data to Logstash
◈ Configure Logstash to send email alerts
◈ Understand application support for log management

The following is a partial list of the used files, terms and utilities:

◈ logstash
◈ input, filter, output
◈ grok filter
◈ Log files, metrics
◈ syslog.conf
◈ /etc/logstash/logstash.yml
◈ /etc/filebeat/filebeat.yml
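
The input/filter/output lifecycle above can be sketched as a single Logstash pipeline; the port and Elasticsearch host are illustrative:

```
# Logstash pipeline — input, filter, output
input {
  beats { port => 5044 }                               # events shipped by Filebeat
}
filter {
  grok { match => { "message" => "%{SYSLOGLINE}" } }   # parse syslog-format lines
}
output {
  elasticsearch { hosts => ["localhost:9200"] }        # store for Kibana to query
}
```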

Friday 23 February 2018

LPI Linux Essentials

LPI Linux Essentials, LPI Certifications, LPI Guides, LPI Learning, LPI

Show employers that you have the foundational skills required for your next job or promotion.

Linux adoption continues to rise worldwide as individual users, government entities and industries ranging from automotive to space exploration embrace open source technologies. This expansion of open source in the enterprise is redefining traditional Information and Communication Technology (ICT) job roles to require more Linux skills. Whether you’re starting your career in open source or looking for advancement, independently verifying your skill set can help you stand out to hiring managers or your management team.

The Linux Essentials Professional Development Certificate (PDC) also serves as an ideal stepping-stone to the more advanced LPIC Professional Certification track for Linux Systems Administrators.

Current Version: 1.6 (Exam code 010-160)

Objectives: 010-160

Prerequisites: There are no prerequisites for this certification

Requirements: Passing the Linux Essentials 010 exam

Validity Period: Lifetime

Languages: English, German, Japanese, Dutch, Portuguese (Brazilian), Chinese (Simplified), Chinese (Traditional). Exams in following languages will be released in 2019: Italian, Spanish, French.

To receive the Linux Essentials Certificate the candidate must:

◈ have an understanding of the Linux and open source industry and knowledge of the most popular open source Applications;
◈ understand the major components of the Linux operating system, and have the technical proficiency to work on the Linux command line; and
◈ have a basic understanding of security and administration related topics such as user/group management, working on the command line, and permissions.

Exam 010 Objectives


Linux Essentials Exam 010

Exam Objectives Version: Version 1.6

Exam Code: 010-160

About Objective Weights: Each objective is assigned a weighting value. The weights indicate the relative importance of each objective on the exam. Objectives with higher weights will be covered in the exam with more questions.

Topic 1: The Linux Community and a Career in Open Source


1.1 Linux Evolution and Popular Operating Systems

Weight: 2

Description: Knowledge of Linux development and major distributions.

Key Knowledge Areas:

◈ Distributions
◈ Embedded Systems
◈ Linux in the Cloud

The following is a partial list of the used files, terms and utilities:

◈ Debian, Ubuntu (LTS)
◈ CentOS, openSUSE, Red Hat, SUSE
◈ Linux Mint, Scientific Linux
◈ Raspberry Pi, Raspbian
◈ Android

1.2 Major open source Applications

Weight: 2

Description: Awareness of major applications as well as their uses and development.

Key Knowledge Areas:

◈ Desktop Applications
◈ Server Applications
◈ Development Languages
◈ Package Management Tools and repositories

Terms and Utilities:

◈ OpenOffice.org, LibreOffice, Thunderbird, Firefox, GIMP
◈ Apache HTTPD, NGINX, MySQL, NFS, Samba
◈ C, Java, Perl, shell, Python
◈ dpkg, apt-get, rpm, yum

1.3 Open Source Software and Licensing

Weight: 1

Description: Open communities and licensing open source Software for business.

Key Knowledge Areas:

◈ Open source philosophy
◈ Open source licensing
◈ Free Software Foundation (FSF), Open Source Initiative (OSI)

The following is a partial list of the used files, terms and utilities:

◈ Copyleft, Permissive
◈ GPL, BSD, Creative Commons
◈ Free Software, Open Source Software, FOSS, FLOSS
◈ Open source business models

1.4 ICT Skills and Working in Linux

Weight: 2

Description: Basic Information and Communication Technology (ICT) skills and working in Linux.

Key Knowledge Areas:

◈ Desktop Skills
◈ Getting to the Command Line
◈ Industry uses of Linux, Cloud Computing and Virtualization

Terms and Utilities:

◈ Using a browser, privacy concerns, configuration options, searching the web and saving content
◈ Terminal and Console
◈ Password issues
◈ Privacy issues and tools
◈ Use of common open source applications in presentations and projects

Topic 2: Finding Your Way on a Linux System


2.1 Command Line Basics

Weight: 3

Description: Basics of using the Linux command line.

Key Knowledge Areas:

◈ Basic shell
◈ Command line syntax
◈ Variables
◈ Quoting

Terms and Utilities:

◈ Bash
◈ echo
◈ history
◈ PATH env variable
◈ export
◈ type

2.2 Using the Command Line to Get Help

Weight: 2

Description: Running help commands and navigation of the various help systems.

Key Knowledge Areas:

◈ Man Pages
◈ Info Pages

Terms and Utilities:

◈ man
◈ info
◈ Man pages
◈ /usr/share/doc/
◈ locate

2.3 Using Directories and Listing Files

Weight: 2

Description: Navigation of home and system directories and listing files in various locations.

Key Knowledge Areas:

◈ Files, directories
◈ Hidden files and directories
◈ Home directories
◈ Absolute and relative paths

Terms and Utilities:

◈ Common options for ls
◈ Recursive listings
◈ cd
◈ . and ..
◈ home and ~

2.4 Creating, Moving and Deleting Files

Weight: 2

Description: Create, move and delete files and directories under the home directory.

Key Knowledge Areas:

◈ Files and directories
◈ Case sensitivity
◈ Simple globbing

Terms and Utilities:

◈ mv, cp, rm, touch
◈ mkdir, rmdir
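
The commands above can be combined into a short, safe practice session. This sketch works entirely inside a scratch directory; all file names are illustrative.

```shell
#!/bin/bash
# Work inside a throwaway directory so nothing real is touched.
work="$(mktemp -d)"
cd "$work"

mkdir docs                      # create a directory
touch notes.txt                 # create an empty file
cp notes.txt docs/backup.txt    # copy the file
mv notes.txt docs/notes.txt     # move (rename) the file
ls docs                         # backup.txt  notes.txt

rm docs/backup.txt              # delete a file
rmdir docs 2>/dev/null || echo "docs is not empty"   # rmdir needs an empty dir
```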

Topic 3: The Power of the Command Line (weight: 9)


3.1 Archiving Files on the Command Line

Weight:  2

Description: Archiving files in the user home directory.

Key Knowledge Areas:

◈ Files, directories
◈ Archives, compression

Terms and Utilities:

◈ tar
◈ Common tar options
◈ gzip, bzip2
◈ zip, unzip
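
A typical create–list–extract cycle with tar and gzip might look like the following. The directory and file names are illustrative.

```shell
#!/bin/bash
# Work in a scratch directory; names are illustrative.
work="$(mktemp -d)"
cd "$work"
mkdir project
echo "notes" > project/readme.txt

tar czf project.tar.gz project/     # c=create, z=gzip-compress, f=archive file
tar tzf project.tar.gz              # t=list the archive contents

mkdir restore
tar xzf project.tar.gz -C restore   # x=extract, -C chooses the destination
cat restore/project/readme.txt      # prints: notes
```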

3.2 Searching and Extracting Data from Files

Weight: 3

Description: Search and extract data from files in the home directory.

Key Knowledge Areas:

◈ Command line pipes
◈ I/O re-direction
◈ Basic Regular Expressions using ., [ ], *, and ?

Terms and Utilities:

◈ grep
◈ less
◈ cat, head, tail
◈ sort
◈ cut
◈ wc
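
The tools above are usually chained with pipes and redirection. A small, self-contained example (the sample data is made up):

```shell
#!/bin/bash
# Create a small sample file via output redirection.
f="$(mktemp)"
printf 'alpha\nbravo\nalpha\ncharlie\n' > "$f"

grep -c 'alpha' "$f"      # count lines matching a pattern: 2
sort "$f"                 # sorted lines
cut -c1-3 "$f"            # first three characters of each line
head -n 1 "$f"            # first line: alpha
tail -n 1 "$f"            # last line: charlie
wc -l "$f"                # total line count: 4
```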

3.3 Turning Commands into a Script

Weight: 4

Description: Turning repetitive commands into simple scripts.

Key Knowledge Areas:

◈ Basic shell scripting
◈ Awareness of common text editors

Terms and Utilities:

◈ #! (shebang)
◈ /bin/bash
◈ Variables
◈ Arguments
◈ for loops
◈ echo
◈ Exit status

Topic 4: The Linux Operating System (weight: 8)


4.1 Choosing an Operating System

Weight: 1

Description: Knowledge of major operating systems and Linux distributions.

Key Knowledge Areas:

◈ Differences between Windows, OS X and Linux
◈ Distribution life cycle management

Terms and Utilities:

◈ GUI versus command line, desktop configuration
◈ Maintenance cycles, Beta and Stable

4.2 Understanding Computer Hardware

Weight: 2

Description: Familiarity with the components that go into building desktop and server computers.

Key Knowledge Areas:

◈ Hardware

Terms and Utilities:

◈ Motherboards, processors, power supplies, optical drives, peripherals
◈ Hard drives and partitions, /dev/sd*
◈ Drivers

4.3 Where Data is Stored

Weight: 3

Description: Where various types of information are stored on a Linux system.

Key Knowledge Areas:

◈ Programs and configuration
◈ Processes
◈ Memory addresses
◈ System messaging
◈ Logging

Terms and Utilities:

◈ ps, top, free
◈ syslog, dmesg
◈ /etc/, /var/log/
◈ /boot/, /proc/, /dev/, /sys/
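
A quick tour of these locations and tools on a typical Linux system (output will vary per machine, and dmesg may require root):

```shell
#!/bin/bash
# Processes and memory usage.
ps aux | head -n 3        # first few running processes
free -h || true           # memory usage, human-readable (procps package)

# Kernel ring buffer messages (may need root on hardened systems).
dmesg 2>/dev/null | tail -n 3

# Key filesystem locations.
ls /etc     | head -n 5   # system-wide configuration
ls /var/log | head -n 5   # log files
ls /proc    | head -n 3   # virtual kernel/process information
```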

4.4 Your Computer on the Network

Weight: 2

Description: Querying vital networking configuration and determining the basic requirements for a computer on a Local Area Network (LAN).

Key Knowledge Areas:

◈ Internet, network, routers
◈ Querying DNS client configuration
◈ Querying Network configuration

Terms and Utilities:

◈ route, ip route show
◈ ifconfig, ip addr show
◈ netstat, ss
◈ /etc/resolv.conf, /etc/hosts
◈ IPv4, IPv6
◈ ping
◈ host

Topic 5: Security and File Permissions (weight: 7)


5.1 Basic Security and Identifying User Types

Weight: 2

Description: Various types of users on a Linux system.

Key Knowledge Areas:

◈ Root and Standard Users
◈ System users

Terms and Utilities:

◈ /etc/passwd, /etc/group
◈ id, who, w
◈ sudo, su
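
The user-type concepts above can be explored with a few read-only commands:

```shell
#!/bin/bash
# Show the identity of the current user: UID, GID, group memberships.
id

# The root account always has UID 0.
id -u root                # prints: 0

# Who is logged in right now? (may print nothing in a non-login session)
who || true

# Account records live in /etc/passwd, one colon-separated line per user.
grep '^root:' /etc/passwd
```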

5.2 Creating Users and Groups

Weight: 2

Description: Creating users and groups on a Linux system.

Key Knowledge Areas:

◈ User and group commands
◈ User IDs

Terms and Utilities:

◈ /etc/passwd, /etc/shadow, /etc/group, /etc/skel/
◈ useradd, groupadd
◈ passwd

5.3 Managing File Permissions and Ownership

Weight: 2

Description: Understanding and manipulating file permissions and ownership settings.

Key Knowledge Areas:

◈ File/directory permissions and owners

Terms and Utilities:

◈ ls -l, ls -a
◈ chmod, chown
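
A safe, self-contained demonstration of these commands in a scratch directory (the file name is illustrative; `stat -c` assumes GNU coreutils):

```shell
#!/bin/bash
# Work in a scratch directory so nothing real is touched.
dir="$(mktemp -d)"
touch "$dir/report.txt"

# Octal mode: rw- for the owner, r-- for the group, nothing for others.
chmod 640 "$dir/report.txt"
ls -l "$dir/report.txt"

# Symbolic mode: add execute permission for the owner (now 740).
chmod u+x "$dir/report.txt"
ls -l "$dir/report.txt"

# chown changes ownership; changing to another user requires root:
# chown alice:staff "$dir/report.txt"
```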

5.4 Special Directories and Files

Weight: 1

Description: Special directories and files on a Linux system including special permissions.

Key Knowledge Areas:

◈ Using temporary files and directories
◈ Symbolic links

Terms and Utilities:

◈ /tmp/, /var/tmp/ and Sticky Bit
◈ ls -d
◈ ln -s

Saturday 17 February 2018

How To: Use pwd Command In Linux / UNIX

How do I use the pwd command in Linux or Unix-like operating systems? How can I use the pwd command in UNIX or Linux shell scripts for automation purposes?

pwd stands for print working directory. The pwd command is one of the most frequently used commands on Linux, AIX, HP-UX, *BSD, and other Unix-like operating systems, along with ls and cd. It can be used for the following purposes under Apple OS X, UNIX, or Linux operating systems:

◈ Find the full path to the current directory.

◈ Store the full path to the current directory in a shell variable.

◈ Verify the absolute path.

◈ Verify the physical path, i.e., with symbolic links resolved.

The current directory


The current directory is simply the directory in which you are currently operating while using a shell such as bash, ksh, zsh, tcsh, or csh. You need to open a terminal (GUI) or log in on a console to use the command line.

Syntax


The syntax is:

pwd
pwd [options]
var=$(pwd)
echo "The current working directory is $var."

Examples


To print the current working directory, enter:

$ pwd

Sample outputs:

/home/vivek

In this example, /home/vivek is your current directory. The full path of any directory under Unix-like operating systems always starts with a forward slash. In short:

1. / – Forward slash – The root directory on your system or the file system.
2. home – Sub-directory
3. vivek – Sub-directory

To store the current directory in a shell variable called x, enter:

x=$(pwd)

To print the current directory, use either the echo or printf command:

echo "The current working directory : $x"
OR
printf "The current working directory : %s\n" "$x"

A typical Linux/Unix shell session with pwd


Most Unix users use the pwd command along with ls and cd commands:

## Where am I?
pwd

## List the contents of the current directory
ls
ls -l

# Change the current directory to Videos
cd Videos
pwd

Sample outputs:


Fig.01: A typical shell user session with pwd, ls, and cd commands.

In the example above, the pwd command confirms that the current directory has actually been changed.

Shell pwd vs /bin/pwd


Your shell may have its own version of pwd, which usually supersedes the version described below. To see all locations containing an executable named pwd, enter:

$ type -a pwd

Sample outputs:

pwd is a shell builtin
pwd is /bin/pwd

By typing pwd, you end up using the shell builtin provided by bash or ksh:

pwd

To use the binary version, type full path /bin/pwd:

/bin/pwd

Please note that both commands print the current working directory. However, /bin/pwd has a few more options, as described below.

pwd options


To display the logical current working directory, enter:

$ pwd -L

The -L option causes pwd to use $PWD from the environment, even if it contains symlinks. If the contents of the environment variable PWD provide an absolute name of the current directory with no . or .. components, but possibly with symbolic links, then those contents are printed. Otherwise, pwd falls back to the default -P handling:

$ pwd -P

The -P option displays the physical current working directory (all symbolic links resolved). For example, ~/bin/ is a symbolic link:
$ pwd
$ ls -l ~/bin/

Sample outputs:

lrwxrwxrwx 1 vivek vivek 35 May 13  2012 /home/vivek/bin -> /home/vivek/realdata/scripts/utils/

cd to ~/bin/ and verify the current working directory with pwd:

$ cd ~/bin/
$ pwd

Sample outputs:

/home/vivek/bin

To see the actual physical current working directory with the /home/vivek/bin symlink resolved, enter:

$ pwd -P

Sample outputs:

/home/vivek/realdata/scripts/utils
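
The same behaviour can be reproduced in any scratch directory. A self-contained sketch (the directory layout mirrors the example above, but the paths are illustrative):

```shell
#!/bin/bash
# Build a real directory tree and a symlink pointing into it.
base="$(mktemp -d)"
mkdir -p "$base/realdata/scripts/utils"
ln -s "$base/realdata/scripts/utils" "$base/bin"

# cd through the symlink: logical and physical paths now differ.
cd "$base/bin"
pwd -L    # .../bin                       (path as typed, symlink kept)
pwd -P    # .../realdata/scripts/utils    (symlink resolved)
```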

/bin/pwd options

The /bin/pwd version of the pwd command has two additional options. To display the pwd command version, enter:

$ /bin/pwd --version

Sample outputs:

pwd (GNU coreutils) 8.5
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Written by Jim Meyering.

To see information about pwd, enter:

$ /bin/pwd --help

Sample outputs:

Usage: /bin/pwd [OPTION]...
Print the full filename of the current working directory.

  -L, --logical   use PWD from environment, even if it contains symlinks
  -P, --physical  avoid all symlinks
      --help     display this help and exit
      --version  output version information and exit

NOTE: your shell may have its own version of pwd, which usually supersedes
the version described here.  Please refer to your shell's documentation
for details about the options it supports.

Report pwd bugs to bug-coreutils@gnu.org
GNU coreutils home page: <http://www.gnu.org/software/coreutils/>
General help using GNU software: <http://www.gnu.org/gethelp/>
For complete documentation, run: info coreutils 'pwd invocation'

Shell script example


A basic version:

#!/bin/bash
## Get the working dir
_d="$(pwd)"

## cd to target; bail out if it does not exist
cd /nas03/nixcraft/images/today || exit 1

## do something
echo "Uploading data to cdn..."

## get back to old dir
cd "$_d"

A complete working example that uses the pwd command to inform user about the current working directory before setting up the directory permissions.

#!/bin/bash
# Purpose: Set secure read-only permission for web-server DocumentRoot
# Author: nixCraft < webmaster@cyberciti.biz> under GPL v2.x+
# Usage: ./script
#        ./script /var/www
#-------------------------------------------------------   
## Get dir name from command line args
# if $1 not passed, fall back to current directory
_dir="${1:-.}"

## get the current working dir
_pwd="$(pwd)"

## Permissions
_dp="0544"
_fp="0444"

# Change me to Apache/Lighttpd/Nginx user:group names
_user="www-data"
_group="www-data"

## Die if _dir not found
[ ! -d "$_dir" ] && { echo "Directory $_dir not found."; exit 1; }

echo "Changing file permissions for webserver directories and files to restrictive read-only mode for \"$_dir\""
read -p "Your current working directory is ${_pwd}. Are you sure (y / n) ?" ans

if [ "$ans" = "y" ]
then
     echo "Working on $_dir, please wait..."
     chown -R ${_user}:${_group} "$_dir"
     # Set directory permissions first so find can still descend,
     # then set file permissions.
     find "$_dir" -type d -print0 | xargs -0 chmod "$_dp"
     find "$_dir" -type f -print0 | xargs -0 chmod "$_fp"
fi

Sample outputs:

Fig.02: Sample shell script session (click to enlarge)

A note about the bash/ksh working directory shell variables


The bash and ksh shells (and other shells) set the following environment variables when you use the cd command:

1. OLDPWD – The previous working directory as set by the cd command.
2. PWD – The current working directory as set by the cd command.

To print these environment variables, enter:

$ echo "$PWD $OLDPWD"

To use the OLDPWD environment variable, enter:

$ pwd
/home/accounts/office/b/bom/f/2/2008/10/
$ cd /home/sales/pdfs
$ ls
$ vi sample.broacher.txt
# go back to /home/accounts/office/b/bom/f/2/2008/10/
$ cd "$OLDPWD"
$ pwd
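
The same round trip can be shown with a minimal, reproducible sketch (directories chosen for illustration):

```shell
#!/bin/bash
cd /tmp          # PWD is now /tmp
cd /             # PWD becomes /, OLDPWD becomes /tmp
echo "$OLDPWD"   # prints: /tmp
cd -             # shorthand for: cd "$OLDPWD" (also echoes the directory)
pwd              # prints: /tmp
```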

Friday 16 February 2018

How to unzip a zip file using the Linux and Unix bash shell terminal


You can use the unzip or tar command to extract (unzip) a zip file on Linux or Unix-like operating systems. unzip is a program to unpack, list, test, and extract compressed (zip) archives; it may not be installed by default.

Use tar command to unzip a zip file

The syntax is:

tar xvf {file.zip}
tar -xvf {file.zip}

Use the following syntax if you want to extract/unzip to a particular destination directory:

tar xvf {file.zip} -C /dest/directory/
tar -xvf {file.zip} -C /dest/directory/

Note that extracting zip archives with tar requires a libarchive-based tar (bsdtar, the default tar on FreeBSD and macOS); GNU tar cannot read zip files.

For example, unzip a zip file named master.zip using the tar command:

tar xvf master.zip

To unzip a zip file named master.zip using the tar command into the /tmp/data/ directory:

tar xvf master.zip -C /tmp/data/
ls -l /tmp/data/
cd /tmp/data/
ls -l

Sample session:


Fig.01: How to use a tar command to unzip a file on Linux/Unix-like terminal

Use unzip command to unzip a zip file

The syntax is:

unzip {file.zip}

Use the following syntax if you want to extract/unzip to a particular destination directory:

unzip -d /dest/directory/ {file.zip}

For example, unzip a zip file named master.zip using the unzip command:

unzip master.zip

To unzip a zip file named master.zip using the unzip command into the /tmp/data/ directory:

unzip -d /tmp/data/ master.zip

Sample session:


Fig.02: How to unzip a zip file from the Terminal using unzip command

A note about bash: unzip: command not found

If the unzip command is not installed on your Linux or Unix box, run any one of the following commands, as per your Linux distribution, to install it.

Install unzip on Debian/Ubuntu Linux


Use the apt-get command or apt command to install unzip command:

sudo apt-get install unzip

OR

sudo apt install unzip

Install unzip on Arch Linux


Use the pacman command to install unzip command:

pacman -S unzip

Install unzip on CentOS/RHEL/Scientific/Oracle Linux


Use the yum command to install unzip command:

yum install unzip

Install unzip on Fedora Linux


Use the dnf command to install unzip command:

dnf install unzip

Install unzip on Suse/OpenSUSE Linux


Use the zypper command to install the unzip command:

zypper install unzip

Install unzip on FreeBSD unix


To install the unzip port, run:

# cd /usr/ports/archivers/unzip/ && make install clean

To add the binary package, run the pkg command:

# pkg install unzip

Install unzip on OpenBSD unix


Type the following pkg_add command to install unzip package:

# pkg_add -v unzip

Wednesday 14 February 2018

Creating a high availability setup for Linux on Power

This article describes high availability (HA), disaster recovery (DR), and failover for Linux on Power virtual machines (VMs) or logical partitions (LPARs). The solution described in this article works for all Linux distributions available for IBM® POWER8® and later processor-based servers. The open source components used in this solution are Distributed Replicated Block Device (DRBD) and Heartbeat, which are available for all supported distributions. We have used Ubuntu 16.04, supported on IBM Power® servers, to explain and verify the solution.

We are using DRBD for this solution, as it is a software-based, shared-nothing, replicated storage solution mirroring the content of block devices (such as hard disks, partitions, logical volumes and so on) between hosts. DRBD mirrors data in real time. Replication occurs continuously while applications modify the data on the device transparently. Mirroring happens synchronously or asynchronously. With synchronous mirroring, applications are notified of write completions after the write operations have been carried out on all (connected) hosts. With asynchronous mirroring, applications are notified of write completions when the write operations have completed locally, which usually is before they have propagated to the other hosts.

Heartbeat is an open source program that provides cluster infrastructure capabilities (cluster membership and messaging) to client servers, a critical component of an HA server infrastructure. Heartbeat is typically used in conjunction with a storage replication layer such as DRBD to achieve a complete HA setup.

This article demonstrates how to create an HA cluster with two nodes by using DRBD, Heartbeat, and a floating IP.

Goal


After reading this article, you will be able to set up an HA environment consisting of two Ubuntu 16.04 servers in an active/passive configuration. This is accomplished by pointing a floating IP, which is how users access their services or websites, at the primary (active) server unless a failure is detected. When the heartbeat service detects that the primary server is unavailable, the secondary server automatically runs a script to reassign the floating IP to itself. Subsequent network traffic to the floating IP is then directed to the secondary server, which acts as the active server until the primary server becomes available again, at which point the primary server reassigns the floating IP to itself. (Automatic failback can be prevented by disabling the auto-failback option.)

Requirement


We need the following setup to be in place before we proceed with failover:

◈ Two servers/VMs with Ubuntu 16.04 installed. These will act as the primary and secondary servers for the application and web services.
◈ One floating IP that will act as the IP for the application and web services.
◈ One additional disk for each VM, for installing the application and web services. The disks need not be shared.

Installing DRBD


First, we need to install DRBD on both servers and create resource groups on free disks, which need not be shared between the VMs. This way, local disks can also be used for the solution; storage area network (SAN) disks are not required.

We can install DRBD packages along with its dependent packages. You need to run the following commands to install DRBD on the servers.

apt-get install -y 'drbd*'

Figure 1. Install DRBD


apt-get install "linux-image-extra-$(uname -r)"

This dependency package provides the DRBD kernel module.

Figure 2. Install kernel extra packages


Installing heartbeat


The next step is to install heartbeat on both servers. The simplest way to install heartbeat is to use the apt-get command.

apt-get install heartbeat

Figure 3. Install heartbeat package


After successfully installing the heartbeat package, you need to configure it for high availability.

Now that we have installed the packages required for DRBD and HA, we can begin with the DR and HA configuration. First, we will configure DRBD and then configure Heartbeat.

Configuring DRBD


To configure DRBD, you need a storage resource (a disk, directory, or mount point), which will be defined as a DRBD resource group (referred to as r0 in our example). This resource contains all the data that needs to be moved from the primary to the secondary node when failover happens.

We need to define the resource group r0 in the /etc/drbd.d/r0.res file. The r0.res file should look as shown below:

resource r0 {
     device    /dev/drbd1;
     disk      /dev/sdc;
     meta-disk internal;
     on drbdnode1 {
          address   172.29.160.151:7789;
     }
     on drbdnode2 {
          address   172.29.160.51:7789;
     }
}

We define the resource with the device name /dev/drbd1 and, in our case, use the disk /dev/sdc as the storage device. Note that all the nodes involved in the HA setup (two in our case) need to be defined in the r0.res file.

After the file is created on both participating nodes, we need to run the following commands. In our example, we are initially making drbdnode1 the primary node and drbdnode2 the backup node. So, whenever drbdnode1 fails, drbdnode2 should take over as the primary.

Run the following command on both nodes.

modprobe drbd
/etc/init.d/drbd start

Figure 4. Creating kernel module for DRBD


Now, initialize the r0 resource group on drbdnode1 using the following command:

drbdadm create-md r0

Figure 5. Creating metadata


Then, define drbdnode1 as the primary node and drbdnode2 as the secondary node by running the following commands on the respective nodes:

drbdadm primary r0
drbdadm secondary r0

Figure 6. Overview of primary node


Figure 7. Overview of secondary node


After setting the nodes as primary and secondary, start the DRBD resource group. We need to do this to make the resource active and ready to use.

Create a file system on the primary node using the following command:

root@drbdnode1:~# mkfs.ext4 /dev/drbd1

Then, mount the disk on the primary node.

root@drbdnode1~# mount /dev/drbd1 /data

Figure 8. Check if file system is mounted


We have now completed the DRBD configuration. Next, we need to configure Heartbeat, which automates failover in case of a disaster.

Configuring heartbeat


In order to get the required cluster up and running, we must set up the following heartbeat configuration files in /etc/ha.d, identically on both servers:

◈ ha.cf: Contains the global configuration of the heartbeat cluster, including its member nodes. This file makes both nodes aware of the network interfaces and the floating IP that need to be monitored for heartbeat purposes.
◈ authkeys: Contains a security key that provides nodes a way to authenticate to the cluster.
◈ haresources: Specifies the services that are managed by the cluster and the node that is the preferred owner of the services. Note that this file is not used in a setup that uses a DRBD resource group.

Create the ha.cf file

On both servers, open /etc/ha.d/ha.cf:

vi /etc/ha.d/ha.cf

We need to add the details of each node in our cluster as shown in Figure 9.

Figure 9. Review ha.cf file

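
Since the exact contents depend on your environment, a hypothetical ha.cf along these lines is shown below. The node names, interface, and timing values are assumptions that must match your own cluster.

```
# Hypothetical /etc/ha.d/ha.cf sketch
logfacility local0
keepalive 2            # heartbeat interval in seconds
deadtime 10            # declare a node dead after 10s of silence
udpport 694
bcast ibmeth0          # interface used for heartbeat traffic
auto_failback off      # keep services on the survivor after recovery
node drbdnode1
node drbdnode2
```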

Next, we'll set up the cluster's authorization key.

Create the authkeys file

The authorization key is used to allow cluster members to join a cluster. We can just generate a random key for this purpose.

On the primary node, run the following commands to generate a suitable authorization key in an environment variable, named AUTH_KEY:

if [ -z "${AUTH_KEY}" ]; then
  export AUTH_KEY="$(command dd if='/dev/urandom' bs=512 count=1 2>'/dev/null' \
      | command openssl sha1 \
      | command cut --delimiter=' ' --fields=2)"
fi

Then create the /etc/ha.d/authkeys file with the following contents, using a heredoc so that $AUTH_KEY is expanded:

cat > /etc/ha.d/authkeys <<EOF
auth 1
1 sha1 $AUTH_KEY
EOF

Figure 10. Generating authkeys


Ensure that the file is only readable by root user:

chmod 600 /etc/ha.d/authkeys

Now, copy the /etc/ha.d/authkeys file from your primary node to your secondary node.

On the secondary server, make sure to set the permissions of the authkeys file:

chmod 600 /etc/ha.d/authkeys

Both servers should have an identical /etc/ha.d/authkeys file.

Create the haresources file

The haresources file should contain details of the hosts participating in the cluster. The preferred host is the node that should run the associated services if it is available. If the preferred host is not available, that is, not reachable by the cluster, one of the other nodes takes over. In other words, the secondary server takes over if the primary server goes down.

On both servers, open the haresources file in your favorite editor. We'll use vi.

vi /etc/ha.d/haresources

Now add the following line to the file, substituting in your primary node's name (drbdnode1 in our example):

drbdnode1 floatip

This configures the primary server as the preferred host for the floatip service, which is currently undefined.

Figure 11. Review haresources file


Configuring floating IP

Now we will configure the floating IP on the primary VM, where the service will run for the first time. The floating IP should only be active on one server at a time. This IP is where we are going to host our application or web service, and in case of a failover, we will move this IP to the other server. In the following example, we configure the floating IP on drbdnode1; in case of failover, it will move to drbdnode2.

ifconfig ibmeth0:0 9.126.160.53 netmask 255.255.192.0 up

Figure 12. Confirm floating IP is assigned


Testing high availability


The next step is to start the DRBD and heartbeat services, one after the other, in the following order.

Run the following command on the primary node:

drbdadm primary r0

Run the following command on the secondary node:

drbdadm secondary r0

Start heartbeat services on both cluster nodes using the following command:

service heartbeat start

Now, to check which VM is primary and which is secondary, run the drbd-overview command on both VMs.

Figure 13. Check DRBD status


After completing the primary and secondary setup, and verifying that our services are working as intended, initiate the failover test.

You can perform the failover test in the following scenarios:

◈ Reboot the primary node using the reboot or shutdown -r command.
◈ Halt the primary node using the halt command.
◈ Stop the heartbeat service on the primary node using the service heartbeat stop command.

After running one of these scenarios, monitor the failover using the drbd-overview command. Within a few seconds, you will notice that the secondary node has taken over the primary role and all the services are up on this node. The floating IP is also moved along with the failover of the services.

Figure 14. Successful fail over


Figure 14 shows that drbdnode2 has taken over as the primary node, indicating a successful failover.