Saturday, 30 November 2019

LPIC-3 Exam 303: Security


The LPIC-3 certification is the culmination of LPI’s multi-level professional certification program. LPIC-3 is designed for the enterprise-level Linux professional and represents the highest level of professional, distribution-neutral Linux certification within the industry. Three separate LPIC-3 specialty certifications are available. Passing any one of the three exams will grant the LPIC-3 certification for that specialty.

The LPIC-3 303: Security certification covers the administration of Linux systems enterprise-wide with an emphasis on security.

Exam Objectives Version: Version 2.0

Exam Code: 303-200

About Objective Weights: Each objective is assigned a weighting value. The weights indicate the relative importance of each objective on the exam. Objectives with higher weights will be covered in the exam with more questions.

Topic 325: Cryptography


325.1 X.509 Certificates and Public Key Infrastructures

Weight: 5

Description: Candidates should understand X.509 certificates and public key infrastructures. They should know how to configure and use OpenSSL to implement certification authorities and issue SSL certificates for various purposes.

Key Knowledge Areas:

◈ Understand X.509 certificates, X.509 certificate lifecycle, X.509 certificate fields and X.509v3 certificate extensions
◈ Understand trust chains and public key infrastructures
◈ Generate and manage public and private keys
◈ Create, operate and secure a certification authority
◈ Request, sign and manage server and client certificates
◈ Revoke certificates and certification authorities

The following is a partial list of the used files, terms and utilities:

◈ openssl, including relevant subcommands
◈ OpenSSL configuration
◈ PEM, DER, PKCS
◈ CSR
◈ CRL
◈ OCSP
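
As a quick illustration of this workflow, the commands below generate a key, create a CSR and sign it. This is only a sketch: all filenames and the subject are placeholder values, and the certificate here is self-signed, whereas a real CA would sign the CSR with its own key.

```shell
# Generate an RSA private key (filenames below are placeholders)
openssl genrsa -out server.key 2048

# Create a certificate signing request (CSR) for that key
openssl req -new -key server.key -out server.csr -subj "/CN=www.example.com"

# Inspect the CSR before submitting it to a CA
openssl req -in server.csr -noout -subject

# Self-sign the request for testing; a real CA signs with its own key
openssl x509 -req -in server.csr -signkey server.key -days 365 -out server.crt

# Verify the resulting certificate
openssl x509 -in server.crt -noout -subject -dates
```

PEM is the default output format for these commands; add -outform DER where a binary encoding is needed.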

325.2 X.509 Certificates for Encryption, Signing and Authentication

Weight: 4

Description: Candidates should know how to use X.509 certificates for both server and client authentication. Candidates should be able to implement user and server authentication for Apache HTTPD. The version of Apache HTTPD covered is 2.4 or higher.

Key Knowledge Areas:

◈ Understand SSL, TLS and protocol versions
◈ Understand common transport layer security threats, for example Man-in-the-Middle
◈ Configure Apache HTTPD with mod_ssl to provide HTTPS service, including SNI and HSTS
◈ Configure Apache HTTPD with mod_ssl to authenticate users using certificates
◈ Configure Apache HTTPD with mod_ssl to provide OCSP stapling
◈ Use OpenSSL for SSL/TLS client and server tests

Terms and Utilities:

◈ Intermediate certification authorities
◈ Cipher configuration (no cipher-specific knowledge)
◈ httpd.conf
◈ mod_ssl
◈ openssl
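
The objectives call for using OpenSSL for client and server tests; one way to practice this without a real web server is to run s_server on the loopback interface and probe it with s_client. The port and filenames below are arbitrary choices for this sketch:

```shell
# Create a throwaway self-signed certificate for the test server
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=localhost" \
    -keyout test.key -out test.crt -days 1

# Start a minimal TLS server on the loopback interface
openssl s_server -accept 4433 -key test.key -cert test.crt -quiet &
SERVER_PID=$!
sleep 1

# Handshake test from the client side; -servername sends SNI
echo | openssl s_client -connect 127.0.0.1:4433 -servername localhost \
    > handshake.txt 2>&1

kill $SERVER_PID
grep "subject=" handshake.txt
```

Against a production server you would point s_client at host:443 instead, adding options such as -tls1_2 to check which protocol versions are accepted.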

325.3 Encrypted File Systems

Weight: 3

Description: Candidates should be able to setup and configure encrypted file systems.

Key Knowledge Areas:

◈ Understand block device and file system encryption
◈ Use dm-crypt with LUKS to encrypt block devices
◈ Use eCryptfs to encrypt file systems, including home directories and PAM integration
◈ Be aware of plain dm-crypt and EncFS

Terms and Utilities:

◈ cryptsetup
◈ cryptmount
◈ /etc/crypttab
◈ ecryptfsd
◈ ecryptfs-* commands
◈ mount.ecryptfs, umount.ecryptfs
◈ pam_ecryptfs
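
The typical LUKS setup sequence and the matching /etc/crypttab entry look roughly like this; /dev/sdb1 and the mapping name are placeholders, and the commands must be run as root:

```
# One-time setup of an encrypted block device:
#   cryptsetup luksFormat /dev/sdb1       # write the LUKS header (destroys data)
#   cryptsetup open /dev/sdb1 data        # map it to /dev/mapper/data
#   mkfs.ext4 /dev/mapper/data            # create a file system on the mapping
#
# /etc/crypttab entry so the mapping is set up at boot:
# <name>   <device>      <password>  <options>
data       /dev/sdb1     none        luks
```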

325.4 DNS and Cryptography

Weight: 5

Description: Candidates should have experience and knowledge of cryptography in the context of DNS and its implementation using BIND. The version of BIND covered is 9.7 or higher.

Key Knowledge Areas:

◈ Understanding of DNSSEC and DANE
◈ Configure and troubleshoot BIND as an authoritative name server serving DNSSEC secured zones
◈ Configure BIND as a recursive name server that performs DNSSEC validation on behalf of its clients
◈ Key Signing Key, Zone Signing Key, Key Tag
◈ Key generation, key storage, key management and key rollover
◈ Maintenance and re-signing of zones
◈ Use DANE to publish X.509 certificate information in DNS
◈ Use TSIG for secure communication with BIND

Terms and Utilities:

◈ DNS, EDNS, Zones, Resource Records
◈ DNS resource records: DS, DNSKEY, RRSIG, NSEC, NSEC3, NSEC3PARAM, TLSA
◈ DO-Bit, AD-Bit
◈ TSIG
◈ named.conf
◈ dnssec-keygen
◈ dnssec-signzone
◈ dnssec-settime
◈ dnssec-dsfromkey
◈ rndc
◈ dig
◈ delv
◈ openssl
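
A rough sketch of what this looks like in practice: the named.conf options below enable validation on a resolver, and the commented commands sign a zone on an authoritative server. The zone name, paths and algorithm choice are illustrative only:

```
// named.conf: enable DNSSEC validation on a recursive server
options {
    directory "/var/named";
    recursion yes;
    dnssec-validation auto;   // use the built-in root trust anchor
};

// Signing an authoritative zone (run in the zone file directory):
//   dnssec-keygen -a RSASHA256 -b 2048 example.com           # ZSK
//   dnssec-keygen -a RSASHA256 -b 2048 -f KSK example.com    # KSK
//   dnssec-signzone -o example.com -S db.example.com
// Then serve the generated db.example.com.signed file and check it with:
//   dig +dnssec example.com SOA
```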

Topic 326: Host Security


326.1 Host Hardening

Weight: 3

Description: Candidates should be able to secure computers running Linux against common threats. This includes kernel and software configuration.

Key Knowledge Areas:

◈ Configure BIOS and boot loader (GRUB 2) security
◈ Disable useless software and services
◈ Use sysctl for security related kernel configuration, particularly ASLR, Exec-Shield and IP / ICMP configuration
◈ Limit resource usage
◈ Work with chroot environments
◈ Drop unnecessary capabilities
◈ Be aware of the security advantages of virtualization

Terms and Utilities:

◈ grub.cfg
◈ chkconfig, systemctl
◈ ulimit
◈ /etc/security/limits.conf
◈ pam_limits.so
◈ chroot
◈ sysctl
◈ /etc/sysctl.conf
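
For the sysctl items, note that current values can be read without root straight from procfs; setting them requires sysctl -w (or an entry in /etc/sysctl.conf) as root. A small read-only check:

```shell
# ASLR setting: 2 = full randomization, 1 = partial, 0 = disabled
cat /proc/sys/kernel/randomize_va_space

# Ignore ICMP echo requests sent to broadcast addresses (smurf mitigation)
cat /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts

# Persistent equivalents would go into /etc/sysctl.conf, e.g.:
#   kernel.randomize_va_space = 2
#   net.ipv4.icmp_echo_ignore_broadcasts = 1
```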

326.2 Host Intrusion Detection

Weight: 4

Description: Candidates should be familiar with the use and configuration of common host intrusion detection software. This includes updates and maintenance as well as automated host scans.

Key Knowledge Areas:

◈ Use and configure the Linux Audit system
◈ Use chkrootkit
◈ Use and configure rkhunter, including updates
◈ Use Linux Malware Detect
◈ Automate host scans using cron
◈ Configure and use AIDE, including rule management
◈ Be aware of OpenSCAP

Terms and Utilities:

◈ auditd
◈ auditctl
◈ ausearch, aureport
◈ auditd.conf
◈ auditd.rules
◈ pam_tty_audit.so
◈ chkrootkit
◈ rkhunter
◈ /etc/rkhunter.conf
◈ maldet
◈ conf.maldet
◈ aide
◈ /etc/aide/aide.conf
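
As a taste of audit rule syntax, the watch rules below could be placed in auditd.rules; the key names after -k are arbitrary labels that ausearch -k can filter on later:

```
# Watch the password databases for writes and attribute changes
-w /etc/passwd -p wa -k passwd_changes
-w /etc/shadow -p wa -k shadow_changes

# Record every chmod system call made by the user with login UID 1000
-a always,exit -F arch=b64 -S chmod -F auid=1000 -k perm_mod
```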

326.3 User Management and Authentication

Weight: 5

Description: Candidates should be familiar with management and authentication of user accounts. This includes configuration and use of NSS, PAM, SSSD and Kerberos for both local and remote directories and authentication mechanisms as well as enforcing a password policy.

Key Knowledge Areas:

◈ Understand and configure NSS
◈ Understand and configure PAM
◈ Enforce password complexity policies and periodic password changes
◈ Lock accounts automatically after failed login attempts
◈ Configure and use SSSD
◈ Configure NSS and PAM for use with SSSD
◈ Configure SSSD authentication against Active Directory, IPA, LDAP, Kerberos and local domains
◈ Obtain and manage Kerberos tickets

Terms and Utilities:

◈ nsswitch.conf
◈ /etc/login.defs
◈ pam_cracklib.so
◈ chage
◈ pam_tally.so, pam_tally2.so
◈ faillog
◈ pam_sss.so
◈ sssd
◈ sssd.conf
◈ sss_* commands
◈ krb5.conf
◈ kinit, klist, kdestroy
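
To connect a few of these pieces, here is what a simple password policy can look like; every numeric value is an example, and the exact PAM file layout differs between distributions:

```
# /etc/login.defs: aging defaults applied when accounts are created
PASS_MAX_DAYS   90
PASS_MIN_DAYS   1
PASS_WARN_AGE   7

# PAM password line (e.g. /etc/pam.d/common-password) using pam_cracklib
# to require length and character classes:
password  requisite  pam_cracklib.so retry=3 minlen=12 dcredit=-1 ucredit=-1

# PAM auth line using pam_tally2 to lock an account for 10 minutes
# after 5 failed logins:
auth  required  pam_tally2.so deny=5 unlock_time=600
```

For existing accounts, aging is adjusted per user with chage, e.g. chage -M 90 username.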

326.4 FreeIPA Installation and Samba Integration

Weight: 4

Description: Candidates should be familiar with FreeIPA v4.x. This includes installation and maintenance of a server instance with a FreeIPA domain as well as integration of FreeIPA with Active Directory.

Key Knowledge Areas:

◈ Understand FreeIPA, including its architecture and components
◈ Understand system and configuration prerequisites for installing FreeIPA
◈ Install and manage a FreeIPA server and domain
◈ Understand and configure Active Directory replication and Kerberos cross-realm trusts
◈ Be aware of sudo, autofs, SSH and SELinux integration in FreeIPA

Terms and Utilities:

◈ 389 Directory Server, MIT Kerberos, Dogtag Certificate System, NTP, DNS, SSSD, certmonger
◈ ipa, including relevant subcommands
◈ ipa-server-install, ipa-client-install, ipa-replica-install
◈ ipa-replica-prepare, ipa-replica-manage

Topic 327: Access Control


327.1 Discretionary Access Control

Weight: 3

Description: Candidates are required to understand Discretionary Access Control and know how to implement it using Access Control Lists. Additionally, candidates are required to understand and know how to use Extended Attributes.

Key Knowledge Areas:

◈ Understand and manage file ownership and permissions, including SUID and SGID
◈ Understand and manage access control lists
◈ Understand and manage extended attributes and attribute classes

Terms and Utilities:

◈ getfacl
◈ setfacl
◈ getfattr
◈ setfattr

327.2 Mandatory Access Control

Weight: 4

Description: Candidates should be familiar with Mandatory Access Control systems for Linux. Specifically, candidates should have a thorough knowledge of SELinux. Also, candidates should be aware of other Mandatory Access Control systems for Linux. This includes major features of these systems but not configuration and use.

Key Knowledge Areas:

◈ Understand the concepts of TE, RBAC, MAC and DAC
◈ Configure, manage and use SELinux
◈ Be aware of AppArmor and Smack

Terms and Utilities:

◈ getenforce, setenforce, selinuxenabled
◈ getsebool, setsebool, togglesebool
◈ fixfiles, restorecon, setfiles
◈ newrole, runcon
◈ semanage
◈ sestatus, seinfo
◈ apol
◈ seaudit, seaudit-report, audit2why, audit2allow
◈ /etc/selinux/*
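
For orientation, the system-wide SELinux state lives in a small config file, with runtime tools alongside it:

```
# /etc/selinux/config: state read at boot
SELINUX=enforcing        # enforcing | permissive | disabled
SELINUXTYPE=targeted     # targeted policy confines selected daemons

# Runtime equivalents:
#   getenforce       # print the current mode
#   setenforce 0     # switch to permissive until reboot
#   sestatus         # summary of mode, policy and booleans
```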

327.3 Network File Systems

Weight: 3

Description: Candidates should have experience and knowledge of security issues in use and configuration of NFSv4 clients and servers as well as CIFS client services. Earlier versions of NFS are not required knowledge.

Key Knowledge Areas:

◈ Understand NFSv4 security issues and improvements
◈ Configure NFSv4 server and clients
◈ Understand and configure NFSv4 authentication mechanisms (LIPKEY, SPKM, Kerberos)
◈ Understand and use NFSv4 pseudo file system
◈ Understand and use NFSv4 ACLs
◈ Configure CIFS clients
◈ Understand and use CIFS Unix Extensions
◈ Understand and configure CIFS security modes (NTLM, Kerberos)
◈ Understand and manage mapping and handling of CIFS ACLs and SIDs in a Linux system

Terms and Utilities:

◈ /etc/exports
◈ /etc/idmap.conf
◈ nfs4acl
◈ mount.cifs parameters related to ownership, permissions and security modes
◈ winbind
◈ getcifsacl, setcifsacl
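
A Kerberos-secured NFSv4 export and a CIFS client mount might look like the fragment below; the paths, network and chosen security flavors are placeholders:

```
# /etc/exports: sec=krb5 (authentication), krb5i (+integrity),
# krb5p (+privacy/encryption)
/export        192.0.2.0/24(rw,sync,fsid=0,sec=krb5p)
/export/home   192.0.2.0/24(rw,sync,sec=krb5p)

# CIFS client mount selecting a security mode:
#   mount.cifs //server/share /mnt -o sec=krb5,username=alice
```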

Topic 328: Network Security


328.1 Network Hardening

Weight: 4

Description: Candidates should be able to secure networks against common threats. This includes verification of the effectiveness of security measures.

Key Knowledge Areas:

◈ Configure FreeRADIUS to authenticate network nodes
◈ Use nmap to scan networks and hosts, including different scan methods
◈ Use Wireshark to analyze network traffic, including filters and statistics
◈ Identify and deal with rogue router advertisements and DHCP messages

Terms and Utilities:

◈ radiusd
◈ radmin
◈ radtest, radclient
◈ radlast, radwho
◈ radiusd.conf
◈ /etc/raddb/*
◈ nmap
◈ wireshark
◈ tshark
◈ tcpdump
◈ ndpmon
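
A few representative invocations and filters tied to these objectives; hosts and interfaces are placeholders, and the raw BPF filter for router advertisements assumes no IPv6 extension headers before the ICMPv6 header:

```
# nmap scan methods (run as root for raw-socket scans):
#   nmap -sS -p 1-1024 192.0.2.10    # TCP SYN scan of the low ports
#   nmap -sU 192.0.2.10              # UDP scan

# Capture filters (tcpdump/tshark BPF syntax):
#   tcpdump -i eth0 'port 67 or port 68'         # DHCP traffic
#   tcpdump -i eth0 'icmp6 and ip6[40] == 134'   # IPv6 router advertisements

# Wireshark/tshark display filters:
#   icmpv6.type == 134      # spot rogue router advertisements
#   bootp                   # DHCP messages (named "dhcp" in newer versions)
```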

328.2 Network Intrusion Detection

Weight: 4

Description: Candidates should be familiar with the use and configuration of network security scanning, network monitoring and network intrusion detection software. This includes updating and maintaining the security scanners.

Key Knowledge Areas:

◈ Implement bandwidth usage monitoring
◈ Configure and use Snort, including rule management
◈ Configure and use OpenVAS, including NASL

Terms and Utilities:

◈ ntop
◈ Cacti
◈ snort
◈ snort-stat
◈ /etc/snort/*
◈ openvas-adduser, openvas-rmuser
◈ openvas-nvt-sync
◈ openvassd
◈ openvas-mkcert
◈ /etc/openvas/*

328.3 Packet Filtering

Weight: 5

Description: Candidates should be familiar with the use and configuration of packet filters. This includes netfilter, iptables and ip6tables as well as basic knowledge of nftables, nft and ebtables.

Key Knowledge Areas:

◈ Understand common firewall architectures, including DMZ
◈ Understand and use netfilter, iptables and ip6tables, including standard modules, tests and targets
◈ Implement packet filtering for both IPv4 and IPv6
◈ Implement connection tracking and network address translation
◈ Define IP sets and use them in netfilter rules
◈ Have basic knowledge of nftables and nft
◈ Have basic knowledge of ebtables
◈ Be aware of conntrackd

Terms and Utilities:

◈ iptables
◈ ip6tables
◈ iptables-save, iptables-restore
◈ ip6tables-save, ip6tables-restore
◈ ipset
◈ nft
◈ ebtables
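
As a sketch of a small stateful IPv4 filter in iptables-save format (loadable with iptables-restore; the allowed port is an example):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow loopback and replies to established connections
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow inbound SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```

An equivalent IPv6 policy would be loaded separately with ip6tables-restore.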

328.4 Virtual Private Networks

Weight: 4

Description: Candidates should be familiar with the use of OpenVPN and IPsec.

Key Knowledge Areas:

◈ Configure and operate OpenVPN server and clients for both bridged and routed VPN networks
◈ Configure and operate IPsec server and clients for routed VPN networks using IPsec-Tools / racoon
◈ Awareness of L2TP

Terms and Utilities:

◈ /etc/openvpn/*
◈ openvpn server and client
◈ setkey
◈ /etc/ipsec-tools.conf
◈ /etc/racoon/racoon.conf

Thursday, 28 November 2019

ZIP command in Linux with examples

ZIP is a compression and file-packaging utility for Unix. Files are stored together in a single archive with the .zip extension.


◈ zip is used to compress files to reduce file size and also serves as a file-packaging utility. zip is available on many operating systems, including Unix, Linux and Windows.

◈ If you have a limited bandwidth between two servers and want to transfer the files faster, then zip the files and transfer.

◈ The zip program puts one or more compressed files into a single zip archive, along with information about the files (name, path, date, time of last modification, protection, and check information to verify file integrity). An entire directory structure can be packed into a zip archive with a single command.

◈ Compression ratios of 2:1 to 3:1 are common for text files. zip has one compression method (deflation) and can also store files without compression. zip automatically chooses the better of the two for each file to be compressed.

The program is useful for packaging a set of files for distribution, for archiving files, and for saving disk space by temporarily compressing unused files or directories.

Syntax :

zip [options] zipfile files_list

Syntax for creating a zip file:

$zip myfile.zip filename.txt

Extracting files from a zip file


Unzip will list, test, or extract files from a ZIP archive. The default behavior (with no options) is to extract all files from the specified ZIP archive into the current directory (and subdirectories below it).

Syntax :
$unzip myfile.zip

Options :

1. -d Option: Removes a file from the zip archive. After creating a zip file, you can remove a file from the archive using the -d option.
Suppose the current directory contains the following files:
hello1.c
hello2.c
hello3.c
hello4.c
hello5.c
hello6.c
hello7.c
hello8.c

Syntax :

$zip -d filename.zip file.txt
Command :
$zip -d myfile.zip hello7.c

After removing hello7.c from myfile.zip, the remaining files can be extracted with the unzip command.

Command:
$unzip myfile.zip
$ls
Output :
hello1.c
hello2.c
hello3.c
hello4.c
hello5.c
hello6.c
hello8.c
The hello7.c file has been removed from the zip file.

2. -u Option: Updates the file in the zip archive. This option can be used to update the specified list of files or add new files to the existing zip file. Update an existing entry in the zip archive only if it has been modified more recently than the version already in the zip archive.

Syntax:

 $zip -u filename.zip file.txt

Suppose the current directory contains the following files:
hello1.c
hello2.c
hello3.c
hello4.c

Command :
$zip -u myfile.zip hello5.c

After adding hello5.c to myfile.zip, the files can be extracted with the unzip command.

Command:
$unzip myfile.zip
$ls
Output :
hello1.c
hello2.c
hello3.c
hello4.c
hello5.c
The hello5.c file has been added to the zip file.


3. -m Option: Deletes the original files after zipping. This moves the specified files into the zip archive; that is, it deletes the target directories/files after making the specified zip archive. If a directory becomes empty after removal of the files, the directory is also removed. No deletions are done until zip has created the archive without error. This is useful for conserving disk space, but is potentially dangerous because it removes all input files.

Syntax :

 $zip -m filename.zip file.txt

Suppose the current directory contains the following files:
hello1.c
hello2.c
hello3.c
hello4.c

Command :
$zip -m myfile.zip *.c

After this command has been executed, here is the result:

Command:
$ls
Output :
myfile.zip
// No .c files remain; they were moved into the archive

4. -r Option: Zips a directory recursively. The -r option makes zip descend into the specified directory and compress all the files it contains.

Syntax:

 $zip -r filename.zip directory_name

Suppose the directory docs contains the following files:
unix.pdf
oracle.pdf
linux.pdf

Command :
$zip -r mydir.zip docs
Output :
  adding: docs/            //Compressing the directory
  adding: docs/unix.pdf   // Compressing first file
  adding: docs/oracle.pdf // Compressing second file
  adding: docs/linux.pdf  //Compressing third file

5. -x Option: Excludes files while creating the zip. Say you are zipping all the files in the current directory and want to leave out some unwanted files. You can exclude them with the -x option; note that the exclusion list comes after the archive name and the files to be added.

Syntax :

 $zip filename.zip files_to_add -x file_to_be_excluded

Suppose the current directory contains the following files:
hello1.c
hello2.c
hello3.c
hello4.c

Command :
$zip myfile.zip *.c -x hello3.c

This command on execution will compress all the files except hello3.c

Command:
$ls
Output :
myfile.zip   // the archive; contains all .c files except hello3.c
hello1.c  hello2.c  hello3.c  hello4.c   // source files remain on disk

6. -v Option: Verbose mode, or print diagnostic version info. Normally, when applied to real operations, this option enables the display of a progress indicator during compression and requests verbose diagnostic info about zip file structure oddities.
When -v is the only command line argument, and neither stdin nor stdout is redirected to a file, a diagnostic screen is printed. In addition to the help screen header with program name, version and release date, some pointers to the Info-ZIP home and distribution sites are given. It then shows information about the target environment (compiler type and version, OS version, compilation date and the enabled optional features used to create the zip executable).

Syntax :

 $zip -v filename.zip file1.txt

Suppose the current directory contains the following files:
hello1.c
hello2.c
hello3.c
hello4.c

Command
$zip -v file1.zip *.c
Output :
  adding: hello1.c    (in=0) (out=0) (stored 0%)
  adding: hello2.c    (in=0) (out=0) (stored 0%)
  adding: hello3.c    (in=0) (out=0) (stored 0%)
  adding: hello4.c    (in=0) (out=0) (stored 0%)
total bytes=0, compressed=0 -> 0% savings

Tuesday, 26 November 2019

LPIC-3 Exam 304: Virtualization


Exam Objectives Version: Version 2.0

Exam Code: 304-200

About Objective Weights: Each objective is assigned a weighting value. The weights indicate the relative importance of each objective on the exam. Objectives with higher weights will be covered in the exam with more questions.

Prerequisites: The candidate must have an active LPIC-2 certification to receive LPIC-3 certification, but the LPIC-2 and LPIC-3 exams may be taken in any order.

Requirements: Passing the 304 exam

Validity Period: 5 years

Languages: English, Japanese


Topic 330: Virtualization


330.1 Virtualization Concepts and Theory

Weight: 8

Description: Candidates should know and understand the general concepts, theory and terminology of Virtualization. This includes Xen, KVM and libvirt terminology.

Key Knowledge Areas:

◈ Terminology
◈ Pros and Cons of Virtualization
◈ Variations of Virtual Machine Monitors
◈ Migration of Physical to Virtual Machines
◈ Migration of Virtual Machines between Host systems
◈ Cloud Computing

The following is a partial list of the used files, terms and utilities:

◈ Hypervisor
◈ Hardware Virtual Machine (HVM)
◈ Paravirtualization (PV)
◈ Container Virtualization
◈ Emulation and Simulation
◈ CPU flags
◈ /proc/cpuinfo
◈ Migration (P2V, V2V)
◈ IaaS, PaaS, SaaS

330.2 Xen

Weight: 9

Description: Candidates should be able to install, configure, maintain, migrate and troubleshoot Xen installations. The focus is on Xen version 4.x.

Key Knowledge Areas:

◈ Xen architecture, networking and storage
◈ Xen configuration
◈ Xen utilities
◈ Troubleshooting Xen installations
◈ Basic knowledge of XAPI
◈ Awareness of XenStore
◈ Awareness of Xen Boot Parameters
◈ Awareness of the xm utility

Terms and Utilities:

◈ Domain0 (Dom0), DomainU (DomU)
◈ PV-DomU, HVM-DomU
◈ /etc/xen/
◈ xl
◈ xl.cfg
◈ xl.conf
◈ xe
◈ xentop
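
For context, a minimal PV guest definition for the xl toolstack looks like this; every value is a placeholder:

```
# /etc/xen/example.cfg
name    = "example"
memory  = 1024
vcpus   = 2
kernel  = "/boot/vmlinuz-guest"
disk    = [ "phy:/dev/vg0/example,xvda,w" ]
vif     = [ "bridge=xenbr0" ]

# Typical lifecycle commands:
#   xl create /etc/xen/example.cfg
#   xl list
#   xl console example
```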

330.3 KVM

Weight: 9

Description: Candidates should be able to install, configure, maintain, migrate and troubleshoot KVM installations.

Key Knowledge Areas:

◈ KVM architecture, networking and storage
◈ KVM configuration
◈ KVM utilities
◈ Troubleshooting KVM installations

Terms and Utilities:

◈ Kernel modules: kvm, kvm-intel and kvm-amd
◈ /etc/kvm/
◈ /dev/kvm
◈ kvm
◈ KVM monitor
◈ qemu
◈ qemu-img

330.4 Other Virtualization Solutions

Weight: 3

Description: Candidates should have some basic knowledge and experience with alternatives to Xen and KVM.

Key Knowledge Areas:

◈ Basic knowledge of OpenVZ and LXC
◈ Awareness of other virtualization technologies
◈ Basic knowledge of virtualization provisioning tools

Terms and Utilities:

◈ OpenVZ
◈ VirtualBox
◈ LXC
◈ docker
◈ packer
◈ vagrant

330.5 Libvirt and Related Tools

Weight: 5

Description: Candidates should have basic knowledge and experience with the libvirt library and commonly available tools.

Key Knowledge Areas:

◈ libvirt architecture, networking and storage
◈ Basic technical knowledge of libvirt and virsh
◈ Awareness of oVirt

Terms and Utilities:

◈ libvirtd
◈ /etc/libvirt/
◈ virsh
◈ oVirt

330.6 Cloud Management Tools

Weight: 2

Description: Candidates should have basic feature knowledge of commonly available cloud management tools.

Key Knowledge Areas:

◈ Basic feature knowledge of OpenStack and CloudStack
◈ Awareness of Eucalyptus and OpenNebula

Terms and Utilities:

◈ OpenStack
◈ CloudStack
◈ Eucalyptus
◈ OpenNebula

Topic 334: High Availability Cluster Management


334.1 High Availability Concepts and Theory

Weight: 5

Description: Candidates should understand the properties and design approaches of high availability clusters.

Key Knowledge Areas:

◈ Understand the most important cluster architectures
◈ Understand recovery and cluster reorganization mechanisms
◈ Design an appropriate cluster architecture for a given purpose
◈ Application aspects of high availability
◈ Operational considerations of high availability

Terms and Utilities:

◈ Active/Passive Cluster, Active/Active Cluster
◈ Failover Cluster, Load Balanced Cluster
◈ Shared-Nothing Cluster, Shared-Disk Cluster
◈ Cluster resources
◈ Cluster services
◈ Quorum
◈ Fencing
◈ Split brain
◈ Redundancy
◈ Mean Time Before Failure (MTBF)
◈ Mean Time To Repair (MTTR)
◈ Service Level Agreement (SLA)
◈ Disaster Recovery
◈ Replication
◈ Session handling

334.2 Load Balanced Clusters

Weight: 6

Description: Candidates should know how to install, configure, maintain and troubleshoot LVS. This includes the configuration and use of keepalived and ldirectord. Candidates should further be able to install, configure, maintain and troubleshoot HAProxy.

Key Knowledge Areas:

◈ Understanding of LVS / IPVS
◈ Basic knowledge of VRRP
◈ Configuration of keepalived
◈ Configuration of ldirectord
◈ Backend server network configuration
◈ Understanding of HAProxy
◈ Configuration of HAProxy

Terms and Utilities:

◈ ipvsadm
◈ syncd
◈ LVS Forwarding (NAT, Direct Routing, Tunneling, Local Node)
◈ connection scheduling algorithms
◈ keepalived configuration file
◈ ldirectord configuration file
◈ genhash
◈ HAProxy configuration file
◈ load balancing algorithms
◈ ACLs

334.3 Failover Clusters

Weight: 6

Description: Candidates should have experience in the installation, configuration, maintenance and troubleshooting of a Pacemaker cluster. This includes the use of Corosync. The focus is on Pacemaker 1.1 for Corosync 2.x.

Key Knowledge Areas:

◈ Pacemaker architecture and components (CIB, CRMd, PEngine, LRMd, DC, STONITHd)
◈ Pacemaker cluster configuration
◈ Resource classes (OCF, LSB, Systemd, Upstart, Service, STONITH, Nagios)
◈ Resource rules and constraints (location, order, colocation)
◈ Advanced resource features (templates, groups, clone resources, multi-state resources)
◈ Pacemaker management using pcs
◈ Pacemaker management using crmsh
◈ Configuration and Management of corosync in conjunction with Pacemaker
◈ Awareness of other cluster engines (OpenAIS, Heartbeat, CMAN)

Terms and Utilities:

◈ pcs
◈ crm
◈ crm_mon
◈ crm_verify
◈ crm_simulate
◈ crm_shadow
◈ crm_resource
◈ crm_attribute
◈ crm_node
◈ crm_standby
◈ cibadmin
◈ corosync.conf
◈ authkey
◈ corosync-cfgtool
◈ corosync-cmapctl
◈ corosync-quorumtool
◈ stonith_admin
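
To tie the pieces together, a two-node corosync.conf for Corosync 2.x might look like the sketch below; the cluster name and addresses are placeholders, and Pacemaker resources would then be configured on top with pcs or crmsh:

```
totem {
    version: 2
    cluster_name: example
    transport: udpu
}
nodelist {
    node {
        ring0_addr: 10.0.0.1
        nodeid: 1
    }
    node {
        ring0_addr: 10.0.0.2
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}
```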

334.4 High Availability in Enterprise Linux Distributions

Weight: 1

Description: Candidates should be aware of how enterprise Linux distributions integrate High Availability technologies.

Key Knowledge Areas:

◈ Basic knowledge of Red Hat Enterprise Linux High Availability Add-On
◈ Basic knowledge of SUSE Linux Enterprise High Availability Extension

Terms and Utilities:

◈ Distribution specific configuration tools
◈ Integration of cluster engines, load balancers, storage technology, cluster filesystems, etc.

Topic 335: High Availability Cluster Storage


335.1 DRBD / cLVM

Weight: 3

Description: Candidates are expected to have the experience and knowledge to install, configure, maintain and troubleshoot DRBD devices. This includes integration with Pacemaker. DRBD configuration of version 8.4.x is covered. Candidates are further expected to be able to manage LVM configuration within a shared storage cluster.

Key Knowledge Areas:

◈ Understanding of DRBD resources, states and replication modes
◈ Configuration of DRBD resources, networking, disks and devices
◈ Configuration of DRBD automatic recovery and error handling
◈ Management of DRBD using drbdadm
◈ Basic knowledge of drbdsetup and drbdmeta
◈ Integration of DRBD with Pacemaker
◈ cLVM
◈ Integration of cLVM with Pacemaker

Terms and Utilities:

◈ Protocol A, B and C
◈ Primary, Secondary
◈ Three-way replication
◈ drbd kernel module
◈ drbdadm
◈ drbdsetup
◈ drbdmeta
◈ /etc/drbd.conf
◈ /proc/drbd
◈ LVM2
◈ clvmd
◈ vgchange, vgs
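
A minimal two-node DRBD 8.4 resource definition, for orientation; host names, devices and addresses are placeholders:

```
resource r0 {
    protocol C;                  # synchronous replication
    on alpha {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on beta {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}

# Bring it up with: drbdadm create-md r0 && drbdadm up r0
# then promote one side: drbdadm primary --force r0
```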

335.2 Clustered File Systems

Weight: 3

Description: Candidates should know how to install, maintain and troubleshoot installations using GFS2 and OCFS2. This includes integration with Pacemaker as well as awareness of other clustered filesystems available in a Linux environment.

Key Knowledge Areas:

◈ Understand the principles of cluster file systems
◈ Create, maintain and troubleshoot GFS2 file systems in a cluster
◈ Create, maintain and troubleshoot OCFS2 file systems in a cluster
◈ Integration of GFS2 and OCFS2 with Pacemaker
◈ Awareness of the O2CB cluster stack
◈ Awareness of other commonly used clustered file systems

Terms and Utilities:

◈ Distributed Lock Manager (DLM)
◈ mkfs.gfs2
◈ mount.gfs2
◈ fsck.gfs2
◈ gfs2_grow
◈ gfs2_edit
◈ gfs2_jadd
◈ mkfs.ocfs2
◈ mount.ocfs2
◈ fsck.ocfs2
◈ tunefs.ocfs2
◈ mounted.ocfs2
◈ o2info
◈ o2image
◈ CephFS
◈ GlusterFS
◈ AFS

Sunday, 24 November 2019

Exam 701: DevOps Tools Engineer

Businesses across the globe are increasingly implementing DevOps practices to optimize daily systems administration and software development tasks. As a result, businesses across industries are hiring IT professionals that can effectively apply DevOps to reduce delivery time and improve quality in the development of new software products.


To meet this growing need for qualified professionals, LPI developed the Linux Professional Institute DevOps Tools Engineer certification which verifies the skills needed to use the tools that enhance collaboration in workflows throughout system administration and software development.


In developing the Linux Professional Institute DevOps Tools Engineer certification, LPI reviewed the DevOps tools landscape and defined a set of essential skills when applying DevOps. As such, the certification exam focuses on the practical skills required to work successfully in a DevOps environment – focusing on the skills needed to use the most prominent DevOps tools. The result is a certification that covers the intersection between development and operations, making it relevant for all IT professionals working in the field of DevOps.


Exam Objectives Version: Version 1.0

Exam Code: 701-100

About Objective Weights: Each objective is assigned a weighting value. The weights indicate the relative importance of each objective on the exam. Objectives with higher weights will be covered in the exam with more questions.

Validity Period: 5 years

Languages: English, Japanese

Topic 701: Software Engineering


701.1 Modern Software Development (weight: 6) 

Weight: 6

Description: Candidates should be able to design software solutions suitable for modern runtime environments. Candidates should understand how services handle data persistence, sessions, status information, transactions, concurrency, security, performance, availability, scaling, load balancing, messaging, monitoring and APIs. Furthermore, candidates should understand the implications of agile and DevOps on software development.

Key Knowledge Areas:

◈ Understand and design service based applications
◈ Understand common API concepts and standards
◈ Understand aspects of data storage, service status and session handling
◈ Design software to be run in containers
◈ Design software to be deployed to cloud services
◈ Awareness of risks in the migration and integration of monolithic legacy software
◈ Understand common application security risks and ways to mitigate them
◈ Understand the concept of agile software development
◈ Understand the concept of DevOps and its implications to software developers and operators

The following is a partial list of the used files, terms and utilities:

◈ REST, JSON
◈ Service Oriented Architectures (SOA)
◈ Microservices
◈ Immutable servers
◈ Loose coupling
◈ Cross site scripting, SQL injections, verbose error reports, API authentication, consistent enforcement of transport encryption
◈ CORS headers and CSRF tokens
◈ ACID properties and CAP theorem

701.2 Standard Components and Platforms for Software (weight: 2)

Weight: 2

Description: Candidates should understand services offered by common cloud platforms. They should be able to include these services in their application architectures and deployment toolchains and understand the required service configurations. OpenStack service components are used as a reference implementation.

Key Knowledge Areas:

◈ Features and concepts of object storage
◈ Features and concepts of relational and NoSQL databases
◈ Features and concepts of message brokers and message queues
◈ Features and concepts of big data services
◈ Features and concepts of application runtimes / PaaS
◈ Features and concepts of content delivery networks

The following is a partial list of the used files, terms and utilities:

◈ OpenStack Swift
◈ OpenStack Trove
◈ OpenStack Zaqar
◈ CloudFoundry
◈ OpenShift

701.3 Source Code Management (weight: 5)

Weight: 5

Description: Candidates should be able to use Git to manage and share source code. This includes creating and contributing to a repository as well as the usage of tags, branches and remote repositories. Furthermore, the candidate should be able to merge files and resolve merging conflicts.

Key Knowledge Areas:

◈ Understand Git concepts and repository structure
◈ Manage files within a Git repository
◈ Manage branches and tags
◈ Work with remote repositories and branches as well as submodules
◈ Merge files and branches
◈ Awareness of SVN and CVS, including concepts of centralized and distributed SCM solutions

The following is a partial list of the used files, terms and utilities:

◈ git
◈ .gitignore

701.4 Continuous Integration and Continuous Delivery (weight: 5)

Weight: 5

Description: Candidates should understand the principles and components of a continuous integration and continuous delivery pipeline. Candidates should be able to implement a CI/CD pipeline using Jenkins, including triggering the CI/CD pipeline, running unit, integration and acceptance tests, packaging software and handling the deployment of tested software artifacts. This objective covers the feature set of Jenkins version 2.0 or later.

Key Knowledge Areas:

◈ Understand the concepts of Continuous Integration and Continuous Delivery
◈ Understand the components of a CI/CD pipeline, including builds, unit, integration and acceptance tests, artifact management, delivery and deployment
◈ Understand deployment best practices
◈ Understand the architecture and features of Jenkins, including Jenkins Plugins, Jenkins API, notifications and distributed builds
◈ Define and run jobs in Jenkins, including parameter handling
◈ Fingerprinting, artifacts and artifact repositories
◈ Understand how Jenkins models continuous delivery pipelines and implement a declarative continuous delivery pipeline in Jenkins
◈ Awareness of possible authentication and authorization models
◈ Understanding of the Pipeline Plugin
◈ Understand the features of important Jenkins modules such as Copy Artifact Plugin, Fingerprint Plugin, Docker Pipeline, Docker Build and Publish plugin, Git Plugin, Credentials Plugin
◈ Awareness of Artifactory and Nexus

The following is a partial list of the used files, terms and utilities:

◈ Step, Node, Stage
◈ Jenkins DSL
◈ Jenkinsfile
◈ Declarative Pipeline
◈ Blue-green and canary deployment
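To make the pipeline model concrete, here is a minimal sketch of a declarative Jenkinsfile, assuming a generic make-based project; the stage names, commands and branch name are illustrative, not part of the exam objectives:

```groovy
// Hypothetical Jenkinsfile: a minimal declarative pipeline with
// build, test and deploy stages.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
        stage('Deploy') {
            when { branch 'main' }        // deploy only from the main branch
            steps { sh './deploy.sh' }
        }
    }
    post {
        failure { echo 'Build failed, notify the team here' }
    }
}
```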

Topic 702: Container Management


702.1 Container Usage (weight: 7)

Weight: 7

Description: Candidates should be able to build, share and operate Docker containers. This includes creating Dockerfiles, using a Docker registry, creating and interacting with containers as well as connecting containers to networks and storage volumes. This objective covers the feature set of Docker version 17.06 or later.

Key Knowledge Areas:

◈ Understand the Docker architecture
◈ Use existing Docker images from a Docker registry
◈ Create Dockerfiles and build images from Dockerfiles
◈ Upload images to a Docker registry
◈ Operate and access Docker containers
◈ Connect container to Docker networks
◈ Use Docker volumes for shared and persistent container storage

The following is a partial list of the used files, terms and utilities:

◈ docker
◈ Dockerfile
◈ .dockerignore
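As a sketch of the Dockerfile objective, the following hypothetical Dockerfile packages a small Python web service; the base image, file names and port are illustrative:

```dockerfile
# Hypothetical Dockerfile for a small Python web service.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

A typical workflow builds the image, tags it for a registry and uploads it, e.g. docker build -t registry.example.com/app:1.0 . followed by docker push registry.example.com/app:1.0 (registry name illustrative).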

702.2 Container Deployment and Orchestration (weight: 5)

Weight: 5

Description: Candidates should be able to run and manage multiple containers that work together to provide a service. This includes the orchestration of Docker containers using Docker Compose in conjunction with an existing Docker Swarm cluster as well as using an existing Kubernetes cluster. This objective covers the feature sets of Docker Compose version 1.14 or later, Docker Swarm included in Docker 17.06 or later and Kubernetes 1.6 or later.

Key Knowledge Areas:

◈ Understand the application model of Docker Compose
◈ Create and run Docker Compose Files (version 3 or later)
◈ Understand the architecture and functionality of Docker Swarm mode
◈ Run containers in a Docker Swarm, including the definition of services, stacks and the usage of secrets
◈ Understand the architecture and application model of Kubernetes
◈ Define and manage a container-based application for Kubernetes, including the definition of Deployments, Services, ReplicaSets and Pods

The following is a partial list of the used files, terms and utilities:

◈ docker-compose
◈ docker
◈ kubectl
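A minimal Docker Compose file (version 3 format) illustrating the application model looks like the hypothetical sketch below; the image names and the secret are illustrative:

```yaml
# Hypothetical docker-compose.yml: a two-service stack with a volume
# and a secret, deployable to a Swarm with "docker stack deploy".
version: "3.3"
services:
  web:
    image: registry.example.com/app:1.0
    ports:
      - "8000:8000"
    depends_on:
      - db
    secrets:
      - db_password
  db:
    image: postgres:9.6
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
secrets:
  db_password:
    file: ./db_password.txt
```

On a single host this runs with docker-compose up; on an existing Swarm cluster the same file can be deployed as a stack with docker stack deploy -c docker-compose.yml mystack.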

702.3 Container Infrastructure (weight: 4)

Weight: 4

Description: Candidates should be able to set up a runtime environment for containers. This includes running containers on a local workstation as well as setting up a dedicated container host. Furthermore, candidates should be aware of other container infrastructures, storage, networking and container specific security aspects. This objective covers the feature set of Docker version 17.06 or later and Docker Machine 0.12 or later.

Key Knowledge Areas:

◈ Use Docker Machine to set up a Docker host
◈ Understand Docker networking concepts, including overlay networks
◈ Create and manage Docker networks
◈ Understand Docker storage concepts
◈ Create and manage Docker volumes
◈ Awareness of Flocker and flannel
◈ Understand the concepts of service discovery
◈ Basic feature knowledge of CoreOS Container Linux, rkt and etcd
◈ Understand security risks of container virtualization and container images and how to mitigate them

The following is a partial list of the used files, terms and utilities:

◈ docker-machine

Topic 703: Machine Deployment


703.1 Virtual Machine Deployment (weight: 4)

Weight: 4

Description: Candidates should be able to automate the deployment of a virtual machine with an operating system and a specific set of configuration files and software.

Key Knowledge Areas:

◈ Understand Vagrant architecture and concepts, including storage and networking
◈ Retrieve and use boxes from Atlas
◈ Create and run Vagrantfiles
◈ Access Vagrant virtual machines
◈ Share and synchronize folders between a Vagrant virtual machine and the host system
◈ Understand Vagrant provisioning, including File, Shell, Ansible and Docker
◈ Understand multi-machine setup

The following is a partial list of the used files, terms and utilities:

◈ vagrant
◈ Vagrantfile
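A minimal Vagrantfile covering these objectives might look like the hypothetical sketch below; the box name, IP address and provisioning command are illustrative:

```ruby
# Hypothetical Vagrantfile: one box with a private network, a synced
# folder and inline shell provisioning.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder "./src", "/srv/app"
  config.vm.provision "shell",
    inline: "apt-get update && apt-get -y install nginx"
end
```

vagrant up creates and provisions the machine, vagrant ssh opens a shell in it, and vagrant destroy removes it.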

703.2 Cloud Deployment (weight: 2)

Weight: 2

Description: Candidates should be able to configure IaaS cloud instances and adjust them to match their available hardware resources, specifically disk space and volumes. Additionally, candidates should be able to configure instances to allow secure SSH logins and prepare the instances to be ready for a configuration management tool such as Ansible.

Key Knowledge Areas:

◈ Understanding the features and concepts of cloud-init, including user-data and initializing and configuring cloud-init
◈ Use cloud-init to create, resize and mount file systems, configure user accounts, including login credentials such as SSH keys and install software packages from the distribution’s repository
◈ Understand the features and implications of IaaS clouds and virtualization for a computing instance, such as snapshotting, pausing, cloning and resource limits.
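The cloud-init objectives above are typically met with a user-data file in the #cloud-config format. The following hypothetical sketch creates a login user, installs a package and grows the root file system; the user name, key and package are illustrative:

```yaml
#cloud-config
# Hypothetical cloud-init user-data.
package_update: true
packages:
  - python3                              # prerequisite for Ansible
users:
  - name: deploy
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAA... deploy@example.com   # placeholder public key
growpart:
  mode: auto                             # grow partitions to fill the disk
```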

703.3 System Image Creation (weight: 2)

Weight: 2

Description: Candidates should be able to create images for containers, virtual machines and IaaS cloud instances.

Key Knowledge Areas:

◈ Understand the functionality and features of Packer
◈ Create and maintain template files
◈ Build images from template files using different builders

The following is a partial list of the used files, terms and utilities:

◈ packer
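A minimal Packer template illustrating the builder/provisioner structure is sketched below; it uses the docker builder purely as an example, and the image and commands are illustrative:

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:16.04",
      "commit": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["apt-get update", "apt-get -y install nginx"]
    }
  ]
}
```

Running packer build template.json executes each builder, applies the provisioners to the temporary machine, and produces the final image artifact.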

Topic 704: Configuration Management


704.1 Ansible (weight: 8)

Weight: 8

Description: Candidates should be able to use Ansible to ensure a target server is in a specific state regarding its configuration and installed software. This objective covers the feature set of Ansible version 2.2 or later.

Key Knowledge Areas:

◈ Understand the principles of automated system configuration and software installation
◈ Create and maintain inventory files
◈ Understand how Ansible interacts with remote systems
◈ Manage SSH login credentials for Ansible, including using unprivileged login accounts
◈ Create, maintain and run Ansible playbooks, including tasks, handlers, conditionals, loops and registers
◈ Set and use variables
◈ Maintain secrets using Ansible vaults
◈ Write Jinja2 templates, including using common filters, loops and conditionals
◈ Understand and use Ansible roles and install Ansible roles from Ansible Galaxy
◈ Understand and use important Ansible tasks, including file, copy, template, ini_file, lineinfile, patch, replace, user, group, command, shell, service, systemd, cron, apt, debconf, yum, git, and debug
◈ Awareness of dynamic inventory
◈ Awareness of Ansible's features for non-Linux systems
◈ Awareness of Ansible containers

The following is a partial list of the used files, terms and utilities:

◈ ansible.cfg
◈ ansible-playbook
◈ ansible-vault
◈ ansible-galaxy
◈ ansible-doc
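A short playbook tying several of these objectives together (tasks, a template, a handler and a variable) might look like the hypothetical sketch below; the host group, paths and variable are illustrative:

```yaml
---
# Hypothetical playbook: ensure nginx is installed, configured from a
# Jinja2 template and running.
- hosts: webservers
  become: true
  vars:
    server_name: www.example.com
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Deploy site configuration
      template:
        src: site.conf.j2
        dest: /etc/nginx/sites-enabled/site.conf
      notify: Restart nginx
  handlers:
    - name: Restart nginx
      service:
        name: nginx
        state: restarted
```

It would be run against an inventory with ansible-playbook -i inventory site.yml; secrets referenced by the playbook can be kept encrypted with ansible-vault.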

704.2 Other Configuration Management Tools (weight: 2)

Weight: 2

Description: Candidates should understand the main features and principles of important configuration management tools other than Ansible.

Key Knowledge Areas:

◈ Basic feature and architecture knowledge of Puppet.
◈ Basic feature and architecture knowledge of Chef.

The following is a partial list of the used files, terms and utilities:

◈ Manifest, Class, Recipe, Cookbook
◈ puppet
◈ chef
◈ chef-solo
◈ chef-client
◈ chef-server-ctl
◈ knife

Topic 705: Service Operations


705.1 IT Operations and Monitoring (weight: 4)

Weight: 4

Description: Candidates should understand how IT infrastructure is involved in delivering a service. This includes knowledge about the major goals of IT operations, understanding functional and nonfunctional properties of an IT service and ways to monitor and measure them using Prometheus. Furthermore, candidates should understand major security risks in IT infrastructure. This objective covers the feature set of Prometheus 1.7 or later.

Key Knowledge Areas:

◈ Understand goals of IT operations and service provisioning, including nonfunctional properties such as availability, latency, responsiveness
◈ Understand and identify metrics and indicators to monitor and measure the technical functionality of a service
◈ Understand and identify metrics and indicators to monitor and measure the logical functionality of a service
◈ Understand the architecture of Prometheus, including Exporters, Pushgateway, Alertmanager and Grafana
◈ Monitor containers and microservices using Prometheus
◈ Understand the principles of IT attacks against IT infrastructure
◈ Understand the principles of the most important ways to protect IT infrastructure
◈ Understand core IT infrastructure components and their role in deployment

The following is a partial list of the used files, terms and utilities:

◈ Prometheus, Node exporter, Pushgateway, Alertmanager, Grafana
◈ Service exploits, brute force attacks, and denial of service attacks
◈ Security updates, packet filtering and application gateways
◈ Virtualization hosts, DNS and load balancers
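A minimal prometheus.yml scrape configuration covering a Node exporter and the Pushgateway might look like the hypothetical sketch below; the host names are illustrative:

```yaml
# Hypothetical prometheus.yml fragment.
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['host1.example.com:9100']        # Node exporter
  - job_name: pushgateway
    honor_labels: true                             # keep labels pushed by jobs
    static_configs:
      - targets: ['pushgateway.example.com:9091']
```

Alertmanager receives alerts fired by Prometheus rules, while Grafana queries Prometheus to visualize the collected metrics.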

705.2 Log Management and Analysis (weight: 4)

Weight: 4

Description: Candidates should understand the role of log files in operations and troubleshooting. They should be able to set up centralized logging infrastructure based on Logstash to collect and normalize log data. Furthermore, candidates should understand how Elasticsearch and Kibana help to store and access log data.

Key Knowledge Areas:

◈ Understand how application and system logging works
◈ Understand the architecture and functionality of Logstash, including the lifecycle of a log message and Logstash plugins
◈ Understand the architecture and functionality of Elasticsearch and Kibana in the context of log data management (Elastic Stack)
◈ Configure Logstash to collect, normalize, transform and store log data
◈ Configure syslog and Filebeat to send log data to Logstash
◈ Configure Logstash to send email alerts
◈ Understand application support for log management

The following is a partial list of the used files, terms and utilities:

◈ logstash
◈ input, filter, output
◈ grok filter
◈ Log files, metrics
◈ syslog.conf
◈ /etc/logstash/logstash.yml
◈ /etc/filebeat/filebeat.yml
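A minimal Logstash pipeline configuration showing the input, filter and output stages might look like the hypothetical sketch below; the ports, grok pattern and Elasticsearch host are illustrative:

```conf
# Hypothetical Logstash pipeline: receive events from Filebeat and syslog,
# parse them with grok, and store them in Elasticsearch.
input {
  beats { port => 5044 }
  syslog { port => 5514 }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```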

Saturday, 23 November 2019

Unix Sed Command to Delete Lines in File - 15 Examples

LPI Certifications, LPI Guides, LPI Learning, LPI Linux-Unix

Sed Command to Delete Lines: The sed command can be used to delete or remove specific lines that match a given pattern or that are at a particular position in a file. Here we will see how to delete lines using the sed command, with various examples.

The following file contains sample data which is used as the input file in all the examples:

> cat file
linux
unix
fedora
debian
ubuntu

Sed Command to Delete Lines - Based on Position in File


In the following examples, the sed command removes lines that are at a particular position in the file.

1. Delete first line or header line

The d command in sed is used to delete a line. The syntax for deleting a line by number is:

> sed 'Nd' file

Here N indicates the Nth line in the file. In the following example, the sed command removes the first line of the file.

> sed '1d' file
unix
fedora
debian
ubuntu

2. Delete last line or footer line or trailer line

The following sed command is used to remove the footer line in a file. The $ indicates the last line of a file.

> sed '$d' file
linux
unix
fedora
debian

3. Delete particular line

This is similar to the first example. The below sed command removes the second line in a file.

> sed '2d' file
linux
fedora
debian
ubuntu

4. Delete range of lines

The sed command can be used to delete a range of lines. The syntax is shown below:

> sed 'm,nd' file

Here m and n are the starting and ending line numbers. The sed command removes the lines from m to n in the file. The following sed command deletes the lines ranging from 2 to 4:

> sed '2,4d' file
linux
ubuntu

5. Delete lines other than the first line or header line

Use the negation operator (!) with the d command. The following sed command removes all the lines except the header line.

> sed '1!d' file
linux

6. Delete lines other than last line or footer line

> sed '$!d' file
ubuntu

7. Delete lines other than the specified range

> sed '2,4!d' file
unix
fedora
debian

Here the sed command removes all lines other than the 2nd, 3rd and 4th.

8. Delete first and last line

You can specify the list of lines you want to remove with a semicolon as the delimiter.

> sed '1d;$d' file
unix
fedora
debian

9. Delete empty lines or blank lines

> sed '/^$/d' file

The ^$ pattern matches empty lines, which the d command then deletes. Note that this does not remove lines that contain only spaces.

Sed Command to Delete Lines - Based on Pattern Match


In the following examples, the sed command deletes the lines in file which match the given pattern.

10. Delete lines that begin with specified character

> sed '/^u/d' file
linux
fedora
debian

^ matches the start of a line. The above sed command removes all lines that start with the character 'u'.

11. Delete lines that end with specified character

> sed '/x$/d' file
fedora
debian
ubuntu

$ matches the end of a line. The above command deletes all lines that end with the character 'x'.

12. Delete lines which are in upper case or capital letters

> sed '/^[A-Z]*$/d' file

13. Delete lines that contain a pattern

> sed '/debian/d' file
linux
unix
fedora
ubuntu

14. Delete lines starting from a pattern till the last line

> sed '/fedora/,$d' file
linux
unix

Here the sed command deletes the line matching the pattern fedora and all the lines after it, through to the end of the file.

15. Delete last line only if it contains the pattern

> sed '${/ubuntu/d;}' file
linux
unix
fedora
debian

Here $ indicates the last line. To delete the Nth line only if it contains a pattern, replace $ with the line number.

Note: In all the above examples, the sed command prints the file's contents to the terminal with the specified lines removed; it does not modify the source file. To remove the lines from the source file itself, use the -i option with the sed command.

> sed -i '1d' file

If you don't wish to delete the lines from the original source file, you can redirect the output of the sed command to another file:

sed '1d' file > newfile
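For example, the following session (run in a scratch directory) removes the header line in place while keeping a backup copy; the -i.bak suffix form works with both GNU and BSD sed:

```shell
cd "$(mktemp -d)"                                    # scratch directory
printf 'linux\nunix\nfedora\ndebian\nubuntu\n' > file
sed -i.bak '1d' file     # delete the first line in place, keep file.bak
head -n 1 file           # prints: unix
head -n 1 file.bak       # prints: linux
```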

Thursday, 21 November 2019

Converting Awk Script to Perl Script - Examples of a2p Unix Command

LPI Study Materials, LPI Certification, LPI Tutorials and Materials, Unix Command

Unix provides the a2p (awk to perl) utility for converting an awk script to a perl script. The a2p command takes an awk script and produces a comparable perl script. (Note that a2p was removed from the core Perl distribution in version 5.22; it is now available from CPAN as App::a2p.)

Syntax of a2p:


a2p [options] [awk_script_filename]

Some of the useful options that you can pass to a2p are:

-D<number>     Sets debugging flags.
-F<character>  Tells a2p that the awk script is always invoked with this -F option.
-<number>      Makes a2p assume that the input always has the specified number of fields.

For more options, see the man page: man a2p

Example 1:


The awk script below prints the squares of the numbers up to 10. Save it as awk_squares.

#!/bin/awk -f
BEGIN {
    for (i = 1; i <= 10; i++)
    {
        print "The square of ", i, " is ", i*i;
    }
    exit;
}

Run this script using the awk command: awk -f awk_squares. This will print the squares of the numbers up to 10.

Now we will convert this script using the a2p command:
a2p awk_squares > perl_squares

The content of converted perl script, perl_squares, is shown below:

#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
if $running_under_some_shell;
# this emulates #! processing on NIH machines.
# (remove #! line above if indigestible)
eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
# process any FOO=bar switches
$, = ' ';               # set output field separator
$\ = "\n";              # set output record separator

for ($i = 1; $i <= 10; $i++) {
    print 'The square of ', $i, ' is ', $i * $i;
}
last line;

Run the perl script as: perl perl_squares. This will produce the same result as the awk.

Example 2:


Next, we will see an awk script that prints the first field from a file. The awk script for this is shown below. Save it as awk_first_field.

#!/bin/awk -f
{
    print $1;
}

Run this script using the awk command, passing a file as input: awk -f awk_first_field file_name. This will print the first field of each line of file_name.

We will convert this awk script into a perl script using the a2p command:
a2p awk_first_field > perl_first_field

The content of converted perl script, perl_first_field, is shown below:

#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
if $running_under_some_shell;
# this emulates #! processing on NIH machines.
# (remove #! line above if indigestible)
eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
# process any FOO=bar switches
$, = ' ';               # set output field separator
$\ = "\n";              # set output record separator
while (<>) {
    ($Fld1) = split(' ', $_, -1);
    print $Fld1;
}

Now run the perl script as: perl perl_first_field file_name. This will produce the same result as the awk command.

Thursday, 14 November 2019

Learn the Linux Command 'setfacl'

setfacl Linux Command, Linux Study Materials, Linux Tutorial and Material, Linux Certification, Linux Guides

The setfacl utility sets Access Control Lists (ACLs) of files and directories. On the command line, a sequence of commands is followed by a sequence of files (which in turn can be followed by another sequence of commands, etc.).

◈ The options -m, and -x expect an ACL on the command line. Multiple ACL entries are separated by comma characters (`,'). The options -M, and -X read an ACL from a file or from standard input. The ACL entry format is described in Section ACL ENTRIES.

◈ The --set and --set-file options set the ACL of a file or a directory. The previous ACL is replaced. ACL entries for this operation must include permissions.

◈ The -m (--modify) and -M (--modify-file) options modify the ACL of a file or directory. ACL entries for this operation must include permissions.

◈ The -x (--remove) and -X (--remove-file) options remove ACL entries. Only ACL entries without the perms field are accepted as parameters unless POSIXLY_CORRECT is defined.

When reading from files using the -M and -X options, setfacl accepts the output getfacl produces. There is at most one ACL entry per line. After a pound sign (`#'), everything up to the end of the line is treated as a comment.

If setfacl is used on a file system which does not support ACLs, setfacl operates on the file mode permission bits. If the ACL does not fit completely in the permission bits, setfacl modifies the file mode permission bits to reflect the ACL as closely as possible, writes an error message to standard error, and returns with an exit status greater than 0.

Synopsis


setfacl [-bkndRLPvh] [{-m|-x} acl_spec] [{-M|-X} acl_file] file ...

setfacl --restore=file

Permissions


The file owner and processes capable of CAP_FOWNER are granted the right to modify ACLs of a file. This is analogous to the permissions required for accessing the file mode. (On current Linux systems, root is the only user with the CAP_FOWNER capability.)

Options


-b, --remove-all

◈ Remove all extended ACL entries. The base ACL entries of the owner, group, and others are retained.

-k, --remove-default

◈ Remove the Default ACL. If no Default ACL exists, no warnings are issued.

-n, --no-mask

◈ Do not recalculate the effective rights mask. The default behavior of setfacl is to recalculate the ACL mask entry unless a mask entry was explicitly given. The mask entry is set to the union of all permissions of the owning group, and all named user and group entries. (These are exactly the entries affected by the mask entry).

--mask

◈ Do recalculate the effective rights mask, even if an ACL mask entry was explicitly given. (See the -n option.)

-d, --default

◈ All operations apply to the Default ACL. Regular ACL entries in the input set are promoted to Default ACL entries. Default ACL entries in the input set are discarded. (A warning is issued if that happens).

--restore=file

◈ Restore a permission backup created by `getfacl -R' or similar. All permissions of a complete directory subtree are restored using this mechanism. If the input contains owner comments or group comments, and setfacl is run by root, the owner and owning group of all files are restored as well. This option cannot be mixed with other options except `--test'.

--test

◈ Test mode. Instead of changing the ACLs of any files, the resulting ACLs are listed.

-R, --recursive

◈ Apply operations to all files and directories recursively. This option cannot be mixed with `--restore'.

-L, --logical

◈ Logical walk, follow symbolic links. The default behavior is to follow symbolic link arguments and to skip symbolic links encountered in subdirectories. This option cannot be mixed with `--restore'.

-P, --physical

◈ Physical walk, skip all symbolic links. This also skips symbolic link arguments. This option cannot be mixed with `--restore'.

--version

◈ Print the version of setfacl and exit.

--help

◈ Print help explaining the command line options.

End of command line options. All remaining parameters are interpreted as file names, even if they start with a dash.

If the file name parameter is a single dash, setfacl reads a list of files from standard input.

ACL Entries


The setfacl utility recognizes the following ACL entry formats:

◈ [d[efault]:] [u[ser]:]uid [:perms]
  Permissions of a named user. Permissions of the file owner if uid is empty.
◈ [d[efault]:] g[roup]:gid [:perms]
  Permissions of a named group. Permissions of the owning group if gid is empty.
◈ [d[efault]:] m[ask][:] [:perms]
  Effective rights mask.
◈ [d[efault]:] o[ther][:] [:perms]
  Permissions of others.

Whitespace between delimiter characters and non-delimiter characters is ignored.

Proper ACL entries including permissions are used in modify and set operations. (options -m, -M, --set and --set-file). Entries without the perms field are used for deletion of entries (options -x and -X).

For uid and gid you can specify either a name or a number.

The perms field is a combination of characters that indicate the permissions: read (r), write (w), execute (x), and execute only if the file is a directory or already has execute permission for some user (X). Alternatively, the perms field can be an octal digit (0-7).

Automatically Created Entries


Initially, files and directories contain only the three base ACL entries for the owner, the group, and others. There are some rules that need to be satisfied in order for an ACL to be valid:

◈ The three base entries cannot be removed. There must be exactly one entry of each of these base entry types.

◈ Whenever an ACL contains named user entries or named group objects, it must also contain an effective rights mask.

◈ Whenever an ACL contains any Default ACL entries, the three Default ACL base entries (default owner, default group, and default others) must also exist.

◈ Whenever a Default ACL contains named user entries or named group objects, it must also contain a default effective rights mask.

To help the user ensure these rules, setfacl creates entries from existing entries under the following conditions:

◈ If an ACL contains named user or named group entries, and no mask entry exists, a mask entry containing the same permissions as the group entry is created. Unless the -n option is given, the permissions of the mask entry are further adjusted to include the union of all permissions affected by the mask entry. (See the -n option description).

◈ If a Default ACL entry is created, and the Default ACL contains no owner, owning group, or others entry, a copy of the ACL owner, owning group, or others entry is added to the Default ACL.

◈ If a Default ACL contains named user entries or named group entries, and no mask entry exists, a mask entry containing the same permissions as the Default ACL's group entry is added. Unless the -n option is given, the permissions of the mask entry are further adjusted to include the union of all permissions affected by the mask entry. (See the -n option description).

EXAMPLES

◈ Granting an additional user read access:
  setfacl -m u:lisa:r file
◈ Revoking write access from all groups and all named users (using the effective rights mask):
  setfacl -m m::rx file
◈ Removing a named group entry from a file's ACL:
  setfacl -x g:staff file
◈ Copying the ACL of one file to another:
  getfacl file1 | setfacl --set-file=- file2
◈ Copying the access ACL into the Default ACL:
  getfacl -a dir | setfacl -d -M- dir

Conformance to Posix 1003.1e Draft Standard 17


If the environment variable POSIXLY_CORRECT is defined, the default behavior of setfacl changes as follows: All non-standard options are disabled. The ``default:'' prefix is disabled. The -x and -X options also accept permission fields (and ignore them).