Tuesday, 31 March 2020

Some time-saving tips for Linux Users


Are you making the most of Linux? It has many helpful features that can save time for Linux users, and sometimes these time-savers become a necessity. They help you stay productive with the same set of commands, but with enhanced functionality.

Also Read: 101-500: Linux Administrator - 101 (LPIC-1 101)

Here are some of my favorite time-saving tips that every Linux user should know:

1. Better way to change directory:

If you are a command-line user, autojump is a must-have package. You can change directory by specifying just part of the directory name. You can also use jumpstat to get statistics on your directory jumps.

$ j log
/var/log
$ j ard
/home/ab/work/arduino

2. Switching between windows in screen: The windows you create in GNU screen are numbered starting from zero. You can switch to a window by its number: jump to the first window with Ctrl-a 0, the second window with Ctrl-a 1, and so on. It’s also very convenient to switch to the next and previous windows with Ctrl-a n and Ctrl-a p, respectively.

Ctrl-a 0
Ctrl-a 1

3. Need to execute the last command with sudo? Use sudo !!

ls -l /root

sudo !!
# This is equivalent to sudo ls -l /root

4. Quickly locate a file on disk (locate searches a prebuilt index, which updatedb refreshes):

locate filename

5. System debugging:

◉ To know disk/cpu/network status, use iostat, netstat, top (or the better htop), and (especially) dstat. Good for getting a quick idea of what’s happening on a system.

◉ To know memory status, run and understand the output of free and vmstat. In particular, be aware the “cached” value is memory held by the Linux kernel as file cache, so effectively counts toward the “free” value.

◉ Java system debugging is a different kettle of fish, but a simple trick on Sun’s and some other JVMs is that you can run kill -3 <pid>, and a full stack trace and heap summary (including generational garbage collection details, which can be highly informative) will be dumped to stderr/logs.

◉ Use mtr as a better traceroute, to identify network issues.

◉ For looking at why a disk is full, ncdu saves time over the usual commands like

du -sk *

◉ To find which socket or process is using bandwidth, try iftop or nethogs.

6. Free up disk space: bleachbit is a neat utility to find and remove files based on application-specific knowledge.

7. Undelete for console: libtrash provides trashcan/recycle-bin like functionality for console users.

8. Mute / unmute sound:

$ amixer set Master toggle

(Use mute or unmute in place of toggle if you only want one direction.)

9. Wireless network listing:

$ iwlist INTERFACE scan
Example: $ iwlist wlan0 scan | grep ESSID

10. Finding the biggest files (sorted by size, with the largest listed last):

ls -lSrh

11. Package search:

dpkg -S /path/to/file
rpm -qf /path/to/file
rpm -qa and apt-file are additional useful commands to look at.

12. Getting help:

◉ man – make a habit of this, and learn how to use man
◉ whatis – shows a short description of a command
◉ type – tells you whether a command is a shell built-in or an alias, or gives the actual path of the command

13. Splitting files: split a file into 1024-megabyte chunks

split -b 1024m filename
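
To reassemble the pieces later, concatenate them in order; this sketch assumes split's default output prefix of "x" and no other files in the directory beginning with it:

cat x* > filename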

14. Editing the Command Line:

Many highly practical shortcuts can make you faster and more efficient on the command line in different ways (see the key bindings after this list):

◉ Find and re-run or edit a long and complex command from the history.

◉ Edit much more quickly than just using the backspace key and retyping text.

◉ Move around much faster than just using the left- and right-arrow keys.
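
A few of the default Emacs-style readline bindings in bash cover most of these and are worth committing to memory:

Ctrl-A / Ctrl-E    # jump to the beginning / end of the line
Alt-B / Alt-F      # move backward / forward one word
Ctrl-K             # delete from the cursor to the end of the line
Ctrl-Y             # paste back text deleted with Ctrl-K or Ctrl-W
fc                 # open the previous command in $EDITOR for editing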

15. Other tips for Everyday use:

◉ In bash, use Ctrl-R to search through command history.

◉ In bash, use Ctrl-W to kill the last word and Ctrl-U to kill the line.

◉ pstree -p is a helpful display of the process tree.

◉ Use pgrep and pkill to find or signal processes by name (-f is helpful).

To go back to the previous working directory: cd -

Sunday, 29 March 2020

Unix Vs. Linux: What’s the Difference?

What is UNIX?


The UNIX OS was born in the late 1960s at AT&T Bell Labs. Originally written in assembly language, it was rewritten in C in the early 1970s, which allowed quicker modification, acceptance, and portability.


It began as a one-man project under the leadership of Ken Thompson of Bell Labs and went on to become one of the most widely used operating systems. Unix is a proprietary operating system.

The Unix OS works through a CLI (Command Line Interface), but GUIs have since been developed for Unix systems. Unix is popular in companies, universities, big enterprises, etc.

What is LINUX?


Linux is an operating system whose kernel was created by Linus Torvalds at the University of Helsinki in 1991; the name "Linux" comes from that kernel. An operating system is the software on a computer that enables applications and users to access the computer's devices to perform specific functions.

The Linux OS relays instructions from an application to the computer's processor and sends the results back to the application. It can be installed on many different types of computers: mobile phones, tablets, video game consoles, etc.

The development of Linux is one of the most prominent examples of free and open source software collaboration. Today, many companies and individuals have released their own operating systems based on the Linux kernel.

Features of Unix OS


◈ Multi-user, multitasking operating system

◈ It can be used as the master control program in workstations and servers.

◈ Hundreds of commercial applications are available

◈ In its heyday, UNIX was rapidly adopted and became the standard OS in universities.


Features of Linux


◈ Support multitasking

◈ Programs consist of one or more processes, and each process has one or more threads

◈ It can easily co-exist with other operating systems.

◈ It can run multiple user programs

◈ Individual accounts are protected through appropriate authorization

◈ Linux is modeled on UNIX but does not use its code.


Linux vs. Unix


Cost:
Linux: freely distributed and can be downloaded from magazines, books, websites, etc.; paid versions are also available.
Unix: different flavors have different pricing depending on the vendor.

Development:
Linux: open source; thousands of programmers collaborate online and contribute to its development.
Unix: different versions, primarily developed by AT&T and other commercial vendors.

Users:
Linux: everyone, from home users to developers and computer enthusiasts alike.
Unix: mainly internet servers, workstations, and PCs.

Text-mode interface:
Linux: BASH is the default shell, with support for multiple command interpreters.
Unix: originally made to work with the Bourne shell, but now compatible with many other shells.

GUI:
Linux: two main GUIs, KDE and GNOME, with many alternatives such as MATE, LXDE, Xfce, etc.
Unix: the Common Desktop Environment, and also GNOME.

Viruses:
Linux: about 60-100 viruses listed to date, none of which are currently spreading.
Unix: between 80 and 120 viruses reported to date.

Threat detection:
Linux: detection and resolution is very fast because Linux is largely community driven; when a user reports a threat, qualified developers start working on a fix.
Unix: users typically wait longer for a proper bug-fixing patch.

Architectures:
Linux: initially developed for Intel's x86 processors; now available for over twenty CPU types, including ARM.
Unix: available on PA-RISC and Itanium machines, among others.

Usage:
Linux: can be installed on many types of devices, such as mobile phones and tablet computers.
Unix: used for internet servers, workstations, and PCs.

Best feature:
Linux: kernel updates without a reboot.
Unix: ZFS (a next-generation filesystem) and DTrace (dynamic kernel tracing).

Versions:
Linux: Red Hat, Ubuntu, openSUSE, Debian, etc.
Unix: HP-UX, AIX, BSD, etc.

Supported file types:
Linux: xfs, nfs, cramfs, ext2 through ext4, ufs, devpts, NTFS.
Unix: zfs, hfs, gpfs, xfs, vxfs.

Portability:
Linux: portable; can be booted from a USB stick.
Unix: not portable.

Source code:
Linux: available to the general public.
Unix: generally not available.

Limitation of Linux


◈ There's no standard edition of Linux

◈ Linux has patchier driver support, which may result in the malfunctioning of the entire system.

◈ Linux is, for new users at least, not as easy to use as Windows.

◈ Many Windows programs will run on Linux only with the help of a complicated emulator (for example, Microsoft Office).

◈ Linux is best suited to corporate users; it's much harder to introduce in a home setting.


Limitations of Unix


◈ An unfriendly, terse, inconsistent, and non-mnemonic user interface

◈ Unix was originally designed for slow computer systems, so you can't expect fast performance.

◈ The shell interface can be treacherous because a typing mistake can destroy files.

◈ Versions on various machines are slightly different, so it lacks consistency.

◈ Unix does not provide a guaranteed hardware interrupt response time, so it does not support real-time systems.

Saturday, 28 March 2020

LPI Exam 301 Prep: Usage

Prerequisites


To get the most benefit from this post, you should have advanced knowledge of Linux and a working Linux system on which to practice the commands covered.

If your fundamental Linux skills are a bit rusty, you may want to first review the post for the LPIC-1 and LPIC-2 exams.

Different versions of a program may format output differently, so your results may not look exactly like the listings and figures in this post.

System requirements

To follow along with the examples in this post, you'll need a Linux workstation with the OpenLDAP package and support for PAM. Most modern distributions meet these requirements.

Searching the directory


This section covers material for topic 304.1 for the Senior Level Linux Professional (LPIC-3) exam 301. This topic has a weight of 2.

In this section, learn how to:

◉ Use OpenLDAP search tools with basic options

◉ Use OpenLDAP search tools with advanced options

◉ Optimize LDAP search queries

◉ Use search filters and their syntax

The data in your tree is useful only if you can find entries when you need them. LDAP provides a powerful set of features that allow you to extract information from your tree.

The basics of search

To search a tree, you need four pieces of information:

1. Credentials on the server that holds the tree

2. A Distinguished Name (DN) on the tree to base your search on

3. A search scope

4. A search filter

The credentials can be nothing, which results in an anonymous bind, or they can be the DN of an entry along with a password. Implicit in this is that the server recognizes these credentials as valid, and is willing to allow you to search!

The DN that you base your search on is called the base DN. All results will be either the base DN itself or its children. If your base DN is ou=people,dc=ertw,dc=com, then you might find cn=Sean Walberg,ou=people,dc=ertw,dc=com, but you won't find cn=Users,ou=Groups,dc=ertw,dc=com, because it lies outside the base DN you were trying to search.

The scope determines which entries under the base DN will be searched. You may want to limit the scope because of performance reasons, or because only certain children of the base DN contain the information you want. The default search scope, subordinate (usually abbreviated as sub), includes the base DN and all children. You can search only the base DN with a base scope, such as when you want to test to see if an entry exists. The search scope called one searches only the base DN's immediate children and excludes any grandchildren and the base DN itself. Figure 1 shows a tree and the entries that would be included in the three different search scopes.

Figure 1. Three different search scopes

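To try the three scopes yourself, you can pass each one to the ldapsearch tool (covered in detail later in this section) with the -s option; the base DN below is just an example:

$ ldapsearch -x -s base -b "ou=people,dc=ertw,dc=com" '(objectClass=*)'  # base DN only
$ ldapsearch -x -s one -b "ou=people,dc=ertw,dc=com" '(objectClass=*)'   # immediate children only
$ ldapsearch -x -s sub -b "ou=people,dc=ertw,dc=com" '(objectClass=*)'   # base DN and all children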

The most powerful (and complex) part of searching is the search filter. The credentials, base DN, and scope limit which entries are to be searched, but it is the query that examines each record and returns only the ones that match your criteria.

Simple search filters

Search filters are enclosed in parentheses. Within the parentheses is one attribute=value pair. A simple search filter would be (objectClass=inetOrgPerson), which will find any entries with an objectClass of inetOrgPerson. The attribute itself is not case sensitive, but the value may or may not be depending on how the attribute is defined in the schema.

Substring searches are performed with the asterisk (*) operator. Search for (cn=Sean*) to match any common name beginning with Sean. The asterisk can go anywhere in the string, such as (cn=* Walberg) to find anything ending in Walberg, or even (cn=S*Wa*berg) to find anything starting with S, ending in berg, and having Wa somewhere in the middle. You might use this to find the author's name, not knowing if it is Sean or Shawn, or Walberg or Wahlberg.

The most generic form of the asterisk operator, attribute=* checks for the existence of the specified attribute. To find all the entries with defined e-mail addresses, you could use (mail=*).

AND, OR, and NOT

You can perform logical AND and OR operations with the "&" and "|" operators respectively. LDAP search strings place the operator before the conditions, so you will see filters like those shown in Listing 1.

Listing 1. Sample search filters using AND and OR

(|(objectClass=inetOrgPerson)(objectClass=posixAccount))
(&(objectClass=*)(cn=Sean*)(ou=Engineering))
(&(|(objectClass=inetOrgPerson)(objectClass=posixAccount))(cn=Sean*))

The first search string in Listing 1 looks for anything with an objectClass of inetOrgPerson or posixAccount. Note that each component is still enclosed in parentheses, and that the OR operator (|) along with its two search options are also enclosed in another set of parentheses.

The second search string is similar, but starts with an AND operation instead of OR. Here, three different tests must be satisfied, and they all follow the ampersand in their own set of parentheses. The first clause, objectClass=*, matches anything with a defined objectClass (which should be everything, anyway). This search of all objectClasses is often used as a search filter when you want to match everything and are required to enter a filter. The second clause matches any common name that starts with Sean, and the third requires an ou of Engineering.

The third search string shows both an AND and an OR used together in a filter that looks for anything with an objectClass of either inetOrgPerson or posixAccount, and a common name beginning with Sean.

The logical NOT is performed with the exclamation mark (!), much like the AND and OR. A logical NOT has only one argument so only one set of parentheses may follow the exclamation mark. Listing 2 shows some valid and invalid uses of NOT.

Listing 2. How to use, and how not to use, the logical NOT

(!cn=Sean)                          # invalid, the ! applies to a filter inside ()
(!(cn=Sean))                        # valid
(!(cn=Sean)(ou=Engineering))        # invalid, only one filter can be negated
(!(&(cn=Sean*)(ou=Engineering)))    # valid, negates the AND clause

In the fourth example of Listing 2, the negation is applied to an AND filter. Thus, that rule returns any entries that do not satisfy both of the AND clauses. Be careful when dealing with negation of composite filters, because the results are not always intuitive. The fourth example from Listing 2 will still return entries with an ou of Engineering if they don't have a common name starting with Sean. Both tests must pass for the record to be excluded.

Searching ranges

Often you need to search for a range of values. LDAP provides the <= and >= operators to query an attribute. Be careful that the equal sign (=) is included, because there are no < and > operators—you must also test for equality.

Not all integer attributes can be checked with the range operators. When in doubt, check the schema to make sure the attribute implements an ordering type through the ORDERING keyword.
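
For illustration, both filters below use attributes whose schemas define an ordering rule; the cutoff values are arbitrary:

(&(objectClass=posixAccount)(uidNumber>=1000))    # accounts with UID 1000 or higher
(modifyTimestamp>=20200101000000Z)                # entries modified since January 1, 2020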

Searching for close matches

An LDAP directory is often used to store names, which leads to spelling problems. "Sean" can also be "Shawn" or "Shaun." LDAP provides a "sounds-like" operator, "~=", which returns results that sound similar to the search string. For example, (cn~=Shaun) returns results that have a common name containing a word that sounds like "Shaun." Implicit in the sounds-like search is a substring search, such that (cn~=Shaun) will return results for cn=Shawn Walberg. The OpenLDAP implementation is not perfect, though; the same search will not return results for the "Sean" spelling.

Searching the DN

All the examples so far have focused on searching attributes, not searching on the distinguished name (DN) that identifies the record. Even though the leftmost component of the DN, the relative DN (RDN), appears as an attribute and is therefore searchable, the search filters presented so far will not look at the rest of the DN.

Searching the DN is done through a specific query filter requiring an exact match. The format is attribute:dn:=value, where the attribute is the component of the DN you want to search, and the value is the search string (no wildcards allowed). For example, (ou:dn:=People) would return all the entries that have ou=People in the DN, including the container object itself.

Altering the matchingRule

By default, most strings, such as the common name, are case insensitive. If you want to override the matching rule, you can use a form similar to the DN search. A search such as (ou:caseexactmatch:=people) will match an organizational unit of "people", but not "People". Some common matching rules are:

◉ caseIgnoreMatch matches a string without regard for capitalization. Also ignores leading and trailing whitespace when matching.

◉ caseExactMatch is a string match that also requires similar capitalization between the two strings being searched.

◉ octetStringMatch is like a string match, but does not remove whitespaces, and therefore requires an exact, byte-for-byte, match.

◉ telephoneNumberMatch searches a telephone number, which has its own data type in LDAP.

You can also change the matching rule of a DN search by combining the DN search with the matching rule search. For example, (ou:dn:caseexactmatch:=people) searches for a DN containing the exact string "people".

These two types of searches, DN searches and matching rule searches, are also called extensible searches. They both require exact strings and do not allow wildcards.

Using ldapsearch

The command-line tool to search the tree is ldapsearch. This tool lets you bind to the directory in a variety of ways, execute one or more searches, and retrieve the data in LDIF format.

The default behavior of ldapsearch is:

◉ Attempt a Simple Authentication and Security Layer (SASL) authentication to the server

◉ Connect to the server at ldap://localhost:389

◉ Use (objectClass=*) as a search filter

◉ Read the search base from /etc/openldap/ldap.conf

◉ Perform a sub search; that is, include the search base and all children

◉ Return all user attributes, ignoring operational (internal usage) attributes

◉ Use extended LDAP Data Interchange Format (LDIF) for output

◉ Do not sort the output

Authenticating to the server

If you are not using SASL, then you need simple authentication using the -x parameter. By itself, -x performs an anonymous bind, which is a bind without any bind DN or password. Given the other defaults, ldapsearch -x will dump your entire tree, starting at the search base specified in /etc/openldap/ldap.conf. Listing 3 shows the usage of a simple anonymous search.

Listing 3. A simple anonymous search

$ ldapsearch -x
# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# people, ertw.com
dn: ou=people,dc=ertw,dc=com
ou: people
description: All people in organization
objectClass: organizationalUnit

... output truncated ...

Listing 3 shows the header and first entry returned from a simple anonymous search. The first seven lines form the header and, in LDIF fashion, are commented with a leading hash mark (#). The first three lines identify the rest of the text as extended LDIF retrieved using LDAP version 3. The next line indicates that no base DN was specified and that a subtree search was used. The last two lines of text show that the search filter matched everything and that all attributes were requested.

You may use the -LLL option to remove all the comments from your output.

After the header comes each entry; each entry starts with a header describing the entry and then the list of attributes, starting with the DN. Attributes are not sorted.

If you need to use a username and password to log in, use the -D and -w options to specify a bind DN and a password, respectively. For example, ldapsearch -x -D cn=root,dc=ertw,dc=com -w mypassword will perform a simple authentication with the root DN username and password. You may also choose to type the password into a prompt that does not echo to the screen by using -W instead of -w password.

You may also connect to a different server by passing a Uniform Resource Identifier (URI) to the remote LDAP server using the -H option, such as ldapsearch -x -H ldap://192.168.1.1/ to connect to an LDAP server at 192.168.1.1.

Performing searches

Append your search filter to your command line in order to perform a search. You will likely have to enclose the filter in quotes to protect special characters in the search string from being interpreted by the shell. Listing 4 shows a simple search on the common name.

Listing 4. A simple search from the command line

$ ldapsearch -LLL -x '(cn=Fred Smith)'
dn: cn=Fred Smith,ou=people,dc=ertw,dc=com
objectClass: inetOrgPerson
sn: Smith
cn: Fred Smith
mail: fred@example.com

The search in Listing 4 uses the -LLL option to remove comments in the output and the -x option to force simple authentication. The final parameter is a search string that looks for Fred Smith's entry. Note that parentheses are used around the search, and that single quotes protect the parentheses from being interpreted as a subshell invocation; the quotes are also needed because the search string contains a space, which would otherwise cause "Smith" to be interpreted as a separate argument.

Listing 4 returned all of Fred Smith's attributes. It is a waste of both client and server resources to retrieve all values of a record if only one or two attributes are needed. Add the attributes you want to see to the end of the ldapsearch command line to only request those attributes. Listing 5 shows how the previous search looks if you only wanted Fred's e-mail address.

Listing 5. Requesting Fred Smith's e-mail address

$ ldapsearch -LLL -x '(cn=Fred Smith)' mail
dn: cn=Fred Smith,ou=people,dc=ertw,dc=com
mail: fred@example.com

The mail attribute is appended to the command line from Listing 4, and the result is the distinguished name of the record found, along with the requested attributes.

ldapsearch looks to /etc/openldap/ldap.conf for a line starting with BASE to determine the search base, and failing that, relies on the server's defaultsearchbase setting. The search base is the point on the tree where searches start from. Only entries that are children of the search base (and the search base itself) will be searched. Use the -b parameter to specify a different search base, such as ldapsearch -x -b ou=groups,dc=ertw,dc=com to search the groups container from the ertw.com tree.

Altering how data is returned

LDAP can store binary data such as pictures. The jpegPhoto attribute is the standard way to store a picture in the tree. If you retrieve the value of the attribute from the command line, you will find it is base64 encoded. The -t parameter is used to save any binary attributes into a temporary file. Listing 6 shows how to use this parameter.

Listing 6. Saving binary attributes on the file system

$ ldapsearch -LLL -x 'cn=joe*' jpegphoto | head
dn: cn=Joe Blow,ou=people,dc=ertw,dc=com
jpegPhoto:: /9j/4AAQSkZJRgABAQEASABIAAD//gAXQ3JlYXRlZCB3aXRoIFRoZSBHSU1Q/9sAQw
... output continues for 1300+ lines ...

$ ldapsearch -LLL -t -x '(cn=joe*)' jpegphoto
dn: cn=Joe Blow,ou=people,dc=ertw,dc=com
jpegPhoto:< file:///tmp/ldapsearch-jpegPhoto-VaIjkE

$ file /tmp/ldapsearch-jpegPhoto-VaIjkE
/tmp/ldapsearch-jpegPhoto-VaIjkE: JPEG image data, JFIF standard 1.01, comment: \
"Created with The GIMP\377"

Listing 6 shows two searches for anyone with a name beginning with "Joe," and only retrieving the jpegPhoto attribute. The first try does not use the -t parameter, and therefore the value of jpegPhoto is shown on the console in base64 format. The usefulness of this is limited at the command line, so the second try specifies -t on the command line. This time the value of jpegPhoto is a URI to a file (you may change the directory with the -T option). Finally, the returned file is inspected, and indeed, it is the binary version of the picture that can be viewed.

By default, ldapsearch prints out results in the order they were received in from the server. You can sort the output with the -S parameter, passing it the name of the attribute you want to sort on. For sorting on multiple attributes, separate the attributes with a comma (,).
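
For example, to list people sorted by surname and then common name, using the sample tree from this post:

$ ldapsearch -LLL -x -S sn,cn '(objectClass=inetOrgPerson)' sn cn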

LDAP command-line tools


This section covers material for topic 304.2 for the Senior Level Linux Professional (LPIC-3) exam 301. This topic has a weight of 4.

In this section, learn how to:

◉ Use the ldap* tools to access and modify the directory
◉ Use the slap* tools to access and modify the directory

Several tools are provided with OpenLDAP to manipulate the directory and administer the server. You are already familiar with ldapsearch, which was covered in the previous section. The commands beginning with ldap are for users of the tree; the commands beginning with slap are for administrators.

Tree manipulation tools

The commands in this section are for manipulating the tree, either by changing data or reading data. ldapsearch falls into this category, too. To use these commands, you need to authenticate to the server.

ldapadd and ldapmodify

These two commands are used to add and change entries in the tree.

Listing 7. LDIF to add an entry to the tree

dn: cn=Sean Walberg,ou=people,dc=ertw,dc=com
objectclass: inetOrgPerson
cn: Sean Walberg
cn: Sean A. Walberg
sn: Walberg
homephone: 555-111-2222

Listing 7 begins with a description of the distinguished name of the entry. This entry will end up under the ou=people,dc=ertw,dc=com container, and have a relative distinguished name of cn=Sean Walberg, which is obtained by splitting the distinguished name (DN) after the first attribute/value pair. The entry has an objectclass of inetOrgPerson, which is a fairly generic type for any person belonging to an organization. Two variants of the common name follow, then a surname, and finally, a home phone number.

Implicit in Listing 7 is that this is an addition to the tree, as opposed to a change or deletion. Recall that LDIF files can specify the changetype keyword, which tells the reader what to do with the data.

The ldapadd command is used to process this LDIF file. If Listing 7 were stored as "sean.ldif", then ldapadd -x -D cn=root,dc=ertw,dc=com -w mypass -f sean.ldif would be one way to add the new entry to the tree. The -x -D cn=root,dc=ertw,dc=com -w mypass part of the command should be familiar from the earlier discussion of ldapsearch, as a way to authenticate to the tree with simple authentication and the all-powerful root DN. All the ldap commands in this section use the same parameters to authenticate to the tree, so you will see this form repeated.

ldapadd is implemented as a symbolic link to ldapmodify, and when called as ldapadd it is interpreted as ldapmodify -a. The -a parameter tells ldapmodify to assume a default changetype of add, which is used to add new entries to the tree. When called as ldapmodify, the default changetype is modify.

ldapadd (and ldapmodify) is an efficient way of loading bulk data into a server without shutting it down. LDIF files can contain many operations, and often it is easier to generate LDIF from whatever other data source you are trying to import than to write custom code to parse the data source and add it directly through LDAP.
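
As a sketch of the modify case, the following LDIF (with hypothetical attribute values) replaces one attribute and adds another in a single operation; feed it to ldapmodify with the usual authentication parameters:

dn: cn=Sean Walberg,ou=people,dc=ertw,dc=com
changetype: modify
replace: homephone
homephone: 555-999-8888
-
add: mail
mail: sean@ertw.com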

ldapdelete

ldapdelete, like the name implies, deletes an entry from the tree. All entries are uniquely identified in the tree by their DN; therefore, ldapdelete deletes entries by DN, and not by any other query.

Besides the authentication parameters already discussed, ldapdelete can take its list of DNs to delete either from the command line or from a file. To delete from the command line, simply append the DNs to your command line, such as ldapdelete -x -D cn=root,dc=ertw,dc=com -w mypass "cn=Sean Walberg,ou=people,dc=ertw,dc=com". If you have many entries to delete, you can place the DNs, one per line, in a file, and point ldapdelete to that file with -f filename.

Note that you can also delete entries through LDIF and the ldapadd/ldapmodify commands. The ldapdelete command is more convenient in many cases, but is not the only way of deleting entries.

ldapmodrdn

The ldapmodrdn command changes the relative distinguished name of the object, that is, the first attribute/value pair in the DN. This effectively renames the entry within the current branch of the tree. Unlike the LDIF moddn changetype, this command can only rename the entry, and cannot move it to another spot on the tree.

Usage of this command is simple: give it the authentication credentials, the DN of the entry, and the new RDN. Listing 8 shows an account being renamed from "Joe Blow" to "Joseph Blow".

Listing 8. Renaming an entry

$ ldapmodrdn -x -D cn=root,dc=ertw,dc=com -w dirtysecret \
    'cn=Joe Blow,ou=people,dc=ertw,dc=com' 'cn=Joseph Blow'

$ ldapsearch -LLL -x '(cn=Joseph Blow)'
dn: cn=Joseph Blow,ou=people,dc=ertw,dc=com
objectClass: inetOrgPerson
sn: Blow
cn: Joe Blow
cn: Joseph Blow

Note that the old RDN still appears as an attribute, that is, cn: Joe Blow. If you want the old RDN to be removed, add -r to your command line. This is the same as adding deleteoldrdn: 1 to your LDIF code (which, curiously, is the default behavior for LDIF but not ldapmodrdn).

ldapcompare

ldapcompare allows you to compare a predetermined value to the value stored somewhere in the LDAP tree. An example will show how this works.

Listing 9. Using ldapcompare

$ ldapcompare -x "cn=Sean Walberg,ou=people,dc=ertw,dc=com" userPassword:mypassword
TRUE

$ ldapcompare -x "cn=Sean Walberg,ou=people,dc=ertw,dc=com" userPassword:badpassword
FALSE

In Listing 9, the ldapcompare command is run. After the authentication parameters, the final two parameters are the DN to check and the attribute and value to check against. The DN in both examples above is the entry for cn=Sean Walberg, and the attribute being checked in both cases is userPassword. When the proper password is given, ldapcompare prints the string TRUE and returns an exit code of 6. If the value given doesn't match what's in the entry, then FALSE is printed to the console and an exit code of 5 is returned. The -z option prevents anything from being printed; the caller is expected to use the exit code to determine whether the check was successful.

Even though the example in Listing 9 checked a password, any attribute can be used, including objectClass. If the attribute has multiple values, such as multiple common names or objectClasses, then the comparison is successful if one of them matches.
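
Those exit codes make ldapcompare convenient in shell scripts. A minimal sketch, reusing the entry and password from Listing 9:

ldapcompare -z -x "cn=Sean Walberg,ou=people,dc=ertw,dc=com" userPassword:mypassword
case $? in
  6) echo "value matches" ;;
  5) echo "value does not match" ;;
  *) echo "comparison failed" ;;
esac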

ldapwhoami

ldapwhoami allows you to test authentication to the LDAP server and to determine which DN you are authenticated against on the server. Simply call ldapwhoami with the normal authentication parameters, as shown in Listing 10.

Listing 10. A demonstration of ldapwhoami

$ ldapwhoami -x
anonymous
Result: Success (0)

$ ldapwhoami -x -D "cn=Sean Walberg,ou=people,dc=ertw,dc=com" -w mypassword
dn:cn=Sean Walberg,ou=people,dc=ertw,dc=com
Result: Success (0)

$ ldapwhoami -x -D "cn=Sean Walberg,ou=people,dc=ertw,dc=com" -w badpass
ldap_bind: Invalid credentials (49)

The first example in Listing 10 shows a bind with no username or password. Ldapwhoami returns the string anonymous to indicate an anonymous bind, and also a status line indicating that the authentication was successful. The second example binds as a user's DN. This time the DN returned is the same one that was authenticated with. Finally, a bind attempt is made with invalid credentials. The result is an explanation of the problem.

Ldapwhoami is helpful for troubleshooting the configuration of the server, and also for manually verifying passwords. Access lists might get in the way of an ldapsearch, so using ldapwhoami instead can help you determine if the problem is credentials or access lists.

Administration tools

The commands beginning with slap are for administrators, and operate directly on the database files rather than through the LDAP protocol. As such, you will generally need to be root to use these commands, and in some cases, the server must also be shut down.

slapacl

Slapacl is a utility that lets the administrator test access lists against various combinations of bind DN, entry, and attribute. For instance, you would use slapacl to test to see what access a particular user has on another user's attributes. This command must be run as root because it is reading the database and configuration files directly rather than using LDAP.

The usage of slapacl is best described through an example. In Listing 11, the administrator is testing to see what access a user has on his own password before implementing an ACL, and then again after implementing an ACL that is supposed to limit the access to something more secure.

Listing 11. Using slapacl to determine the effect of an ACL change

# slapacl -D "cn=Sean Walberg,ou=people,dc=ertw,dc=com" \
    -b "cn=Sean Walberg,ou=People,dc=ertw,dc=com" userPassword
authcDN: "cn=sean walberg,ou=people,dc=ertw,dc=com"
userPassword: read(=rscxd)

... change slapd.conf ...

# slapacl -D "cn=Sean Walberg,ou=people,dc=ertw,dc=com" \
    -b "cn=Sean Walberg,ou=People,dc=ertw,dc=com" userPassword
authcDN: "cn=sean walberg,ou=people,dc=ertw,dc=com"
userPassword: =wx

# slapacl -D "cn=Joseph Blow,ou=people,dc=ertw,dc=com" \
    -b "cn=Sean Walberg,ou=People,dc=ertw,dc=com" userPassword
authcDN: "cn=joseph blow,ou=people,dc=ertw,dc=com"
userPassword: =0

Two pieces of information are mandatory for the slapacl command. The first is the bind DN, which is the DN of the user you are testing access for. The second piece is the DN of the entry you are testing against. The bind DN is specified with -D, and the target DN is set with -b. You can optionally limit the test to a single attribute by including it at the end (like the userPassword example in Listing 11). If you don't specify an attribute, you will receive results for each attribute in the entry.

In the first command from Listing 11, the administrator is testing the cn=Sean Walberg entry to see what access he has against his own password. The result is read access. After changing the ACLs, the test is performed again, and only the write and authenticate permissions are available. Finally, a test is performed to see what access Joseph Blow has on Sean Walberg's password; the result is that he has no access.

Slapacl is an effective way to test the results of ACL changes and to debug ACL problems. It is particularly effective because it reads directly from the database and slapd.conf, so any changes made to slapd.conf are reflected in the output of slapacl and don't require a restart of slapd.

slapcat

Slapcat dumps the contents of the LDAP tree as LDIF to the standard output, or to a file if you use -l filename. You can optionally use the -s option to provide the starting DN, or -a to pass a query filter.

Slapcat operates directly on the database, and can be run while the server is still running. Only the bdb database types are supported.

slapadd

Slapadd is a bulk import tool that operates directly on the backend databases, which means slapd must be stopped to use this tool. It is designed to be used with the output from slapcat. Slapadd doesn't perform much validation on the input data, so it is possible to end up with branches of the tree that are separated. This would happen if some container objects weren't imported.

The input to slapadd is an LDIF file, such as the one generated by slapcat. The slapadd(8C) manpage suggests using ldapadd instead because of the data validation provided by the online variant. The manpage also notes that the output of slapcat is not guaranteed to be ordered in a way that is compatible with ldapadd (the container objects may come after the children in the output, and hence would fail validation). Using any filters in slapcat may also cause important data to be missing. Therefore, you should use slapadd only with LDIF produced by slapcat, and use ldapadd for any other LDIF.

After shutting down your LDAP server, you can just run the slapadd command and pipe your LDAP output to the standard input. If you want to read from a file, use the -l option. As with slapcat, only the bdb database types are supported.

slappasswd

Slappasswd is used to generate hashed passwords to be stored in the directory, or in slapd.conf. A common use is to use a hashed password for the rootdn's account in slapd.conf so that anyone looking at the configuration file can not determine the password. Slappasswd will prompt you for a password to hash if you don't provide any parameters, as shown in Listing 12.

Listing 12. Using slappasswd to hash a password

$ slappasswd
New password:
Re-enter new password:
{SSHA}G8Ly2+t/HMHJ3OWWE7LN+GRmZJAweXoE

You may then copy the entire string to the rootpw line in slapd.conf. Slapd recognizes the format of the password and understands that {SSHA} means that what follows is a SHA1 hash. Anyone who reads slapd.conf will not learn the root password.
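
For example, the relevant lines in slapd.conf would look like this, using the hash generated in Listing 12:

rootdn  "cn=root,dc=ertw,dc=com"
rootpw  {SSHA}G8Ly2+t/HMHJ3OWWE7LN+GRmZJAweXoE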

The hashes generated by slappasswd can also be used in LDIF files used with ldapadd and ldapmodify, which will allow you to store secure one-way hashes of your password instead of a less secure plaintext or base64-encoded version.

slapindex

After creating or changing an index with the index keyword in slapd.conf, you must rebuild your indexes, or slapd will return incorrect results. To rebuild your indexes, stop slapd and run slapindex. This may take a while depending on how many entries are in your databases, or as the manpage puts it, "This command provides ample opportunity for the user to obtain and drink their favorite beverage."
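
The overall procedure is simply stop, reindex, restart. The service commands and paths below are distribution-specific examples; the chown guards against slapindex, run as root, leaving index files unreadable by the user slapd runs as:

# service slapd stop
# slapindex
# chown -R ldap:ldap /var/lib/ldap
# service slapd start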

slaptest

Slaptest simply checks whether your slapd.conf file is correct. This is helpful because if you were to restart slapd with a bad configuration file, it would fail to start up until you fixed the file. Slaptest lets you perform a sanity check on your configuration file before restarting.

Using slaptest is as simple as typing slaptest. If the slapd.conf is correct, you will see config file testing succeeded. Otherwise, you will receive an error explaining the problem.

Slaptest also checks for the existence of various files and directories necessary for operation. During testing, however, the author was able to find some configuration file errors that passed slaptest but would still cause slapd to fail.

Whitepages


This section covers material for topic 304.3 for the Senior Level Linux Professional (LPIC-3) exam 301. This topic has a weight of 1.

In this section, learn how to:

◉ Plan whitepages services
◉ Configure whitepages services
◉ Configure clients to retrieve data from whitepages services

A whitepages service allows e-mail clients to retrieve contact information from an LDAP database. By staying with common attribute names, such as those provided by the inetOrgPerson objectClass, you can get the most compatibility with e-mail clients. For example, both Microsoft Outlook and Evolution use the mail attribute to store the user's e-mail address, and the givenName, displayName, cn, and sn attributes to store various forms of the name.

Configuring e-mail clients for an LDAP directory

In theory, any mail client that supports LDAP can use your tree. You will need the following information configured in the client:

◉ The LDAP server's address or hostname
◉ Credentials to bind with, unless you are binding anonymously
◉ The base DN to search from
◉ A search filter, such as (mail=*), to weed out accounts without an e-mail address (optional)

Once you input the above information into your e-mail client, you should be able to search for contacts.
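
Before configuring a client, you can simulate its query with ldapsearch to confirm that the server returns what you expect; the hostname and base DN here are examples:

$ ldapsearch -x -H ldap://ldap.example.com -b "ou=people,dc=ertw,dc=com" '(mail=*)' cn mail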

Configuring Microsoft Outlook for an LDAP directory

To configure Microsoft Outlook (tested on Outlook 2003), select Tools > Email Accounts. You will see a dialog similar to Figure 2.

Figure 2. Selecting the type of account to add


Select the option to add a new directory, and click Next. You will then see the dialog in Figure 3.

Figure 3. Selecting the type of directory to add


Select the option to add a new LDAP directory, and click Next. You will then see the dialog in Figure 4.

Figure 4. Specifying the LDAP server details


Enter the relevant details about your LDAP server in the dialog shown in Figure 4. The example shown uses user credentials to bind to the tree. You can use anonymous access if your server's configuration supports it.

After entering in the basic details, click More Settings, and you will be prompted for more information, as shown in Figure 5.

Figure 5. Adding advanced options to the LDAP server configuration


Figure 5 shows more options, the important one being the search base. Click OK after entering the search base, and you will be returned to the main Outlook screen.

You may now use the LDAP database wherever you are prompted to look for users by selecting the server name from the "Show Names From" field.

Source: ibm.com

Thursday, 26 March 2020

Virtual Meetings in Times of Pandemic


"May you live in interesting times." So goes the proverbial and purportedly Chinese curse, which seems appropriate given the global pandemic that originated around Wuhan, China. That said, it's also very likely false; the quote, that is, not the pandemic. The most reliable sources out there indicate that it's actually an English saying, as spoken in March of 1936 by British MP, Sir Austen Chamberlain. Chamberlain was remarking on Germany's breaking of the "Treaty of Locarno" and attributed the saying to a fellow parliamentarian who had been to China and therefore assumed it was Chinese. Regardless of whether Chamberlain's recollections were correct, we are now, most definitely, living in interesting times.

The current COVID-19 pandemic is serious on many levels, from the obvious dangers to health and well-being, to financial and social. It's probably an understatement to suggest that the Coronavirus outbreak of 2019 is affecting all aspects of our lives, and may take years to recover from. It may well turn out to be a defining moment in history for how we deal with such outbreaks as well as a turning point in how we deal with each other. Around the world, public gatherings have been limited, meetings and conferences have been cancelled, schools have been closed (my own kids are home for, at a minimum, three weeks), sporting events and entire seasons have been postponed or cancelled outright, and employees are being told to work at home whenever and wherever possible. Don't even ask about the toilet paper panic.

My friend and colleague, Evan Leibovitch, was slated to appear at FOSSASIA, an open technology conference that was supposed to start late March in Singapore. With worldwide travel restrictions in place, flights disrupted, and conferences like this being postponed or outright canceled, Evan was a bit bummed out, as we older kids like to say. As we were discussing this, I recounted stories of various events that have moved, not just online, but into virtual space.

As a techno-addict, I tend to enjoy living on the edge when it comes to consumer tech. Consequently, I've been playing with virtual reality (VR) for a few years now. I've had a fair bit of experience with VR conferences and meetings of late, and I've used several different platforms, including AltspaceVR, Mozilla Hubs, VRChat, RecRoom, and others. Shortly before this whole COVID-19 crisis reached DEFCON 3, I attended a few sessions of the "Educators in VR" event which took place in AltspaceVR, and later attended a few sessions of a health IT conference which had been canceled and moved into VR. Look at the image below. That's me in the red shirt attending one of the sessions.


I even recorded an episode of my podcast with Evan Leibovitch in AltspaceVR. If you take a look at the image below, I'm on the left and Evan is the guy wearing that yellow shirt and blue track pants. 


Human beings are social creatures. Joke all you want about the four hour meeting from Hell, but we naturally want to be together. Most of us would prefer to sit and talk to people directly, at least occasionally. We've had video conferencing for some time and we, here at LPI, often use Zoom to host online meetings. To us, it comes naturally because we are scattered across various parts of the globe.

Here's the thing about VR, though; it feels like you're actually there. Okay, it's not exactly like being there, but the immersive experience of VR will, after only a brief period of time, start to fool your mind into treating sights, sounds, objects, and people, as though they are actually there with and around you. It is, in many ways, like being there. Trust me. I've faced off against invading hordes of zombies and when you're toe-to-toe with them, your heart is pumping like they're in the room with you.

If you've stuck with me up to this point, you may be wondering where the Open Source angle is in all of this. During our experiments, Evan noted that our podcast environment required specialized VR gear in order to play. Even the Oculus Go that he'd bought at Christmas ($200) didn't give him full freedom of movement, access to all of the VR platforms, or even two working arms. For that you need more powerful gear like I have, the Oculus Quest, which is almost triple the price of the Go. If you don't have dedicated VR gear, you're out of luck with many of these platforms (and Google Cardboard doesn't count). To top it all off, most of these environments are completely proprietary.


I'm the one in the red Star Trek uniform. Red shirts forever!

Mozilla Hubs is a completely open source social VR experience that lets you build your own worlds, craft avatars, hold meetings, watch videos together, and more. Here's where it gets really exciting: Hubs doesn't require a VR headset. You can access it from any desktop, using a web browser. In fact, people joining a Hubs session can interact with each other using phones, tablets, laptops, or virtual reality headsets. To those who use headsets, the browser sessions look as though they are also participating in the VR experience. When I first provided Evan with a link to a world I had set up in Hubs, I went in with my Oculus Quest while he came in with the browser on his phone, and I didn't even realize it. That's pretty cool. Even cooler: you can create a virtual meeting room for you and your friends, family, or colleagues, right now, from your browser. Just go to hubs.mozilla.com and click "Create a Room". It's incredibly easy.

Hubs also provides a number of tools for creating worlds to suit your individual needs, whether business or entertainment, and for creating custom avatars. The world creation tool, which also works directly in your browser, is called Spoke. The tool has a nice tutorial to get you started. From the Spoke world builder, you can import Google Poly models, Sketchfab, and other models, sound and light elements, video links, and more. If you're curious, click https://hubs.mozilla.com/tbr8k4W/ to check out a simple world I created using Spoke.


For avatars, there are multiple options. You can select from a variety of pre-built avatars, or generate your own using a variety of tools. I'm going to suggest that you try Quilt (http://tryquilt.io/), an easy way to quickly skin and generate an avatar for Hubs. 

We're all going to have to learn to work remotely and in virtual environments, at least in the short term, and maybe going forward into the future. To that end, I am inviting members and anyone else reading this post, to embrace this open source technology and step into the virtual world of the future. 

The big players are already working hard to pave the way. AltspaceVR, which I mentioned earlier, is a Microsoft product. Facebook is releasing Horizons, their social VR platform (currently in limited alpha release). VRChat and RecRoom, while more suited to entertainment than business applications, are also commercial products. There will be more. 

The existing options all have their pros and cons, but as open source professionals, we should perhaps consider championing an open source alternative. That's where Mozilla Hubs comes into play. All you need is a browser, or if you've got one, a VR headset. 

There's a real future in VR and it's just going to get bigger.

Source: lpi.org

Tuesday, 24 March 2020

LPI Exam 302 Prep (Mixed environments): Performance tuning


Prerequisites


To get the most from the articles in this series, you should have an advanced knowledge of Linux and a working Linux system on which you can practice the commands covered in this article. You should also have a good understanding of TCP/IP networking.

Measuring Samba performance


Before you can improve something, you must be able to reliably measure it. Measure something, make changes, measure again, and compare results. In the case of Samba, you need to measure:

◉ Response time and throughput of a client under idle server conditions

◉ Response time and throughput of a client under a given server load

◉ Server characteristics under a given load

◉ Maximum server capacity in terms of clients or throughput

Measuring response times gives you an idea of what a typical client would expect under the test conditions. For this to work, your test cases should approximate actual client load. For example, seeing how fast you can repeatedly request the same file is not the same as copying a directory containing a mix of small and large files.

Measuring server characteristics under a given load gives you some idea of how much room you have to grow. If, under load, your server is struggling to keep up, then you know you don't have much capacity beyond the test load. The same techniques are used to measure a production server's load and extrapolate capacity.

Finally, the so-called "torture tests," where you throw all you can at a server and see where it breaks, provide interesting information but are not as useful as the other types of tests. If the goal is to prove that your server can handle a certain load, then this type of test will do. Such tests are often more helpful for measuring the lower-level characteristics, such as the maximum disk I/O capacity of a server.

Samba is a network server, so it is important to reasonably simulate the network being used. If your clients are 50 milliseconds away from the server, the effects of network latency will be more pronounced than with local clients. This information directs your tuning priorities accordingly.

Designing a test

You could buy a fairly expensive device that simulates client traffic and takes precise performance measurements. And if you're publishing benchmarks or developing the server hardware, such a device might be a good option. However, if you're interested in tuning your own server, and are faced with the usual constraints of time and money, you're probably not looking for an expensive tool that you'll have to learn how to use.

As an example, the first test assesses random read performance by downloading a large number of files with the Samba command-line client. The primary concern is, "how fast can this directory be downloaded?"

Like any well-behaved UNIX® utility, the smbclient tool can read its list of instructions from the standard input. The following snippet shows a series of commands for downloading the contents of a directory:

prompt
recurse
mget smbtest

The prompt command suppresses the client from asking you for confirmation of downloads, and recurse indicates that you want to descend into directories when you download multiple files. Finally, mget smbtest instructs the client to start downloading the smbtest directory. Fill that directory with a few hundred megabytes of test files, and you have all you need to test performance.

To run the test, connect to your share with smbclient, and redirect standard input to your file. Listing 1 shows how you do it.

Listing 1. Running the test

$ time smbclient '\\192.168.1.1\test' password < instructions
Domain=[BOB] OS=[Unix] Server=[Samba 3.5.8-75.fc13]
getting file \smbtest\file2 of size 524288000 as file2 (5323.2 kb/s) (average 5323.2 kb/s)
getting file \smbtest\file1 of size 139460608 as file1 (5275.3 kb/s) (average 5313.0 kb/s)

real    2m2.289s
user    0m0.509s
sys 0m4.580s

The command in Listing 1 starts with the time command, which times how long the command specified in the rest of the arguments takes to run. The command itself is smbclient, and the arguments to that are the name of the share and the user's password. You can add the typical arguments, such as -U to pass an alternate user name if your environment requires it. Finally, < instructions redirects the standard input of smbclient to the file called instructions, which contains the instructions from the first code snippet. The result of this command is a timed batch copy of several files.

The result of the command is a list of the files transferred, along with an average transfer rate per file. The output of time is added at the end, showing that copying roughly 664MB of files took 2 minutes and 2.289 wall-clock seconds. This is now the benchmark. If you make any changes and the test takes longer than 2 minutes, 2 seconds, then you're making things worse with your changes.

If you want to test just Samba parameters and negate the effects of local disk and caching, you can run the test several times and take the last measurement. Doing so ensures that the operating system caches as much as possible and minimizes disk access. As you test your changes, make sure you have a similar amount of free memory on your server; otherwise, the differences in the amount of cached data might alter the results of your experiment.

Viewing Samba's status

Looking at a server's CPU, memory, disk, and network information gives you some good information about the health of the server itself but provides no context for what the application is doing. Samba comes with a helpful utility called smbstatus that shows current connections and file activity; it's also good for both performance tuning and troubleshooting.

Listing 2. The smbstatus command

$ smbstatus
lp_load_ex: refreshing parameters
Initialising global parameters
params.c:pm_process() - Processing configuration file "/etc/samba/smb.conf"
Processing section "[global]"
Processing section "[homes]"
Processing section "[printers]"
Processing section "[extdrive]"

Samba version 3.5.8-75.fc13
PID     Username      Group         Machine                     
-------------------------------------------------------------------
17456     fred         fred       macbookpro-d0cd (::ffff:192.168.1.167)

Service      pid     machine       Connected at
-------------------------------------------------------
fred         17456   macbookpro-d0cd  Mon Jul 18 07:36:46 2011
extdrive     17456   macbookpro-d0cd  Mon Jul 18 07:36:46 2011

Locked files:
Pid    Uid   DenyMode   Access      R/W        Oplock      SharePath   Name   Time
----------------------------------------------------------------------------------
17456  505   DENY_NONE  0x100081    RDONLY     NONE        /home/fred   .
17456  505   DENY_NONE  0x100081    RDONLY     NONE        /home/fred   Documents

Listing 2 shows the current activity of the Samba server. One user, fred, is connected, and he has two shares mounted (fred and extdrive). There are also two locked directories.

Network tuning


Samba is a daemon that primarily sends and receives packets over the network. Several parameters alter how the packets are sent and how Samba interacts with the underlying operating system; these parameters can have a drastic effect on performance.

The basic options

Many daemons provide the best possible level of service to everyone and do not discriminate among clients. If the service is overloaded, everyone gets the same bad service. Contrast this with a telephone network: when congestion occurs, people aren't allowed to make new calls, but existing calls continue as though nothing were wrong. By artificially limiting certain options in Samba, you can prevent the server from ever reaching resource starvation.

The max connections parameter limits the number of simultaneous connections to a share. Each connection takes memory and CPU resources, so it's possible to overload a server by allowing too many connections. By default (a value of 0), unlimited connections are allowed; if you need to protect a busy server, set a hard limit with max connections.

The max smbd processes parameter controls the maximum number of smbd processes that can run. It is similar to max connections but limits the processes spawned to service those connections.

The more you log with log level, the more server resources are spent writing logs to disk. Keeping this value at 1 or 2 limits the amount of logs written to disk and saves more resources for serving clients.
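
As a sketch, the limits discussed above might appear in smb.conf like this (the share name matches the smbstatus output shown earlier, and the numbers are purely illustrative; size them to your own hardware):

[global]
   max smbd processes = 120
   log level = 1

[extdrive]
   max connections = 100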

Setting socket options

When an application asks the operating system to open a network connection, the application can also ask that the operating system treat the packets a certain way using socket options. Socket options can enable or disable network performance tweaks, set particular quality-of-service bits on the packets, or set kernel-level options on the socket.

The socket options parameter can control the type of service (TOS) bits on outgoing packets. The TOS bits tell routers along the way how the traffic should be treated; if the routers are configured to respect them, the traffic is handled in the manner the application asked for. The IPTOS_LOWDELAY keyword is most appropriate for low-delay networks such as LANs, while IPTOS_THROUGHPUT suits higher-latency WAN links. Your network may be configured differently, so it's possible that these options will have the opposite effect -- measure before and after.

The TCP_NODELAY option disables the Nagle algorithm. Disabling Nagle suits chatty protocols such as SMB, at the cost of sending more (and smaller) packets.

If you have firewalls or other devices that keep state in your network, you may be interested in the SO_KEEPALIVE option, which turns on TCP keepalives. These periodic packets keep the connection open and its state fresh inside firewalls. Otherwise, the firewall eventually drops its state for an idle connection, and it takes the client some time to realize it has to reconnect to the server.
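
In smb.conf, a common starting point for a LAN might look like the following; as noted, these are tweaks to test on your own network, not guaranteed wins:

[global]
   socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE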

Beyond tuning


Although playing with different settings and trying to come up with an optimal configuration may be fun, some items outside the Samba configuration can really hamper performance. Any time the server is doing something other than reading data from disk and sending it to a client, or receiving network traffic and writing it to disk, you're making the operation take longer.

Ethernet errors

If a packet is lost in transmission, the kernel has to notice that the packet has gone missing and request a retransmission. This process can slow down a fast conversation, especially if the two sides are separated by high latency. One common source of packet loss is mismatched settings between the switch and the server. Either auto-negotiation fails to come up with a correct match or one side is set statically and the other side is still auto-negotiating. This discrepancy results in errors on both sides. You can look for such errors with the netstat command:

# netstat -d -i 2
Kernel Interface table
Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth1       1500   0  5258404      0      0      0  3024340      0      0      0 BMRU
eth1       1500   0  5258409      0      0      0  3024341      0      0      0 BMRU
eth1       1500   0  5258411      0      0      0  3024342      0      0      0 BMRU

This code shows the multipurpose netstat command looking for interface errors every 2 seconds with the -d -i 2 parameters. Every 2 seconds, the status of the available interfaces is shown. In the example above, the RX-OK and TX-OK columns show that packets are flowing in and out. The errors, drops, and overruns are all 0 in both directions, showing that there has been no packet loss.

If you see errors, determine what speed/duplex is being used with mii-tool or mii-diag. Listing 3 shows how to check your network settings.

Listing 3. Verifying network settings

# mii-tool
eth1: negotiated 100baseTx-FD, link ok
# mii-diag eth1
Basic registers of MII PHY #24:  3000 782d 0040 6177 05e1 41e1 0003 0000.
The autonegotiated capability is 01e0.
The autonegotiated media type is 100baseTx-FD.
Basic mode control register 0x3000: Auto-negotiation enabled.
You have link beat, and everything is working OK.
Your link partner advertised 41e1: 100baseTx-FD 100baseTx 10baseT-FD 10baseT.
End of basic transceiver information.

Listing 3 starts with the mii-tool command to give a short summary of the active interfaces. The results show that the interface has negotiated at 100/Full. mii-diag shows more detailed information and might be more suitable for sending to your network team if you have to escalate the issue. The mii-tool command can also force a particular speed and duplex, though in practice it's better to leave the link auto-negotiated.
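
If you do need to force a setting temporarily (for instance, while the network team investigates), a sketch with mii-tool looks like this, using eth1 from the earlier listings. The first command forces 100 Mb/s full duplex; the second restarts auto-negotiation when you are done:

# mii-tool -F 100baseTx-FD eth1
# mii-tool -r eth1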

Maintaining TDB files

Samba keeps much of its runtime and persistent state -- connection information, caches of lookups from remote services, and more -- in Trivial Database (TDB) files. If something is wrong with these databases, your server has to perform more work, such as unnecessary disk seeks, or fails to cache information from remote services. Fortunately, you can check the TDB files for corruption and fix them if there's a problem. The temporary TDB files can simply be deleted after shutting down Samba; they'll be re-created on startup. For the others, be sure to perform a backup, and then verify the files with tdbbackup -v.
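
As a sketch, assuming your distribution keeps its TDB files under /var/lib/samba (the exact path varies), verification might look like this; tdbbackup -v checks each file and restores it from its .bak backup if it is damaged:

# cd /var/lib/samba
# tdbbackup -v *.tdb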

Look at the client

Sometimes, the client may be the cause of slow performance. The client might have duplex problems, it might have malware, or some other problem may be occurring. Using a consistent client for your testing can help you eliminate the server as a potential source of slowness.

Sunday, 22 March 2020

LPI Exam 201 Prep: Linux kernel

LPI Exam, Linux Kernel, Linux Tutorial and Material, Linux Learning, Linux Exam Prep

Prerequisites

To get the most from this post, you should already have a basic knowledge of Linux and a working Linux system on which you can practice the commands covered in this post.

Kernel components


This section covers material for topic 2.201.1 for the Intermediate Level Administration (LPIC-2) exam 201. The topic has a weight of 1.

What makes up a kernel?

A Linux kernel is made up of the base kernel itself plus any number of kernel modules. In many cases, the base kernel and a large collection of kernel modules are compiled at the same time and installed or distributed together, based on the code created by Linus Torvalds or customized by Linux distributors. The base kernel is always loaded during system boot and stays loaded for the entire uptime; kernel modules may or may not be loaded initially (though generally some are), and they can be loaded or unloaded while the system runs.

The kernel module system allows the inclusion of extra modules that are compiled after, or separately from, the base kernel. Extra modules may be created when you add hardware devices to a running Linux system, or may be distributed by third parties. Third parties sometimes distribute kernel modules in binary form, though doing so takes away your ability as a system administrator to customize the module. In any case, once a kernel module is loaded, it becomes part of the running kernel for as long as it remains loaded. Contrary to some conceptions, a kernel module is not simply an API for talking with the base kernel; it is patched in as part of the running kernel itself.

Kernel naming conventions

Linux kernels follow a naming/numbering convention that quickly tells you significant information about the kernel you are running. The convention used indicates a major number, minor number, revision, and, in some cases, vendor/customization string. This same convention applies to several types of files, including the kernel source archive, patches, and perhaps multiple base kernels (if you run several).

As well as the basic dot-separated sequence, Linux kernels follow a convention to distinguish stable from experimental branches. Stable branches use an even minor number, whereas experimental branches use an odd minor number. Revisions are simply sequential numbers that represent bug fixes and backward-compatible improvements. Customization strings often describe a vendor or specific feature. For example:

◉ linux-2.4.37-foo.tar.gz: Indicates a stable 2.4 kernel source archive from the vendor "Foo Industries"

◉ /boot/bzImage-2.7.5-smp: Indicates a compiled experimental 2.7 base kernel with SMP support enabled

◉ patch-2.6.21.bz2: Indicates a patch to update an earlier 2.6 stable kernel to revision 21

Kernel files

The Linux base kernel comes in two versions: zImage, which is limited to about 508 KB, and bzImage, for larger kernels (up to about 2.5 MB). Generally, modern Linux distributions use the bzImage format to allow inclusion of more features. You might expect that since the "z" in zImage indicates gzip compression, the "bz" in bzImage means bzip2 compression is used there; however, the "b" simply stands for "big" -- gzip compression is still used. In either case, as installed in the /boot/ directory, the base kernel is usually renamed vmlinuz, and the file /vmlinuz is generally a link to a versioned file name such as /boot/vmlinuz-2.6.10-5-386.
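
A quick way to confirm where the link points is readlink; using the hypothetical version above:

% readlink /vmlinuz
boot/vmlinuz-2.6.10-5-386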

There are a few other files in the /boot/ directory associated with a base kernel that you should be aware of (sometimes you will find these at the file system root instead). System.map is a table showing the addresses for kernel symbols. initrd.img is sometimes used by the base kernel to create a simple file system in a ramdisk prior to mounting the full file system.

Kernel modules

Kernel modules contain extra kernel code that may be loaded after the base kernel. Modules typically provide one of the following functions:

◉ Device drivers: Support a specific type of hardware

◉ File system drivers: Provide the optional capability to read and/or write a particular file system

◉ System calls: Most are supported in the base kernel, but kernel modules can add or modify system services

◉ Network drivers: Implement a particular network protocol

◉ Executable loaders: Parse and load additional executable formats
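
You can see what an individual module provides with the modinfo utility. For example (output abridged and illustrative; the exact fields vary by module and kernel version):

% modinfo ext3
filename:       /lib/modules/2.6.12/kernel/fs/ext3/ext3.ko
license:        GPL
description:    Second Extended Filesystem with journaling extensions
depends:        jbd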

Compiling a kernel


This section covers material for topic 2.201.2 for the Intermediate Level Administration (LPIC-2) exam 201. The topic has a weight of 1.

Obtaining kernel sources

The first thing you need to do to compile a new Linux kernel is obtain the source code for one. The main place to find kernel sources is the Linux Kernel Archives (kernel.org; see Related topics for a link). Your distribution's vendor might also offer updated kernel sources that reflect vendor-specific enhancements. For example, you might fetch and unpack a recent kernel version with commands similar to these:

Listing 1. Fetching and unpacking kernel

% cd /tmp/src/
% wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.12.tar.bz2
% cd /usr/src/
% tar jxvf /tmp/src/linux-2.6.12.tar.bz2

You may need root permissions to unpack the sources under /usr/src/, but you can also unpack and compile a kernel in a directory you own. Check kernel.org for other archive formats and download protocols.

Checking your kernel sources

If you have successfully obtained and unpacked a kernel source archive, your system should contain a directory such as /usr/src/linux-2.6.12 (or a similar leaf directory if you unpacked the archive elsewhere). Of particular importance, that directory should contain a README file you might want to read for current information. Underneath this directory are numerous subdirectories containing source files, chiefly either .c or .h files. The main work of assembling these source files into a working kernel is coded into the file Makefile, which is utilized by the make utility.

Configuring the compilation

Once you have obtained and unpacked your kernel sources, you will want to configure your target kernel. There are three make targets you can use to configure kernel options. Technically, you can also edit the file .config by hand, but in practice doing so is rarely desirable (you forgo extra informational context and can easily create an invalid configuration). The three targets are config, menuconfig, and xconfig.

Of these options, make config is almost as crude as manually editing the .config file; it requires you to configure every option (out of hundreds) in a fixed order, with no backtracking. For text terminals, make menuconfig gives you an attractive curses screen that you can navigate to set just the options you wish to modify. The make xconfig command is similar for X11 interfaces but adds a bit of extra graphical eye candy (especially pretty with Linux 2.6+).

For many kernel options you have three choices: (1) include the capability in the base kernel; (2) include it as a kernel module; (3) omit the capability entirely. Generally, there is no harm (except a little extra compilation time) in creating numerous kernel modules, since they are not loaded unless needed. For space-constrained media, you might omit capabilities entirely.
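
These three choices show up directly in the generated .config file, one line per option. A short excerpt might look like this (the option names are real; which values you choose is up to you):

# Built into the base kernel:
CONFIG_EXT3_FS=y
# Built as a loadable kernel module:
CONFIG_REISERFS_FS=m
# Capability omitted entirely:
# CONFIG_XFS_FS is not set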

Running the compilation

To actually build a kernel based on the options you have selected, you perform several steps (a combined sequence is shown after this list):

◉ make dep: Only necessary on 2.4, no longer on 2.6.

◉ make clean: Cleans up prior object files, a good idea especially if this is not your first compilation of a given kernel tree.

◉ make bzImage: Builds the base kernel. In special circumstances you might use make zImage for a small kernel image. You might also use make zlilo to install the kernel directly within the lilo boot loader, or make zdisk to create a bootable floppy. Generally, though, it is a better idea to let make bzImage leave the kernel image at /usr/src/linux/arch/i386/boot/bzImage and copy it from there manually.

◉ make modules: Builds all the loadable kernel modules you have configured for the build.

◉ sudo make modules_install: Installs all the built modules to a directory such as /lib/modules/2.6.12/, where the directory leaf is named after the kernel version.
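
Putting these steps together, a typical 2.6-era session might look like this sketch (paths and version numbers follow the earlier examples and will differ on your system):

% cd /usr/src/linux-2.6.12
% make clean
% make bzImage
% make modules
% sudo make modules_install
% sudo cp arch/i386/boot/bzImage /boot/vmlinuz-2.6.12
% sudo cp System.map /boot/System.map-2.6.12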

Creating an initial ramdisk

If you built important boot drivers as modules, an initial ramdisk provides a way to use those capabilities during the initial boot process, before the module files themselves are reachable. This especially applies to file system drivers that are compiled as kernel modules. Basically, an initial ramdisk is a magic root pseudo-partition that lives only in memory and is later chrooted to the real disk partition (for example, if your root partition is on RAID). Later tutorials in this series will cover this in more detail.

Creating an initial ramdisk image is performed with the command mkinitrd. Consult the manpage on your specific Linux distribution for the particular options given to the mkinitrd command. In the simplest case, you might run something like this:

Listing 2. Creating a ramdisk

% mkinitrd /boot/initrd-2.6.12 2.6.12

Installing the compiled Linux kernel

Once you have successfully compiled the base kernel and its associated modules (this might take a while -- maybe hours on a slow machine), you should copy the kernel image (vmlinuz or bzImage) and the System.map file to your /boot/ directory.

Once you have copied the necessary kernel files to /boot/ and installed the kernel modules using make modules_install, you need to configure your boot loader -- typically lilo or grub -- so that it can access the appropriate kernel(s). The next tutorial in this series provides information on configuring lilo and grub.

Further information

The kernel.org site contains a number of useful links to more information about kernel features and requirements for compilation. A particularly useful and detailed document is Kwan Lowe's Kernel Rebuild Guide.

Patching a kernel


This section covers material for topic 2.201.3 for the Intermediate Level Administration (LPIC-2) exam 201. The topic has a weight of 2.

Obtaining a patch

Linux kernel sources are distributed as main source trees combined with much smaller patches. Generally, doing it this way allows you to obtain a "bleeding edge" kernel with much quicker downloads. This arrangement lets you apply special-purpose patches from sources other than kernel.org.

If you wish to patch several levels of changes, you will need to obtain each incremental patch. For example, suppose that by the time you read this, a Linux 2.6.14 kernel is available, and you had downloaded the 2.6.12 kernel in the prior section. You might run:

Listing 3. Getting incremental patches

% wget http://www.kernel.org/pub/linux/kernel/v2.6/patch-2.6.13.bz2
% wget http://www.kernel.org/pub/linux/kernel/v2.6/patch-2.6.14.bz2

Unpacking and applying patches

To apply patches, you must first unpack them using bzip2 or gzip, depending on the compression archive format you downloaded, then apply each patch. For example:

Listing 4. Unzipping and applying patches

% bzip2 -d patch-2.6.13.bz2
% bzip2 -d patch-2.6.14.bz2
% cd /usr/src/linux-2.6.12
% patch -p1 < /path/to/patch-2.6.13
% patch -p1 < /path/to/patch-2.6.14

Once the patches are applied, proceed with compilation as described in the prior section. Running make clean first removes stale object files that no longer reflect the new changes.
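
If you are unsure whether a patch will apply cleanly, you can rehearse it first: GNU patch's --dry-run option reports what would happen without modifying any files. For example:

% cd /usr/src/linux-2.6.12
% bzip2 -dc /path/to/patch-2.6.13.bz2 | patch -p1 --dry-run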

Customizing a kernel


This section covers material for topic 2.201.4 for the Intermediate Level Administration (LPIC-2) exam 201. The topic has a weight of 1.

About customization

Much of what you would think of as customizing a kernel was discussed in the section of this tutorial on compiling a kernel (specifically, the make [x|menu]config options). When compiling a base kernel and kernel modules, you may include or omit many kernel capabilities to tailor the kernel's feature set, runtime profile, and memory usage.

This section looks at ways you can modify kernel behavior at runtime.

Finding information about a running kernel

Linux (and other UNIX-like operating systems) uses a special, generally consistent, and elegant technique to store information about a running kernel (or other running processes). The special directory /proc/ contains pseudo-files and subdirectories with a wealth of information about the running system.

Each process running on a Linux system gets its own numeric subdirectory under /proc/ containing several status files. Much of this information is summarized by user-level commands and system tools, but the underlying data resides in the /proc/ file system.

Of particular note for understanding the status of the kernel itself are the contents of /proc/sys/kernel.

More about current processes

While the status of processes, especially userland processes, does not pertain to the kernel per se, it is important to understand these if you intend to tweak an underlying kernel. The easiest way to obtain a summary of processes is with the ps command (graphical and higher level tools also exist). With a process ID in mind, you can explore the running process. For example:

Listing 5. Exploring the running process

% ps
  PID TTY          TIME CMD
16961 pts/2    00:00:00 bash
17239 pts/2    00:00:00 ps
% ls /proc/16961
binfmt   cwd@     exe@  maps  mounts  stat   status
cmdline  environ  fd/   mem   root@   statm

This tutorial cannot address all the information contained in those process pseudo-files, but just as an example, let's look at part of status:

Listing 6. A look at the status pseudo-file

$ head -12 /proc/17268/status
Name:   bash
State:  S (sleeping)
Tgid:   17268
Pid:    17268
PPid:   17266
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 256
Groups: 0
VmSize:     2640 kB
VmLck:         0 kB

The kernel process

As with user processes, the /proc/ file system contains useful information about a running kernel. Of particular significance is the directory /proc/sys/kernel/:

Listing 7. /proc/sys/kernel/ directory

% ls /proc/sys/kernel/
acct           domainname  msgmni       printk         shmall   threads-max
cad_pid        hostname    osrelease    random/        shmmax   version
cap-bound      hotplug     ostype       real-root-dev  shmmni
core_pattern   modprobe    overflowgid  rtsig-max      swsusp
core_uses_pid  msgmax      overflowuid  rtsig-nr       sysrq
ctrl-alt-del   msgmnb      panic        sem            tainted

The contents of these pseudo-files show information on the running kernel. For example:

Listing 8. A look at the ostype and threads-max pseudo-files

% cat /proc/sys/kernel/ostype
Linux
% cat /proc/sys/kernel/threads-max
4095
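
Many of these pseudo-files are also writable by root, which is the most direct way to modify kernel behavior at runtime; the change lasts only until reboot. For example (the value shown is purely illustrative):

# echo 8192 > /proc/sys/kernel/threads-max
# cat /proc/sys/kernel/threads-max
8192

The sysctl utility offers the same interface (for example, sysctl -w kernel.threads-max=8192) and can make settings persistent through /etc/sysctl.conf.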

Already loaded kernel modules

As with other aspects of a running Linux system, information on loaded kernel modules lives in the /proc/ file system, specifically in /proc/modules. Generally, however, you will access this information using the lsmod utility (which simply puts a header on the display of the raw contents of /proc/modules); cat /proc/modules displays the same information. Let's look at an example:

Listing 9. Contents of /proc/modules

% lsmod
Module                  Size  Used by    Not tainted
lp                      8096   0
parport_pc             25096   1
parport                34176   1  [lp parport_pc]
sg                     34636   0  (autoclean) (unused)
st                     29488   0  (autoclean) (unused)
sr_mod                 16920   0  (autoclean) (unused)
sd_mod                 13100   0  (autoclean) (unused)
scsi_mod              103284   4  (autoclean) [sg st sr_mod sd_mod]
ide-cd                 33856   0  (autoclean)
cdrom                  31648   0  (autoclean) [sr_mod ide-cd]
nfsd                   74256   8  (autoclean)
af_packet              14952   1  (autoclean)
ip_vs                  83192   0  (autoclean)
floppy                 55132   0
8139too                17160   1  (autoclean)
mii                     3832   0  (autoclean) [8139too]
supermount             15296   2  (autoclean)
usb-uhci               24652   0  (unused)
usbcore                72992   1  [usb-uhci]
rtc                     8060   0  (autoclean)
ext3                   59916   2
jbd                    38972   2  [ext3]

Loading additional kernel modules

There are two tools for loading kernel modules. The command modprobe is slightly higher level, and handles loading dependencies -- that is, other kernel modules a loaded kernel module may need. At heart, however, modprobe is just a wrapper for calling insmod.

For example, suppose you want to load support for the Reiser file system into the kernel (assuming it is not already compiled into the kernel). You can use the modprobe -nv option to just see what the command would do, but not actually load anything:

Listing 10. Checking dependencies with modprobe

%  modprobe -nv reiserfs
/sbin/insmod /lib/modules/2.4.21-0.13mdk/kernel/fs/reiserfs/reiserfs.o.gz

In this case, there are no dependencies. In other cases, dependencies might exist (which would be handled by modprobe if run without -n). For example:

Listing 11. More modprobe

% modprobe -nv snd-emux-synth
/sbin/insmod /lib/modules/2.4.21-0.13mdk/kernel/drivers/sound/
   soundcore.o.gz
/sbin/insmod /lib/modules/2.4.21-0.13mdk/kernel/sound/core/
   snd.o.gz
/sbin/insmod /lib/modules/2.4.21-0.13mdk/kernel/sound/synth/
   snd-util-mem.o.gz
/sbin/insmod /lib/modules/2.4.21-0.13mdk/kernel/sound/core/seq/
   snd-seq-device.o.gz
/sbin/insmod /lib/modules/2.4.21-0.13mdk/kernel/sound/core/
   snd-timer.o.gz
/sbin/insmod /lib/modules/2.4.21-0.13mdk/kernel/sound/core/seq/
   snd-seq.o.gz
/sbin/insmod /lib/modules/2.4.21-0.13mdk/kernel/sound/core/seq/
   snd-seq-midi-event.o.gz
/sbin/insmod /lib/modules/2.4.21-0.13mdk/kernel/sound/core/
   snd-rawmidi.o.gz
/sbin/insmod /lib/modules/2.4.21-0.13mdk/kernel/sound/core/seq/
   snd-seq-virmidi.o.gz
/sbin/insmod /lib/modules/2.4.21-0.13mdk/kernel/sound/core/seq/
   snd-seq-midi-emul.o.gz
/sbin/insmod /lib/modules/2.4.21-0.13mdk/kernel/sound/synth/emux/
   snd-emux-synth.o.gz

Suppose you want to load a kernel module now. You can use modprobe to load the module along with all of its dependencies, or use insmod to load exactly one module explicitly.

From the information given above, you might think to run, for example, insmod snd-emux-synth. But if you do that without first loading the dependencies, you will receive complaints about "unresolved symbols." So let's try the Reiser file system instead, which stands alone:

Listing 12. Loading a kernel module

% insmod reiserfs
Using /lib/modules/2.4.21-0.13mdk/kernel/fs/reiserfs/reiserfs.o.gz

Happily enough, your kernel will now support a new file system. You can mount a partition, read/write to it, and so on. For other system capabilities, the concept would be the same.

Removing loaded kernel modules

As with loading modules, unloading them can either be done at a higher level with modprobe or at a lower level with rmmod. The higher level tool unloads everything in reverse dependency order. rmmod just removes a single kernel module, but will fail if modules are in use (usually because of dependencies). For example:

Listing 13. Trying to unload modules with dependencies in use

% modprobe snd-emux-synth
% rmmod soundcore
soundcore: Device or resource busy
% modprobe -rv snd-emux-synth
# delete snd-emux-synth
# delete snd-seq-midi-emul
# delete snd-seq-virmidi
# delete snd-rawmidi
# delete snd-seq-midi-event
# delete snd-seq
# delete snd-timer
# delete snd-seq-device
# delete snd-util-mem
# delete snd
# delete soundcore

However, if a kernel module is eligible for removal, rmmod will unload it from memory, for example:

Listing 14. Unloading modules with no dependencies

% rmmod -v reiserfs
Checking reiserfs for persistent data

Automatically loading kernel modules

You can cause kernel modules to be loaded automatically, if you wish, using either the kernel module loader in recent Linux versions or the kerneld daemon in older versions. If you use these techniques, the kernel detects that it lacks support for a requested capability -- a system call, device, or file system, for example -- and attempts to load the appropriate kernel module.

However, unless you run in very memory-constrained systems, there is usually no reason not to simply load needed kernel modules during system startup (see the next tutorial in this series for more information). Some distributions may ship with the kernel module loader enabled.

Autocleaning kernel modules

As with automatic loading, autocleaning kernel modules is mostly only an issue for memory-constrained systems, such as embedded Linux systems. However, you should be aware that kernel modules may be loaded with the insmod --autoclean flag, which marks them as unloadable if they are not currently used.

The older kerneld daemon would periodically call rmmod --all to remove unused kernel modules. In special circumstances, if you are not using kerneld (and on recent Linux systems you will not be), you might add the command rmmod --all to your crontab, perhaps running once a minute or so. But mostly this whole issue is superfluous, since kernel modules generally use much less memory than typical user processes do.
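
For completeness, if you did want periodic autocleaning without kerneld, a crontab entry along these lines would do it (purely illustrative):

# Try to unload unused, autoclean-marked modules once a minute
* * * * * /sbin/rmmod --all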