Friday 29 December 2017

LPIC-3 Host Configuration Management Part 1

Weight: 2
Description: Candidates should be familiar with the use of RCS and Puppet for host configuration management.

Key Knowledge Areas


◈ RCS
◈ Puppet

Terms and Utilities


◈ RCS
◈ ci/co
◈ rcsdiff
◈ puppet
◈ puppetd
◈ puppetmasterd
◈ /etc/puppet/

Configuring Puppet Master


Centralized server management can be achieved on your Linux server with products such as the long-established Puppet project. The Puppet server is rather aptly named the Puppet Master, and it acts as a central configuration server that can be used to keep configuration files maintained across your server estate and to ensure services are installed and running. Along with Puppet we look at RCS, version control software that allows you to check out and check in documents, providing both version control and effective access control where perhaps many administrators may update scripts.

What is Puppet


Puppet is an open source framework based on Ruby for managing the configuration of computer systems. Puppet is licensed under GPLv2 and can be used in a standalone or client-server model. We will use both models in these tutorials: the first video works with the Puppet Master (Server) and applies local policies, then we extend this to bring in more clients. To see information relating to the puppet master package we can query the package manager.
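
For example, assuming the package names used later in this tutorial, on Ubuntu:

apt-cache show puppetmaster

or on openSUSE:

zypper info puppet-server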

Default host “puppet”


The default configuration of the clients, or puppet agents, is to look for the puppet server or puppet master as the host puppet or puppet.yourdomain.com. It is therefore easiest to ensure that the host that will act as the puppet master is configured in DNS or hosts entries as puppet. In the example the puppet master will be configured on the host 192.168.0.200, so the clients will have host file entries:

192.168.0.200     puppet

The lab does not use DNS.

Install the package puppetmaster


The central puppet server is known as the puppetmaster and should be the host with the entry puppet in the hosts file or DNS. To install the puppetmaster package on Ubuntu:

apt-get update
apt-get install puppetmaster

To install on SUSE (we use openSUSE in the video):

zypper in puppet-server


With this in place we are ready to configure the server, and in the first video we will use this as a standalone deployment, bringing in clients in the second video. First we will check the resolution of the puppet master. The file /etc/sysconfig/puppet defines the hostname that the client expects the puppet master to have; it defaults to puppet. I have hostname records pointing to this machine as puppet. In your own environment you may use local host entries or a CNAME entry in DNS.
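
As a sketch, the relevant entry in /etc/sysconfig/puppet may look similar to the following; note that the exact variable name can differ between distributions:

# Hostname the agent expects the puppet master to have
PUPPET_SERVER=puppet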


Correct Permissions


On some distributions this is not required, but on others, such as SUSE, the directory permissions need to be set correctly for the puppet master to work. The directory /var/lib/puppet is owned by root but should be owned by the account that puppet uses: puppet. The following command will correct the issue; don’t forget to double-check the user and group name on your system in the /etc/passwd and /etc/group files. The following command is correct on the openSUSE system.

chown puppet:puppet /var/lib/puppet


We can now start puppetmasterd; on SUSE we can use the symlink:

rcpuppetmasterd start

On other systems:

service puppetmasterd start

This will populate the directory /var/lib/puppet and will also create the /etc/puppet/manifests subdirectory if it does not exist.

Configure the Puppet File Server


Some distributions will have /etc/puppet/fileserver.conf already created, and it will only need to be modified; on SUSE you will need to create and populate the file. This is the default file that defines the puppet master’s fileserver configuration; you can specify a different file using the puppetmasterd --fsconfig flag.

As the name suggests, it allows the puppet server to share files to clients. This way configuration files can be kept up to date on client machines. In the example we will distribute the /etc/motd file.

In the following /etc/puppet/fileserver.conf we define a share simply called files, with the path pointing through to /etc/puppet/files; if you are thinking of sharing more files then perhaps /var/puppet/files may be more appropriate than the /etc directory.

[files]
 path /etc/puppet/files
 allow *


The path directive may also take variables such as:

◈ %h : to represent the client’s host name
◈ %H : to represent the client’s FQDN
◈ %d : to represent the client’s domain name

These variables subsequently allow for more granularity when distributing files.

For security we can add allow and deny directives and specify hostnames or IP addresses as required. Here we allow all devices.
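
For example, a more locked-down share could look like this; the subnet is illustrative, matching the lab network used here:

[files]
 path /etc/puppet/files
 allow 192.168.0.0/24
 deny *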

To deploy a file from this share we must add the file to the share and restart puppetmasterd. We are distributing a standard motd file, so we will create the directory:

mkdir /etc/puppet/files

We then create the file to distribute with:

echo "Only authorized access is allowed" > /etc/puppet/files/motd

Now we can restart the service with the following command on SUSE. We will then be ready to create the first puppet manifest.

rcpuppetmasterd restart

Define the site manifest


Normally we would need to create the file /etc/puppet/manifests/site.pp. This file is the manifest, or policy, that all clients look for when connecting to the Puppet Master. As we are first applying only local manifests, we can create files with the names of our choice. We will create three files to demonstrate deployment of files, control of services, and finally a manifest to ensure a package is installed. These could all be in a single manifest, but for the video demonstration it is shown as three files.

For ease of page space we will create a single file, /etc/puppet/manifests/test.pp. The .pp extension is the normal suffix for manifests, and these are just text files, so using your text editor of choice is just fine.

package {'nmap': ensure => installed }
service { 'sshd': ensure => running, enable => true, }
file { '/etc/motd': source => 'puppet:///files/motd', }

In the file we create three instructions:

◈ package
◈ service
◈ file

Each instruction has a name and attributes to apply. With the package we are just looking to install nmap if it is not installed. We do not mind how it is installed; it just needs to exist in a repository. This could easily be Solaris, SUSE, Red Hat or Debian, as we do not concern ourselves with how it is installed.

The same goes for the service: we just want to make sure that it is running and is enabled for auto start. How it is started does not matter, be it the service command or rc symlinks, and nor are we concerned with the chkconfig command and the switches used. This is OS specific and not a concern for our instruction. In this way we keep the manifest as portable as possible across platforms.

The file instruction then ensures the centralized motd file from the puppet server is used. We have only set the source attribute, but we could also add the owner, group, and mode if we needed, as sketched below.
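
For illustration, the extended resource could look like this; the owner, group and mode values are examples, not taken from the video:

file { '/etc/motd':
  source => 'puppet:///files/motd',
  owner  => 'root',
  group  => 'root',
  mode   => '644',
}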

Manually applying the manifest


To test the manifest we can use the apply option of the client. This is used only in standalone configurations and is useful for testing. As we have not added any clients yet, this is a great way to prove the manifest is working.

puppet apply /etc/puppet/manifests/test.pp

This command will check the named manifest and run the associated tasks; we should then see the file updated, the service sshd running, and nmap installed. This certainly shows how powerful Puppet can be and should give you some ideas for centralized management and machine configuration in your environment. In the next video we will add in more clients to test the full potential of Puppet.
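
Incidentally, puppet apply also supports a dry run via the --noop flag, reporting what would change without modifying the system; this is handy while developing manifests:

puppet apply --noop /etc/puppet/manifests/test.pp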


Wednesday 27 December 2017

Linux Essentials - What is the Certification


Before Linux Essentials


Over the last few years, I have been writing many free video tutorials for learning Linux. In the main, they have been aimed at existing system administrators who may be new to Linux but versed in IT management tasks. Linux, though, is not just for seasoned administrators, and not only for administrators. There are many people who can benefit from what the Operating System has to offer, including system administration folk as well as those in a purely user-based function. The LPI has introduced the Linux Essentials certification, aiming it at young people through schools and colleges, and academics, as well as those just wanting to make a start in Linux.

Making Linux Certification Open to All


Certainly having the mindset to involve young people and their teachers will help. The more secondary and college tutors that have an understanding of Linux, the more Linux will naturally be taught within schools. This is important when you consider that Linux is FREE; it is the cost, and the freedom to manage the software the way you want, that matter in the OS and applications used in schools. We want a generation growing up with an understanding of how the software works and how collaboration empowers.

Free Training


During this course, we will follow the key objectives of the exam. This will involve investigating Linux from both a simple administration and a usage perspective. The Linux Essentials exam is recommended, but not required, as a prerequisite for training in the LPIC professional program. Exams are delivered in schools and training centres around the world. To locate the centre nearest you, please contact your local LPI Affiliate.

In order to pass the exam and gain the Certificate of Achievement in Linux Essentials, you should be able to demonstrate:

◉ An understanding of the basic concepts of processes, programs and the components of an Operating System.
◉ A basic knowledge of computer hardware.
◉ A knowledge of Open Source Applications such as OpenOffice in the workplace as they relate to Closed Source or proprietary equivalents from other software vendors.
◉ An understanding of navigation systems on a Linux Desktop and what tools can be used to locate help.
◉ Basic skills of using a command line interface.
◉ The skills needed to use basic functions in command line text editors such as vi or nano.

Wednesday 20 December 2017

Setup a TeamSpeak 3 Server on Linux (Ubuntu / Debian)

TeamSpeak 3 is a heavily used solution (if not the most used one) for low latency voice chat while gaming. If you use Skype, for example, the delay and the traffic between the talking people will be much higher, besides the Skype client being way more bloated than TeamSpeak. Besides TeamSpeak 3 there are other gaming-focused low latency solutions like Discord (which uses central servers without the possibility to set up your own instance) and Mumble.

However, this tutorial is about how to set up a TeamSpeak 3 server on your Linux box. Thanks to the TeamSpeak 3 developers, this process is rather easy and you should have a running TeamSpeak 3 server within minutes. So, let’s start.

Install requirements


The TeamSpeak 3 Server doesn’t really need any extra libraries in order to work; with a new Debian 9 setup, for example, it starts without any additional libraries. However, to download and extract the server software we need some additional software, in this case a download manager (wget) and a utility to extract the compressed server software (bzip2). The following command will install these needed utilities. In this case we use Debian / Ubuntu’s package manager APT:

user@server:~$ sudo apt-get update
user@server:~$ sudo apt-get install wget bzip2

Now that all the needed utilities are on board, let’s move forward and install the server software itself.

Download and install the TeamSpeak 3 Server


TeamSpeak 3 is a proprietary software solution. Due to this fact you will not be able to install it from the repositories of your Linux distribution, which means you have to download it from the developer’s homepage onto your server. You can download the latest TeamSpeak 3 Server software from there. As of writing this tutorial the latest and greatest TeamSpeak 3 Server version was 3.0.13.8; whenever you go through this tutorial, your version number may be a newer one. The following command downloads version 3.0.13.8 to your server:

user@server:~$ wget http://dl.4players.de/ts/releases/3.0.13.8/teamspeak3-server_linux_amd64-3.0.13.8.tar.bz2

After the download is finished (which can take some time depending on your network speed), we can extract the downloaded server software. The following command does this:

user@server:~$ tar xfvj teamspeak3-server_linux_amd64-3.0.13.8.tar.bz2

Now it’s time to start the server for the first time.

Starting the TeamSpeak 3 Server


Now that we’ve downloaded and extracted the server software, we can start it. To do so, we change into the TeamSpeak server directory (which was automatically created when extracting the server software) and issue the command to start the server:

user@server:~$ cd teamspeak3-server_linux_amd64
user@server:~/teamspeak3-server_linux_amd64$ ./ts3server_startscript.sh start

The first start takes some time, approximately 1-3 minutes. After the first start is finished, you will get output like this:

------------------------------------------------------------------
 I M P O R T A N T
------------------------------------------------------------------
 Server Query Admin Account created
 loginname= "serveradmin", password= "BVV2YUIJ"
------------------------------------------------------------------


------------------------------------------------------------------
 I M P O R T A N T
------------------------------------------------------------------
 ServerAdmin privilege key created, please use it to gain
 serveradmin rights for your virtualserver. please
 also check the doc/privilegekey_guide.txt for details.

token=zvCYfUTRlcYl12dviAMuGKR7e+nQYKSE0bD9O4CI
------------------------------------------------------------------

Important: You should write down the server query admin account on a piece of paper, or save this information in a password database. This account is needed in emergency cases, like lost TeamSpeak user data or hacking attempts.

In this case we only need the privilege key for now. Store the line starting with token= in a text file. We need this token later on.

To finally ensure that your server is running correctly, you can issue the following command:

user@server:~/teamspeak3-server_linux_amd64$ ./ts3server_startscript.sh status
Server is running

If the output Server is running welcomes you, it’s time to connect to your new server.

Connect to your server and give yourself admin rights


At this point I assume that you’ve already installed the TeamSpeak 3 client onto your computer. If you didn’t, you should download it. If you’re a Linux user, you have to download the TeamSpeak 3 client through the link; you will not find it in the distribution repositories for the same reason you will not find the TeamSpeak 3 server software there.

To connect to your server, start the TeamSpeak 3 client and click on Connections –> Connect or use the hotkey CTRL+S. In the upcoming dialog, enter the IP address or name of your server, pick a nickname which you want to use on that server, and hit the Connect button.

Connection dialog

The server recognizes that it has just been set up for the first time and pops up another dialog asking for a so-called Privilege Key. This Privilege Key is the generated token we saved a few steps before in a text file. Open the text file (if not already open), copy everything after token=, and insert this key into the dialog box like this:

TeamSpeak privilege key

After you’ve used the privilege key you can delete the text file; a privilege key is for one-time use only. You should now see a new symbol beside your nickname which states that you’re an Administrator. From now on, you should be able to create channels and server groups, edit the server’s name, and so on.

Indicator that you’re an Admin (click to enlarge)

After this step your TeamSpeak 3 server is fully set up. You can now close the SSH connection to your server, share your server’s address with your friends, and start talking.

Useful tips


While the TeamSpeak 3 software is generally rock solid, you should take care that your server is always up to date. To update the TeamSpeak 3 server software, go to the official homepage, download the newest version (like you did before in this tutorial with wget) and extract it. All files except the database files will be overwritten; this ensures that you don’t have to start all over again when you do an update. However, you have to stop the TeamSpeak 3 server before you update it. You can do this easily like this:

user@server:~$ cd teamspeak3-server_linux_amd64
user@server:~/teamspeak3-server_linux_amd64$ ./ts3server_startscript.sh stop

After you’ve extracted the updated server files you can start the server again:

user@server:~/teamspeak3-server_linux_amd64$ ./ts3server_startscript.sh start
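
As a rough sketch, the whole update could be scripted like this, reusing the download URL pattern from earlier in this tutorial; adjust VERSION to the current release before running:

#!/bin/bash
# Hypothetical update helper for the TeamSpeak 3 server used in this tutorial
VERSION=3.0.13.8   # replace with the latest release number
cd ~ || exit 1
# Stop the running server before overwriting its files
./teamspeak3-server_linux_amd64/ts3server_startscript.sh stop
# Download and extract the new release over the old files (database files are kept)
wget "http://dl.4players.de/ts/releases/${VERSION}/teamspeak3-server_linux_amd64-${VERSION}.tar.bz2"
tar xfvj "teamspeak3-server_linux_amd64-${VERSION}.tar.bz2"
# Start the updated server
./teamspeak3-server_linux_amd64/ts3server_startscript.sh start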

Please also be aware that you should use a firewall or packet filter solution like iptables. A server with the latest security patches is good, but a firewall will always increase security further.

Final words


In times where almost everything becomes more and more centralized, I feel that a solution like TeamSpeak 3 is really needed. I know there are other solutions like Mumble, which has the additional benefit of being Open Source; however, we can’t have enough decentralized solutions if you ask me.

Sunday 17 December 2017

Proxy over SSH on Windows, Mac or Linux

SSH: A tool not only to do remote work


SSH (Secure Shell) is mostly used to do maintenance on your Linux machines. However, over the years the capabilities of SSH have been extended from a simple secure “remote maintenance protocol” to a utility capable of doing things like X forwarding (for forwarding graphical applications), port forwarding, and providing a SOCKS proxy.

Why would you even want to use a proxy server?


Proxy servers are helpful in a lot of ways. For example, if you’re staying some nights in a hotel, or you’re in any other public wireless LAN which blocks a specific website you want to visit, a proxy will help you bypass the filter. Or if you are forced to use techniques like DS-Lite, where you have to share a single IPv4 address with other users. Or to unblock videos on Netflix which are blocked in your country. You see, the situations where a proxy server helps you are almost countless.

But why would you want to set up a proxy server on your own? The simple answer is that a lot of the public proxy servers are simply overloaded. They have to handle so much traffic that you can sometimes barely get 50% of your normal internet speed while using one of them. Besides this, using SSH as a proxy is really easy.

How to start a SOCKS proxy server using SSH


In order to establish an SSH connection to your server which will then act as a SOCKS proxy, you have to have the SSH server installed on the server side and the client software on the client side, of course.

Using SSH as a proxy on Linux or Mac

For Linux or Mac you can use the SSH client command which is integrated in both systems. The following command will start an SSH connection where your SOCKS proxy is then locally reachable on port 19999 (19999 is just a suggestion and can be changed to almost anything from 1024 to 49151, the so-called “user ports”):

user@client:~$ ssh -D 19999 user@server

After the connection has been successfully established, configure your browser to use the proxy server (follow the instructions below).
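
If you use the proxy regularly, the same dynamic forwarding can be stored in your ~/.ssh/config; the host alias, user and server names below are placeholders:

# ~/.ssh/config
Host socksproxy
    HostName server
    User user
    DynamicForward 19999

With this in place, a plain ssh socksproxy brings up the SOCKS proxy on port 19999.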

Using SSH as a proxy on Windows

Windows doesn’t come with an integrated SSH command. This means we need additional software in order to get connected and use the SSH server as a proxy. My recommendation here is PuTTY, a lightweight SSH client for Windows which is the counterpart of the SSH command on Linux / Mac. You can download it here. After the download is finished, start PuTTY and enter the server you want to connect to like this:

Hostname you want to connect to

Navigate to Connection –> SSH –> Tunnels and enter the port 19999 in the Source port field (again, 19999 is just a suggestion and can be almost anything from 1024 to 49151, the so-called “user ports”). After you’ve entered the desired port number, ensure that you’ve selected Dynamic instead of Local:

Settings to tell SSH to create a SOCKS proxy

Click on the Add button in order to tell PuTTY to actually use the given information for the next connection. Once you have clicked Add, you should see the port number you have chosen with the letter D in the upper box. If you’ve done this as well, you’re ready to connect to your server. After the connection is successfully established, go on and configure your browser (follow the instructions below).

Configure Firefox / Google Chrome to use the SOCKS proxy

Now that we’ve connected successfully to our server via SSH, we can actually use the SOCKS proxy provided by that SSH connection.

Configuring Firefox to use the SOCKS proxy

Click on the options symbol in the upper right (represented as three horizontal lines) and click on Preferences. In the upcoming window, select General and scroll down until you see the section Network proxy. Click on Settings and enter your SOCKS proxy details like this:

Firefox proxy settings

Ensure that you’ve checked the box Use this proxy server for all protocols. After you’ve clicked on OK you’re ready to go. Use portals like BearsMyIp to check whether you’re actually surfing through your SSH SOCKS proxy tunnel.

Configuring Google Chrome (or Chromium) to use the SOCKS proxy

For Google’s Chrome browser you have to use the command line in order to set your SOCKS proxy; this includes Windows users as well. To start Google Chrome using your SSH SOCKS proxy, start the browser like this:

google-chrome --proxy-server="socks5://localhost:19999"

The Windows command line may look like this:

chrome.exe --proxy-server="socks5://localhost:19999"

Of course you can change google-chrome to chromium if you’re a Chromium user instead.

Final words


A proxy server does have its advantages. However, public proxies are sometimes overloaded, and you will notice that as a significant slowdown of your internet connection when you start using them. As an alternative, you can use SSH as a simple and fast way to give yourself a SOCKS proxy. Using SSH as a SOCKS proxy is a lot easier than configuring, for example, Apache with Squid. If you have a server and you need a proxy, I highly recommend using SSH in order to get a safe, fast and stable proxy server with a single command or a few clicks.

Friday 15 December 2017

Exploring DevOps Tools - How to Choose the Tools Right for You


The popularity of the DevOps movement has resulted in a wide range of tools in the marketplace; the XebiaLabs DevOps Tool Chest alone lists over 200 different individual tools. And while DevOps is about more than just which tools you use, they are essential to benefiting from the improved speed, agility, and automation that DevOps offers.

To choose the DevOps tools that are most appropriate for you, your projects, and your organization, it makes sense to begin with exploring how they are categorized.

How DevOps Tools are Categorized


The types of tools emerge directly from the activities required to deliver software to users via the Continuous Delivery (CD) pipeline model. Each stage in the CD model corresponds to an activity in the software development lifecycle which moves software from development towards production.

Every software team’s CD pipeline – or toolchain – is a mirror of their software development processes, which means there are many possible configurations. DevOps principles involve collaboratively delivering high-quality software, and that means that tools naturally fall into more than one category because they are used throughout DevOps teams. Having said that, there are tool types that are common to all pipelines. These are: build tools, test tools, deploy tools, and monitor tools.

DevOps ‘Build’ Tools

‘Build’ tools assist in the creation of the software, and they make up the beginning stages of all pipelines. Included under this category are tools for pulling source code from Version Control Systems (VCS), handling build-time dependencies among components, and building entire software products. Such tools automatically send reports if any errors are encountered and prevent software changes from moving down the pipeline. It is, by far, the largest category of DevOps tools.

DevOps ‘Test’ Tools

To ensure quality, automated testing is a vital stage of the CD pipeline. These tools test whether software features work as expected, whether previous software ‘bugs’ have reappeared (through regression testing), and whether performance is maintained. Failing tests should prevent software from reaching further stages, though the severity of the test failure is taken into account.

DevOps ‘Deploy’ Tools

Once code changes have passed all the quality checks from testing, they are packaged and tagged for release, and deployed to the production environment. This stage incorporates all tasks required to configure and provision the target environment, and install the software on the machines.

Deployment tools are increasingly working directly through cloud services to automatically create and configure virtual machines. The steps for creating the environment are increasingly written as code, giving rise to the term Infrastructure as Code.

DevOps ‘Monitor’ Tools

Once the latest code is running in the production environment, its operation needs to be monitored for signs of bugs, performance issues, and anything negatively impacting the user experience. Issues appear when users are engaging with the software, and therefore it is important to capture information through logging, alerting, and performance monitoring for analysis. DevOps monitor tools capture this data.

How to Choose DevOps Tools


With these categorizations in mind, here are the items you should consider when reviewing, evaluating, and choosing DevOps tools that will be right for you, your projects, and your organization:

Common Considerations (across all DevOps tools)
  • Track record of the tool working across different projects of various sizes and complexity
  • Time and/or cost involved in getting team members up-and-running on the tool – taking into account project deadlines and budgets
  • The expected return on investment, cost-savings, or cost-recovery expected from the tool
  • The ability of the tool to integrate seamlessly with other tools along the Continuous Delivery pipeline
  • The tool’s ability to keep project/client data secure (i.e. within project groups or the organization)

Category-Specific Considerations


For ‘build’ tools – consider the programming language and runtime environment of your software product.

For ‘test’ tools – consider the scale and type(s) of testing you are conducting, e.g. functional testing, performance testing, accessibility testing, data testing, security testing, etc.

For ‘deployment’ tools – consider the reliability you need and whether a master-client or decentralized model would meet the requirements of your production environment.

For ‘monitoring’ tools – consider the degree to which they support your software architecture and their scalability.

Knowing the categories of DevOps tools and key considerations should help you optimize your DevOps processes and its outcomes.

Wednesday 13 December 2017

How OpenBSD and Linux Mitigate Security Bugs

At Open Source Summit in Prague, Giovanni Bechis will discuss tools that improve software security by blocking unwanted syscalls.

Bechis is CEO and DevOps engineer at SNB s.r.l., a hosting provider that develops web applications based on Linux/BSD operating systems, mainly focused on integrating web applications with legacy software. In this interview, Bechis explained more about his approach to software security.

Linux.com: What’s the focus of your talk?


The talk will focus on two similar solutions implemented in the Linux and OpenBSD kernels, designed to prevent a program from calling syscalls it should not call, in order to improve the security of software.

In both kernels (Linux and OpenBSD), unwanted syscalls can be blocked and the offending program terminated, but there are some differences between Linux’s and OpenBSD’s solutions to the problem.

During my talk, I will analyze the differences between two similar techniques present in the Linux and OpenBSD kernels that are used to mitigate security bugs (which could be used to attack software and escalate privileges on a machine).

Linux.com: Who should attend?


The scope of the talk is to teach developers how they can develop better and more secure software by adding just a few lines to their code. The target audience is mainly developers interested in securing applications.

Linux.com: Can you please explain both solutions and what problems they actually solve?


The main problem that these solutions are trying to solve is that bugs can be exploited to let software do something that it is not designed to do. For example, with some crafty parameters or a crafty TCP/IP packet, it could be possible to make a program read a password file it should not read, or delete some files that it should not delete.

This is more dangerous if the program is running as root instead of a dedicated user because it will have access to all files of the machine if proper security techniques have not been applied.

With these solutions, if a program tries to do something it is not designed for, it will be killed by the kernel and the execution of the program will terminate.

To do that, the source code of the program should be modified with some “more or less” simple lines of code that will “describe” which system calls the program is allowed to request.

A system call is the programmatic way in which a computer program requests a service from the kernel of the operating system it is executed on. By allowing only a subset of the system calls, we can mitigate security bugs.

Last year, for example, memcached, a popular application designed to speed up dynamic web applications, suffered from a remote code execution bug that could be exploited to remotely run arbitrary code on the targeted system, thereby compromising the many websites that expose Memcache servers accessible over the Internet.

With a solution like seccomp(2) or pledge(2), a similar bug could be mitigated, the remote code would never be executed, and the memcached process would be terminated.

Linux.com: What’s the main difference between the two solutions?


The main difference (at least the most visible one without looking under the hood) between the Linux and OpenBSD implementations is that, with Linux seccomp(2), you can instruct the program in a very granular way and create very complex policies, while with OpenBSD pledge(2) the permitted syscalls have been grouped, so policies will be simpler.

On the other hand, using seccomp(2) in Linux could be difficult, while OpenBSD pledge(2) is far easier to use.

On both operating systems, every program should be studied in order to decide which system calls the application may use, and there are some facilities that can help understand how a program is operating, what it is doing, and which operations it should be allowed to do.
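
On Linux, strace is one such facility; for example, it can print a summary of the syscalls a program makes during a run (./yourprogram is a placeholder):

strace -c -f ./yourprogram

The resulting table of syscall counts is a good starting point for deciding which calls a seccomp(2) policy should allow.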

Saturday 9 December 2017

Installing the Perl Interpreter on Windows and Customizing Notepad++

The Perl Interpreter on Windows


Of course, if you are learning Perl on Linux it is usually included in the OS and does not need to be added by yourself, but if you would like to try it on Windows then you are going to need to install the program. You have a choice of Strawberry Perl and ActivePerl; this is a personal choice and should not make a whole heap of difference. In the demonstration we install Strawberry Perl and configure Notepad++ so that it can run Perl scripts from inside the editor program.



The installation program can be downloaded from strawberryperl.com, and once downloaded the MSI installer makes installation a simple process. Once installed, from the command prompt we can type the command:

perl -v

to verify the program works and display the version of Perl. We have installed version 5.20.1.

To run a Perl program we can execute it directly from the command line where the code is not long:

perl -e "print 'Hello';"

This will print the word Hello to the screen. More complex code will be written and saved into a script with a text editor and then run similar to this:

perl c:\temp\test.pl
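
For instance, a trivial c:\temp\test.pl (purely illustrative) might contain:

#!/usr/bin/perl
use strict;
use warnings;

print "Hello from a script\n";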

However, if we choose to use the popular and free Notepad++ text editor, we can add a plugin that will allow us to run Perl code from within the editor itself.

First we need to start Notepad++ and from the Plugins menu select Plugin Manager. From there we need to install the NppExec plugin. Once restarted we can return to the Plugins menu and this time select NppExec > Execute and add a script.

NPP_SAVE
cd "$(CURRENT_DIRECTORY)"
C:\strawberryperl\bin\perl "$(FILE_NAME)"

Save this and name it Run Perl.

From Plugins > NppExec > Advanced Options we can choose to add this script to the Macro menu.

Once restarted we can then create a Perl script in the editor and run it directly without the need of the command line.

Wednesday 6 December 2017

LPIC-1 Using the command type


For objective 104.7 of the LPIC-1 101 exam there are a few commands to look at, one of which is type. The command type itself is a shell builtin; in other words, the program is part of the BASH shell and not a stand-alone program. Try the following command:

$ type type

In the above example we are using the command type with an argument of type. I know it looks strange, but it is an introduction to the command. We should see output similar to the following:

type is a shell builtin

Now that we have been able to determine what type of command it is, we can glean other information about the command. Being a shell builtin, there will not be a man page; we will need to use man bash and search for type in that man page. There is no explicit help for type, but type --help is an invalid option and displays the usage statement.

As we saw, the command type is a shell builtin; however, a command could be one of the following five types:

1. Alias
2. Shell Builtin
3. Keyword
4. Function
5. Command File

If we check type ls, most often this will report as an alias. Trying the command type case should return a shell keyword. If we unalias ls, and then try type ls again, we should see that it is hashed to a file.
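
A sample session might look like this; the exact output can vary between systems:

$ unalias ls
$ ls
$ type ls
ls is hashed (/bin/ls)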

Let’s start to look at some of the useful options. We can use type to show all types for a given command; for example, with ls aliased as normal we can use type -a ls, and the output will show that it is found as an alias first and then as the file:

$ type -a ls
ls is aliased to 'ls --color=auto'
ls is /bin/ls

The use of type -t will just show the single-word type of a command:

$ type -t if
keyword

Thursday 30 November 2017

Creating Arch Linux Packages

In this blog, we take a look at creating Arch Linux packages. Working through an example, we create an Arch Linux package with the latest SED source code. In Arch, we have pkg files that make up the software packages used to encapsulate the software that we install and remove from the system. These files are very similar to the deb files we have in Debian and the rpm files used in Red Hat.


Using Software Packages


In all Linux distributions, we want to use software packages where possible when installing software. The process of installing is simplified, and the package contains the packaged programs and dependency lists. More than this, though, we are able to easily list what is installed and remove software that is no longer needed, as we have a database of what is installed.

Latest Software Versions


When a package is not available in the repository, you can choose to download and compile the source code. This, though, is not best where we have many servers, as it is not so easy to audit or remove software installed in this way. This is where we can create our own packages, so we can maintain the integrity of our installed software base across the server estate. We may also choose this method where the latest version of the vendor software has not made it into the Arch repositories. We will use SED 4.4 for the demonstration; the current version in the repo is 4.2.

Creating Arch Linux Packages for SED 4.4


We will be working with the source code for SED, the stream editor. The version we have in the repos as of February 2017 is 4.2, and the latest version from the vendor itself is 4.4. Although there may not be a lot of difference, we may need the new features and hence the need to package sed 4.4.

Build Host

We need an Arch system up and running to build the packages on; this needs to be the same architecture as the target clients. Ensure that we have the base-devel package group installed, as this will give us the required compilers and the makepkg command.

$ sudo pacman -S base-devel

With this installed and ready to go we can create a directory to work in. We should be logged in as a standard user and NOT root. Move to your home directory and create a folder called abs, for Arch Build System. We won’t be using the ABS command but the directory name still makes sense. In that directory, we create a directory named sed, representing the package that we are creating.

$ cd
$ mkdir -p abs/sed
$ cd abs/sed

Create the PKGBUILD File


The command makepkg will read its work inventory from the PKGBUILD file. A sample file can be copied from /usr/share/pacman/PKGBUILD.proto. The file should be copied across to the sed directory and named PKGBUILD, then edited so it appears similar to the following:

pkgname="sed"
pkgver=4.4
pkgrel=1
pkgdesc="SED the stream editor"
arch=("x86_64")
license=('GPL')
source=("ftp://ftp.gnu.org/gnu/sed/$pkgname-$pkgver.tar.xz")
build() {
        cd "$pkgname-${pkgver}"
        ./configure --prefix=/usr
        make
}

package() {
        cd "$pkgname-${pkgver}"
        make DESTDIR="$pkgdir" install
}

◉ The source array is used to download the source code tarball
◉ The build function creates the Makefile and compiles the code
◉ The package function installs the compiled code into a local subdirectory, so the install can run as a standard user without actually installing onto the build system. This step also creates the package from the dummy directory.
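
Optionally, a check() function can be added between build() and package() to run the upstream test suite before packaging; a minimal sketch for this PKGBUILD would be:

check() {
        cd "$pkgname-${pkgver}"
        make check
}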

The structure of the abs directory should be similar to this:

abs
└── sed
    └── PKGBUILD

Execute makepkg


To create the package we run the aptly named program makepkg, which comes from the base-devel package group. We first run it with the -g option, which generates an MD5 checksum for the downloaded source file so we can append it to the PKGBUILD file. We then run makepkg proper. This should all be run from the ~/abs/sed directory.

$ makepkg -g >> PKGBUILD && makepkg

This will run through the complete instruction set that we added to the PKGBUILD file and create a file named sed-4.4-1-x86_64.pkg.tar.xz. This we can copy to the target systems or add to our own repo. In the demo we copy the package file to another Arch Linux system for installation:

$ scp sed-4.4-1-x86_64.pkg.tar.xz 192.168.56.11:

We can then install it from that system with:

$ sudo pacman -U sed-4.4-1-x86_64.pkg.tar.xz

I hope you enjoy the video demonstration.

Friday 24 November 2017

7 Steps to Start Your Linux SysAdmin Career


Linux is hot right now. Everybody is looking for Linux talent. Recruiters are knocking down the doors of anybody with Linux experience, and there are tens of thousands of jobs waiting to be filled. But what if you want to take advantage of this trend and you’re new to Linux? How do you get started?

1. Install Linux  


It should almost go without saying, but the first key to learning Linux is to install Linux. Both the LFS101x and the LFS201 courses include detailed sections on installing and configuring Linux for the first time.

2. Take LFS101x


If you are completely new to Linux, the best place to start is our free LFS101x Introduction to Linux course. This online course is hosted by edX.org, and explores the various tools and techniques commonly used by Linux system administrators and end users to achieve their day-to-day work in a Linux environment. It is designed for experienced computer users who have limited or no previous exposure to Linux, whether they are working in an individual or enterprise environment. This course will give you a good working knowledge of Linux from both a graphical and command line perspective, allowing you to easily navigate through any of the major Linux distributions.

3. Look into LFS201


Once you’ve completed LFS101x, you’re ready to start diving into the more complicated tasks in Linux that will be required of you as a professional sysadmin. To gain those skills, you’ll want to take LFS201 Essentials of Linux System Administration. The course gives you in-depth explanations and instructions for each topic, along with plenty of exercises and labs to help you get real, hands-on experience with the subject matter.


If you would rather have a live instructor teach you or you have an employer who is interested in helping you become a Linux sysadmin, you might also be interested in LFS220 Linux System Administration. This course includes all the same topics as the LFS201 course, but is taught by an expert instructor who can guide you through the labs and answer any questions you have on the topics covered in the course.

4. Practice!


Practice makes perfect, and that’s as true for Linux as it is for any musical instrument or sport. Once you’ve installed Linux, use it regularly. Perform key tasks over and over again until you can do them easily without reference material. Learn the ins and outs of the command line as well as the GUI. This practice will ensure that you’ve got the skills and knowledge to be successful as a professional Linux sysadmin.

5. Get Certified


After you’ve taken LFS201 or LFS220 and you’ve gotten some practice, you are now ready to get certified as a system administrator. You’ll need this certification because this is how you will prove to employers that you have the necessary skills to be a professional Linux sysadmin.
There are several Linux certifications on the market today, and all of them have their place. However, most of these certifications are either centered on a specific distro (like Red Hat) or are purely knowledge-based and don’t demonstrate actual skill with Linux. The Linux Foundation Certified System Administrator certification is an excellent alternative for someone looking for a flexible, meaningful entry-level certification.

6. Get Involved


At this point you may also want to consider joining up with a local Linux Users Group (or LUG), if there’s one in your area. These groups are usually composed of people of all ages and experience levels, so regardless of where you are at with your Linux experience, you can find people with similar skill levels to bond with, or more advanced Linux users who can help answer questions and point you towards helpful resources. To find out if there’s a LUG near you, try looking on meetup.com, check with a nearby university, or just do a simple Internet search.

There are also many online communities available to you as you learn Linux. These sites and communities provide help and support to both individuals new to Linux and experienced administrators:

http://wiki.centos.org/Documentation

7. Learn To Love The Documentation


Last but not least, if you ever get stuck on something within Linux, don’t forget about Linux’s included documentation. Using the commands man (for manual), info and help, you can find information on virtually every aspect of Linux, right from within the operating system. The usefulness of these built-in resources cannot be overstated, and you’ll find yourself using them throughout your career, so you might as well get familiar with them early on.

Tuesday 21 November 2017

Join the Linux Professional Institute Development Community and earn your LPIC-2 Linux Engineer certification for free.

The Linux Professional Institute (LPI) has updated the objectives for LPIC-2 and is offering free beta exams to a limited number of qualified candidates.

The Linux Professional Institute (LPI) is organizing select events worldwide that offer a rare opportunity for eligible LPIC-1 certificate holders to be among the first to take the updated 201 and 202 beta exams, join the LPI Exam Development Community, and advance their professional credentials.


LPI is committed to the development of global standards and certifications in Linux and open source innovation. A community of Linux professionals, volunteers, vendors, and educators design the LPI Certification Program that unites the requirements of both IT professionals and the organizations that would employ them.

To achieve this goal LPI utilizes an open, rigorous, and consultative development process that uses both volunteer and hired resources. The LPI development process is widely recognized and endorsed by Fortune 500 companies, and has met the strict requirements of independent certification authorities.

About the LPIC-2 Linux Engineer Certification

LPIC-2 is aimed at advanced Linux professionals. To be awarded LPIC-2, candidates must be able to administer small-to-medium sized mixed networks and provide recommendations to upper management. To become LPIC-2 certified, the candidate must be able to:

◉ Administer a small to medium-sized site
◉ Plan, implement, maintain, keep consistent, secure, and troubleshoot a small mixed (MS, Linux) network, including a:
◉ LAN server (Samba, NFS, DNS, DHCP, client management)
◉ Internet Gateway (firewall, VPN, SSH, web cache/proxy, mail)
◉ Internet Server (web server and reverse proxy, FTP server)
◉ Supervise assistants
◉ Advise management on automation and purchases

About the LPIC-2 Beta Exams (Version 4.5)

Beta exams are organized in the English language only, and will be delivered as paper based tests (PBT). Both exams, 201 and 202, each take 90 minutes and contain 60 questions. They are offered free of charge. Passing the exams for 201 and 202 in conjunction with an active LPIC-1 certification leads to the LPIC-2 Linux Engineer certification.

In addition, beta candidates will be asked to answer a short survey and provide feedback on the exam content. For this purpose, LPI Exam Development staff may visit beta exam labs to collect direct feedback from the candidates.

Candidates should be aware that beta exams cover the new version of the objectives which will contain new exam material. Their passed exams are counted as regular exams and can be used to achieve a certification. Failed exams can be deleted from the candidate’s profile on their request.

How to prepare for the LPIC-2 Beta Exams

Candidates can find updated exam objectives for the new LPIC-2 201 and 202 (Version 4.5) on the LPI Wiki Resources website: https://wiki.lpi.org/wiki/LPIC-2_Objectives_V4.5

The detailed list of changes is available at: https://wiki.lpi.org/wiki/LPIC-2_Summary_Version_4.0_To_4.5

How to sign up for LPIC-2 Beta Exams for free

Beta exams are currently available in select regions, including Latin America, North America, Europe, Africa, and Asia. To apply for the free beta exams in your country, please fill out the LPIC-2 Beta Exam Contact Request Form here: https://www.lpi.org/lpic-2-beta-signup

“We are thankful to all the candidates for their support of our exam development process and standards at LPI. Accurate skills verification is vitally important in today’s economy,” states Mr. Matthew Rice, Executive Director of LPI. He goes on to explain, “Our organization has a fundamental commitment to championing workforce development initiatives for Linux and open source professionals. We have been working closely with employers globally to reinforce the value of certification, and we are seeing demand for certification rise. A recent survey found that 93% of employers plan to hire a Linux professional.”

About the Linux Professional Institute (LPI)

LPI is the global certification standard and career support organization for open source professionals. With more than 500,000 exams delivered, it's the world's first and largest vendor-neutral Linux and open source certification body. LPI has certified professionals in 181 countries, delivers exams in 9 languages, and has over 200 training partners.

We are committed to providing the IT community with certifications of the highest quality, relevance, and accuracy. 

Saturday 18 November 2017

What is DevOps? or: Why Another DevOps Certification?

The Linux Professional Institute ("LPI") recently announced the objectives for a new certification – the LPIC-OT DevOps Tools Engineer – which tests the skills and understanding of the open source tools commonly used by organisations trying to create a DevOps environment.


If you want a brief introduction to DevOps, Wikipedia has a good description of the subject:

"a term used to refer to a set of practices that emphasize the collaboration and communication of both software developers and information technology (IT) professionals while automating the process of software delivery and infrastructure changes"

It is the simplicity of this description that belies the complexity that exists in both the collaboration methods and the full technology stack required to implement the desired organizational changes.

While researching the need for and potential content of a DevOps certification, LPI quickly found that while many organizations were covering the collaboration side of DevOps – such as the Project Management Institute with the PMI-ACP credential – few were offering a complementary certification that covered the technology required to support it.

Considering that most, if not all, of the most popular DevOps tools are open source, it was a natural decision for us to create a certification that tests the skills required to use these technologies effectively.

The image below is a good representation of the cyclical nature of DevOps, which involves taking new code, using it in production and providing feedback in order to aid further improvements and feature development:


The basic building blocks of a DevOps toolchain are covered in detail by LPI’s new DevOps tools certification – with two exceptions: the programming language technologies and the individual service configuration topics.

These two areas deserve their own attention and, possibly, their own certifications. It should be noted, though, that LPI already covers the configuration and management of commonly deployed network services in our LPIC-2 certification.

There are many more services, including custom developed ones, which are also beyond the scope of the new certification.

However, what exactly should be covered in a programming language certification track remains a contentious topic.

As an aside, if you are interested in helping us determine what we cover in future certifications, feel free to join the LPI exam development mailing list by signing up here. We’d love for you to get involved.

The creation of a DevOps certification was also a little contentious among our development groups – partially because, at LPI, we tend to cover field-deployed topics.

On closer inspection by everyone involved it became clear that best practices and the use of reliable open source tools within DevOps were becoming ubiquitous. Certification of these skills became an important next step for LPI and our community.

This dominance of open source DevOps tools also demonstrates that open source software continues to lead and enable innovation. As IT professionals who relish using open source, this gives us every reason to look forward to more opportunities for participating in interesting projects.

It also means you will increasingly have the ability to better support the tools that you create. As an LPI certification holder myself, I'll be getting my certification as soon as I can.

Wednesday 15 November 2017

Installing Arch Linux

Installing Arch Linux is not the easy way to get Linux up and running, and neither should it be. The idea behind Arch is that you learn Linux. Nothing is hidden from you and you control everything. Using this approach you gain real control over the Operating System you install, adding only what you want and nothing more. There is no installer program, so all the steps of the install process have to be completed by you. By the end of the install you will already be an experienced Linux administrator, although you may have done more than a little Googling along the way.


Downloading the ISO


Installing Arch starts with downloading the ISO file. This can be obtained from Arch themselves, and the ISO is date based; the ISO file I use is dated 2016-11-01. As always, new features may be added in later releases of the installer disk. Downloading the DUAL version will allow you to install the 32 bit or 64 bit version of Arch. With the ISO downloaded you can install to a virtual machine or a physical machine. Using the Linux command dd you may transfer the ISO contents to a USB drive, as sketched below.
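
For example, writing the ISO to a USB stick might look like the following; the ISO filename follows the date-based naming, and /dev/sdX is a placeholder for your USB device, so double-check it as dd will overwrite the target:

dd if=archlinux-2016.11.01-dual.iso of=/dev/sdX bs=4M status=progress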

Hardware Requirements


◉ 32 bit Version >= 256MB
◉ 64 bit Version >= 512MB

When installing Arch Linux in the demonstration we will see that a virtual machine is used, but this could equally be a physical machine on bare metal. The requirements can be very low depending on what you want to do with the system. I will not use a GUI or run many services, so I can get away with very small requirements. As low as 256MB RAM is possible for the 32 bit version, and 512MB is required for the 64 bit version. This is one feature that Arch offers: nothing is added that you do not specifically add in, so there are no spurious services running in the background that you may or may not want.

Installing Arch Linux


Start the VM or the physical box to run the install, making sure that we boot from the ISO or CD. In doing this we boot into the live Arch installation system. The boot menu will allow us to choose to install the 64 bit or 32 bit version; we choose the 64 bit edition in the demonstration. Linux will load and log in automatically as the root user of the live ISO.

Setting the Keyboard Layout


Using the default keyboard layout is going to be fine if you have a US keyboard. Other keyboards may require you to set the layout to match the keyboard that you have. If you are only going to connect via SSH, you may well be able to leave this at the default, and your client layout will have the correct mapping no matter what your Arch server is set to. For example, connecting from a UK SSH client will give you the UK key layout.

The keymaps are stored in sub-directories below the /usr/share/keymaps directory. The UK layout would be /usr/share/keymaps/i386/qwerty/uk.map.gz. To use this layout we can use the command:

loadkeys uk

Check Network


Since 2012 the network should load automatically where WiFi is not required. So if you are using a wired connection, with either a VM or a physical system, you should have networking. Using the command ip we can verify the address settings:

ip addr show

The output will show that you have an IP Address.
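If you also want to confirm DNS resolution and outbound connectivity, a quick ping is a simple check; archlinux.org is just an example target.

ping -c 3 archlinux.org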

Partitioning the Disk


We are now ready to start partitioning the disk. There are many scenarios that we could run here, but we will make use of a swap partition and a single partition for the root filesystem. Initially, though, we can check the disks we have available to the system.

lsblk

We should see the device /dev/sda if we have a standard hard drive available or perhaps /dev/vda if we are using XEN or KVM virtualisation. When partitioning the disk you can use tools like parted, fdisk, cfdisk and sfdisk to manage this. Much depends on the tools you have used before and are comfortable with. We will use fdisk, but tools such as cfdisk provide more of a menu if that is what you prefer. We will create a swap partition and a single partition for the rootfs.

Using fdisk as the partitioning tool:

fdisk /dev/sda
n enter # Create new partition
enter # Create primary
enter # Accept the default start
+256M enter # Set size to 256MB
t enter # Change type of partition
enter # Accept the default of partition 1
82 enter # Set it for swap
n enter # Create second new partition
enter # Create primary
enter # Accept the default start
enter # Accept the default end being the rest of the disk
a enter # Set the bootable flag
enter # Accept that this will be on partition 2, the last used partition
w enter # Save the changes
lsblk #Confirm partitions

Format Partitions


So we now have the sda1 partition, which we will use for swap, and sda2, which we will use for our root filesystem. We make use of the XFS filesystem for the root filesystem; you may choose other filesystems if you prefer.

mkswap /dev/sda1
mkfs.xfs -L ROOT /dev/sda2

Using the blkid command we can confirm that the label is set correctly:

blkid

With the filesystem in place on the partition we will use for root, we can mount that filesystem through to the /mnt directory of the live CD. The swapon command is used to add the swap space to the system.

mount /dev/sda2 /mnt
swapon /dev/sda1
swapon -s # Can be used to display the swap space in use

Installing Packages


As we continue to install Arch Linux we need to add some packages. We will add package groups to make this a little easier; the Arch Linux site has a list of package groups. We add the base package group which, as the name suggests, adds the minimal packages that we require. Adding base-devel will give you tools like sed and gcc. We target the packages to be installed at the mount point where the root filesystem was mounted. As well as the package groups base and base-devel we can add individual packages.

The individual packages we add are listed below:

◉ grub: The GRUB 2 boot loader
◉ vim: Although the basic vi editor is included in the base group vim provides more functionality such as syntax highlighting.
◉ bash-completion: This package allows tab completion on programs to list their sub-commands and options. Really useful for seeing which options are available to which sub-commands
◉ openssh: The OpenSSH Server so that we can connect remotely if required.

pacstrap /mnt base base-devel grub vim bash-completion openssh

This will take a little while to download, expand and install. Aim to leave 20 to 25 minutes or so for this process to complete.

Create a New /etc/fstab File


The /etc/fstab file is used to mount filesystems at boot time and we, of course, need to create this file. Installing Arch Linux in this way we get to see each process, whereas other distributions use an installer that executes many of these tasks for you. Exposing these elements at installation time helps you understand the installation better, even though it may seem a little frustrating if you are new to Linux. We still target the etc directory located in the mount point, as that is where the target root filesystem is located. The option -p excludes pseudo-filesystems from the generated file and the -U option ensures that partition UUIDs are used in favour of partition device names.

genfstab -pU /mnt >> /mnt/etc/fstab
cat !$ # Will display the /mnt/etc/fstab file, !$ is the last argument

Change Root


We have now completed all the tasks that we need from the installation disk itself. We can now change the root directory to /mnt. In this way all commands target our real root filesystem and not the installation disk.

arch-chroot /mnt

Set Root User Password


We can now assign a password to the root user:

passwd root

We will also take this opportunity to create a non-root user account, adding the user to the wheel group, which can be used for administrative purposes. The -m option ensures the user's home directory is created and the -G option adds the user to the wheel group. We can use this membership of the wheel group later to allow this user access to administrative commands via sudo.

useradd -m -G wheel tux
passwd tux

Setting the Hostname and Hosts Entry


We can echo the name that we want to use for our host to the /etc/hostname file. This can be anything from just the short name to the Fully Qualified Domain Name if you want.

echo zeus > /etc/hostname

We normally will have a localhost record for that name, so we can append an entry to the local hosts file. Using >> will append to the file and the -e option with echo allows escape codes to be used; we use \t for a tab.

echo -e "127.0.1.1\tzeus" >> /etc/hosts

Setting the Timezone


Setting the timezone of the system means that we can accurately derive local time from time servers around the world. Time is kept as UTC and the display is adjusted to match the timezone we are located in. We need to create a symlink, /etc/localtime, that points to the correct timezone file. On my system I am setting UK time.

ln -s /usr/share/zoneinfo/Europe/London /etc/localtime
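If you are unsure of the exact zone name to use, the available regions and cities can be listed straight from the zoneinfo directory; Europe here is just the region from my example.

ls /usr/share/zoneinfo
ls /usr/share/zoneinfo/Europe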

We can now make sure that the system time is synchronised back to the hardware clock, with the hardware clock using UTC time. This is normal, as the system time will then add the offset to the Hardware Clock's UTC time.

hwclock --systohc --utc

Setting the Locale


The locale holds regional specific information such as the way that dates are displayed and which numerical separators are used. To set the locale when installing Arch Linux we first edit a template file that lists the locales, uncommenting just the locales that we want to use on the system. Uncomment the locale that you want in the file /etc/locale.gen. In my case I only need en_GB.UTF-8 and this is the only locale that I uncomment. Once edited, we can generate the locale information using the command locale-gen. Then add the default locale to the file /etc/locale.conf; in my case I add the line LANG=en_GB.UTF-8. To ensure that it is in use now we can also export the variable.

vim /etc/locale.gen
locale-gen
echo LANG=en_GB.UTF-8 > /etc/locale.conf
export LANG=en_GB.UTF-8

Generate the InitRAMFS and Install GRUB


To create the RAM disk for the kernel we run the following command:

mkinitcpio -p linux

We can then install the GRUB boot loader and populate the grub.cfg file.

grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg

Enable Services


If we want to continue using DHCP on boot to obtain the IP Address we need to start the DHCP client automatically, and if we want to connect via SSH to the system we need to start the SSH server on boot. Arch Linux is systemd based, so we use systemctl to manage this.

systemctl enable sshd dhcpcd

Reboot the System


We are almost at the end of installing Arch Linux now. We first have to exit from the chroot jail we entered, back to the ISO system. We then shut down so we can remove the CD or ISO file before rebooting.

exit
shutdown -h now

If it is a virtual machine we can disconnect the ISO file before rebooting.

Final Configuration


With Arch now installed we can restart the system and log in as the root user. Once logged in we can ensure that the correct keymap is set to load on boot.

localectl set-keymap uk

We can also set the default locale:

localectl set-locale LANG=en_GB.UTF-8

To allow the user we created to use sudo to run commands (this user was added to the wheel group), we run the command visudo and uncomment the entry for the wheel group.

visudo
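For reference, the entry to uncomment typically looks like the following once enabled; the exact comment text above it may vary between sudo versions.

## Uncomment to allow members of group wheel to execute any command
%wheel ALL=(ALL) ALL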

The following video will step you through the complete installation process and is worth watching in full at around 30 minutes.


Saturday 11 November 2017

PXELinux using Proxy DHCP

In this blog we look at PXELinux using Proxy DHCP. PXELinux is a network boot loader and, combined with DHCP and TFTP services, can be used as a replacement for boot CDs or USB drives. Devices boot from the network and the PXELinux server provides the bootstrap files. Often this is used to deploy new installations of Linux when a system boots. The PXELinux server will often use its own DHCP server, but frequently you have an existing DHCP server on the network and the PXELinux server then just needs to send a few extra DHCP options. This is achieved by setting up PXELinux using Proxy DHCP. For the demonstration we are using Ubuntu 16.04 Server.

Install Required Packages for PXELinux using Proxy DHCP


We will install the package dnsmasq as this provides DNS, DHCP, DHCP Proxy and TFTP services from a single package and a single service. This is very much designed with PXE booting in mind, as we want DHCP and TFTP, or, as we will use here, TFTP with Proxy DHCP. Along with this we want the package pxelinux and its sister package syslinux. Pxelinux provides network booting while syslinux provides boot mechanisms for hard disks, ISO filesystems and USB drives; the syslinux package also provides a lot of the shared files that we need for booting to any medium.


$ sudo apt-get update
$ sudo apt-get install pxelinux syslinux dnsmasq

By default the dnsmasq service will be running and is configured as a DNS server. We do not need the DNS server, so we will disable this later.

Create the DNSMASQ Configuration


As our first step we will rename the dnsmasq configuration file, /etc/dnsmasq.conf:

$ sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig

We can then use the editor of choice to create a new configuration:

$ sudo vim /etc/dnsmasq.conf

port=0
log-dhcp
dhcp-range=192.168.56.0,proxy
dhcp-boot=pxelinux.0
pxe-service=x86PC,'Network Boot',pxelinux
enable-tftp
tftp-root=/tftpboot

Make sure that you set up the correct IP address range for the network that you want Proxy DHCP to work with. You must have an interface configured on this network range.

◉ port=0 : Disables the DNS service
◉ log-dhcp : Log DHCP traffic
◉ dhcp-range=192.168.56.0,proxy : The network range that we want to listen for DHCP requests on. The proxy option ensures we only send additional DHCP options and not an IP address and mask. This is used so we can interoperate with an existing DHCP server on the network
◉ dhcp-boot=pxelinux.0 : Sets the DHCP option for the boot filename used as the network bootstrap file
◉ pxe-service=x86PC,'Network Boot',pxelinux : Here we set the second DHCP option we deliver to DHCP clients and specify that this is for our BIOS based systems, x86PC, a boot message and the name of the bootstrap file omitting the .0 from the end of the name
◉ enable-tftp : We need the TFTP server to deliver files after the bootstrap file has been delivered by PXELinux using Proxy DHCP
◉ tftp-root=/tftpboot : We set the path to the root directory that will be used by the TFTP server

Fix the /etc/resolv.conf


When dnsmasq was installed, the resolv.conf was pointed at localhost for DNS name resolution. This would be fine if we left the DNS server running, but we have disabled it with the port=0 setting in dnsmasq.conf. To ensure that we do not depend on the local DNS service, we must configure dnsmasq to ignore the local interface. This is set in the file /etc/default/dnsmasq, where we need to add a line:

$ sudo vim /etc/default/dnsmasq

# Add this as the last line
DNSMASQ_EXCEPT=lo

Create the TFTP Root

We can create the TFTP Server root directory and a subdirectory that we will need:

$ sudo mkdir -p /tftpboot/pxelinux.cfg

We can now restart the services. Restarting the networking service will ensure that the resolv.conf is rewritten as well:

$ sudo systemctl restart dnsmasq.service networking.service
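To confirm that the proxy DHCP and TFTP listeners are up, something like the following should show dnsmasq bound to UDP ports 67 (DHCP) and 69 (TFTP).

$ sudo ss -ulpn | grep dnsmasq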

Populate the TFTP Root


We now need to make sure that the bootstrap file the DHCP options refer to is present. We will also need some other files from the syslinux package. We will add these all to the /tftpboot directory we recently created.

sudo cp /usr/lib/PXELINUX/pxelinux.0 /tftpboot/
sudo cp /usr/lib/syslinux/modules/bios/{menu,ldlinux,libmenu,libutil}.c32 /tftpboot/
ls -l /tftpboot/
total 240
-rw-r--r-- 1 root root 116492 Oct 29 13:15 ldlinux.c32
-rw-r--r-- 1 root root  24196 Oct 29 13:15 libmenu.c32
-rw-r--r-- 1 root root  23700 Oct 29 13:15 libutil.c32
-rw-r--r-- 1 root root  26208 Oct 29 13:15 menu.c32
-rw-r--r-- 1 root root  42788 Oct 29 13:14 pxelinux.0
drwxr-xr-x 2 root root   4096 Oct 29 13:18 pxelinux.cfg

Create the PXELinux Configuration


When using PXELinux with Proxy DHCP the boot process will look for configurations matching the client's MAC address or its IP address. If a specific file is not found then it falls back to the default configuration. We will use the default configuration for all clients at this stage and create the configuration file /tftpboot/pxelinux.cfg/default

$ sudo vim /tftpboot/pxelinux.cfg/default

default menu.c32
prompt 0
menu title Boot Menu
  label localboot
    menu label Boot Local Disk
    localboot 0

We load the menu program first and display the title. We have just one menu item, which boots to the local disk. There will be more on installing Linux with these menus in another blog.
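Before booting a client, you can sanity check the TFTP service from another machine on the network. This assumes the tftp-hpa client is installed there and, purely for illustration, that the server answers on 192.168.56.1; adjust for your own addressing.

$ tftp 192.168.56.1 -c get pxelinux.0
$ ls -l pxelinux.0 # The bootstrap file should now be in the current directory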

All we need to do now is boot a device from the network and test that network booting is working for that client. The video shows the process from start to finish.