Friday 29 December 2017

LPIC-3 Host Configuration Management Part 1

Weight: 2
Description: Candidates should be familiar with the use of RCS and Puppet for host configuration management.

Key Knowledge Areas


◈ RCS
◈ Puppet

Terms and Utilities


◈ RCS
◈ ci/co
◈ rcsdiff
◈ puppet
◈ puppetd
◈ puppetmasterd
◈ /etc/puppet/

Configuring Puppet Master


Centralized server management can be achieved on your Linux server with products such as the long-established Puppet project. The Puppet server is rather aptly named the Puppet Master, and this acts as a central configuration server that can be used to keep configuration files maintained across your server estate and to ensure services are installed and running. Along with Puppet we look at RCS, version control software that allows you to check out and check in documents, providing both version control and effective access control where many administrators may update scripts.

What is Puppet


Puppet is an open source framework based on Ruby for managing the configuration of computer systems. Puppet is licensed under GPLv2 and can be used in a standalone or client-server model. We will use both models in the tutorials: the first video works with the Puppet Master (server) applying local policies, and the second extends this to bring in more clients. To see information relating to the puppet master package:
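
One quick way to do this, as a sketch using the package names we install later in this tutorial:

apt-cache show puppetmaster      # Ubuntu / Debian
zypper info puppet-server        # openSUSE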

Default host “puppet”


The default configuration of the clients, or puppet agents, will look for the puppet server or puppet master as the host puppet or puppet.yourdomain.com. It is therefore easiest to ensure that the host that will act as the puppet master is configured in DNS or hosts entries as puppet. In the example the puppet master will be configured on the host 192.168.0.200, so the clients will have host file entries:

192.168.0.200     puppet

The lab does not use DNS.

Install the package puppetmaster


The central puppet server is known as the puppetmaster and should be the host with the entry puppet in the hosts file or DNS. To install the puppetmaster package on Ubuntu:

apt-get update
apt-get install puppetmaster

To install on SUSE (we use openSUSE in the video):

zypper in puppet-server


With this in place we are ready to configure the server, and in the first video we will use this as a standalone deployment, bringing in clients in the second video. First we will check the resolution of the puppet master. The file /etc/sysconfig/puppet defines the hostname that the client expects the puppet master to have; it defaults to puppet. I have hostname records pointing to this machine as puppet. In your own environment you may use local host entries or a CNAME entry in DNS.
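
For example, a BIND-style CNAME record for the alias could look like the following sketch (server1.example.com is a hypothetical name for your real puppet master host):

; server1.example.com is a placeholder for your actual server
puppet    IN    CNAME    server1.example.com.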


Correct Permissions


On some distributions this is not required, but on others such as SUSE the directory permissions need to be set correctly for the puppet master to work. The directory /var/lib/puppet is owned by root and should be owned by the account that puppet uses: puppet. The following command will correct the issue; don't forget to double-check the user and group names on your system in the /etc/passwd and /etc/group files. The following command is correct on the openSUSE system.

chown puppet:puppet /var/lib/puppet


We can now start puppetmasterd; on SUSE we can use the symlink:

rcpuppetmasterd start

On other systems:

service puppetmasterd start

This will populate the directory /var/lib/puppet and will also create the /etc/puppet/manifests subdirectory if it does not exist.

Configure the Puppet File Server


Some distributions will have /etc/puppet/fileserver.conf already created and it will only need to be modified. In SUSE you will need to create and populate the file. This is the default file that defines the puppet master's fileserver configuration; you can specify a different file using the puppetmasterd --fsconfig flag.

As the name suggests, it allows the puppet server to share files to clients. This way configuration files can be kept up to date on client machines. In the example we will distribute the /etc/motd file.

In the following /etc/puppet/fileserver.conf we have defined a share simply called files and have the path pointing through to /etc/puppet/files; if you are thinking of sharing more files then perhaps /var/puppet/files may be more appropriate than the /etc directory.

[files]
 path /etc/puppet/files
 allow *


The path directive may also take variables such as:

◈ %h : To represent the client’s host name
◈ %H : To represent the client’s FQDN
◈ %d : to represent the client’s domain name

Which can subsequently allow for more granularity when distributing files.
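
As a brief sketch, a per-host share using the %h variable could be defined like this (the mount name and directory layout here are only examples):

# each client sees files from a directory named after its own host name
[private]
 path /etc/puppet/files/%h
 allow *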

For security we can add allow and deny directives and specify hostnames or IP addresses as required. Here we allow all devices.

To deploy a file from this share we must add the file to the share and restart the puppetmasterd. We are distributing a standard motd file so we will create the directory:

mkdir /etc/puppet/files

We then create the file to distribute with:

echo "Only authorized access is allowed" > /etc/puppet/files/motd

Now we can restart the service with the following command on SUSE. We will then be ready to create the first puppet manifest.

rcpuppetmasterd restart

Define the site manifest


Normally we would need to create the file /etc/puppet/manifests/site.pp. This file is the manifest or policy that all clients will look for when connecting to the Puppet Master. As we are first applying only local manifests, we can create files with names of our choice. We will create three files to demonstrate deployment of files, control of services, and finally a manifest to ensure a package is installed. These could all be in a single manifest, but for the demonstration in the video it will be shown as three files.

For ease of page space we will create a single file, /etc/puppet/manifests/test.pp. The .pp extension is the normal suffix for manifests, and these are just text files, so using your text editor of choice is just fine.

package {'nmap': ensure => installed }
service { 'sshd': ensure => running, enable => true, }
file { '/etc/motd': source => 'puppet:///files/motd', }

In the file we create three instructions:

◈ package
◈ service
◈ file

Each instruction has a name and attributes to look for. With the package we are just looking to install nmap if it is not installed. We do not mind how it is installed, but it just needs to exist in a repository. This could easily be Solaris, SUSE, Red Hat or Debian, as we do not concern ourselves with how it is installed.

The same goes for the service: we just want to make sure that it is running and is enabled for auto start. How it is started does not matter, whether with the service command or rc symlinks, nor are we concerned with the chkconfig command and the switches used. This is OS specific and not a concern for our instruction. In this way we keep the manifest as portable as possible across platforms.

The file instruction then ensures the centralized motd file from the puppet server is used. We have only set the source attribute, but we could also add the owner, group, and mode if we needed, as sketched below.
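
As a sketch, the same file resource with ownership and permissions added might look like this (the values shown are just examples):

file { '/etc/motd':
  source => 'puppet:///files/motd',
  owner  => 'root',   # example owner
  group  => 'root',   # example group
  mode   => '0644',   # example permissions
}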

Manually applying the manifest


To test the manifest we can use the apply option on the client. This is used only in standalone configurations and is useful for testing. As we have not added any clients yet, this is a great way to prove the manifest is working.

puppet apply /etc/puppet/manifests/test.pp

This command will check the named manifest and the associated tasks; we then should see the file being updated, the service sshd will run and nmap will be installed. Certainly this shows how powerful Puppet can be and will give you some ideas of centralized management and machine configuration for your environment. In the next video we will add in more clients to test the full potential of puppet.
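
If you want to preview what would change without applying anything, puppet apply also supports a dry run; for example:

# report what would be done without making changes
puppet apply --noop /etc/puppet/manifests/test.pp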


Wednesday 27 December 2017

Linux Essentials - What is the Certification


Before Linux Essentials


Over the last few years, I have been writing many free video tutorials for learning Linux. In the main, they have been aimed at existing system administrators who may be new to Linux but versed in IT management tasks. Linux, though, is not just for seasoned administrators and not only for administrators. There are many people who can benefit from what the Operating System has to offer, both system administration folk and those in a purely user-based role. The LPI has introduced the Linux Essentials certification, aiming it at young people through schools and colleges, at academics, and at those just wanting to make a start in Linux.

Making Linux Certification Open to All


Certainly, having the mindset to involve young people and their teachers will help. The more secondary and college tutors that have an understanding of Linux, the more Linux will naturally be taught within schools. This is important when you consider that Linux is FREE, and the OS and applications used in schools should not matter. It is the cost and the freedom to manage the software the way you want that are important. We want a generation growing up with an understanding of how the software works and how collaboration empowers.

Free Training


During this course, we will follow the key objectives of the exam. This will involve investigating Linux from both simple administration and usage perspectives. The Linux Essentials exam is a recommendation, not a required pre-requisite, for training in the LPIC professional program. Exams are delivered in schools and training centres around the world. To locate the centre nearest you, please contact your local LPI Affiliate.

In order to pass the exam and gain the Certificate of Achievement in Linux Essentials, you should be able to demonstrate:

◉ Understanding of the basic concepts of processes, programs and the components of an Operating System.
◉ Having a basic knowledge of computer hardware.
◉ A knowledge of Open Source Applications such as OpenOffice in the Workplace as they relate to Closed Source or proprietary equivalents from other software vendors.
◉ An understanding of navigation systems on a Linux Desktop and what tools can be used to locate help.
◉ Basic skills of using a command line interface.
◉ The skills needed to use basic functions in command line text editors such as vi or nano.

Wednesday 20 December 2017

Setup a TeamSpeak 3 Server on Linux (Ubuntu / Debian)

TeamSpeak 3 is a heavily used solution (if not the most used one) for low latency voice chat while gaming. If you use Skype, for example, the delay and the traffic between the people talking will be much higher, besides the Skype client being far more bloated than TeamSpeak. Besides TeamSpeak 3 there are other gaming-focused low latency solutions such as Discord (which uses central servers without the possibility to set up your own instance) and Mumble.

However, this tutorial is about how to set up a TeamSpeak 3 server on your Linux box. Thanks to the TeamSpeak 3 developers, this process is rather easy and you should have a running TeamSpeak 3 server within minutes. So, let's start.

Install requirements


The TeamSpeak 3 Server doesn't really need any extra libraries in order to work. With a fresh Debian 9 setup, for example, it starts without any additional libraries. However, to download and extract the server software we need some additional tools, in this case a download manager (wget) and the utility to extract the compressed server software (bzip2). With the following commands you will install these needed utilities. In this case we use Debian/Ubuntu's package manager APT:

user@server:~$ sudo apt-get update
user@server:~$ sudo apt-get install wget bzip2

Now that all the needed utilities are on board, let’s move forward and install the server software itself.

Download and install the TeamSpeak 3 Server


TeamSpeak 3 is a proprietary software solution. Due to this fact you will not be able to install it from the repositories of your Linux distribution. This means you have to download the latest TeamSpeak 3 Server software from the developers' homepage onto your server. As of writing this tutorial the latest TeamSpeak 3 Server version was 3.0.13.8. Whenever you go through this tutorial, your version number may be a newer one. The following command downloads version 3.0.13.8 to your server:

user@server:~$ wget http://dl.4players.de/ts/releases/3.0.13.8/teamspeak3-server_linux_amd64-3.0.13.8.tar.bz2

After the download is finished (which can take some time depending on your network speed), we can extract the downloaded server software. The following command does this:

user@server:~$ tar xfvj teamspeak3-server_linux_amd64-3.0.13.8.tar.bz2

Now it’s time to start the server for the first time.

Starting the TeamSpeak 3 Server


Now that we've downloaded and extracted the server software, we can start it. To do so, we have to change into the TeamSpeak server directory (which was automatically created when extracting the server software) and issue the command to start the server:

user@server:~$ cd teamspeak3-server_linux_amd64
user@server:~/teamspeak3-server_linux_amd64$ ./ts3server_startscript.sh start

The first start takes some time, approximately 1-3 minutes. After the first start is finished, you will get an output like this:

------------------------------------------------------------------
 I M P O R T A N T
------------------------------------------------------------------
 Server Query Admin Account created
 loginname= "serveradmin", password= "BVV2YUIJ"
------------------------------------------------------------------


------------------------------------------------------------------
 I M P O R T A N T
------------------------------------------------------------------
 ServerAdmin privilege key created, please use it to gain
 serveradmin rights for your virtualserver. please
 also check the doc/privilegekey_guide.txt for details.

token=zvCYfUTRlcYl12dviAMuGKR7e+nQYKSE0bD9O4CI
------------------------------------------------------------------

Important: You should write down the server query admin account on a piece of paper, or save this information in a password database. This account is needed in emergency cases, like lost TeamSpeak user data or hacking attempts.

In this case we only need the privilege key for now. Store the line starting with token= in a text file. We need this token later on.

To verify that your server is running correctly, you can issue the following command:

user@server:~/teamspeak3-server_linux_amd64$ ./ts3server_startscript.sh status
Server is running

If the output Server is running greets you, it's time to connect to your new server.

Connect to your server and give yourself admin rights


At this point I assume that you've already installed the TeamSpeak 3 client onto your computer. If you haven't, you should download it. If you're a Linux user, you have to download the TeamSpeak 3 client through the link. You will not find the TeamSpeak 3 client in the distribution repositories for the same reason you will not find the TeamSpeak 3 server software there.

To connect to your server, start the TeamSpeak 3 client and click on Connections –> Connect or use the hotkey CTRL+S. In the upcoming dialog, enter the IP address or name of your server and pick a nickname which you want to use on that server and hit the Connect button.

Connection dialog

The client recognizes that the server has just been set up and pops up another dialog asking for a so-called Privilege Key. This Privilege Key is the generated token we saved a few steps before in a text file. Open the text file (if not already open), copy everything after token= and insert this key into the dialog box like this:

TeamSpeak privilege key

After you've used the privilege key you can delete the text file; a privilege key is for one-time use only. You should now see a new symbol beside your nickname which indicates that you're an Administrator. From now on, you should be able to create channels and server groups, edit the server's name and so on.

Indicator that you’re an Admin (click to enlarge)

After this step your TeamSpeak 3 server is fully set up. You can now close the SSH connection to your server, start to share your server's address with your friends and start talking.

Useful tips


While the TeamSpeak 3 software is generally rock solid, you should take care that your server is always up to date. To update the TeamSpeak 3 server software, go to the official homepage, download the newest version (as you did before in this tutorial with wget) and extract it. All files except the database files will be overwritten. This ensures that you don't have to start all over again when you do an update. However, you have to stop the TeamSpeak 3 server before you update it. You can do this easily like this:

user@server:~$ cd teamspeak3-server_linux_amd64
user@server:~/teamspeak3-server_linux_amd64$ ./ts3server_startscript.sh stop

After you’ve extracted the updated server files you can start the server again:

user@server:~/teamspeak3-server_linux_amd64$ ./ts3server_startscript.sh start

Please also be aware that you should use a firewall or packet filter solution such as iptables. A server with the latest security patches is good, but a firewall will always increase security.
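
As a rough sketch, assuming the default TeamSpeak 3 ports (9987/UDP for voice, 10011/TCP for ServerQuery and 30033/TCP for file transfers), iptables rules allowing TeamSpeak traffic could look like this:

# allow voice, ServerQuery and file transfer traffic (default TS3 ports assumed)
iptables -A INPUT -p udp --dport 9987 -j ACCEPT
iptables -A INPUT -p tcp --dport 10011 -j ACCEPT
iptables -A INPUT -p tcp --dport 30033 -j ACCEPT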

Final words


In times where almost everything becomes more and more centralized, I feel that a solution like TeamSpeak 3 is really needed. I know there are other solutions like Mumble, which has the additional benefit of being open source; however, we can't have enough decentralized solutions if you ask me.

Sunday 17 December 2017

Proxy over SSH on Windows, Mac or Linux

SSH: A tool not only to do remote work


SSH (Secure Shell) is mostly used to do maintenance on your Linux machines. However, over the years the capabilities of SSH have been extended from a simple secure "remote maintenance protocol" to a utility capable of things like X forwarding (for forwarding graphical applications), port forwarding, and providing a SOCKS proxy.

Why would you even want to use a proxy server?


Proxy servers are helpful in a lot of ways. For example, if you're staying some nights in a hotel, or you're in any other public wireless LAN which blocks a specific website you want to visit, a proxy will help you bypass the filter. Or if you are forced to use techniques like DS-Lite, where you have to share a single IPv4 address with other users. Or to unblock videos on Netflix which are blocked in your country. You see, the situations where a proxy server helps you are almost countless.

But why would you want to set up a proxy server on your own? The simple answer is that a lot of the public proxy servers are simply overloaded. They have to handle so much traffic that you will sometimes barely get 50% of your normal internet speed while using one of them. Besides this, using SSH as a proxy is really easy.

How to start a SOCKS proxy server using SSH


In order to establish an SSH connection to your server, which will then act as a SOCKS proxy, you have to have the SSH server installed on the server side and the client software on the client side, of course.

Using SSH as a proxy on Linux or Mac

For Linux or Mac you can use the ssh client command which is integrated in both systems. The following command would start an SSH connection where your SOCKS proxy would then be locally reachable on port 19999 (19999 is just a suggestion and can be changed to almost anything from 1024 to 49151, the so-called "user ports"):

user@client:~$ ssh -D 19999 user@server

After the connection has been successfully established, configure your browser to use the proxy server (follow the instructions below).
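
If you use the proxy regularly, the same forwarding can be kept in ~/.ssh/config so that a plain ssh myproxy starts it; a minimal sketch (the host alias, server name and user are placeholders):

# ~/.ssh/config - placeholder host alias, server and user
Host myproxy
    HostName server.example.com
    User user
    DynamicForward 19999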

Using SSH as a proxy on Windows

Windows doesn't come with an integrated SSH command. This means we need additional software in order to get connected and use the SSH server as a proxy. My recommendation here is PuTTY. PuTTY is a lightweight SSH client for Windows, which is the counterpart of the ssh command on Linux/Mac. You can download it here. After the download is finished, start PuTTY and enter the server you want to connect to like this:

Hostname you want to connect to

Navigate to Connection –> SSH –> Tunnels and enter the port 19999 in the Source port field (19999 is just a suggestion and can be almost anything from 1024 to 49151, the so-called "user ports"). After you've entered the desired port number, ensure that you've selected Dynamic instead of Local:

Settings to tell SSH to create a SOCKS proxy

Click on the Add button in order to tell PuTTY to actually use the given information for the next connection. After clicking Add, you should see the port number you have chosen with the letter D in the upper box. If you've done this as well, you're ready to connect to your server. After the connection is successfully established, go on and configure your browser (follow the instructions below).

Configure Firefox / Google Chrome to use the SOCKS proxy

Now that we’ve connected successfully to our server via SSH, we can actually use the SOCKS proxy which has been provided with the actual SSH connection.

Configuring Firefox to use the SOCKS proxy

Click on the upper right options Symbol (represented as three horizontal lines) and click on Preferences. On the upcoming window, select General and scroll down until you see the context Network proxy. Click on Settings and enter your SOCKS proxy details like this:

Firefox proxy settings

Ensure that you’ve checked the box Use this proxy server for all protocols. After you’ve clicked on OK you’re ready to go. Use portals like BearsMyIp to check if you’re actually surfing through your SSH SOCKS proxy tunnel.
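
Alternatively, you can check from the command line which public address the proxy presents; a sketch using curl (ifconfig.me is just one of several such services):

# request the page through the local SOCKS proxy and print the visible IP
curl --socks5-hostname localhost:19999 https://ifconfig.me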

Configuring Google Chrome (or Chromium) to use the SOCKS proxy

For Google's Chrome browser you have to use the command line in order to set your SOCKS proxy. This includes Windows users as well. To start Google Chrome using your SSH SOCKS proxy, start the browser like this:

google-chrome --proxy-server="socks5://localhost:19999"

The Windows command line may look like this:

chrome.exe --proxy-server="socks5://localhost:19999"

Of course you can change google-chrome to chromium if you're a Chromium user instead.

Final words


A proxy server does have its advantages. However, public proxies are sometimes overloaded, and you will notice that as a significant slowdown of your internet connection when you start using them. As an alternative you can use SSH as a simple and fast way to give yourself a SOCKS proxy. Using SSH as a SOCKS proxy is a lot easier than configuring Apache with Squid, for example. If you have a server and you need a proxy, I highly recommend using SSH in order to get a safe, fast and stable proxy server with a single command or a few clicks.

Friday 15 December 2017

Exploring DevOps Tools - How to Choose the Tools Right for You


The popularity of the DevOps movement has resulted in a wide range of tools in the marketplace; the XebiaLabs DevOps Tool Chest alone lists over 200 different individual tools. And while DevOps is about more than just which tools you use, they are essential to benefiting from the improved speed, agility, and automation that DevOps offers.

To choose the DevOps tools that are most appropriate for you, your projects, and your organization, it makes sense to begin with exploring how they are categorized.

How DevOps Tools are Categorized


The types of tools emerge directly from the activities required to deliver software to users via the Continuous Delivery (CD) pipeline model. Each stage in the CD model corresponds to an activity in the software development lifecycle which moves software from development towards production.

Every software team’s CD pipeline – or toolchain – is a mirror of their software development processes, which means there are many possible configurations. DevOps principles involve collaboratively delivering high-quality software, and that means that tools naturally fall into more than one category because they are used throughout DevOps teams. Having said that, there are tool types that are common to all pipelines. These are: build tools, test tools, deploy tools, and monitor tools.

DevOps ‘Build’ Tools

‘Build’ tools assist in the creation of the software, and they make up the beginning stages of all pipelines. Included under this category are tools for pulling source code from Version Control Systems (VCS), handling build-time dependencies among components, and building entire software products. Such tools automatically send reports if any errors are encountered and prevent software changes from moving down the pipeline. It is, by far, the largest category of DevOps tools.

DevOps ‘Test’ Tools

To ensure quality, automated testing is a vital stage of the CD pipeline. These tools test whether or not software features work as expected, previous software ‘bugs’ have reappeared (through regression testing), and check that performance is maintained. Failing tests should prevent software from reaching further stages, but the severity of the test failure is taken into account.

DevOps ‘Deploy’ Tools

Once code changes have passed all the quality checks from testing, they are packaged and tagged for release, and deployed to the production environment. This stage incorporates all tasks required to configure and provision the target environment, and install the software on the machines.

Deployment tools are increasingly working directly through cloud services to automatically create and configure virtual machines. The steps for creating the environment are increasingly written as code, giving rise to the term Infrastructure as Code.

DevOps ‘Monitor’ Tools

Once the latest code is running in the production environment, its operation needs to be monitored for signs of bugs, performance issues, and anything negatively impacting the user experience. Issues appear when users are engaging with the software, and therefore it is important to capture information through logging, alerting, and performance monitoring for analysis. DevOps monitor tools capture this data.

How to Choose DevOps Tools


With these categorizations in mind, here are the items you should consider when reviewing, evaluating, and choosing DevOps tools that will be right for you, your projects, and your organization:

Common Considerations (across all DevOps tools)
  • Track record of the tool working across different projects of various sizes and complexity
  • Time and/or cost involved in getting team members up-and-running on the tool – taking into account project deadlines and budgets
  • The expected return on investment, cost-savings, or cost-recovery expected from the tool.
  • The ability of the tool to integrate seamlessly with other tools along the Continuous Delivery Pipeline
  • The tool’s ability to keep project/client data secure (i.e within project groups or the organization)

Category-Specific Considerations


For ‘build’ tools – consider the programming language and runtime environment of your software product.

For ‘test’ tools – consider the scale and type(s) of testing you are conducting, e.g. functional testing, performance testing, accessibility testing, data testing, security testing, etc.

For ‘deployment’ tools – consider the reliability you need and whether a master-client or decentralized model would meet the requirements of your production environment.

For ‘monitoring’ tools – consider the degree to which they support your software architecture and their scalability.

Knowing the categories of DevOps tools and key considerations should help you optimize your DevOps processes and its outcomes.

Wednesday 13 December 2017

How OpenBSD and Linux Mitigate Security Bugs

At Open Source Summit in Prague, Giovanni Bechis will discuss tools that improve software security by blocking unwanted syscalls.

Bechis is CEO and DevOps engineer at SNB s.r.l., a hosting provider that develops web applications based on Linux/BSD operating systems and is mainly focused on integrating web applications with legacy software. In this interview, Bechis explained more about his approach to software security.

Linux.com: What’s the focus of your talk?


The talk will focus on two similar solutions implemented in the Linux and OpenBSD kernels, designed to prevent programs from calling syscalls they should not call, in order to improve software security.

In both kernels (Linux and OpenBSD), unwanted syscalls can be blocked and the offending program terminated, but there are some differences between Linux and OpenBSD’s solution of the problem.

During my talk, I will analyze the differences between two similar techniques present in the Linux and OpenBSD kernels that are used to mitigate security bugs (bugs that could be used to attack software and escalate privileges on a machine).

Linux.com: Who should attend?


The scope of the talk is to teach developers how they can develop better and more secure software by adding just a few lines to their code. The target audience is mainly developers interested in securing applications.

Linux.com: Can you please explain both solutions and what problems they actually solve?


The main problem that these solutions are trying to solve is that bugs can be exploited to let software do something that it is not designed to do. For example, with some crafty parameters or some crafty TCP/IP packet, it could be possible to let a program read a password file it should not read, or delete some files that it should not delete.

This is more dangerous if the program is running as root instead of a dedicated user because it will have access to all files of the machine if proper security techniques have not been applied.

With these solutions, if a program tries to do something it is not designed for, it will be killed by the kernel and the execution of the program will terminate.

To do that, the source code of the program should be modified with some “more or less” simple lines of code that will “describe” which system calls the program is allowed to request.

A system call is the programmatic way in which a computer program requests a service from the kernel of the operating system it is executed on. By allowing only a subset of the system calls, we can mitigate security bugs.

Last year, for example, memcached, a popular application designed to speed up dynamic web applications, suffered from a remote code execution bug that could be exploited to remotely run arbitrary code on the targeted system, thereby compromising the many websites that expose memcached servers accessible over the Internet.

With a solution like seccomp(2) or pledge(2), a similar bug could be mitigated, the remote code would never be executed, and the memcached process would be terminated.

Linux.com: What’s the main difference between the two solutions?


The main difference (at least the most visible one without looking under the hood) between the Linux and OpenBSD implementations is that, with Linux seccomp(2), you can instruct the program in a very granular way and create very complex policies, while with OpenBSD's pledge(2) the permitted syscalls have been grouped, so policies will be simpler.

On the other hand, using seccomp(2) in Linux could be difficult, while OpenBSD pledge(2) is far easier to use.

On both operating systems, every program should be studied in order to decide which system calls the application could use, and there are some facilities that can help understand how a program is operating, what it is doing, and which operations it should be allowed to do.
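
On Linux, one such facility is strace; as a sketch (the program name is just a placeholder), a per-syscall summary can be produced like this:

# count and summarize the system calls made by the program and its children
strace -c -f ./myprogram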

Saturday 9 December 2017

Installing the Perl Interpreter on Windows and Customize Notepad++

The Perl Interpreter on Windows


Of course, if you are learning Perl on Linux it is usually included in the OS and does not need to be added by yourself, but if you would like to try it on Windows then you are going to need to install the program. You have a choice of Strawberry Perl and ActivePerl; this is a personal choice and should not make a whole heap of difference. In the demonstration we install Strawberry Perl and configure Notepad++ so that it can run Perl scripts from inside the editor.



The installation program can be downloaded from strawberryperl.com and, once downloaded, the MSI installer is simple to run. Once installed, from the command prompt we can type the command:

perl -v

This verifies the program works and displays the version of Perl. We have installed version 5.20.1.

To run a Perl program we can execute it directly from the command line where the code is not long:

perl -e "print 'Hello';"

This will print the word Hello to the screen. More complex code will be written and saved into a script with a text editor and then run similar to this:

perl c:\temp\test.pl
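
As a tiny illustration (the contents are just a placeholder), c:\temp\test.pl might contain:

#!/usr/bin/perl
# minimal example script - contents are only a placeholder
use strict;
use warnings;

print "Hello from a Perl script\n";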

However, if we choose to use the popular and free Notepad++ text editor we can add a plugin that will allow us to run Perl code from within the editor itself.

First we need to start Notepad++ and, from the Plugins menu, select Plugin Manager. From there we need to install the NppExec plugin. Once Notepad++ has restarted we can return to the Plugins menu and this time select NppExec > Execute and add a script:

NPP_SAVE
cd "$(CURRENT_DIRECTORY)"
C:\strawberryperl\bin\perl "$(FILE_NAME)"

Save this and name it Run Perl.

From Plugins > NppExec > Advanced Options we can choose to add this script to the Macro menu.

Once restarted we can then create a Perl script in the editor and run it directly without the need for the command line.

Wednesday 6 December 2017

LPIC-1 Using the command type


For objective 104.7 of the LPIC-1 101 exam there are a few commands to look at, one of which is type. The command type itself is a shell builtin; in other words, the program is part of the BASH shell and not a stand-alone program. Try the following command:

$ type type

In the above example we are using the command type with an argument of type. I know it looks strange but it is an introduction to the command. We should be able to see the output similar to the following:

type is a shell builtin

Now that we have been able to determine the type of command it is, we can learn other information about the command. Being a shell builtin, there will not be a man page; we will need to use man bash and search for type within that page. There is no explicit help page for type, but since --help is an invalid option, running type --help will display the usage statement.

As we saw, the command type is a shell builtin; however, it could be one of the following five types:

1. Alias
2. Shell Builtin
3. Keyword
4. Function
5. Command File

If we check on type ls, most often this will report as an alias. Trying the command type case should return a shell keyword. If we unalias ls and then try type ls again, we should see that it is hashed to a file.
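
A quick sketch of that sequence (the exact output wording may vary slightly between bash versions):

$ unalias ls
$ ls            # running it once lets bash hash the path
$ type ls
ls is hashed (/bin/ls)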

Let's start to look at some of the useful options. We can use type to show all types for a given command; for example, with ls aliased as normal we can use type -a ls and the output will show that it is found as an alias first and then as the file:

$ type -a ls
ls is aliased to 'ls --color=auto'
ls is /bin/ls

The use of type -t will just show the single-word type of a command:

$ type -t if
keyword