Thursday, 30 September 2021

The Many Meanings of Linux, Part 2 of 2


The first part of this article explored the astonishing impact of Linux: providing the foundation for an entirely free computer system, proving the viability of free and open source software, triggering a move toward open standards and open source software, and bringing modern software within the reach of millions more people. This part concludes the article by covering the following trends:

Read More: 010-160: LPI Linux Essentials (Linux Essentials 010)

◉ Restructuring the computer industry through virtualization, containers, and the cloud

◉ Sparking interest in the newly recognized phenomenon of crowdsourcing

◉ Accelerating development of new, low-cost hardware platforms

Restructuring the Computer Industry through Virtualization, Containers, and the Cloud

No one nowadays can escape cloud computing, a vague term but a useful one. The cloud is everywhere, whether you upload your photos and videos to an online service or run a business on virtual systems in the cloud. There are cloud services based on other cloud services, like the proverbial turtles that stand on other turtles. It seems like the easy thing for any company running a service is to sign up with AWS or Microsoft Azure.

Virtualization, represented by companies such as VMware, offered a proposition decades ago that corporate data center administrators couldn't refuse: a more efficient use of hardware with significant cost savings. Then AWS launched the modern cloud era. Now containers have an even bigger impact, driving a re-assessment of what constitutes a computer application.

Besides their quest for efficiency and robust, scalable computing, the thing that draws all these trends together is their reliance on free software, notably Linux. The choice is totally practical. Who wants to count up instances of running systems and pay license fees when the systems can start up in seconds, disappear, and be replaced? Proprietary systems have defined special licenses for such situations, but the obvious choice is just to go with Linux, Xen, Kubernetes, and other free software with no baggage.

Linux is the most popular choice in data centers for another reason: it's customizable to a degree that's unimaginable for most operating systems. Administrators build in just the features and libraries they need for their data center. The kernel is draped in obscure configuration options meant for perfecting the use of the system for particular situations—options craved by administrators but meaningless to an end user, such as an option that chooses the queueing discipline for sending network packets. If you don't understand what I said in that previous sentence, it just illustrates my point.
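As an illustration of the point, many of these knobs are exposed at runtime through procfs. A minimal sketch, assuming a reasonably modern kernel (the second file may be absent on older builds):

```shell
# A taste of the tuning surface described above. These are standard procfs
# tunables; values vary per system.
cat /proc/sys/net/core/somaxconn          # queue limit for listening sockets

# The default queueing discipline for outgoing packets is itself a tunable
# on kernels built with the relevant scheduling options:
cat /proc/sys/net/core/default_qdisc 2>/dev/null || echo "tunable not exposed"
```

An administrator tuning a data center would adjust such values via sysctl and bake the preferred defaults into the kernel configuration.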

Torvalds started Linux as an alternative to the unsatisfying free software options for personal computers. When data centers grasped the value of Linux, the development team focused on meeting their needs. Linux has now had a couple of decades to refine its appeal to people concerned with virtualization, containers, and the cloud. Recently, 72% of enterprises "described their cloud strategy as hybrid-first or private-first." Meanwhile, 54% of systems in the cloud were running Linux, and Linux is probably nearly universal in containers (thanks to Docker). Virtualization, containers, and the cloud couldn’t have come so far without Linux.

Interest in Crowdsourcing

For a long time, ordinary people weren't allowed to participate in decision-making on a corporate or government scale. Beth Simone Noveck, in Smart Citizens, Smarter State: The Technologies of Expertise and the Future of Governing, writes that in the twentieth century, universities "gradually redefined public service as a position demanding the expertise of university-educated professionals" (pp. 70-71). She underlines that professions "exclude those who do not belong to them from sharing in the body of knowledge their members claim" (p.47). And she goes on to show the limitations of those assumptions, along with remedies.

Change really started in the 1950s and 1960s, when civil rights activists were calling for a wholesale re-evaluation of "expertise." Citing catastrophes from the Tuskegee syphilis study to the destruction of neighborhoods by highways, African-Americans started a movement for bringing into policy-making the voices of those directly affected by it. They were soon joined by other underheard populations: indigenous peoples, other people of color, women, youth, lesbians and gays, the disabled (or differently abled), mental health patients, and the poor.

I think the lessons of these movements underpinned an interest in what journalist James Surowiecki called The Wisdom of Crowds in 2004. The trend eventually picked up its own label: crowdsourcing, documented in a 2009 book by Jeff Howe.

Like Benkler, mentioned at the beginning of this article, researchers trying to grasp the new, nonintuitive practice looked at free software, because developers had spontaneously developed productive collaboration over long distances before the researchers discovered it. Everybody focused on the Linux kernel community because what they achieved in just a few years was spectacular: a re-creation of Unix that blasted its way past all the existing Unix implementations, including the BSD variants.

The Linux community was not a perfect exemplar of collaboration—far from it. Rather, we must see it as a step along the way to better online communities. Later free software developers built on the experience of the Linux community, along with other important communities such as the Apache Project. Later communities developed best practices, such as fostering diversity and excluding toxic communications. (Linus Torvalds, unfortunately, generated many such toxic communications himself—a strong contrast to the self-deprecating, jocular character he displayed in person.)

Crowdsourcing is now a major movement among governments at all levels, from the municipal to the international. One can see the health of this movement in the many projects undertaken by The GovLab, an organization founded by Professor Noveck, quoted earlier.


Meanwhile, the Linux kernel still ranks as one of the most popular and productive free software development projects. Recent developer statistics heralded "the first time more than 2,000 developers have participated in a single release cycle." The Linux Foundation, originally created to protect the Linux trademark and other formal interests, has become the largest and most influential organization in free software, and a host for many other free software projects.

The dominance of developers paid by large corporations doesn't invalidate the idea of peer collaboration—on the contrary, the corporate uptake underscores its enduring value. We'll explore the corporate aspects in the next section.

Accelerated Development of New Hardware

Linux has also freed hardware. The Raspberry Pi, running a Linux distribution they call Raspbian, inspires thousands of tinkerers as well as commercial developers. To get a sense of the possibilities opened up by this hardware, peruse their magazine or Make (a magazine that originated at O'Reilly Media).

Computers are in everything; middle-class people can see that just by looking around their homes. Vendors promise "smart devices" that can track their preferences and react intelligently to their environment. There are even big hopes for the environmental and economic advantages of "smart cities."

Now, computing in our everyday existence is brought to life by cheap hardware, enhanced by the many libraries of powerful software developed by Linux, Python, and other communities. While proprietary smart devices are likely to collect information on our behavior, free devices offer a different contract. Programmers can check the free devices and let the public know whether private data is at risk. The device manufacturers, knowing that their tracking can be tracked, may think twice about spying on users. (Security experts do track proprietary devices, too, by watching the traffic over wires or the network.)

Android must rank as the most successful hardware partner of Linux. This phone software, which runs on more than 70% of mobile phones, is not actually very open because Google controls its development, as well as the lucrative "Google Play Services" required to market most commercial Android apps. But the license is a free one, so a worldwide community of embedded systems developers are adapting Android to their devices.

Finally, Linux has been ported to more processors than any other operating system in history. The wealth of software available on Linux provides an entry point for people who want to use these chips, a reassuring hand-up for both developers and chip manufacturers.

A Total Revolution

Linux is a unique phenomenon, just as liberating for under-represented world populations as it is for the world's largest corporations. Microsoft, Amazon, IBM, and other giant companies contribute large sums of money toward software that lets lower-class and isolated communities build knowledge and collective will.

Like all revolutions, Linux has profoundly changed ways of working and thinking for people who originally never heard of it. The economic contributions of Linux and free software are incalculable—literally, because of their uncontrolled distribution—but undeniably enormous.

Looking over the changes made by 30 years of Linux, one suspects that the next 30 will be even less predictable.

Source: lpi.org

Tuesday, 28 September 2021

New Learning Materials in Italian, Spanish, and Polish


Linux Professional Institute (LPI) recently released Italian and Spanish language versions of Learning Materials for LPIC-1, as well as the Learning Materials for the Linux Essentials exam in Polish.

Andrea Polidori finalized the translation into Italian of the LPIC-1 102 Lessons, edited by Max Roveri. Andrea says, “Often the students in my courses ask me for material in Italian to improve their preparation for the LPIC-1 certification exams. After months of working with my colleague and friend Max Roveri, I can finally tell them that everything they need is on the LPI Learning portal.”

José Alvarez is the Spanish translator of the LPIC-1 102 Learning Materials, honed by the editors Yoel Torres and Juan Ibarra. José says, “When I first started working with the Learning Materials, I realized what excellent support they are for my colleagues and everyone who works with Linux. Translating them into Spanish, therefore, became a major goal for me and a great experience with the LPI team.”

A productive and busy summer also saw the release of the Polish translation of the Learning Portal, along with the translation of the Linux Essentials Materials into the same language. Krzysztof Lorenz, the translator, commented: “The translation of the Learning Materials into Polish is a good signal for the free and open source software (FOSS) movement in Poland and I am very pleased to be able to contribute to this achievement. I really enjoyed working with the LPI team.”

Source: lpi.org

Saturday, 25 September 2021

LPIC-3 Mixed Environments 3.0 Introduction #04: 304: Samba Client Configuration


This blog post is the fourth in a series that helps you prepare for the new version 3.0 of the LPIC-3 Mixed Environments exam. In the previous post we set up an entire infrastructure consisting of two Active Directory domain controllers, a file server, and a Windows client. This week we will learn how to add even more systems to our domain.

Sources for User Information

An Active Directory holds information about user accounts, including the credentials used to authenticate each user. The overall idea is to maintain this information in the directory and then get access to it on every computer that is joined to the domain. We have already achieved this for the virtual Windows client, but we have not considered how we can make the average Linux workstation recognize the domain users as well.

On Linux, there are multiple approaches to authenticating users against a remote repository of user information. All of them share a common pattern: they add additional sources of user data that Linux queries when it looks up a user. Once these lookups succeed, Linux sees a single, uniform set of user information, no matter where it came from.

Read More: LPIC-3 300: Linux Enterprise Professional Mixed Environment

One technology that merges many data sources is the Name Service Switch (NSS). It provides various databases, including those holding user and group information. NSS passes queries on to various associated modules, which then implement the procurement of this information from a specific source such as a local file (think of /etc/passwd) or remote services (like our Active Directory domain). Similarly, Pluggable Authentication Modules (PAM) execute a series of modules to authenticate a user, prepare their session, or change their password. Again, individual modules handle users from different sources, including Active Directory.
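A quick way to see NSS at work is the getent command, which performs the same lookups the C library does, consulting each source listed in nsswitch.conf in order:

```shell
# Show which sources NSS consults for user lookups (line may differ per system):
grep '^passwd' /etc/nsswitch.conf 2>/dev/null   # e.g. "passwd: files sss"

# Resolve an account through whatever sources are configured; once a
# directory module is in place, domain users appear here too.
getent passwd root
```

When an Active Directory module is added to the passwd line, `getent passwd someaduser` returns the domain account in exactly the same format as a local one.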

User Identities and Authentication via LDAP

Each Active Directory domain controller runs an LDAP server which provides access to the directory’s contents. This allows the NSS and PAM modules (nss_ldap.so and pam_ldap.so respectively) to make the directory’s users available to Linux. Configuration includes adjusting the nsswitch.conf file and the PAM configuration, as well as creating the ldap.conf file that defines the properties of the directory.
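As a rough sketch of what this might look like, with the domain controller host and bind account invented purely for illustration:

```
# /etc/ldap.conf (illustrative values only)
base   dc=ad,dc=example,dc=com
uri    ldap://dc1.ad.example.com/
binddn cn=linux-bind,cn=Users,dc=ad,dc=example,dc=com

# /etc/nsswitch.conf (relevant lines)
passwd: files ldap
group:  files ldap
shadow: files ldap
```

The PAM side then adds pam_ldap.so to the auth, account, and password stacks of the relevant services.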

Some more PAM modules come in handy when authenticating remote users. These modules allow users to authenticate using Kerberos, create a home directory during their login, lock accounts after too many failed login attempts, and enforce a minimum complexity for new passwords. When experimenting with these modules, remember also to take a look at the chage command, which allows adjustments to the password expiry information for a specific user.
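For example, chage can list and adjust the aging settings for an account (the user name in the commented lines is hypothetical):

```shell
# List the current password-aging settings for an account (needs root):
chage -l root

# Typical adjustments an administrator might make (illustrative only):
# chage -M 90 alice     # password expires after 90 days
# chage -d 0 alice      # force a password change at next login
```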

Authentication via SSSD

A modern approach to configuring user databases and authentication in Linux is the System Security Services Daemon (SSSD). SSSD provides NSS and PAM modules as part of the various Linux databases. However, instead of just querying, for example, Active Directory directly, these modules forward queries to the locally running SSSD. The SSSD documentation provides a comprehensive overview of SSSD’s architecture. SSSD itself has a configuration file that defines one or more sources of user information. One of the supported back ends is Active Directory.
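A minimal sssd.conf sketch for an Active Directory back end; the domain name is an assumption used purely for illustration:

```
# /etc/sssd/sssd.conf (illustrative fragment)
[sssd]
services = nss, pam
domains = ad.example.com

[domain/ad.example.com]
id_provider = ad
access_provider = ad
cache_credentials = True
```

The `id_provider = ad` back end handles both identity lookups and authentication against the domain controllers it discovers.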

Besides just forwarding queries, SSSD provides more features. Upon successful login of a user, SSSD can cache their credentials for a specific time to allow these users to log in again, even when there is no connection to the authentication back end. This is especially useful on mobile devices that might be offline from time to time.

SSSD comes with a set of command line tools. A special feature of these commands is the ability to overwrite specific aspects of user accounts. This feature allows you, for example, to adjust the UID, the path of the home directory, or the login shell for a directory user. These adjustments are injected by the local SSSD and do not affect other computers, even if they query the same directory. SSSD also allows the management of local accounts, which do not even appear in a remote repository.
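These overrides are managed with the sss_override tool; a hypothetical session (user names and values are made up):

```
sss_override user-add alice -s /bin/zsh           # override the login shell
sss_override user-add bob -u 10042 -h /srv/bob    # override UID and home directory
sss_override user-show alice                      # inspect the local override
```

Because the overrides live in the local SSSD cache, the directory itself is untouched and other domain members see the original values.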

Accessing SMB File Shares

Once a user is logged in, they usually want to somehow process data. We have already set up a file server which can store data of any kind. The easiest way to connect to an SMB server is the smbclient command. It provides an interactive command line, similar to common FTP or SFTP clients. You should practice the basic operations, such as uploading and downloading single files as well as multiple files, creating directories, and moving files. Take a special look at smbclient’s command-line options, which allow you to enumerate the shares available on a specific server or adjust the SMB protocol used.
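A hypothetical smbclient session covering these basics; the server name, share, and user are placeholders:

```
smbclient -L fileserver -U alice        # enumerate the shares a server offers
smbclient //fileserver/data -U alice    # open an interactive session
smb: \> ls
smb: \> get report.odt                  # download a single file
smb: \> mput *.txt                      # upload multiple files
smb: \> mkdir drafts
smb: \> exit
```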

Although smbclient is easy to use, it is inconvenient to download each file before using it and to upload it again once it is changed. The mount.cifs command mounts an SMB share into the Linux file system tree. Again, review all of the available mount options.
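A sketch of mounting the same hypothetical share (option values are illustrative):

```
mount.cifs //fileserver/data /mnt/data -o username=alice,uid=alice,gid=users
```

The uid and gid options control which local account owns the mounted files, which matters because the server only sees the single authenticated user.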

Keep in mind that each SMB connection is authenticated for one specific user. All operations performed over this connection are executed as the same user on the SMB server, no matter what user performs the change on the client. If multiple users need to access their respective data on the server, each user must mount this data on their own. The pam_mount module triggers the mount of a specific share whenever a user logs in.
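A hedged sketch of such a pam_mount volume definition; the server and share names are placeholders:

```
<!-- fragment of /etc/security/pam_mount.conf.xml -->
<volume user="*" fstype="cifs" server="fileserver"
        path="homes/%(USER)" mountpoint="~" />
```

With a rule like this, each user's login triggers a mount of their own share with their own credentials, sidestepping the one-user-per-connection limitation.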

Besides smbclient, several more commands interact with SMB shares. The exam objectives explicitly mention smbget, smbtar, smbcquotas, getcifsacl and setcifsacl, as well as cifsiostat. You should try all of these and, as usual, review their individual options.

Two More Linux Playgrounds

To practise the setup of the various authentication approaches, two additional Linux virtual machines would be helpful. We will need these systems only this week; you can delete them once you’ve completed your experiments. Don’t use the file server for your experiments, as it is already a domain member.

Set up one of the new virtual machines to use nss_ldap.so and pam_ldap.so to allow directory users to sign in. This is also a great chance to get familiar with the most important Kerberos tools, such as kinit and klist. Create a Kerberos configuration, procure a Kerberos ticket for the file server, and confirm that you are able to log into the server using the Kerberos ticket. You could also use this virtual machine to test the various PAM modules and, for example, extend this system’s PAM configuration to mount the user’s home directory from a SMB share upon login.
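The Kerberos steps of such an experiment might look like this; the realm, user, and host names are placeholders:

```
kinit alice@AD.EXAMPLE.COM        # obtain a ticket-granting ticket
klist                             # list the tickets in the credential cache
smbclient //fileserver/data -k    # authenticate with the ticket, no password prompt
```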

In the other virtual machine, install SSSD and configure it to recognize Active Directory users. Test the various SSSD commands mentioned in the exam objectives and see how they affect the appearance of the users on the Linux side. Add and modify some users in the Active Directory and see how these changes become available on the two virtual machines. Also test the overwrite features of SSSD and create some local users in SSSD.

Handling Windows Domain Members

Some organizations tend to run Windows on their desktop computers. We have already joined a Windows virtual machine to the domain. After the virtual machine is added, domain users are able to log into it. Using SMB shares is quite easy: after a user enters the UNC path to a share in the Explorer, a connection to the respective server is established and the share is opened.

When a Windows computer joins an Active Directory domain, it becomes subject to several management features of Active Directory. One such feature is logon scripts, which run on the client when a user logs in. Samba can host such logon scripts and instruct Windows clients to execute them.

A more complex approach to Windows management is Group Policy Objects (GPOs). GPOs can specify a vast number of properties of Windows systems. You can use various criteria to define whether a GPO applies to a specific computer or a specific user. Microsoft provides a Group Policy for Beginners guide, which is a good first step into GPOs.

Samba Active Directory controllers can host GPOs. GPOs are stored on the SYSVOL share, which is replicated between the domain controllers. In the case of Samba, this replication could be unidirectional from the domain controller holding the PDC emulator FSMO role. In this case, make sure to run the GPO Management utility against that specific server.
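On the Samba side, basic GPO housekeeping can be done with samba-tool; a sketch, with the host and account names as placeholders:

```
# Point the tool at the DC holding the PDC emulator role:
samba-tool gpo listall -H ldap://dc1.ad.example.com -U Administrator
samba-tool gpo list alice -U Administrator    # GPOs that apply to one user
```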

To learn more about GPOs, try to define a GPO that mounts a CIFS share and another GPO that restricts access to the Control Panel. Try to assign these GPOs to some of your users and confirm that they are effective after logging into the Windows client as the respective users. Take some time to review the various options and become familiar with the handling of GPOs.

For each user, Windows creates a profile that stores configuration information as well as files, such as those placed on the user’s desktop. When a user uses multiple computers, it is beneficial to make sure all computers access the same profile. The Samba wiki explains how to configure the necessary profile folder redirections.

One More Step to Take

Today, we have learned a lot about the configuration of Samba clients. As usual, don’t forget to review the exam objectives and take a look at the aspects that weren’t explicitly mentioned in this post.

The next post in this series will complete the preparations for the LPIC-3 Mixed Environments exam. We will see how we can use FreeIPA to create a domain that allows the centralized management of Linux authentication clients and how to set up that domain to coexist with Active Directory. We will also review the NFS protocol, which is an alternative to SMB, especially when serving files to clients running an operating system other than Windows.

More Info: LPIC-3 Mixed Environments 3.0 Introduction #03: 303 Samba Share Configuration

Source: lpi.org

Thursday, 23 September 2021

What the “Glocal history of FOSS” project is and what you can do for it

As you may already know, the Italian Linux Society, fresh from its Linux Professional Institute (LPI) Community Partnership, teamed up with LPI and the broader Brazilian FOSS community on the “Glocal history of FOSS” project.

Read this post to learn how you can get involved with the project, and why doing so could turn out to be one of the nerdiest and coolest experiences you'll ever have!

The story so far

This project is a spin-off from our collaboration with the 2021 Open Anniversary. It’s a brilliant initiative, and we started playing with the idea of turning it into a lasting legacy:

Why not set up a framework that is “glocal”? With the help of local FOSS communities, we will write the global history of the FOSS movement as the outcome of the local histories of its development in regions around the world.

Roberto Guido, president of ILS - the Italian Linux Society, an LPI Community Partner - immediately joined this thread and started working on a .json timeline format derived from the one adopted by Timeline.JS, a popular JavaScript library for visualizing interactive sequences of events. He chose to extend the existing format to include translations of the content into different languages, as required by a project of global and multi-cultural scope.

Meanwhile, Cesar Brod, LPI’s Director of Community Engagement for Spanish- and Portuguese-speaking regions, started injecting into the framework Roberto created data from his own (long…) experience in the Brazilian FOSS landscape. Cesar was a Linux user before the kernel reached version 1.0, and since then he has taken part in several FOSS projects and entrepreneurial initiatives, mostly in partnership with universities. He is working with Diolinux, an LPI Community Partner, to organize the Brazilian community around the country’s timeline.


The “Glocal history of FOSS” project was born from a request by Nick Vidal of the Open Anniversary team, who asked LPI to help with a timeline of the Linux project to be portrayed on their web portal. LPI had joined Open Anniversary from the beginning and was already contributing content to the project, under the coordination of Kaitlin Edwards and with the participation of LPI’s Editorial Board. Cesar Brod experimented with an open-source JavaScript library to build his own professional Linux timeline, and both he and Max thought it was a very good place to start and to get the broader FOSS community involved.

What’s next?

With this very post, we are taking the whole project a (huge) step further: following the Torvaldsian principle of “release early and release often,” we are releasing the project and its framework to the FOSS community. “LPI will be pleased to host a project that belongs to the whole community, and we believe that by exposing their local achievements, even more connections and new and exciting free knowledge-based projects will evolve,” says Max Roveri, chief editor of the project.

The Italian job

As this project has a few bits of Italian DNA, and as the Italian LinuxDay managed by the Italian Linux Society is just around the corner (Saturday, 23 October), we decided to link GHOFOSS to the Italian celebration of Linux.

In the run-up to the Italian LinuxDay we will be gathering information - “atoms” of the Italian history of FOSS. Data will be gathered via this form.

Those data will be used the day after for a hackathon in which the Italian Linux community (no worries: more will come for other geographical areas!) will work on the GHOFOSS mockup and backend.

Source: lpi.org

Tuesday, 21 September 2021

Kali Linux – Terminal and Shell

Generally, operating systems have two interfaces, a GUI (Graphical User Interface) and a CLI (Command Line Interface), and the same is the case with Linux-based operating systems. Linux operating systems generally come packed with terminal emulator packages for CLI-based functioning and desktop environment packages for GUI-based functioning. Some common ones are listed below:

Read More: 101-500: Linux Administrator - 101 (LPIC-1 101)

Terminals:

◉ Qterminal

◉ gnome-terminal

◉ MATE terminal

◉ xterm

◉ Terminator

◉ konsole

Desktop Environments:

◉ Xfce/Xfce server Desktop

◉ GNOME3

◉ KDE plasma 5

◉ cinnamon Desktop

◉ MATE Desktop

So, being one of the Linux-based operating systems, Kali comes packed with a few of these terminals and desktop environments. By default, the terminal of Kali Linux 2020.2 is QTerminal and the desktop environment is Xfce.

CLI(Command Line Interface) vs GUI(Graphical User Interface)

Many of us wonder why we need a Command Line Interface when we already have a Graphical User Interface. Our hardware understands instructions in the form of bits (0 or 1); those instructions are processed by the kernel in the form of system calls, and those system calls are made by code or commands. So to work at this level, it is necessary to have good hands-on experience with the Command Line Interface. Moreover, when we host a server on Linux, we often have only a Command Line Interface, without any GUI environment. To work there, we need a good command of Linux commands, which can be practiced with the help of Linux terminals.

Though in many cases a GUI is more convenient, on Linux the terminal and the Command Line Interface still play a vital role, as Linux has many tools that are command based and have no GUI at all.

So, in conclusion, it depends on the task to be performed. Sometimes a task is performed more easily with the GUI, while at other times it is more feasible through the terminal.

Terminals vs Shells

Many people confuse a shell with a terminal emulator, but the two are different. Linux-based operating systems come pre-packed with some shells. We input commands into these shells; the shell has them executed by the processor and then returns the output. A terminal emulator package, in turn, lets us type commands into the shell and displays the output the shell returns.

In simple words, the shell is the program responsible for executing an instruction and returning the output, while the terminal is responsible for sending the user's input to the shell and displaying the shell's output to the user.
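The distinction is easy to verify from a running session; a small sketch:

```shell
# The shell is just a process; /proc shows which program is interpreting
# these commands, independent of any terminal attached to it.
readlink /proc/$$/exe    # path of the running shell, e.g. /bin/bash or /bin/dash

# The terminal is a device the shell reads from and writes to; in a
# pipeline or script there may be none, hence the fallback:
tty || true
```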

Examples of shells:

◉ bash

◉ Bourne shell (sh)

◉ C shell (csh)

◉ Korn shell (ksh)

◉ POSIX shell

Working with Kali Linux Terminal

1. Customizing the terminal. To customize the Kali Linux terminal, go to the File menu and select the Preferences option. There you will find a lot of options; customize the terminal to your convenience.


2. Executing a command through the terminal. To execute a command, just enter it in the terminal and provide the appropriate input; the terminal will execute the command through the shell and return the output. Try typing the following lines in the terminal:

echo "This is a terminal"
pwd


3. Using comments in the terminal. To put a comment in the terminal we use the “#” character. The following is an example of a comment:

# This is a comment.


Source: geeksforgeeks.org

Saturday, 18 September 2021

The Many Meanings of Linux, Part 1 of 2


A startling title appeared in the prestigious Yale Law Journal in 2002. At that time, academics, governments, and companies were exploring an exciting and potentially liberating idea: to take input about policies and products not just from duly credentialed experts, but from the general public. Pushing forward that narrative, Harvard Law Professor Yochai Benkler published a Yale Law Journal paper with the title "Coase's Penguin, or, Linux and The Nature of the Firm."

How did Linux make it into a leading law professor's research? Why did Benkler feature Linux as a data point in "a much broader social-economic phenomenon" that could overturn centuries of corporate and government behavior?

More Info: 010-160: LPI Linux Essentials (Linux Essentials 010)

Benkler was not alone in elevating Linux to a principle and an exemplar. This article explores the many meanings that Linux has had since its emergence in the early 1990s. I explain how Linux altered history by:

◉ Providing a foundation for an entirely free computer system

◉ Proving to observers in all fields the viability of free and open source software

◉ Triggering a move of companies toward open standards and open source implementations of core parts of their software

◉ Bringing modern software within the reach of millions more people

◉ Restructuring the computer industry through virtualization, containers, and the cloud

◉ Sparking interest in the newly recognized phenomenon of crowdsourcing

◉ Accelerating development of new, low-cost hardware platforms

This part of the article will cover the first four points in the list, and an upcoming part will cover the other three.

Foundation for a Free Operating System

People have released free software since the beginning of computing, but few have comprehensively assessed user needs and addressed their wide range. Most free software developers were happy to contribute a library or tool that ran along with some vendor's computer system. The early free software project with the grandest vision was Berkeley Software Distribution (BSD), which started as a set of tweaks to Bell Labs' non-free Unix and evolved into an independent project with a broad mission. Although variants of BSD played important roles in many computing companies, it was a niche phenomenon compared to Microsoft and later Apple.

The GNU project was even more of a niche affair. This one also had a big scope: a band of developers methodically turned out one tool after another to recreate Unix from the ground up. Although the tools were all important to developers—particularly the impressive compiler and C/C++ libraries—few held any interest for the average computer user. End-user projects such as the GnuCash accounting software were rare and difficult to use. The central selling point for the GNU tools was the GNU license, which guaranteed their freedom for all to use and share.

Whether because of their license, their quality, or their widespread use by C programmers, it was the GNU tools that Linus Torvalds used to create Linux. And as the importance of Linux grew, so did GNU. The development of GNU and Linux cannot be disentangled; that is why I agree with GNU proponents that full distributions of the operating system should be called GNU/Linux. (But I use the term Linux loosely throughout this article for convenience.)

What did a fully free software stack mean for the general public? It created an explosion of experimentation, especially in the areas of embedded systems, small Internet service providers and other Internet services, and cheap computers for underserved populations around the world. The free stack did even more for corporate data centers. We'll examine all these phenomena in later sections.

People running a completely free GNU/Linux stack would still need a proprietary computer and proprietary firmware. But eventually, both of these gaps were addressed as well. A growing open hardware movement, covered in another article, allows the distribution and customization of open designs. Many free firmware projects also exist.

Proving the Viability of Free and Open Source Software

Until the 1990s there was no debate among businesspeople: to bring an idea to life in software, you needed to form a company and hire a team of experts to code up the product. The waterfall model, where a team moved ponderously from requirements through development and testing to production, was almost universal. Until the dot-com boom (a bubble in the late 1990s created by irrational exuberance among investors), a software project couldn't get off the ground until the accountants and marketing staff had figured out how to make a profit from it.

Free software seemed to be taking place in a parallel universe. No one with money and institutional clout took it seriously.

Yes, Microsoft and AT&T and other companies shoved BSD innovations into their own proprietary software. And in the mid-1980s, the GNU C/C++ compiler jolted developers by outperforming all the commercial compilers. These isolated phenomena were hints that something powerful was going on with free software—but they were technically obscure enough to be ignored by policy-makers.

It was finally Linux that blew apart the complacency at the CxO level. Companies came to appreciate the perennial benefits of free software. Once these companies got used to depending on free software and found it robust and reliable, they became more willing to open their own software. By then, they were often employing many developers who understood and loved free software, and who urged their employers to contribute to it. I'll tell more of this story in the next section ("The Move Toward Open Standards and Open Source Implementations") and cover the importance of Linux to governments in the one that follows ("Bringing Software to Millions of People").

The Move Toward Open Standards and Open Source Implementations

Who would spend hard cash to develop software and give it away? This was the taunt aimed at free software by its critics for decades. But now it happens every day. Let's look at how the shift has taken place.

As explained in the section "Proving the Viability of Free and Open Source Software," businesses used to hide their source code and make sure no one else could derive benefit from their investment; no other course of action seemed rational. Linux taught them the opposite approach: if they share software, everyone moves ahead faster. Moving fast is critical to success in business during the twenty-first century, so free software becomes crucial.

Historically, computer companies were the first to learn the importance of collaborative development. Name a large, successful computer company—Intel, Amazon, Microsoft, Oracle, whatever—and you can find them working on free software projects. They may keep their core software proprietary (a trend I covered a few years ago under the term closed core), but they contribute a lot of their work to the community for a number of reasons, including the hope that it will be enhanced by other companies' and individuals' contributions. The demonstrable business value of free software propelled large corporate conferences such as LinuxWorld (Figure 1). Google's Android, an important but different kind of project, will be mentioned in a later section.

Figure 1: Golden penguin awarded for a trivia contest held at LinuxWorld 2004

Every company is a bit of a computing company nowadays, so free software appeals to them too. A good example is the automobile industry, which is loading new cars with software, and which has an alliance dedicated to free software in cars. Naturally, their output is based on Linux.

The open source movement has democratized hiring, to some extent. Aspiring developers contribute to free software projects and cite those contributions in job interviews. Popular sites such as GitHub and GitLab make each person’s contributions highly visible to employers.

Finally, this rush to open source drives the creation of professional organizations such as the Linux Professional Institute. When companies depend on skills, they want to see demonstrated proficiency among job applicants, hence the development of certification programs such as those offered by LPI.

Bringing Software to Millions of People

Like so many things affluent people take for granted—drinkable water, for instance—computer access is strongly associated with economics. Middle-class people in developed countries automatically license a copy of Windows for home use. But in less wealthy countries, access is much more difficult—even in government offices. That's why a 2004 study determined that, "For every two dollars' worth of software purchased legitimately, one dollar's worth was obtained illegally."

Yes, huge swaths of the world's population use software in violation of the software vendors' rules. At times this is tolerated (because the vendors hope the users will eventually turn into paying customers); at other times crackdowns occur. But there seemed to be no alternative until Linux came along.

Nobody has to feel guilty or furtive using Linux, because the whole system is free software. Many governments—particularly in Latin America—declared a preference for free software in the decade or so after Linux became well known. For various reasons, the most idealistic free software adoptions failed (Munich was one highly publicized migration that later underwent turmoil), but Linux makes freedom possible.

Special computer systems were designed for low-income and underprivileged areas. Nicholas Negroponte's One Laptop Per Child garnered the most hype, but it didn't live up to the promise. More relevant now are the many distributions for schools and schoolchildren, covered in another article.

The Linux Professional Institute works with dozens of companies, particularly professional training firms, who base their business models on Linux and free software. Located in places as different as Brazil, Bulgaria, and Japan, these companies know that Linux and free software provide a universal platform for education and advancement. The people who take these courses and obtain LPI certifications can build a better economy in their countries. (Many emigrate to more developed countries with higher salaries, but a good number stay at home.)

Source: lpi.org

Thursday, 16 September 2021

Difference between Kali Linux and Parrot OS


Kali Linux:

Kali Linux is an operating system used for penetration testing and digital forensics, with Linux at its core. It is developed according to the standards of Debian (a Linux distribution). It was first released in March 2013, with the aim of replacing BackTrack OS.

Parrot OS:

Parrot OS is similar to Kali Linux: an open-source, Debian-based operating system used for cloud pentesting, computer forensics, hacking, and privacy/anonymity work. It was first released in April 2013.

Read More: LPI Certifications

There are some similarities in these two operating systems:

◉ Both are useful for penetration testing.

◉ Both are developed on Debian standards.

◉ Both support 32-bit and 64-bit architectures.

Let’s see the difference between Kali Linux and Parrot OS:

| Kali Linux | Parrot OS |
|---|---|
| Needs more RAM, about 1 GB. | Requires less RAM, about 320 MB. |
| Requires a graphics card, as it needs graphical acceleration. | Does not need graphical acceleration, so no graphics card is required. |
| Requires about 20 GB of free space for installation. | Requires about 16 GB of free space for installation. |
| Its interface follows the GNOME desktop. | Its interface is built on the MATE desktop environment. |
| Does not come with pre-installed compilers and IDEs. | Comes pre-installed with a number of compilers and IDEs. |
| Has a simpler user interface. | Has a more polished user interface. |
| Has heavyweight requirements and can be a bit laggy. | Very lightweight and doesn't lag much. |
| Has all the basic tools needed for penetration testing. | Has all the tools available in Kali plus its own additions, e.g. AnonSurf, Wifiphisher, Airgeddon. |

Source: geeksforgeeks.org

Tuesday, 14 September 2021

Difference Between Ubuntu and Kali Linux

Ubuntu is a Linux-based operating system that belongs to the Debian family of Linux. As it is Linux-based, it is freely available and open source. It is developed by Canonical, a company founded by Mark Shuttleworth. The name “Ubuntu” is derived from an African word meaning “humanity to others”. The Chinese version of Ubuntu has been used to run the world’s fastest supercomputer, and Google’s self-driving car project has used a stripped-down version of Ubuntu.


Difference between Ubuntu and Kali Linux


| Ubuntu | Kali Linux |
|---|---|
| Developed by Canonical. | Developed by Offensive Security. |
| Initially released on 20 October 2004. | Initially released on 13 March 2013. |
| Used for daily desktop or server work. | Used by security researchers and ethical hackers for security purposes. |
| The latest version (20.04) uses GNOME Terminal by default. | The latest version (2020.2) uses QTerminal by default. |
| Ships with the GNOME environment by default, though this can be changed. | Ships with the Xfce environment by default, though this can be changed. |
| Does not come packed with hacking and penetration-testing tools. | Comes packed with hacking and penetration-testing tools. |
| Has a user-friendly interface. | Has a less user-friendly interface compared to Ubuntu. |
| A good option for Linux beginners. | A good option for intermediate Linux users. |
| The latest Ubuntu live session uses the default username ubuntu. | The latest Kali Linux uses the default username kali. |
| The latest Ubuntu live session uses a blank default password. | The latest Kali Linux uses the default password kali. |

Source: geeksforgeeks.org

Saturday, 11 September 2021

LPIC-3 Mixed Environments 3.0 Introduction #03: 303 Samba Share Configuration

This blog posting is the third in a series that will help you to prepare for the new version 3.0 of the LPIC-3 Mixed Environments exam. In the previous posts we set up a virtual lab, installed Samba, and set up an Active Directory domain. The lab also contains a file server as a domain member. This week’s posting is all about the file server’s share configuration.

Samba File Share Configuration

To get started, let’s review how shares are declared. Usually, each share is a dedicated section in the smb.conf file. The name of the share is the section name surrounded by square brackets. Within each file share, the path option specifies what part of the server’s file system is accessible through the share.

Let’s first of all determine who can connect to the share at all. The smb.conf options valid users and invalid users act as the initial doorman, deciding who can connect. Try configuring a share to allow or reject connections from specific users, and test that the users are accepted or rejected as expected. Remember the notation for users mapped from your domain, as well as the ability to specify groups in these options.
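
As a sketch of this exercise (the share name, path, domain name, and account names here are all hypothetical), such a share might look like this in smb.conf:

```ini
# /etc/samba/smb.conf (fragment) -- hypothetical share, domain, and user names
[projects]
    path = /srv/samba/projects
    # Only these accounts may connect; domain accounts use the DOMAIN\user notation
    valid users = AD\alice, AD\bob
    # Members of this domain group are rejected before any file access is checked
    invalid users = @AD\interns
```

After editing smb.conf, `testparm` checks the configuration syntax, and connecting with `smbclient //server/projects -U alice` lets you verify that connections are accepted or rejected as expected.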

File Access and Permissions

Once a user is connected, access to individual files requires further permissions, which are managed in multiple layers. First of all, Samba uses the read list and write list options in smb.conf to determine which kind of access users have in general. Once a user passes this hurdle, access to a specific file is subject to file system permissions. The simplest form of these is classic file ownership and permission bits, as managed by chown and chmod.

When accessing a share, Samba uses the identity of the user connected to the share to perform operations on the Linux file system. Thanks to ID mapping on the file server, each domain user has an equivalent in the Linux file server’s user database.

When multiple users access the same share, they might end up creating files they cannot mutually access, due to the files’ different owners and permissions assigned only to the owning user and group. An easy way to enforce a standard owner and permissions is to use the smb.conf options create mask / create mode, directory mask / directory mode, force create mode, force directory mode, force user, and force group / group, which manage the ownership and permissions of files stored in a share independently of the connecting user. If you would like to practice, create a file share for your accounting department and grant several users access to it. Now configure the share to enforce that all files belong to the same group (perhaps create an accounting group in your AD and add the users) and that all files are writable by that group.
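
The accounting exercise could be sketched like this (the share path and group name are assumptions for illustration):

```ini
# /etc/samba/smb.conf (fragment) -- hypothetical accounting share
[accounting]
    path = /srv/samba/accounting
    read only = no
    # Every file ends up owned by this AD group, regardless of who created it
    force group = AD\accounting
    # Grant group read/write on new files and directories
    create mask = 0664
    directory mask = 0775
```

With this in place, any file created through the share is group-writable, so all members of the accounting group can edit each other's files.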

Access Control Lists

More complex than classic Unix permissions, Access Control Lists (ACLs) allow you to set individual permissions for specific users and groups. Linux uses extended POSIX ACLs, which are managed by getfacl and setfacl. Adapt the previous example by creating another share that uses ACLs to grant access to all group members, without Samba enforcing any specific ownership and permissions. Review the smb.conf option inherit acls and enable it if necessary. The Samba wiki contains more information about POSIX ACLs on file shares.
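
A minimal sketch of the ACL variant, reusing the hypothetical accounting group from above: here Samba forces nothing, and access comes entirely from POSIX ACLs set on the file system:

```ini
# /etc/samba/smb.conf (fragment) -- hypothetical ACL-based share
[acl-share]
    path = /srv/samba/acl-share
    read only = no
    # New files and directories pick up the parent directory's default POSIX ACLs
    inherit acls = yes
```

On the Linux side, something like `setfacl -R -m g:accounting:rwX /srv/samba/acl-share` (plus a matching default ACL via the `-d` flag) would grant the group access, and `getfacl` displays the resulting ACLs for verification.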

It is perfectly fine to use POSIX ACLs on the Linux side to manage access to files on a Samba share. However, POSIX ACLs are different from Windows ACLs. To use Windows ACLs, Samba needs to store additional ACL information besides the POSIX ACLs. The VFS module acl_xattr uses extended file system attributes for this information. setfacl does not update the respective Windows ACLs, which is why ACLs on a share using Windows ACLs should always be set through Samba, by using either a Windows client or the samba-tool ntacl command. The smbcacls command is another tool that manages ACLs on SMB shares. Again, the Samba wiki has some great information about Windows ACLs on SMB shares.

Make Shares Appear Nicely

Once permissions are set, situations may occur where users see files they cannot access. The smb.conf options hide unreadable and hide unwriteable files hide such files from the user. Similarly, the options hide dot files and hide special files ensure that users are not bothered by other irrelevant files. Some file managers tend to create hidden files, such as indexes or file thumbnails. Uploading these files to a file share is usually undesirable, so the options veto files and delete veto files can reject or delete them.
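
These options can be combined in one share; the share name, path, and vetoed file names below are illustrative assumptions:

```ini
# /etc/samba/smb.conf (fragment) -- hypothetical share with tidy listings
[files]
    path = /srv/samba/files
    # Don't list files the connecting user cannot read or write
    hide unreadable = yes
    hide unwriteable files = yes
    # Reject typical file-manager litter; veto entries are separated by slashes
    veto files = /.DS_Store/Thumbs.db/
    # Remove vetoed files automatically when their directory is deleted
    delete veto files = yes
```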

In a real-world scenario, multiple file servers likely exist to satisfy all storage requirements. In this case, users need to know which shares are located on which servers. In addition, when shares are reorganized between servers, client configurations need to be changed. The Distributed File System (DFS) allows a server to offer shares that are just redirects to the real share, potentially on another server. Users can thus always access their data using the same path via the DFS share, even if the data is reorganized. Again, the Samba wiki has more information about DFS on Samba.
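
A sketch of a DFS root on Samba (paths and server names are hypothetical): DFS is enabled globally, one share is marked as the DFS root, and symlinks inside it perform the redirects:

```ini
# /etc/samba/smb.conf (fragment) -- hypothetical DFS root
[global]
    host msdfs = yes

[dfs]
    path = /srv/samba/dfs
    msdfs root = yes
```

Inside /srv/samba/dfs, a symlink created with `ln -s 'msdfs:fileserver2\data' data` makes the path \\server\dfs\data transparently redirect clients to the data share on fileserver2.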

Printer Shares and Print Drivers

In addition to files, Samba can also share printers. The pattern is the same as for file shares: for each printer, a share with the printer’s name exists. Samba, however, needs a print server to handle the actual printing. Nowadays, CUPS is the de facto standard for printing on Linux. Samba can query CUPS to determine which printers are available and automatically offer each printer as an individual share. However, it is also possible to configure printer shares manually. The Samba wiki provides an example of configuring a PDF printer share.
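
The CUPS integration can be sketched like this (the spool path is the conventional location, but check your distribution):

```ini
# /etc/samba/smb.conf (fragment) -- export all CUPS queues as printer shares
[global]
    printing = cups
    printcap name = cups
    # Create one share per CUPS printer queue automatically
    load printers = yes

[printers]
    # Temporary spool directory for incoming print jobs
    path = /var/spool/samba
    printable = yes
```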

In most cases, the printer clients prepare the print jobs so that they are ready to be forwarded to the printer without additional processing. This is called “raw printing”, because the print server passes the print jobs on as they are. In order to prepare the print jobs, the client needs the drivers of the specific printer. These drivers can be distributed by Samba. The Samba wiki has a guide to setting up this configuration.

A security issue called PrintNightmare recently abused Microsoft’s driver distribution mechanism to allow users to escalate their privileges. You can read about how this issue was discovered; it had not yet been fully fixed at the time of writing. Keep in mind that the current exam was developed before the issue was discovered, so for now, assume that no restrictions are in place to mitigate PrintNightmare.

Moving On

This post concludes the part of this series devoted to Samba server configuration. As usual, the exam objectives include some additional options and commands beyond those covered here. Take some time to review the exam objectives and make sure you’ve tried everything mentioned there. The result will be a fully configured Active Directory domain along with a file server.

Next week we will focus on the client side and learn how we can access our shares from Linux and Windows, how to authenticate against an Active Directory domain, and how to use Active Directory to manage Windows systems.

Read More: LPIC-3 Mixed Environments 3.0 Introduction #02: 302 Samba and Active Directory Domains

Source: lpi.org

Thursday, 9 September 2021

The People behind the Learning Portal: Dr. Markus Wirtz - Manager Learning Materials


Linux Professional Institute (LPI) launched the Learning Portal in June 2019. The Learning Portal is the repository of all the Learning Materials for our exams. The whole project is managed by Dr. Markus Wirtz, Manager Learning Materials at LPI. We designed it as an international endeavour - learning is easier in your mother tongue! - hence we needed a team of authors, editors, and translators to design, write, and localize the body of lessons.

This series of interviews is a journey toward knowing better the People behind the Portal: the Linux and Open Source enthusiastic professionals who are making the Learning Portal possible.

By reading this series of interviews, you will learn more about the Contributors’ work, the peculiarities of translating IT educational material, and the challenges Contributors face in restoring what could be lost in translation. You will also learn why working on the Learning Portal is quite cool and nerdy.

This is the interview with Markus Wirtz. Learn more about the Learning Portal and him here!

Was the idea of doing translations planned from the start, when you decided to write Learning Materials? Is the same team translating both Learning Materials and exams? How do they work together?

With our Learning Materials at learning.lpi.org, we want to make preparing for the LPI exams as easy as possible. In addition to a clear didactic concept that works both in class and in self-study, this also means avoiding language barriers – and here translation into as many languages as possible is crucial. Thus, we published the first Learning Materials, for “Linux Essentials”, in 2019 in English and German at the same time. There are now 9 languages, and counting. So, yes, translations have been an important part of the concept from the beginning.

In fact, translating Learning Materials and translating exams involve very different requirements and processes. The development as well as the translation of the pool of exam questions is by far the most important task in the area of product development. What is created here must meet the high standards for which LPI certifications have stood worldwide for two decades. Evaluation, standardization, and also confidentiality, for example, are aspects that must be taken into account at every step of the process. The translation of Learning Materials, on the other hand, is much less critical: there, an error can be corrected in a matter of minutes. This is not so easy with exam questions, which are delivered worldwide and are crucial for candidates and their exam results.


Nevertheless both areas are of course closely related and therefore both belong to LPI’s product development. To give a concrete example, technical terms should always be translated in the same way within a language - both in the exam questions and in the Learning Materials. Attentive translators, but also appropriate software (keyword: translation memory) support us in this.

Are regional staff (Europe, East Asia, etc.) coordinating the translations, recruiting people, etc.?

LPI is a worldwide network of experts – this applies to the small team of employees, the numerous partners, for example in the training sector, as well as the many volunteers. Of course, the translation projects also benefit from this, in that someone always knows someone who is qualified for a specific task. This is wonderful! But the actual coordination and organization of all Learning Materials is actually done in the Product Development department. An author once described our task quite aptly as "herding cats": From the search for translators to the necessary contracts and the familiarization with our firmly defined technical processes to the presentation of our supporters on learning.lpi.org, everything runs through us. With currently about 50 projects on which authors, reviewers and translators are working, there is quite a bit to do in terms of coordination and communication.

Do translators find errors in the English text that tech reviewers failed to find? This was often true when I worked at O'Reilly.

Yes, this happens, of course. Just like the numerous readers who use our Learning Materials, the translators also find mistakes from time to time. But this is hardly surprising, because translators are certainly the text workers who have to deal with every single word most carefully. Fortunately, these are rarely technical errors, but rather inaccuracies that can lead to misunderstandings.

Besides helping people learn skills and pass exams, have the translations helped LPI's reputation and provided less tangible benefits?

Since these benefits are not tangible, they are also hard to describe in concrete terms ;). But, yes, I am firmly convinced of that! The fact that we provide free Learning Materials as a non-profit organization underlines our real concern: “promote the use of open source by supporting the people who work with it”. That can only be a good thing. :) Furthermore, the Learning Materials enable a lot of new ways of collaboration: For our partners all over the world, translations into their respective languages are of course very welcome and often an incentive to support us.

The translation projects are also interesting for the many helpers from the community: not only to deal intensively with the content, but also as an opportunity to network even more closely with experts worldwide or to document their own commitment.

Source: lpi.org

Tuesday, 7 September 2021

What is Arch Linux?


Arch Linux is an independent Linux distribution that adheres to the principles of simplicity, modernity, pragmatism, user centrality, and versatility. It is a minimalist, lightweight, and bleeding-edge distro targeting proficient GNU/Linux users rather than trying to appeal to as many users as possible. Arch promotes a do-it-yourself (DIY) attitude among its users and thus gives you the freedom to tweak your system according to your needs.

Advantages of Arch Linux:

Arch is bleeding-edge:

Arch Linux follows a rolling release model, which essentially means that you get all new features and updates as soon as they roll out. There are no discrete version upgrades; updating and upgrading your system boils down to the simple command below.

pacman -Syu

Arch is what you want it to be:

Arch Linux offers an absurd amount of customizability to its users. A clean installation of Arch doesn’t even include a desktop environment or a window manager; the user builds the system from the ground up. This approach also makes Arch extremely lightweight, because there is no preinstalled bloat on the system: you, the user, have full control over what gets installed and when.

The Arch User Repository (AUR):

A unique feature that makes Arch stand out among other distros is the Arch User Repository (AUR), a community-driven repository for Arch users. It contains package descriptions (PKGBUILDs) that allow you to compile a package from source with makepkg and then install it via pacman. The AUR was created to organize and share new packages from the community and to help expedite popular packages’ inclusion into the community repository. The AUR extends the software offerings of Arch far beyond the official repositories.

The Holy Arch Wiki:

Arch Linux is one of the best documented Linux distros out there, if not the best. The Arch Wiki is the stuff of legend among Linux enthusiasts: extremely thorough and massive, its offerings at times extend beyond Arch Linux itself. If you run into trouble with your system, the Arch Wiki probably already has the solution.

It is a bridge:

pacman, the package manager of Arch Linux, is pretty unique in its own right. It is flexible enough to support the installation of binary packages from the Arch repositories as well as packages compiled from source via makepkg. This makes Arch a bridge between the distros that install binary packages through their package management systems and the distros that trade ease of use for the ability to compile binaries from source with variable configurations.

Improve your understanding of Linux:

You won’t know how rewarding it is to complete a clean installation of an Arch system until you experience it yourself. The installation process is fairly complex, since most of the steps are not GUI-assisted and you will be using CLI commands. Although this complexity might sound scary to new users, it has its perks: the installation teaches you a lot about how Linux actually works, things you might never learn otherwise because modern GUI installers take care of them for you. You are introduced to concepts like display managers, chroot, network configuration, and much more during the installation itself.

Note: Arch Linux still has GUI installers for new Linux users who are not ready to do it the hard way, but where is the fun in that?

Bonus:

If you are into cybersecurity, you have probably heard of BlackArch. The BlackArch repository contains a massive collection of security tools for penetration testers and security researchers. The downside of installing BlackArch for some users might be its sheer size, as it comes with all the tools, including ones you will never use. The good news is that you can integrate the BlackArch repository into an existing Arch system and fetch the tools you need on demand.

Other popular Linux distributions based on Arch:

◉ Manjaro Linux

◉ ArcoLinux

◉ EndeavourOS

◉ RebornOS

Disadvantages of Arch Linux:

It is an advanced distribution:

Although you might find Arch to be a likely contender for your next distro hop, keep in mind that it is not at all a newbie-friendly distro. An absolute Linux newbie should not start with Arch: given the amount of customizability on offer, a new user could easily break the system by configuring it the wrong way. If you really want to try Arch but are not yet confident in your Linux skills, it is a much better idea to practice the installation in a virtual machine and make the jump to a real system once you are confident enough.

Source: geeksforgeeks.org

Friday, 3 September 2021

LPIC-3 Mixed Environments 3.0 Introduction #02: 302 Samba and Active Directory Domains

This blog posting is the second in a series that will help you to prepare for the new version 3.0 of the LPIC-3 Mixed Environments exam. Active Directory is one of the major topics on LPI’s LPIC-3 Mixed Environments exam. While preparing for the exam, you should not just understand the concepts, but actually implement an Active Directory domain using Samba 4.

Understand and Plan an Active Directory Domain

First of all, focus on the architecture and the various components of Active Directory. This is not easy, since Active Directory integrates various services such as DNS, LDAP, Kerberos, and CIFS, along with a very specific layout of the contents served through these components. Microsoft offers a long, but comprehensive read on the Active Directory Architecture. Don’t worry about the age of that document: the principles are still the same and it is one of the few places where you can get all of the information about the topic in a single document.

After you have worked through the dry theory, it’s time to design your very own Active Directory. In a production environment, your first step will be to name the directory. The Samba wiki has some great advice on Active Directory Naming. For your studies, consider just going ahead with ad.example.com or something similar.

Setting up the Domain

Now that you have chosen a name for your domain, set up your first domain controller. We have already covered the setup of the virtual machine (VM) in last week’s post. Now is the time to log into your first domain controller and work through the guide for Setting up Samba as an Active Directory Domain Controller. Enable the RFC2307 schema and make sure you perform all the tests described in the guide. Remember that you’ve used the Samba packages of your Linux distribution, so you can most likely use systemctl to start the Samba services.
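The provisioning step on the first domain controller can be sketched roughly as follows. This is a hedged outline of the Samba wiki procedure, not a complete transcript: the realm AD.EXAMPLE.COM, the service name samba-ad-dc, and the administrator password are placeholders you should adapt to your own lab.

```shell
# Provision a new AD domain with the RFC2307 (POSIX attribute) schema enabled.
# --adminpass is a placeholder; choose a strong password of your own.
samba-tool domain provision \
    --use-rfc2307 \
    --realm=AD.EXAMPLE.COM \
    --domain=AD \
    --server-role=dc \
    --dns-backend=SAMBA_INTERNAL \
    --adminpass='ChangeMe123!'

# Start the AD DC service (distribution packages usually ship a unit file,
# though its exact name may differ on your distribution).
systemctl start samba-ad-dc

# A couple of the smoke tests described in the wiki guide:
host -t SRV _ldap._tcp.ad.example.com.
smbclient -L localhost -N
```

If the SRV lookup and the anonymous share listing both succeed, you have a good basis for the remaining tests in the guide.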

After your first domain controller passes all tests, log into the second domain controller VM and join it as a second domain controller. Remember to review the various types of SysVol replication and set up unidirectional rsync replication. Also make sure all your computers’ clocks are in sync.
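The join of the second domain controller can be sketched like this; the hostname dc2 is an assumption, and SysVol replication via rsync is configured separately, following the Samba wiki.

```shell
# On the second VM, join the existing domain as an additional domain controller:
samba-tool domain join ad.example.com DC \
    -U 'AD\administrator' \
    --dns-backend=SAMBA_INTERNAL

# Check that directory replication between the two DCs is healthy:
samba-tool drs showrepl
```

Note that samba-tool drs showrepl covers only the directory partitions; the SysVol share is not replicated automatically, which is why the unidirectional rsync setup from the wiki is needed.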

Populating the Domain

Once you’ve confirmed that your directory replication works well, it is time for the first regular member. Boot up your Windows VM and join the machine to your domain. Once it has rebooted, use your domain’s administrator account to sign into the VM.

Now you can populate the domain. Create a couple of user accounts, as well as security groups containing some of your new users. Try to create accounts for some of your colleagues and group them according to their departments, or create accounts for your family members and some groups for their favorite hobbies. Use both samba-tool user and samba-tool group on one of your domain controllers, as well as the Active Directory Users and Computers utility on your Windows machine. Confirm that your user accounts work correctly by using these accounts to sign into the Windows machine.
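On a domain controller, the command-line side of this exercise looks roughly like the sketch below; the account name alice and the group name engineering are made-up examples.

```shell
# Create an example user (you will be prompted for a password):
samba-tool user create alice

# Create a security group and add the new user to it:
samba-tool group add engineering
samba-tool group addmembers engineering alice

# Verify the results:
samba-tool user list
samba-tool group listmembers engineering
```

Repeat the same steps from the Active Directory Users and Computers utility on the Windows client to see both tools modify the same directory.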

Make sure also to review what happened under the hood: find your user accounts in your domain’s LDAP tree, then review the objects’ attributes and how they relate to groups. On the Windows side, ADSI Edit and LDP allow you to access these objects. Don’t forget to do some practice on the Linux command line using ldbsearch, too. Adding RFC2307 attributes to your users and groups is a great chance to do so. The Samba wiki holds instructions for both the graphical interface on the Windows client and the cool ldbmodify command-line technique.
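A minimal sketch of that command-line practice follows; the sam.ldb path is the common default but may differ on your distribution, and the user alice, her DN, and the uidNumber value are assumptions for illustration.

```shell
# Inspect the example user "alice" directly in the local sam.ldb database,
# showing only a couple of interesting attributes:
ldbsearch -H /var/lib/samba/private/sam.ldb \
    '(sAMAccountName=alice)' memberOf uidNumber

# Add an RFC2307 uidNumber attribute by feeding LDIF to ldbmodify:
ldbmodify -H /var/lib/samba/private/sam.ldb <<'EOF'
dn: CN=alice,CN=Users,DC=ad,DC=example,DC=com
changetype: modify
add: uidNumber
uidNumber: 10001
EOF
```

Running the ldbsearch again afterwards should show the new attribute; compare the view with what ADSI Edit shows on the Windows side.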

Joining the File Server

The next big step is joining the file server to the domain. Again, the Samba wiki explains all the steps for setting up Samba as a Domain Member. As you work through this guide, remember to use the ad mapping backend. Take some time to really understand ID mapping in Samba, including the various backends.
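The relevant part of the member server’s smb.conf can be sketched as the fragment below. The ranges and the workgroup name AD are assumptions taken over from the earlier examples; the wiki explains how to pick non-overlapping ranges for your own setup.

```ini
[global]
    security = ads
    realm = AD.EXAMPLE.COM
    workgroup = AD

    # Default (*) domain: allocate IDs locally from a tdb database
    idmap config * : backend = tdb
    idmap config * : range = 3000-7999

    # AD domain: read uidNumber/gidNumber from the directory (ad backend)
    idmap config AD : backend = ad
    idmap config AD : schema_mode = rfc2307
    idmap config AD : range = 10000-999999
    winbind nss info = rfc2307
```

The ad backend is what makes the RFC2307 attributes you added earlier pay off: the same uidNumber is used consistently on every domain member.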

Once the server is joined into the domain, create a simple writable file share and place a file there using the Windows client. Check the ownership of the file and try adding more files using other domain users. Finally, configure PAM Authentication to allow domain users to log into your server and try to log into your file server using one of your domain users.
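A share definition for this test could look like the following fragment; the share name demo and the path are placeholders, and the directory must exist with permissions that let domain users write to it.

```ini
[demo]
    # Example writable share for testing domain-user access
    path = /srv/samba/demo
    read only = no
```

After reloading Samba, connect to \\your-server\demo from the Windows client and check the resulting file ownership with ls -l on the server.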

DNS and Beyond

Topic 302 covers a few more important aspects. One of them is DNS management, which offers you a chance to revisit your LPIC-3 DNS skills. Create some DNS records in your Active Directory and use dig to confirm their existence. You should also take a closer look at FSMO roles and running a standalone Samba server with local user management.
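These DNS and FSMO exercises can be sketched as follows; the record name files, the IP address, and the DC hostname dc1 are made-up examples.

```shell
# Create an A record in the AD-integrated DNS zone:
samba-tool dns add dc1.ad.example.com ad.example.com files A 192.168.0.20 \
    -U administrator

# Confirm the record resolves via the DC's DNS server:
dig @dc1.ad.example.com files.ad.example.com A +short

# Review which domain controller currently holds the FSMO roles:
samba-tool fsmo show
```

The dig query should return the address you just added; samba-tool fsmo show gives you a starting point for studying how the individual roles can be transferred between controllers.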

We’ve covered a lot of material this week and worked through a lot of extensive resources. However, we’re not done yet. The exam objectives contain some options, tools, and aspects you must be aware of for your exam. Take your time to carefully review the exam objectives and research anything you’re not certain about. With the materials covered today, you have a fully functional lab environment that you can use for your own studies. Next week we are going to extend this setup even further by going into the details of the share configuration on our file server.

Source: lpi.org