Saturday 28 August 2021

LPI Releases LPIC-3 Mixed Environments Version 3.0


Linux Professional Institute (LPI) released version 3.0 of the LPIC-3 Mixed Environments certification program. The release is a major update that brings fundamental changes to the exam objectives and adjusts the covered topics to recent changes in technology. The LPIC-3 certification series is the highest level offered currently by LPI and covers advanced open source topics in organizational computing infrastructure.

The new certification objectives focus on Samba 4 in conjunction with Active Directory Domains. A dedicated topic has been added which covers the management of Linux domains using FreeIPA as well as the integration of FreeIPA with Active Directory. Additionally, the exam no longer covers NT4 domains, NetBIOS services, and OpenLDAP. The update is part of LPI’s regular exam reviews.

“Version 3.0 is a complete redevelopment of the LPIC-3 Mixed Environments exam. We removed legacy tools and commands and added a set of powerful technologies that have become established in the corporate IT infrastructure of almost all enterprises,” says Fabian Thorns, LPI’s Director of Product Development.

“The update of the LPIC-3 Mixed Environments certification ensures that our exams test the skills demanded by industry today. Identity management and file services are business-critical services, and we are proud to certify experts who have the ability to implement such services using open source software in an enterprise environment,” says G. Matthew Rice, Executive Director of LPI.

Thorns concludes his statement by addressing the LPI community. “I would like to thank LPI’s exam development community and all subject matter experts involved in the update of the LPIC-3 series. Thank you all for being part of the LPI team; this update would not have been possible without you.”

The LPIC-3 Mixed Environments version 3.0 certification exams will be available in Pearson VUE testing centers and on the OnVUE online testing platform starting on Monday, August 23rd 2021. The exam will initially be available in English; a Japanese translation will be released soon. The exam objectives are available at https://lpicentral.blogspot.com/p/300-100-lpic-3-mixed-environment-lpic-3.html.

In 2021, all LPIC-3 certifications will be updated to version 3.0. Further information about the LPIC-3 updates can be found at https://www.lpi.org/lpic-3-version-3-update. Additional information about LPI’s certification programs and how to become LPI certified is available at http://lpicentral.blogspot.com/p/lpi-certifications.html.

Source: lpi.org

Thursday 26 August 2021

LPIC-3 Mixed Environments 3.0 Introduction #01: 301 Samba Basics


This blog posting is the first in a series that will help you to prepare for the new version 3.0 of the LPIC-3 Mixed Environments exam. The new objectives are a complete rewrite of the former version. A lot of topics were added or extended, and some outdated topics were removed. 

How to Study for the LPIC-3 Mixed Environments Exam

In this blog series we will go through the exam objectives and refer to resources on the internet that might be helpful for your studies. The selection of links is of course subjective. LPI does not recommend any specific way of studying. If you find other helpful resources, please leave a recommendation in the comments to this post.

Whenever you are studying for an LPI exam, make sure you have the most recent version of the exam objectives ready. These objectives tell you explicitly what you absolutely must know for the exam. Keep the objectives open all the time, and consider even printing them so you can tick off what you’ve already covered.

Also make sure you take notes right from the beginning of your studies. You will build up more and more knowledge as your studies continue. Ask yourself, what information would you like to review the day before the exam? Write such information down immediately whenever you encounter it.

Preparations

This posting is all about getting you started and discussing exam topic 301, Samba Basics. You probably know what Samba is and which protocols are involved. If you’d like to refresh your memory, the Wikipedia pages on Samba and the SMB protocol are a good starting point.

While preparing for your exam, you will need a lab environment to experiment with various configurations. Such lab environments are commonly set up using virtual machines. Now would be a good time to install four virtual machines: three running the current version of your favorite Linux distribution, and one running a recent version of Microsoft Windows. Consider naming your VMs according to their purpose: the Linux VMs will serve as domain controllers and file servers and could be named dc1, dc2, and fs. The Windows machine will mostly consume the Linux VMs’ services and could be named winclient. Now is also a good time to install the Remote Server Administration Tools on the Windows VM, as well as the samba package from the distribution repositories on all Linux VMs.

Samba Configuration

The most important pieces of the original Samba documentation are manual pages and the Samba wiki. The manual page smb.conf(5) is special, because it explains all the configuration options for Samba. You will use this man page a lot. Now is a good time to start reading it. Get started at the top and work all the way down until the start of the section EXPLANATION OF EACH PARAMETER. You don’t have to fully understand every detail because we will come back to some advanced topics such as ID mapping later in the exam objectives. For now just make sure you get an understanding of how the configuration works.

With this information in mind, log into your file server and open the smb.conf file that came with the Samba package. Review the options and look them up in the man page. Make sure the [global] section contains the statement security = user and comment out any active server role statement.

Now make sure that there is a homes share in your configuration file. If the share isn’t present, add it as follows:

[homes]
        read only = no

Confirm the validity of the file with the testparm command. If everything is fine, start the smbd and nmbd services or, if they are already running, use smbcontrol all reload-config to make the configuration effective. Finally, add a new user to the file server (don’t forget to create a home directory, for example with useradd -m) and set a Samba password for that user using smbpasswd -a.
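
A minimal command sequence for this step could look like the following sketch. The username alice and the systemd unit names are assumptions and may differ on your distribution.

# check the configuration file for syntax errors
testparm

# start the Samba services (on systemd-based distributions)
sudo systemctl start smbd nmbd

# ...or, if they are already running, just reload the configuration
sudo smbcontrol all reload-config

# create a user with a home directory and set a Samba password
sudo useradd -m alice
sudo smbpasswd -a alice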

The First Connection

Now you’re all set for the first connection to your new server. On your Windows machine, open the Explorer and enter the IP address of your file server in the address field, preceded by two backslashes and followed by another backslash, as in \\10.64.0.3\. When you are asked for a username and password, specify the credentials of the user you just created. You should now be able to access the user’s home directory on the file server from the Windows client. Try to add a file from the Windows client and see that it appears in the server’s file system.
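
If you would like to verify the share from one of the Linux VMs as well, the smbclient tool that ships with Samba can do so. The IP address and the username alice below are assumptions carried over from the examples above.

# list the shares the server offers to this user
smbclient -L 10.64.0.3 -U alice

# open an interactive, FTP-like session on the user's home share
smbclient //10.64.0.3/alice -U alice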

Now that you are connected to your server, it’s time to experiment with some of the additional tools included in Samba. Review the man pages for the smbstatus and smbcontrol commands. Get an overview of what they can do and use them, for example, to identify and terminate the client’s connection to the server.
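
A minimal sketch of such an experiment follows; the PID shown is an assumption and must be taken from your own smbstatus output.

# show current sessions, shares, and locked files
sudo smbstatus

# show only the connected processes, including their PIDs
sudo smbstatus -p

# ask the smbd process serving the client (PID from smbstatus -p)
# to shut down, which terminates that client's connection
sudo smbcontrol 12345 shutdown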

Registry-based Configuration

Finally, we should try to move this configuration from the smb.conf file to the server's registry-based configuration. First, import your current smb.conf into the server’s registry:

net conf import /etc/samba/smb.conf

Next, rename your current smb.conf to smb.conf.org and replace it with this file:

[global]
        config backend = registry

Now restart the Samba services and check that your server still runs fine. Run samba-regedit on the server to get an idea of how the registry is organized.
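
To double-check the import, the net utility can also print the registry-based configuration back out:

# dump the registry configuration in smb.conf format
sudo net conf list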

Next Steps

Before concluding this post, let's take a final look at the exam objectives. We have already seen a lot of the content contained in topic 301. However, a few pieces are still missing. Take some time to go through the objectives for topic 301 line by line. You should look up each and every configuration parameter as well as all commands listed in the objectives. Get your hands dirty and try these commands on your file server. Remember it is still a rather empty virtual machine, so just start over if something breaks.

The various management commands will be extremely useful not just for the exam, but also for troubleshooting your real-world Samba setups. You might notice that some of the tools mentioned in the objectives handle LDAP directory contents. Even though we haven’t set up LDAP yet, now is a good time to brush up your LPIC-2 LDAP skills. You will need them next week, when we set up an Active Directory domain to manage the systems in our lab environment.

Source: lpi.org

Thursday 19 August 2021

Open, Simple, Generative: Why the Web is the Dominant Internet Application


Everything in the 2021 Open Anniversary celebration comes together in the Open Web, the subject of this month's article. In fact, I have taken the words Open Web as almost a prompt for free association. While everyone appreciates the wealth of free information on the Web, readers of this article may be surprised at where the idea of the Open Web takes us.

Hypertext, the Internet Way

Let's start at the dawn of the World Wide Web. The two standards on which it originally rested were HTML and HTTP.

The “HT” that those abbreviations share stands for hypertext, a term famously invented by visionary philosopher Ted Nelson around 1965. As a somewhat heady and grandiose term that matches Nelson's expansive ambition and personality, hypertext recognizes that thoughts are never confined to individual documents. Each document inevitably refers to and depends on an understanding of other sources of information. This has always been true, but the web makes the relationships explicit through links.

A zeal for representing relationships was and remains the ultimate goal of Tim Berners-Lee, inventor of the web. Just glance through his original proposal to his management, the European Organization for Nuclear Research (CERN in its French abbreviation). The proposal is bound together throughout by a concern for exposing and organizing the relationships between thoughts and between things. The same obsession has remained through the decades in Berners-Lee's proposals for a Semantic Web (2000) and Linked Data (2006).

Grandiosity on the scale of Nelson and Berners-Lee is not entirely abjured by CERN, either, which presents the first-time visitor to its web site with the question, "What is the nature of our universe?"

So the idea of hypertext has been around for a while, but neither Nelson's grand vision (the stillborn Xanadu) nor later experiments such as Apple Computer's HyperCard caught on. I will say a bit about each of these projects in order to contrast them with the traits that took the World Wide Web on such a different path.

First, both Xanadu and HyperCard were proprietary. This limited chances for people outside the organizations developing each technology to add to it and build their own visions on it. The web, in contrast, because it was open, shared the amazing ability of certain computer technologies to spawn enormous innovation. In the term used by law professor Jonathan Zittrain, the web is generative.

Apple's HyperCard was starved for resources, admittedly, but I found little value in it in the first place. The design offered limited capabilities, probably because of the tiny computer memories and slow processors of the 1980s. Each piece of content had to fit on a small card. The biggest limitation was fundamental: each set of cards was self-contained and couldn't link to outside resources. It was left to Berners-Lee to make this major leap in hypertext power through one of his greatest inventions: the URL. These unassuming strings take advantage of the internet and Domain Name System—and Berners-Lee cooked into the URL ways to connect to content outside the web, a valuable gambit to gain a foothold in a sea of other protocols.

Xanadu was complex. This complexity stemmed from the best of intentions: Nelson insisted on creating bidirectional links, which could lead to all kinds of benefits. With bidirectional links, you could follow a site that is linking to you. Payment systems could be enabled—something many internet users dearly wish for now. There would be ways to preserve sites whose hosting computers go down—another serious problem always hovering over the internet, as I point out in my article “Open knowledge, the Internet Archive, and the history of everything.”

As Nelson said in a talk I attended in 2008, "Berners-Lee took the easy way out!" The web has one-directional links, and therefore suffers from all the aforementioned lapses that Nelson claimed would not have plagued Xanadu. To drive home how conscious this choice was, let's go back to Berners-Lee's proposal, mentioned earlier:

"Discussions on Hypertext have sometimes tackled the problem of copyright enforcement and data security. These are of secondary importance at CERN, where information exchange is still more important than secrecy. Authorisation and accounting systems for hypertext could conceivably be designed which are very sophisticated, but they are not proposed here."

Reams have been written, virtual and real, about the ramifications of Berners-Lee's prioritization. But history's verdict is pretty definitive: the easy way out is the right way. Like the larger internet, the web does not try to track down lost resources or assign value to traffic. Its job is just to get information from source to destination.

Open, simple, generative: these traits allowed the web to succeed where other systems had tried and failed. Web standards are debated, endorsed, and maintained by the nonprofit World Wide Web Consortium (W3C).

Berners-Lee also happened to come along at a good moment in computer history. He invented the web in 1989 and it picked up steam a couple of years later. This was the very time that the general public was discovering the internet, that odd research project used only by defense sites, scientific researchers, and a few related companies. (Most of the internet access points were run by the U.S. government until 1994.)

People had long been using non-internet discussion forums and clunky ways of transferring files. Much of the traffic now moved onto the web. And this leads to the next stage of the web's generativity.

Port 80 Finds New Uses

We need a bit of technical background to understand what happened next on the web.

Most computers run multiple programs. When a computer receives network traffic, it figures out which program to send it to, thanks to an arbitrary number called a port that is attached to each packet. A person may be talking to friends using Internet Relay Chat (TCP port 6667) while mail is transferred over SMTP (TCP port 25), and so on. The web was awarded TCP port 80.
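
On most Linux systems you can look up these well-known port assignments yourself in the /etc/services file; the exact entries vary slightly between distributions.

# show the ports registered for the services mentioned above
grep -wE '^(http|smtp|ircd?)' /etc/services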

Meanwhile, to prevent malicious attacks, network and system administrators usually place firm restrictions on the ports where they accept traffic. In the mid-1990s, with the growth of the internet and the creation of high-speed, always-on connections, restrictions on ports hampered the introduction of new services. People would hear of some wonderful service that could enhance their productivity, but the network administrator either didn't trust the service or was too busy to reconfigure the network so it could send and receive traffic on the new port. The internet was experiencing one of the biggest booms known to any industry in history, and innovators were stymied by this odd technological straitjacket.

More and more, innovators gazed yearningly at the one port that was always guaranteed to be open: port 80, thanks to the universal adoration for the web. And so the developers made a momentous decision: they would violate the tradition of providing a unique port number for their application, and send everything over the Web. The user's web browser would receive and process the traffic. (I put this move into a broader context in a memoir, near the end of the section "Peer-to-peer (early 2000s) and the emergence of user-generated content.")

Although slapped with the censorious term "port 80 pollution" by network administrators, the movement toward web services succeeded beyond its wildest dreams and brought along Software as a Service. Many people spend all day in their browser, moving between Facebook, Google, Salesforce, etc.—with all the traffic moving through port 80.

Berners-Lee's HTTP protocol has now gone far beyond the web. It's the communications protocol of choice for the loosely coupled architecture known as microservices. This story is covered in my article "How the RESTful API Became Gateway to the World." That's the generative web in motion.

Web Hosting and the Democratization of the Internet

The simplicity of the web drove early adoption. Berners-Lee based HTTP on other standard protocols, making it recognizable to administrators. Meanwhile, he based the language for creating web pages, HTML, on an older standard called SGML, but made it rock-bottom easy to learn.

Furthermore, new HTML users could learn how to do cool new things with it just by viewing the source code to the web page in their browser. (I am indebted to Tim O'Reilly, who was my manager for many years, for pointing this out.) This transparency also applied to the languages for rich formatting (CSS) and dynamic web pages (JavaScript). Eventually, CSS and JavaScript were moved into separate files, and developers started shrinking or "minifying" code to save download time. Still, users could look into the files to study how to make web pages. People quit jotting their ideas into journals they shoved in desk drawers, and put their ideas up on the web.

As long as the internet ran on corporate servers, the professional administrators who managed the hardware and software could set up a web server like any other internet service. They could host a few web pages as well as the database that undergirded dynamic content. (See again my article "How the RESTful API Became Gateway to the World" for a history of dynamic content.) Everybody went through the administrator to put up a new web page.

The requirements changed in the early 2000s when millions of individuals started blogging, posting comments, and eventually uploading photos and videos. Tim O'Reilly coined the term "Web 2.0" for this explosion of individual contributions. Content generation was splitting off from web server management. The need was filled by content management systems (CMSes) and web hosting. Thousands of services now help people create their own web pages, providing CMS tools, databases, backups, and guaranteed up-time. Two of the most popular CMSes, WordPress and Drupal, are open source.

The open web depends on hosting. But you do give up some control when using a hosting service. A lot of sophisticated web operations use parts of the HTTP protocol that require control over the web server. A hosting service can also take down sites that it finds objectionable. (On the other hand, a take-down is less painful than hosting the site yourself and being sued or prosecuted.) The software that makes it so easy to build an attractive web site can also be limited or buggy.

The irony of Web 2.0 is that people can easily generate and disseminate content (sometimes racking up earnings in the hundreds of thousands of dollars) because of the technologies' simplicity—but at the same time cede control to social media platforms and other sites.

Many visionaries are trying to decentralize internet services, to make them more like the early days when most internet sites hosted their own servers. Various alternatives to centralized services exist, such as Jabber for chat (standardized now as the Extensible Messaging and Presence Protocol or XMPP) and Diaspora for social media. Proposals for decentralized services based on blockchains and cryptocurrencies revive Ted Nelson's goal of an internet where individuals can charge micropayments.

Accessibility Remains a Problem for Many

The resources of the web should be available to everyone, but many factors hold access back: lack of internet connections, censorship, and non-inclusive web design. The article ends by discussing these issues.

Lack of Internet Connections

For years, the computer industry and the mainstream media have taken always-on, high-speed internet access for granted. The people working in those fields have internet access, and all their friends and neighbors do too. (Ironically, I am writing right now during a rainstorm that has cut my internet access, helping me to remember how privileged I usually am.)

The people who usually lacked access had far greater worries—lack of food, jobs, health care, or physical safety—and did not make universal access to the internet a major rallying cry. After the COVID-19 lockdowns revealed that children were being denied an education and adults were cut off from critical information because of limited internet access, some governments—although reeling from the pandemic—did start to look at solutions.

Earlier, some governments and NGOs had found ways to provide information through other media. The previous article in this Open Anniversary series mentioned Endless OS, which distributes computers loaded up with resources such as Wikipedia pages. Although internet access is richer, print-outs and computers can still provide desperately needed educational resources.

Censorship

Censorship is a more selective denial of internet access. There is no doubt that dangers lurk on the internet. Child pornography, terrorist recruitment, trade in illegal substances and stolen information—it all goes on. Censors target these problems, but also crack down on content that they consider politically or socially unacceptable. Because censorship requires central control over the gateways through which all internet content flows, censorship is usually found in highly centralized societies with strong central governments.

Because all of us know of some internet content we wish wasn't there, I will not argue the moral or political issues behind censorship. The topic of this section is what people do to get around it. The main remedy that has emerged is called an onion routing network. Tor, which was originally partly funded by the U.S. Navy, is the best-known of those networks today.

In an onion routing network, people who oppose censorship volunteer to host access points. If I want to reach a human rights researcher or (less sympathetically) want to buy ammunition online, I download a list of access points. I then send my message to one of the access points.

The access point has a list of other nodes in the onion routing network, and forwards my request to one chosen at random. The second node then routes my request to a third node, and so on. Like an onion, the anti-censorship network has many layers in order to make it hard to trace who sent a message to whom. The final node in the network routes my request to my recipient.

Because the lists of access points are public, censors know them too. It would be possible to block them all, and censors sometimes try. But the access points are numerous, change regularly, and often serve other purposes besides routing through the network. Sophisticated nodes introduce random delays between receiving a message and passing it on, to make it harder for a snooping observer to realize that the two messages are related.

Back to my successfully delivered request. Some information must be stored somewhere to allow the response to come back to me, and that's the feature of onion routing networks that is most vulnerable to attack. As in other areas of cybersecurity, the designers of onion routing networks are in constant competition with attackers.

Non-Inclusive Web Design

The final barrier I'll discuss in this article is web page designs that require a visitor to have good eyesight, good hearing, a steady hand, or some other trait that parts of the population lack. When advocates for the differently abled talk about "accessibility," they refer to designs that present no difficulties to anyone, or (because that's hard to achieve) offer workarounds for people with difficulties. Examples of accessibility features include:

◉ Supplementing different colors with other visual or textual cues to the differences in a web page

◉ Allowing text to be enlarged by the viewer

◉ Offering a textual description for each image, so that a person using a screen reader gets the most important information

◉ Adding closed-caption text to videos

◉ Allowing visitors to select elements from a screen without having to point and click

◉ Using familiar or standard design elements, so that visitors can apply knowledge they have learned from other sites

Many online tools exist to help designers check accessibility. In the United States, web sites should do whatever they need to conform to the Americans with Disabilities Act. Many companies also try to require accessibility on all their web sites. But most designers don't understand where their designs can exclude visitors, and guidelines often go unheeded.

Source: lpi.org

Tuesday 17 August 2021

Which Linux File System Should You Use?


A file system is one of those parts of an operating system that everyone uses, yet few people are aware of how it works.

Consider the old days, when offices kept records and files inside folders, bundled them into stacks, and placed them on the shelves where they belonged. You could group the folders by their registration dates or by the area they referred to. There were many ways to keep your files, yet each served the same purpose: easing work by keeping records structured and easy to find.

A file system is an architecture defining how files are stored and retrieved. It defines the format and logic governing how a newly created file is saved, what extra data is saved with it, where it is saved, and how it is accessed from where it was saved.

File systems are defined based on where they are used. There are file systems for operating systems, networks, databases, and other special purposes. Within an OS, a file system may reside on a hard disk, flash memory, RAM, or optical discs.

In this article we will be focusing on file systems for hard disks on a Linux OS and discuss which type of file system is suitable. Before that, let's get familiar with the various characteristics associated with a file system.

The Architecture of a File System

A file system mainly consists of three layers. From top to bottom:

1. Logical file system: interacts with user applications through an API, providing operations such as open, read, and close, and passes requests to the layer below (see the sketch after this list).

2. Virtual file system: enables multiple instances of the physical file system to run concurrently.

3. Physical file system: handles the physical aspects of the disk, managing and storing the physical blocks being read and written.
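
To see the logical layer in action, you can trace the file-related system calls an ordinary command issues. This is a Linux-specific illustration and assumes the strace utility is installed.

# watch the open/read/close calls that the file system layers serve
# while a simple command reads a single file
strace -e trace=openat,read,close cat /etc/hostname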


Characteristics of a File System


◉ Space Management: how the data is stored on a storage device, pertaining to the memory blocks and the fragmentation practices applied to them.

◉ Filename: a file system may place certain restrictions on file names, such as the name length, the use of special characters, and case sensitivity.

◉ Directory: the directories/folders may store files in a linear or hierarchical manner while maintaining an index table of all the files contained in that directory or subdirectory.

◉ Metadata: for each file stored, the file system stores various information about that file’s existence such as its data length, its access permissions, device type, modified date-time, and other attributes. This is called metadata.

◉ Utilities: file systems provide features for initializing, deleting, renaming, moving, copying, backup, recovery, and access control of files and folders.

◉ Design: due to their implementations, file systems have limitations on the amount of data they can store.

Some important terms:


Journaling:

Journaling file systems keep a log, called the journal, that tracks changes made to a file that have not yet been permanently committed to the disk, so that after a system failure the lost changes can be recovered.
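
On an ext3 or ext4 file system you can confirm that a journal is present. This is a minimal sketch that assumes /dev/sda1 is an ext-family partition; adjust the device name for your system.

# print the superblock and filter for the journal-related entries
sudo dumpe2fs -h /dev/sda1 | grep -i journal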

Versioning:

Versioning file systems store previously saved versions of a file; copies of the file are stored based on previous commits to the disk, minute by minute or hourly, to create a backup.

Inode:

The index node (inode) is the representation of any file or directory, holding parameters such as the size, permissions, ownership, and location of the file or directory.
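
A quick way to look at inodes on any Linux system:

# print the inode number of a file
ls -i /etc/hostname

# dump the metadata held in the inode: size, permissions, ownership,
# timestamps, and the link count
stat /etc/hostname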

Now we come to the part where we discuss the various implementations of the file system in Linux for disk storage devices.

Linux File Systems: 

Note: Cluster and distributed file systems will not be included for simplicity.

ext (Extended File System): 

Implemented in 1992, ext was the first file system designed specifically for Linux and the first member of the ext family of file systems.

ext2: 

The second ext was developed in 1993. It is a non-journaling file system that is preferred for flash drives and SSDs. It solved ext's timestamp problem by keeping separate timestamps for access, inode modification, and data modification. Because it is not journaled, lengthy file system checks after an unclean shutdown make it slow to come up at boot time.

Xiafs: 

Also developed in 1993, this file system was less powerful and functional than ext2 and is no longer in use anywhere.

ext3: 

The third ext, developed in 1999, is a journaling file system. It is reliable and, unlike ext2, it prevents long delays at system boot if the file system is in an inconsistent state after an unclean shutdown. Other factors that make it better than ext2 are online file system growth and HTree indexing for large directories.

JFS (Journaled File System):

First created by IBM in 1990, the original JFS was open-sourced and implemented for Linux in 1999. JFS performs well under many kinds of load, but it is not commonly used anymore due to the release of ext4 in 2006, which gives better performance.

ReiserFS: 

ReiserFS is a journaling file system developed in 2001. Despite its earlier issues, it has tail packing as a scheme to reduce internal fragmentation. It uses a B+ tree that gives less-than-linear time for directory lookups and updates. It was the default file system in SUSE Linux from version 6.4 until SUSE switched to ext3 with version 10.2 in 2006.

XFS: 

XFS is a 64-bit journaling file system and was ported to Linux in 2001. It now acts as the default file system for many Linux distributions. It provides features like snapshots, online defragmentation, sparse files, variable block sizes, and excellent capacity. It also excels at parallel I/O operations.
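
As a small hands-on aside, XFS ships with its own management tools for the features mentioned above; the mount point and device below are assumptions for a lab setup.

# grow a mounted XFS file system to fill its (already enlarged) device
sudo xfs_growfs /srv/data

# report and reduce file fragmentation on a mounted XFS file system
sudo xfs_fsr /dev/sdb1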

SquashFS: 

Developed in 2002, this file system is read-only and is used only with embedded systems where low overhead is needed.

Reiser4: 

Reiser4 is an incremental successor to ReiserFS, developed in 2004. However, it is not widely adopted or supported on many Linux distributions.

ext4: 

The fourth ext, developed in 2006, is a journaling file system. It is backward compatible with ext3 and ext2, and it provides several other features, including persistent pre-allocation, an unlimited number of subdirectories, metadata checksumming, and large file sizes. ext4 is the default file system for many Linux distributions and can also be accessed from Windows and macOS with additional drivers.

btrfs (Better/Butter/B-tree FS): 

It was developed in 2007. It provides many features such as snapshotting, drive pooling, data scrubbing, self-healing and online defragmentation. It is the default file system for Fedora Workstation.

bcachefs: 

This is a copy-on-write file system that was first announced in 2015 with the goal of performing better than btrfs and ext4. Its features include full file system encryption, native compression, snapshots, and 64-bit checksumming.

Others: Linux also has support for file systems from other operating systems, such as NTFS and exFAT, but these do not support standard Unix permission settings. They are mostly used for interoperability with other operating systems.

Below are two tables listing the criteria on which file systems can be compared:


Please note that there are more criteria than the ones listed in these tables. The tables are meant to give you an idea of how file systems have evolved.

Parameters                     ext           ext2           Xiafs         ext3           JFS
Max. filename length (bytes)   255           255            248           255            255
Allowable characters in        any byte      any byte       any byte      any byte       any Unicode
directory entries              except NUL    except NUL, /  except NUL    except NUL, /  except NUL
Max. pathname length           undefined     undefined      undefined     undefined      undefined
Max. file size                 2 GB          16 GB – 2 TB   64 MB         16 GB – 2 TB   4 PB
Max. volume size               2 GB          2 TB – 32 TB   2 GB          2 TB – 32 TB   32 PB
Max. no. of files              –             –              –             –              –
Metadata-only journaling       No            No             No            Yes            Yes
Compression                    No            No             No            No             No
Block sub-allocation           No            No             No            No             Yes
Online grow                    No            No             –             Yes            No
Encryption                     No            No             No            No             No
Checksum                       No            No             No            No             No

Parameters                     ReiserFS           XFS           Reiser4        ext4                btrfs
Max. filename length           4032 bytes         255           3976           255                 255
                               (255 characters)
Allowable characters in        any byte           any byte      any byte       any byte            any byte
directory entries              except NUL, /      except NUL    except NUL, /  except NUL, /       except NUL, /
Max. pathname length           undefined          undefined     undefined      undefined           undefined
Max. file size                 8 TB               8 PB          8 TB (on x86)  16 GB – 16 TB       16 EB
Max. volume size               16 TB              8 EB          –              2^32                2^64
Max. no. of files              –                  –             –              2^32                2^64
Metadata-only journaling       Yes                Yes           No             Yes                 No
Compression                    No                 No            Yes            No                  Yes
Block sub-allocation           Yes                No            Yes            No                  Yes
Online grow                    Yes                Yes           Yes            Yes                 Yes
Encryption                     No                 No            Yes            Yes (experimental)  No
Checksum                       No                 Partial       No             Partial             Yes

Observations


We see that XFS, ext4, and btrfs perform the best among all the file systems; in fact, btrfs looks almost like the best. Despite that, the ext family of file systems has been the default for most Linux distributions for a long time. So what made the developers choose ext4 as the default rather than btrfs or XFS? Since ext4 is so important to this discussion, let's describe it a bit more.

ext4:

ext4 was designed to be backward compatible with its previous generations, ext3 and ext2. It improves on them in the following ways (a short hands-on sketch follows this list):

◉ It provides a large file system as described in the table above.

◉ Utilizes extents, which improve large-file performance and reduce fragmentation.

◉ Provides persistent pre-allocation which guarantees space allocation and contiguous memory.

◉ Delayed allocation improves performance and reduces fragmentation by effectively allocating larger amounts of data at a time.

◉ It uses HTree indices to allow an unlimited number of subdirectories.

◉ Performs journal checksumming which allows the file system to realize that some of its entries are invalid or out of order after a crash.

◉ Support for time-of-creation timestamps and improved timestamp granularity.

◉ Transparent encryption.

◉ Allows the inode tables to be cleaned in the background, which in turn speeds up initialization. The process is called lazy initialization.

◉ Enables write barriers by default, which ensures that file system metadata is correctly written and ordered on disk, even when write caches lose power.
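
As a short hands-on sketch, you can create an ext4 file system inside a loop file (so no real disk is touched; the path /tmp/ext4.img is an assumption) and inspect the features discussed above:

# create a 512 MB scratch file to act as a disk
dd if=/dev/zero of=/tmp/ext4.img bs=1M count=512

# create an ext4 file system on it (-F lets mkfs accept a regular file)
mkfs.ext4 -F /tmp/ext4.img

# list the features enabled on the new file system
tune2fs -l /tmp/ext4.img | grep -i features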

There are still some features in the process of development, such as metadata checksumming, first-class quota support, and large allocation blocks.

However, ext4 has some limitations. It does not guarantee the integrity of your data: if data is corrupted while already on disk, ext4 has no way of detecting or repairing such corruption. Nor can ext4 securely delete files, which would require overwriting them upon deletion; as a result, sensitive data can end up in the file system journal.

XFS performs very well for large file systems and high degrees of concurrency, and it is stable; yet there is no solid borderline that would make you choose it over ext4, since the two perform about the same, unless you need a file system that directly solves a problem of ext4, such as a capacity greater than 50 TiB.

btrfs, on the other hand, despite offering features like multiple-device management, per-block checksumming, asynchronous replication, and inline compression, does not perform the best in many common use cases compared to ext4 and XFS. Several of its features can be buggy and can result in reduced performance and data loss.

Source: geeksforgeeks.org

Saturday 14 August 2021

7 Reasons Why Programmers Should Use Linux

Linux is an operating system, just like macOS or Windows. A few years ago it was primarily used for servers and wasn't considered a very friendly choice for personal desktops, because its UI was complicated for an average user to understand. But in this digital era, Linux has been steadily improved by developers, and now you can find Linux in cars, on home desktops, and on enterprise servers.

More Info: 010-160: LPI Linux Essentials (Linux Essentials 010)


Roughly ten out of every 1,000 users across the globe are using this license-free operating system instead of struggling with third-party drivers for Windows 10 or Mac OS X 10.11. Wondering about the reasons behind the growing popularity of Linux over proprietary OSs like Windows and macOS? Let's list the top reasons that paint a clear picture of the increasing usage of Linux among programmers, developers, and testers working for a business venture.

1. The Linux Design Is Highly Secure


Linux is developed and deployed with security in mind, which helps programmers avoid or eliminate viruses and other harmful malware. If you try to make changes to the system design or its configuration, you need the permissions of the root user, i.e., the Linux administrator. Such a secure design doesn't let attackers do much damage to a system governed by a variety of read and write privileges. Thus, you can browse the internet or run other files and programs without worrying whether the system will get infected. And unlike Windows, Linux won't be generating logs or uploading data from your system, making it exceedingly privacy-focused. If you are still worried about the vulnerability to viruses or malware, you can install an antivirus like Avast or Norton to secure your system further.

2. Linux Offers Dozens of Customization Options


Customization means modifying software, or the entities attached to the hardware, to an individual's preferences. Linux gives its users the advantage of customizing their systems to match the complexity of their computing environments. A few desktop environments offered by Linux are Cinnamon, Unity, GNOME, and KDE. Beyond that, users can tweak desktop utilities (such as disk repair, backup, file management, and networking programs), add new fonts and icons with striking effects, reskin desktop themes with Conky, and so on. Additionally, shell scripting in Linux can be used to perform special operations in a simple, straightforward manner. All these customization options make Linux efficient at letting users change displays and icons according to their preferences, creating a better overall user experience.

3. Linux Optimally Uses All the Hardware Resources Available


No one can deny that hardware tends to become outdated as soon as newer versions of operating systems are released, because newer operating systems demand technical specifications that older hardware cannot meet. Still wondering whether Linux supports such obsolete hardware? Yes: with the variety of modules available in its installation procedure, users can pick from a range of hardware requirements (like Intel 486SX, 386SX, or 486DX) and let Linux optimally use the available resources. Besides, Linux has been ported to non-Intel architectures such as MIPS, Alpha AXP, SPARC, PowerPC, and Motorola 68K. All this has made Linux an extremely resource-efficient operating system that runs suitably on hardware specifications (such as less than 256 MB of memory) that other operating systems cannot even dream of. Magic, isn't it?

4. Linux Will Let You Write a Variety of Bash Scripts


Bash scripts are shell scripts composed of a variety of commands for executing tasks in a Linux-based environment. Such tasks might include managing mailing lists, removing duplicates while extracting business or non-business email addresses, or adding the formatting that lets results be read well by other programs. Such scripts can be hard to understand at first, but they are capable of flexibly and quickly joining existing programs into powerful Linux solutions. Bash's syntax is easy to use, and little effort is required to identify performance errors while debugging, which saves real time. All these merits encourage Linux programmers to create and execute bash files to automate frequently performed Linux operations. A small example follows.
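
As a concrete illustration of the kind of task described above, here is a minimal sketch of a bash script that extracts and de-duplicates email addresses from a text file. The file name contacts.txt and the simplified address pattern are assumptions for the example.

#!/bin/bash
# extract-emails.sh -- pull email addresses out of a text file,
# remove duplicates, and print them sorted
# usage: ./extract-emails.sh contacts.txt

grep -Eo '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' "$1" | sort -u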

5. The Linux Community Is Readily Available for 24/7 Support


Linux offers commendable community support through various forums on the internet. Such forums host Q&A sessions and discussions related to the kernel, the shell, and the frameworks supporting Linux applications. How do such discussions help? A lot of volunteers (programmers as well as analysts) are readily available to clarify your queries out of their passion for Linux. Enterprises such as Novell and Red Hat also offer paid support options, which help in sharing information and tips related to Linux and its applications. Such round-the-clock support builds customer loyalty, because community members help users find somebody who has already done something similar to what they are trying to do. All this helps organizations build relationships with their customers on the grounds of satisfaction, loyalty, and better engagement, with multiple forum threads holding solutions to their Linux-based problems.

6. Linux Product Versions Support Reliability and Stability  



Reliability and stability are interrelated, because a product or an OS can't be stable in the market if the accuracy of its results can't be trusted. Linux, in this context, has rocked the 2021 market with reliable and stable products like Ubuntu, Fedora, Gentoo, and Debian, all with an availability rate of approximately 99 percent. According to the statistics, 96.3 percent of the top one million web servers run Linux. Also, 90 percent of current cloud infrastructure operates on Linux, allowing IT systems to be maintained and managed at reduced cost. One reason for this popularity is that a Linux server rarely needs rebooting after a patch or update is applied; with this characteristic, Linux has demonstrated an uptime of 99.9 percent. This makes Linux more reliable and stable while managing existing and ongoing business processes at reduced cost.

7. Linux Complies Well with Open Source Licensing


Open source means that anyone reviewing the source code may inspect, modify, or enhance it without restrictions on its original rights. The licensing of the Linux operating system supports this, which is why developers and programmers in different countries can build their own Linux versions with no strings attached. This is quite helpful, as countries can use such OSs for defense, manufacturing, or communications without paying for licenses while creating their own. Some of these Linux-based operating systems are Kylin, Nova, BOSS, IGOS Nusantara, and Pardus. All of them have helped their countries control and document their total IT costs without compromising on quality and scalability.

Source: geeksforgeeks.org

Thursday 12 August 2021

My Path to Making Open Source Software


I’m the kind of person who has a passion for computers and technology: since I was a boy I have wondered how things work. Looking back, I can recognize some important milestones that shaped my life and connected me with this world. I have pursued software development in school and in my career, working as a Python programmer, Django website maintainer, Arduino hacker, and GNU/Linux administrator.

In Childhood

When I was a kid in the 1990s, my parents bought a computer to improve their work and paid for a basic internet connection via modem. My mom used to work in front of the computer, waiting a long time for each web page to load.

I remember asking myself “What is this thing?” At that time I had no idea about the great work by geniuses and pioneers of computers and electronics—I was just a curious boy, like many others.

High School

In my last years of high school, some of my subjects were related to electricity and electronics. I took an introductory course in that field, but honestly, I was not very interested in Faraday’s Law, and electronics never really caught my attention. Fortunately, the informatics class had a programming course, all about flowcharts and Visual Basic programming.

At that moment I realized that I was much more interested in programming than in other subjects, since I was able to tell the computer to do something I wanted. But first, I knew I would have to learn “how to talk” with it. That’s pretty much the key to my future path…

University

Time passed, and I decided to study systems engineering. Honestly, at first I failed programming and didn’t understand math. Later, I enrolled in another university and this time was able to pass!

The fun part is, when a friend of mine called José O. V. helped me with a common task, he told me: “We can do it better by using something called Ubuntu.” He taught me the philosophy behind the project and the basics of installation. I really enjoyed that experience—it was something I’d never seen before; but mainly, I was happy because I was using a computer in a way that best suited my needs.

This moment with a friend highlights the importance of help and support from the community when you get stuck or things are broken and no one around you knows how to solve it. Imagine me as a young man getting info from manuals, visiting forums, searching the web, and so on. I learned about sharing knowledge with others: experience gained in a particular area, tools, tips on how to fix issues, and so on.

My actual open source journey started in 2009 with Ubuntu 9.10, Karmic Koala. After that, the parent project and the open source software community captured my attention; they became my foundation.

At the same time, in university, I learned to program in Java, Python, and a little C++. My favourite subject was Operating Systems, because it pertains to the world of Unix and the Linux kernel.

I used to code Bash scripts to automate tasks and type in the command line for system monitoring. Most of the time I was focused on distro hopping and desktop environment tuning.

Also, I participated in some programming contests, and with a classmate named Magaly, made a project to help children with disabilities recognize and learn objects with the use of aromas; for that, we implemented a prototype using Arduino. For my thesis project, I worked together with my partner Liliana to implement a robotic assistant with Raspberry Pi.

At Work

My first job was at Cátedra Unesco Assistive Technologies for Educational Inclusion. I needed to implement a small infrastructure to host the projects. To complete this task I set up a Proxmox hypervisor to handle containers and virtual machines with an appropriate GNU/Linux server distro on top.

At this point, I learned useful tricks and tips about server management tasks such as installation/configuration, storage management using RAID and LVM, and reverse proxy redirection using Apache Guacamole to retrieve virtual instances remotely.

I also implemented a basic web application using the Django framework. I used the Godot engine for interactive application development with the help of GUI interface design, and the GDScript language to control robotic assistants for educational projects made by my teammates.


2020 was a very difficult year because of the COVID-19 pandemic. Despite that, some students where I worked had the opportunity to get a virtual instance on the server to continue with their studies.

Months ago my contract expired, but I continued using Django for another project related to web accessibility. As you can see, “libre and open source” software has been a great tool in the academic area, but mainly and most importantly a way to help others improve their lives.

Today

I’m continuing my learning and am applying for a Linux or Open Source Certification. I would like to contribute to an open source project related to GUIs or desktop environment development.

Top Utilities and Applications for My Daily Use

For a desktop environment that lets me concentrate on my work, I’ve installed Ubuntu with GNOME (vanilla flavour). On top of that I use the PyCharm IDE for development, Bash and Python for common system tasks, Mozilla Firefox as my web browser, LibreOffice as an office suite, GIMP and Inkscape for image editing, draw.io for diagrams, and Git as a version control system.

Source: lpi.org

Tuesday 10 August 2021

New Portal For Learning Materials


Get Concentrated Knowledge For Exam Preparation at learning.lpi.org

With learning.lpi.org, the Linux Professional Institute (LPI) has created a new central contact point for LPI exam preparation. The portal will continuously provide free learning materials for teachers, learners, and partners. learning.lpi.org started on August 27 with preparation materials for Linux Essentials. More content will follow soon.

"The LPI attaches great importance to the fact that exam candidates can freely determine their learning materials and paths," explains Fabian Thorns, Director of Certification Development at Linux Professional Institute. "The purpose of the portal is to significantly simplify access to knowledge by pooling all known resources in one place. This involves learning materials targeted towards our certifications on the one hand, but also third-party materials and content from the Publishing Partner Program on the other. learning.lpi.org is to become the central point of contact for everyone preparing for an LPI exam," says Fabian Thorns.

Read More: LPIC-3 303: Linux Enterprise Professional Security

Dr. Markus Wirtz, Manager, Education Programs describes how this will look: "The learning materials follow the learning objectives of the respective exam in structure and, in addition to the thematic requirements, also take their weighting into account. As you can already see from the materials for Linux Essentials, they offer short teaching units with a clear structure, from theoretical basics, to guided and open exercises, to their solutions."

"The study materials provided at learning.lpi.org are created in a worldwide network of open source and Linux specialists", explains LPI Managing Director, G. Matthew Rice. The learning.lpi.org project provides the infrastructure, quality control, and publication. This ensures an open development process with the community and the required quality.

The source language for all learning materials is English. Translations into other languages are planned and are already in progress for German and French. If you would like to participate in writing, proofreading, or translating, you are cordially invited. "We are happy to welcome supporters who have very good knowledge of Open Source and Linux and who work with Git and AsciiDoc files. You can contact us at learning@lpi.org," says Dr. Markus Wirtz.

A further component of learning.lpi.org is the intensive cooperation with publishers, platforms, and authors. "To this end, we have launched the LPI Publishing Partner (LPP) program," explains Fabian Thorns. "It gives our publishing partners access to the worldwide network of Linux specialists and allows learners to benefit from reliable sources for exam preparation.” Additional information on the program is available at learning.lpi.org.

Source: lpi.org

Saturday 7 August 2021

LPI Members elected the new Board of Directors


At its Annual General Meeting held June 26, 2021, Linux Professional Institute (LPI) completed the last stage in a major change to its governing structure. For the first time, LPI’s Members have chosen its Board and directly participated in its high-level decision-making.

A new Board of Directors was elected by LPI’s membership using a ranked-choice voting system. LPI’s 2021 Board of Directors is (listed alphabetically):

◉ VM (Vicky) Brasseur (USA) - 1 year term

◉ Christopher “Duffy” Fron (USA, returning) - 2 year term

◉ Dorothy Gordon (Ghana, returning) - 3 year term

◉ Jon “maddog” Hall (USA, returning) - 3 year term

◉ Klaus Knopper (Germany) - 3 year term

◉ Mark Phillips (USA) - 2 year term

◉ Uirá Ribeiro (Brazil) - 3 year term

◉ Torsten Scheck (Germany, returning) - 1 year term

◉ Bryan J Smith (USA, returning) - 1 year term

◉ Thiago Sobral (Brazil) - 2 year term

The new Board commenced its work July 22nd at a preparatory meeting. At that time the Directors chose a Chair, Jon “maddog” Hall. The new Directors also determined who would serve three-year, two-year or one-year terms. Going forward this “staggered” approach will allow approximately one-third of the board to be elected each year. The new Board’s official term began July 26, one month after the AGM as specified by the Bylaws.

Almost 50% of LPI’s eligible members cast votes in the board election, and 19% participated in the AGM, which was held by virtual conference call.

The election process started in November 2020 with the creation of a Nomination Committee to solicit and evaluate candidates, producing a ballot of 16 candidates from which Members were to choose 10. The election campaign and voting started May 12 and closed at the AGM.

“This election and AGM have been among the most important events in LPI’s history,” said current Board Chair Jon "maddog" Hall. “It has taken several years and countless person-hours to design and develop the membership program that was necessary to reach this point, but it has been well worth it. LPI is committed to helping develop open source professionals.”

Source: lpi.org

Thursday 5 August 2021

ICT Pro partnership with Linux Professional Institute demonstrates the demand for FOSS


ICT Pro is a corporate training firm based in Brno, Czech Republic, that has been providing a wide range of training for more than 25 years. Its English-language training has already been offered in 14 countries on four continents.

The courses span proprietary technologies from vendors such as Microsoft, IBM, and VMware, as well as open source technologies such as Linux, Kubernetes, and MySQL. Topics include system administration, programming, networking, security, and what they call "soft skills," such as communications, personal development, and project management.

Read More: LPIC-3 304: Linux Enterprise Professional Virtualization and High Availability

ICT Pro is both an LPI Platinum Approved Training Partner and a Pearson VUE Authorized Test Center. Thus, they offer certification exams, including the LPI exams, to their customers.

They are currently using the LPI Learning Materials to cover LPIC-1 Exam objectives (101 and 102) in an intensive five-day bootcamp.

ICT Pro generally provides hands-on, face-to-face training, but in the COVID-19 era, moving online became unavoidable. The company has enhanced its approach with online learning in virtual environments, as well as blended learning (also known as hybrid learning), which integrates online technology and digital media with traditional instructor-led classroom activities. The ratio of online to in-person courses is currently about 50/50.

Concerning their partnership with LPI, ICT Pro's CEO Radek Havelka says, "We at ICT Pro aim to maintain and further strengthen our position on the IT and soft skills corporate education market. Becoming an LPI Platinum partner is a great opportunity for us to not only stay in touch with modern education for business development, but also provide a more complete professional training portfolio, especially focusing on open-source technologies."

Regarding the importance of free and open source technologies in the job market, Havelka says, "Back in 2018, the Tech Pro Research survey [which is behind a pay wall] found that Linux skills were back on top as the most sought-after skill, with 80 percent of hiring managers looking for tech professionals with Linux expertise. The latest surveys, such as the 2020 Open Source Jobs Report from the Linux Foundation (https://training.linuxfoundation.org/resources/2020-open-source-jobs-rep...), show that the number of hiring managers who report difficulty finding sufficient talent with open source skills has risen to 93%."

Elzbieta Godlewska, LPI's representative for Poland, Czech Republic, and Slovakia, writes, "ICT Pro was LPI Central Europe's first training partner in the Czech Republic. I am very glad to see this partnership extend to the Platinum LPI level. We look forward to many more projects together and an even bigger number of LPI-certified professionals in this part of the world."

Source: lpi.org

Tuesday 3 August 2021

LPI EMEA partners met virtually

Linux Professional Institute (LPI) regularly hosts meetings to which Partners are invited throughout the year, but none had been region-specific. This event needed to be digital on one hand; on the other, we wanted to highlight the sense of local community by targeting the Partners of a specific region. The EMEA Partner Meeting was therefore the first of its kind in the region.

After many months of planning, on June 22 we were finally live for the LPI EMEA Partner Meeting.

The meeting was a success, and it allowed us to experience, virtually, what is a huge part of LPI's identity: the sense of community.

The Plan

I worked closely with Kaitlin Edwards, our Community Events Manager. When you are in the middle of a pandemic, there is a need to reinvent and find new solutions to a lot of things.

Yes, we are a digital-oriented organization, and there are plenty of digital tools out there. But can you give your Meeting the "human touch" that is essential if you want to share the sense of community I mentioned earlier?

You need the right tool, and you need to find the right way of using it. For this plan, the tool was Hopin, a virtual event platform.

The "Ingredients"

We do believe that the panel of talks we brought to the Meeting hit the target. LPI has many new features and projects coming up in the next few months, and the Meeting's attendees had the opportunity to get a snapshot of them. At the same time, we wanted to share insights and thoughts about what took place in 2020 and how the past year influenced the IT world.

The variety of talks, set up by Kaitlin, gave us all a chance to look at LPI's past, present, and future.

Rafael Peregrino da Silva, our Director of Partnerships and Sponsorships, highlighted how the last year had been one of profound reorganization of the Partner relations framework: a reorganization that, with the launch of the Partner Portal, makes Partners' day-to-day activity easier and smoother.

At the same time, programs like Membership, Community, and Employability give us even more tools to fulfil our mission: empowering the use of Open Source by supporting the people who work with it.

Fabian Thorns, the Director of Product Development, hit us with a ton of... juicy news. The LPIC-3 Exams are facing a massive overhaul. A new format for the Linux Essentials certification is ahead, while LPIC-2 will be updated as well.

Kenny Armstrong is the LPI Training Advisor, and we had him team up with our Simo "#LPIMemberJourney" Bertulli for a back-and-forth interview about - of course! - the Membership Program: its present and its future, built on integrations with the Learning area and the broader community, as well as the development of interactions with the Partner environment.

Dr. Markus Wirtz, the Manager of Education Programs, shared his vision of what is, in my opinion (OK, I am a bit biased here), one of LPI's most effective projects: the Learning Portal. It was an interesting behind-the-scenes look at a project that constantly deals with merging openness and much-needed consistency.

When it came to choosing a guest speaker, we immediately thought of Dorothy Gordon, and according to the feedback we received after the event, we were right. Dorothy is, among her many other outstanding roles, Chair of the Intergovernmental Council for UNESCO's Information for All Programme and a member of LPI's Board of Directors. Her talk, "Power and Control in Digital Spaces," told us how vital a healthy Open Source culture is, and how essential it will be in addressing the issues that the geopolitics of digital spaces raises for education, climate change, and the constant risk of monopolies.

Some fun: the concurrent sessions

There is no convention without fun, and if the convention is digital, the fun has to be digital as well. But we had it covered! Our own Reiner Brandt, from the Linux Professional Institute Central Europe team, brilliantly managed a Kahoot quiz session. It was great fun, and congratulations to the winner on the well-deserved gigantic penguin!

There is no convention without more specific talks either: we had those too, with Reiner's session on "Linux in the Classroom", Sonia Ben Othman's "The 4C for better graduate employability", and Markus's "StartIT – an upcoming program for ICT newbies".

The EMEA Meeting in bullet points

When you put a lot of energy into organizing a meeting, you hope for that energy to come back in the form of a positive outcome: you have to pass that energy on to the attendees for it to come back again!


And the 2021 LPI EMEA Partner Meeting has indeed been a success. We started nurturing the EMEA community in the weeks before, setting up a welcoming environment: all part of the plan! I was personally involved in setting up a dedicated Mattermost channel to welcome our Partners, where the conversation started before the actual Meeting and where it is still going on. I like to think that this approach played a nice role in the exceptional turnout that we had.

But let's take a look at the Meeting's figures.

◉ 8 talks and sessions;

◉ 5 and a half hours of content;

◉ 71 attendees, from 27 countries;

◉ 277 chat comments;

◉ More than 13,000 minutes altogether spent attending the talks.

Source: lpi.org