Thursday 30 December 2021

Linux Essential Interview Questions (LPI 010-160)

Linux Essential Interview Questions, LPI 010-160, LPI Linux Essentials, Linux Certification, LPI Exam Prep, LPI Guides, LPI Career

Overview

The LPI Linux Essentials (010-160) exam is offered by the Linux Professional Institute (LPI), which works worldwide to help individual users, government entities, and industries embrace open source technologies. This entry-level exam reflects how traditional Information and Communication Technology (ICT) job roles are being redefined to require more Linux skills. Therefore, we have worked with our experts to design these Linux Essentials (LPI 010-160) interview questions to help you ace your exam with flying colors.

Now, let’s begin with some Linux Essentials (LPI 010-160) interview questions.

1. What does a Linux distribution consist of?

A Linux distribution is a bundle that consists of a Linux kernel and a selection of applications that are maintained by a company or user community.

2. What is Ubuntu?

Ubuntu is a Debian-based distribution. It was created by Mark Shuttleworth and his team in 2004 with the goal of providing an easy-to-use Linux desktop environment. Ubuntu’s mission is to provide free software to everyone all over the world, as well as to cut the cost of professional services.

3. What are Embedded Systems?

Embedded systems are a combination of computer hardware and software designed to have a specific function within a larger system.

4. Where are Embedded systems found?

Embedded systems are mainly found in automotive, medical, and even military applications. Due to this broad variety of applications, a number of operating systems based on the Linux kernel have been developed for use in embedded systems. A significant share of smart devices run a Linux kernel based operating system.

5. What do you understand by Raspberry Pi?

Raspberry Pi is a low-cost, credit-card-sized computer that can function as a fully featured desktop computer, but it can also be used within an embedded Linux system.

6. Why do you use Office Applications?

Office applications are used for editing files such as texts, presentations, spreadsheets, and other formats commonly used in an office environment. Moreover, these applications are usually organized in collections called office suites.

7. List the two main web browsers in the Linux environment?

◉ Google Chrome

◉ Mozilla Firefox

8. What is Thunderbird?

Mozilla develops several applications, such as the e-mail client Thunderbird. Many users opt for webmail instead of a dedicated e-mail application, but a client like Thunderbird offers extra features and integrates better with other applications on the desktop.

9. List some of the most popular multimedia applications for the Linux environment?

◉ Blender

◉ GIMP

◉ Inkscape

◉ Audacity

◉ ImageMagick

10. How do Linux machines share data?

The NFS protocol is the standard way to share file systems in networks equipped only with Linux machines. With NFS, a computer can share one or more of its directories with specific computers on the network, so they can read and write files in these directories. Moreover, NFS can even be used to share an entire operating system’s directory tree with clients that will use it to boot from.
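As a small sketch of how such a share is defined (the share path, subnet, and server name below are hypothetical examples, not taken from the original text), an NFS server lists its shared directories in /etc/exports and clients mount them over the network:

```shell
cd "$(mktemp -d)"
# A hypothetical /etc/exports entry: share /srv/data read-write with one
# subnet (written to a scratch file here rather than the real /etc/exports):
echo '/srv/data 192.168.1.0/24(rw,sync,no_subtree_check)' > exports.example
cat exports.example
# On a real server you would then re-export the shares:  exportfs -ra
# A client could mount the share with:  mount -t nfs server:/srv/data /mnt
```

The mount and exportfs commands are shown in comments because they require a running NFS server and root privileges.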

11. Expand and explain DHCP?

DHCP stands for Dynamic Host Configuration Protocol. DHCP is responsible for assigning an IP address to a host when a network cable is connected or when the device joins a wireless network. A DHCP server is also very useful in local area networks, automatically providing IP addresses to all connected devices.

12. List some of the most popular programming languages?

◉ JavaScript

◉ C

◉ Java

◉ Perl

◉ Shell

◉ Python

◉ PHP

13. What are the two major desktop environments in the Linux world?

◉ Gnome

◉ KDE

14. List some well-known open source hypervisors for Linux?

◉ Xen

◉ KVM

◉ VirtualBox

15. What are Cookies?

Cookies are small files a website can save on your computer in order to store and retrieve some kind of information that can be useful for your navigation.

16. Why is Encryption needed?

Whenever data is transferred or stored, precautions need to be taken to ensure that third parties may not access the data. Data transferred over the internet passes by a series of routers and networks where third parties might be able to access the network traffic. Furthermore, data stored on physical media might be read by anyone who comes into possession of that media. To avoid this kind of access, confidential information should be encrypted before it leaves a computing device.

17. What is TLS?

Transport Layer Security (TLS) is a protocol to offer security over network connections by making use of cryptography. TLS is the successor of the Secure Sockets Layer (SSL).

18. List the different types of shells in Linux?

There are several different shells on Linux; these are just a few:

◉ Bourne-again shell (Bash)

◉ C shell (csh or tcsh, the enhanced csh)

◉ Korn shell (ksh)

◉ Z shell (zsh)

19. List the different types of commands that the shell supports?

The shell supports two types of commands:

◉ Internal

◉ External
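The distinction can be checked with the shell's type builtin, which reports whether a command is internal to the shell or an external program found via the PATH:

```shell
# 'type' reports whether a command is a shell builtin (internal) or an
# external program located on the PATH (external):
type cd    # -> cd is a shell builtin
type ls    # -> ls is /bin/ls (the exact path may differ per system)
```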

20. In Bash, there are three types of quotes. List them?

◉ Double quotes

◉ Single quotes

◉ Escape characters
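A quick sketch of how each quoting style behaves in Bash (the variable name is arbitrary):

```shell
name="world"
echo "Hello, $name"     # double quotes: $name expands    -> Hello, world
echo 'Hello, $name'     # single quotes: no expansion     -> Hello, $name
echo "Hello, \$name"    # escape character: the backslash keeps the literal $
```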

21. Define Compression?

Compression is used to reduce the amount of space a specific set of data consumes. Compression is commonly used for reducing the amount of space that is needed to store a file. Another common use is to reduce the amount of data sent over a network connection.
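As a small illustration (the file names here are made up for the example), gzip shows the space savings on repetitive data:

```shell
cd "$(mktemp -d)"
# Build a deliberately repetitive (highly compressible) file:
for i in $(seq 1 1000); do echo "the same line over and over"; done > sample.txt
gzip -c sample.txt > sample.txt.gz   # -c writes to stdout, keeping the original
ls -l sample.txt sample.txt.gz       # the .gz file is a fraction of the size
```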

22. What is the difference between Lossy and Lossless algorithms?

Data compressed with a lossless algorithm can be decompressed back into its original form. Data compressed with a lossy algorithm cannot be fully recovered. Lossy algorithms are often used for images, video, and audio, where the quality loss is imperceptible to humans, irrelevant to the context, or worth the saved space or network throughput.

23. What is the use of Archiving tools?

Archiving tools are used to bundle up files and directories into a single file. Some common uses are backups, bundling software source code, and data retention.
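A minimal sketch using tar, the classic Unix archiving tool (the directory and file names are illustrative):

```shell
cd "$(mktemp -d)"
mkdir -p project/docs
echo "release notes" > project/docs/NOTES.txt
tar -czf project.tar.gz project/   # c = create, z = gzip-compress, f = archive file
tar -tzf project.tar.gz            # t = list the files stored in the archive
```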

24. What is I/O Redirection?

I/O redirection enables the user to redirect information from or to a command by using a text file. The standard input, output, and error output can be redirected, and the information can be taken from text files.
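The standard redirection operators can be sketched as follows (the file names are arbitrary):

```shell
cd "$(mktemp -d)"
echo "first"  > out.txt              # '>'  redirects stdout, overwriting the file
echo "second" >> out.txt             # '>>' appends to the file instead
wc -l < out.txt                      # '<'  takes standard input from a file
ls /no/such/path 2> err.txt || true  # '2>' captures standard error in err.txt
```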

25. List some Linux-based operating systems?

◉ Enterprise Linux

◉ Consumer Linux

26. List Windows-based Operating Systems?

◉ Windows Servers

◉ Windows Desktops

27. Mention the three categories of Linux distributions?

◉ Enterprise Grade Linux Distributions

◉ Consumer Grade Linux Distributions

◉ Experimental and Hacker Linux Distributions

28. What do Unix operating systems include?

◉ AIX

◉ FreeBSD, NetBSD, OpenBSD

◉ HP-UX

◉ Irix

◉ Solaris

29. What is Unix?

Before we had Linux as an operating system, there was Unix. Unix was sold together with hardware, and even today several commercial Unixes, such as AIX and HP-UX, are available on the market.

30. What do you understand by a motherboard?

All of a system’s hardware needs to interconnect. Therefore, a motherboard normalizes that interconnection using standardized connectors and form factors. It also provides support for the configuration and electrical needs of those connectors.

31. What is a device directory?

The device directory contains device files for all connected hardware devices. These device files are used as an interface between the devices and the processes using them.

32. Mention the two different types of device files?

Each device file falls into one of two categories:

◉ Block devices

◉ Character devices
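The category is visible in the first character of ls -l output (/dev/sda is shown only as a typical block-device name and may not exist on every system):

```shell
# The first character of 'ls -l' output identifies the device type:
ls -l /dev/null                     # starts with 'c': a character device
ls -l /dev/sda 2>/dev/null || true  # starts with 'b': a block device (if present)
stat -c %F /dev/null                # prints: character special file
```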

33. List the main memory types of Linux?

◉ Physical memory

◉ Swap
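Both can be inspected from the command line on a Linux system, for example:

```shell
# 'free' summarizes physical memory and swap usage (procps package);
# /proc/meminfo exposes the same figures directly from the kernel.
command -v free >/dev/null && free -h || true
grep -E '^(MemTotal|SwapTotal)' /proc/meminfo
```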

34. List the different layers of networking?

◉ Link Layer

◉ Network Layer

◉ Application Layer

35. What is the need to use a 31-bit subnet mask?

◉ In mixed routing mode, you can configure any external interface to use an IPv4 address with a 31-bit subnet mask.

◉ A 31-bit subnet mask is often used for an interface that is the endpoint of a point-to-point network.

36. What is the use of a 32-bit subnet mask?

◉ A 32-bit subnet mask defines a network with only one IP address.

◉ In mixed routing mode, you can only configure a 32-bit subnet mask for a physical external interface.

37. Can you use a 32-bit subnet mask for a virtual external interface?

No, we cannot use a 32-bit subnet mask for a virtual external interface, such as a VLAN or Link Aggregation interface, because you cannot configure a virtual external interface with a default gateway on a different subnet.

38. List the major prefix types in IPv6?

◉ Global Unique Address

◉ Unique Local Address

◉ Link Local Address

39. What are the three most common file types?

◉ Normal file

◉ Directory

◉ Soft link
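The type is visible in the first character of ls -l output (the file names below are arbitrary):

```shell
cd "$(mktemp -d)"
touch regular.txt            # a normal file
mkdir somedir                # a directory
ln -s regular.txt softlink   # a soft (symbolic) link
ls -l                        # first column starts with '-', 'd', or 'l' respectively
```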

40. What are Temporary Files?

Temporary files are files used by programs to store data that is only needed for a short time. These can be the data of running processes, crash logs, scratch files from an autosave, intermediary files used during file conversion, cache files, etc.

Source: testpreptraining.com

Tuesday 28 December 2021

Linux Professional Institute DevOps Tools Engineer 701-100 Exam

Linux Professional Institute, DevOps Tools Engineer, 701-100 Exam, 701-100 Exam Prep, LPI DevOps Tools Engineer, LPI Exam Prep, LPI Tutorial and Materials, LPI Guides, LPI Certification, LPI Career, LPI Preparation

Linux Professional Institute DevOps Tools Engineer 701-100 

The LPI DevOps Tools Engineer 701-100 Exam is a professional certification for DevOps professionals. The global exam certifies your knowledge of DevOps-related topics like container and machine deployment, configuration management, and monitoring, as well as technical skills in using open source tools like Docker, Vagrant, Ansible, Puppet, Git, and Jenkins. The certificate attests to your skills and knowledge in implementing and managing DevOps-related tasks.

Who should take the exam?

The LPI DevOps Tools Engineer 701-100 Exam is for professionals engaged in software development, software testing, and DevOps-related tasks in a company. DevOps engineers, DevOps managers, software test managers and engineers, and software developers are all well suited to take the certification for better job prospects.

Course Outline

The important topics covered in the Linux Professional Institute DevOps Tools Engineer 701-100 exam are:

◉ Topic 701: Software Engineering - Total Weight: 18

◉ Topic 702: Container Management - Total Weight: 16

◉ Topic 703: Machine Deployment - Total Weight: 8

◉ Topic 704: Configuration Management - Total Weight: 10

◉ Topic 705: Service Operations - Total Weight: 8

Exam Format and Information

Exam Name: Linux Professional Institute DevOps Tools Engineer 

Exam Code: 701-100

Exam Duration: 90 mins

Exam Format: Multiple Choice and Multi-Response Questions

Exam Type: DevOps

Number of Questions: 60 Questions

Eligibility/Pre-Requisite: NIL

Exam Fee: Country-specific pricing; check https://www.lpi.org/exam-pricing.

Exam Language: English, Japanese

Pass Score: 500/800

Source: testpreptraining.com

Saturday 25 December 2021

LPIC-1 Certified Linux Administrator 101-500 Exam

LPIC-1 Certified Linux Administrator, 101-500 Exam, LPI Certification, LPI Exam Prep, LPI Tutorial and Material, LPI Exam Preparation, LPI Preparation, LPI Career

The LPIC-1 Certified Linux Administrator 101-500 Exam is the first test required for gaining your LPIC-1 certification. This exam certifies a candidate on various Linux operating system administration tasks, such as administering users, managing packages, navigating Linux filesystems, managing processes, starting services, and using remote network shares.

Who should take the exam?


The certification is suitable for IT and Non-IT professionals who want to make a career in Linux System Administration domain. The certification fits well for system administrators, Linux administrators, network administrators, Linux students and enthusiasts.

Course Outline


The important topics covered in LPIC-1 Certified Linux Administrator 101-500 exam are:

◉ Topic 101: System Architecture - Total Weight: 8

◉ Topic 102: Linux Installation and Package Management - Total Weight: 12

◉ Topic 103: GNU and Unix Commands - Total Weight: 25

◉ Topic 104: Devices, Linux Filesystems, Filesystem Hierarchy Standard - Total Weight: 14

Exam Format and Information


Exam Name: LPIC-1 Certified Linux Administrator 

Exam Code: 101-500 

Exam Duration: 90 mins

Exam Format: Multiple Choice and Multi-Response Questions

Exam Type: Linux Administration

Number of Questions: 60 Questions

Exam Language: English, German, Japanese, Portuguese, Spanish, Chinese (Simplified) and Chinese (Traditional).

Pass Score: 500/800

Source: testpreptraining.com

Thursday 23 December 2021

Linux Professional Institute (LPI) Lowers Exam Prices in More Than 140 Countries


Linux Professional Institute (LPI), LPI Exam, LPI Exam Prep, LPI Certification, LPI Career, LPI Learning, LPI Tutorial and Materials, LPI Skills

Linux Professional Institute (LPI), producer of the world’s most popular skills certification, is lowering exam prices in more than 140 countries and territories for 2022 and beyond. The reduction comes as part of a major change in LPI’s pricing structure, adjusting exam prices for each country based on the Inequality-adjusted Human Development Index (IHDI) of the United Nations Development Program.

“It’s our mission to increase and improve the career opportunities in Open Source software for everyone,” said LPI Executive Director Matthew Rice. “As a result, we have always acted globally as an organization, and it’s important for us to recognize the very real disparities between the world’s economies. As a nonprofit our goal is to maximize accessibility of our programs, not revenue.”

Linux Professional Institute (LPI), LPI Exam, LPI Exam Prep, LPI Certification, LPI Career, LPI Learning, LPI Tutorial and Materials, LPI Skills
(“Essentials” exams relate to the LPI certificate programs of that name, including “Linux Essentials”. “Professional” exams relate to LPI certification programs such as the three-level “LPIC” Linux administration series as well as the DevOps and BSD certifications.)

As part of a regular examination of its pricing, for 2022 LPI is simplifying to four tiers rather than 14, in one of three currencies: In Japan, exams are priced in Yen; in the Euro zone, exams are priced in Euros; and in all other countries exams are priced in the local equivalent of US dollars.

For the 2022 pricing adjustments, the costs of the top and bottom tiers have not changed. However, because of adjustments using the IHDI, many countries and territories have been moved to new tiers. Prices will be lowered in 145 locations, in many cases significantly. Examples:

◉ South Africa (-42%)
◉ Guatemala (-41%)
◉ India (-36%)
◉ Brazil (-25%)

Prices will increase slightly in 51 locations, all highly developed economies, to bring them in line with current prices at the top level. Examples:

◉ Bermuda (+15%)
◉ Australia (+6%)
◉ Singapore (+6%)
◉ United Kingdom (+4%)

Prices will remain unchanged in another 51 locations, mostly reflecting those already at the highest and lowest prices charged for our exams. Examples:

◉ Japan
◉ USA
◉ Haiti
◉ Senegal


“Using the IHDI has led to a number of adjustments, mostly leading to price reductions that will make certification more affordable, and thus more accessible, to hundreds of millions,” said Rice. “At a time of global inflation, it is a source of pride that we can go in the other direction and decrease prices for people seeking quality careers working with Open Source.”

Source: lpi.org

Tuesday 21 December 2021

The Linux Essentials Certification

Linux Essentials Certification, LPI Exam Prep, LPI Certification, LPI Learning, LPI Career, LPI Skills, LPI Jobs

Show employers that you have the foundational skills required for your next job or promotion.

Linux adoption continues to rise world-wide as individual users, government entities and industries ranging from automotive to space exploration embrace open source technologies. This expansion of open source in enterprise is redefining traditional Information and Communication Technology (ICT) job roles to require more Linux skills. Whether you’re starting your career in open source, or looking for advancement, independently verifying your skill set can help you stand out to hiring managers or your management team.

The Linux Essentials certificate also serves as a great introduction to the more complete and advanced Linux Professional certification track.

Current version: 1.6 (Exam code 010-160)

Objectives: 010-160

Prerequisites: There are no prerequisites for this certification

Requirements: Passing the Linux Essentials 010-160 exam. The Linux Essentials exam contains 40 questions and must be completed within 60 minutes.

Validity period: Lifetime

Cost: Click here for exam pricing in your country.

Languages for exam available in VUE test centers: English, German, Japanese, Portuguese (Brazilian), Dutch

Languages for exam available online via OnVUE: English, Japanese, Portuguese (Brazilian), German, Dutch

To receive the Linux Essentials certificate the candidate must:

◉ have an understanding of the Linux and open source industry and knowledge of the most popular open source applications;

◉ understand the major components of the Linux operating system, and have the technical proficiency to work on the Linux command line; and

◉ have a basic understanding of security and administration related topics such as user/group management, working on the command line, and permissions.

Source: lpi.org

Saturday 18 December 2021

The Opening World: An Open Anniversary Review, Part 1

LPI Exam Prep, LPI, LPI Exam, LPI Exam Preparation, LPI Career, LPI Tutorial and Material, LPI Guides, LPI Skills

Walls holding back information sharing and participatory decision-making have been breaking down over the past few decades. Many readers will question this claim, basing their fears on recent developments in politics, disinformation, and social disintegration. But I hold on to my conviction that our world is getting more open, and I'll examine where it's going in this two-part article. The article is the culmination of a year-long "Open Anniversary" series on the Linux Professional Institute blog. Previous installments in the series are:

January (Free Culture): Introductory explanation to launch the series

February (Open Source): Open source in the worldwide COVID-19 battle

March (Open Business): Who's building businesses around free and open source software?

April (Open Government): Where transparency, crowdsourcing, and open source software meet

May (Open Knowledge): Open knowledge, the Internet Archive, and the history of everything

June (Open Hardware): Open hardware suitable for more and more computing projects

July (Open Education): The many ways to target world disparities

August (Open Web): Open, simple, generative: Why the Web is the dominant internet application

September (Linux): The many meanings of Linux

October (Free Software): Steps toward a great career in free and open source software

November (Open Access): Open Access Flips Hundreds of Years of Scientific Research

This first part of the series defends the cause of openness against critics who blame it for current social and political problems. I try to locate more appropriate targets for this criticism.

Is Openness Dangerous?

There's plenty to lament in what we see online, apparently spiraling out of control: rampant conspiracy theories, the plethora of criminal activity on the "dark internet," and more. Some people stretch their criticisms too far, though. I've heard uninformed and defamatory statements like, "The internet is causing polarization" and "The internet helps lies to travel quickly." When we evaluate technologies, we have to think carefully. What precise technologies are we talking about? Who is using them, and how are they being used?

Such questions become even more complex because the combination of personal digital devices and near-universal networking also hands tools to spies and to governments trying to curtail their population's behavior.

The internet actually is still doing what it did all along, starting from the supposedly golden age when it brought people together around the world and provided safe spaces to discuss stigmatized issues such as gay and lesbian behavior, recovery from child abuse or drug use, non-neurotypical experiences, and so forth. So many topics are now part of the public discourse—just look at recent commitments to address sexual harassment in the workplace, for instance—that were first aired in internet communities.

One goes to meetings today where people say, “I’m on the autistic spectrum” or “I’m a victim of child abuse” or “I spent five years in prison” or “My pronouns are they and them” without shame or stigma. There has to be a connection here; we forget how much more open a society we have become since the internet.

As data looms in importance, the internet is keeping up as a resource for the marginalized. One recent example, NativeDATA, tailors health information to native North American peoples, who suffer from a lot of health problems related to their environments and social status.

From the beginning, too, there was plenty of evidence that the internet had some pretty nasty corners. Illegal trade, hate speech, and wanton lies were known problems. Attempts to separate the good from the bad started quite some time ago—remember the Communications Decency Act of 1996—but always foundered on the dilemma that different people had different ideas about what was good and bad; ultimately, people realized that they didn't want to hand the decision over to any authority.

It is a tribute to the spirit of the early internet that major social media companies—while investing millions of dollars to take down harmful content—show reluctance to crack down further, and democratic governments are moving cautiously in defining standards (notably the Digital Services Act package in the European Union). For instance, although the EU wants social media sites to label and remove content that is manifestly dangerous, the regulators want transparency in such removal and clear explanations about why it's removed. The regulators are also sensitive to excessive demands on social media sites.

Things have taken a turn for the worse during the past decade, so far as I can see, but the problem is not the internet: it is the services built on top of it by companies, such as search engines and social media. A recent working paper by Suran et al. on "collective intelligence" points to the problem. Successful collective intelligence (related to the ideas of crowdsourcing and the wisdom of crowds) requires two traits: diversity and transparency. The internet is quite capable of fostering these values, but social media works against them.

Regarding diversity, the preference by search engines and social media to display items similar to what one has previously "liked" or clicked on creates the bubbles so often criticized by observers. And the algorithms, of course, are quite opaque. The companies can't afford to be transparent about what they do because revealing the algorithms would make it easier to game their systems. But the problem demonstrates that we need something different from social media for serious discussions and "news."

Some people also claim that social media tends to inflame the discourse, arousing fear and hate. I'm not convinced this is true. People on social media joyfully pile on to express their approval for positive things such as births, marriages, degrees earned, awards, and promotions. Let's just say that social media is designed to evoke emotions instead of cautious consideration, and leave it at that.

I love social media. Like billions of people, I use it to keep up with old college friends, share my pleasures and pains with them, and connect with colleagues over common interests. Social media was designed for that and does it superbly.

Social media introduces risks when people use it to exchange "news," organize political engagement, or function as a public space in other ways. Those tasks are better served by completely different tools—offline and online—that foster thoughtful debate and intensive research. There are models for such spaces. They use some of the same superficial mechanisms as social media does, such as groups and ratings. But the public spaces deliberately engage interested people in working together to solve their problems. Positive results with broad consensus are their goal.

These platforms can be run by governments, companies, or non-profits. One example is a partner of the Linux Professional Institute, SmartCT in the Philippines.

Read More: Open Access Flips Hundreds of Years of Scientific Research, Part 2

Source: lpi.org

Thursday 16 December 2021

Linux Professional Institute Certification Programs

Linux Professional Institute Certification, LPI Linux Essentials, Linux Professional Institute (LPI), LPIC-1, LPIC-1 Certifications, LPIC-2, LPIC-2 Certifications, LPIC-3 Mixed Environments, LPIC-3 Security, DevOps Certification, BSD Specialist

Linux Professional Institute (LPI) offers three different certification tracks. The core certification program, Linux Professional, contains three different levels addressing distinct aspects of Linux system administration. The organization also offers an introductory Essentials program for beginners in Linux and open source, as well as an Open Technology track for professionals working with additional technologies such as DevOps and BSD.

1. Linux Essentials

Linux adoption continues to rise world-wide as individual users, government entities and industries ranging from automotive to space exploration embrace open source technologies. This expansion of open source in enterprise is redefining traditional Information and Communication Technology (ICT) job roles to require more Linux skills. Whether you’re starting your career in open source, or looking for advancement, independently verifying your skill set can help you stand out to hiring managers or your management team.

The Linux Essentials certificate also serves as a great introduction to the more complete and advanced Linux Professional certification track.

2. Linux Professional

2.1. LPIC-1

The LPIC-1 is designed to reflect current research and validate a candidate's proficiency in real world system administration. The objectives are tied to real-world job skills, which we determine through job task analysis surveying during exam development.

Current version: 5.0 (Exam codes 101-500 and 102-500)

2.2. LPIC-2

LPIC-2 is the second certification in the multi-level professional certification program of the Linux Professional Institute (LPI). The LPIC-2 will validate the candidate's ability to administer small to medium-sized mixed networks.

Current version: 4.5 (Exam codes 201-450 and 202-450)

3. LPIC-3

3.1. LPIC-3 Mixed Environments

The LPIC-3 certification is the culmination of the multi-level professional certification program of the Linux Professional Institute (LPI). LPIC-3 is designed for the enterprise-level Linux professional and represents the highest level of professional, distribution-neutral Linux certification within the industry. Four separate LPIC-3 specialty certifications are available. Passing any one of the four exams will grant the LPIC-3 certification for that specialty.

The LPIC-3 Mixed Environments certification covers the administration of Linux systems enterprise-wide in mixed environments.


3.2. LPIC-3 Security

The LPIC-3 certification is the culmination of the multi-level professional certification program of the Linux Professional Institute (LPI). LPIC-3 is designed for the enterprise-level Linux professional and represents the highest level of professional, distribution-neutral Linux certification within the industry. Four separate LPIC-3 specialty certifications are available. Passing any one of the four exams will grant the LPIC-3 certification for that specialty.

The LPIC-3 Security certification covers the administration of Linux systems enterprise-wide with an emphasis on security.


3.3. LPIC-3 Virtualization and Containerization

The LPIC-3 certification is the culmination of the multi-level professional certification program of the Linux Professional Institute (LPI). LPIC-3 is designed for the enterprise-level Linux professional and represents the highest level of professional, distribution-neutral Linux certification within the industry. Four separate LPIC-3 specialty certifications are available. Passing any one of the four exams will grant the LPIC-3 certification for that specialty.

The LPIC-3 Virtualization and Containerization certification covers the administration of Linux systems enterprise-wide with an emphasis on Virtualization & Containerization.

3.4. LPIC-3 High Availability Systems and Storage

The LPIC-3 certification is the culmination of the multi-level professional certification program of the Linux Professional Institute (LPI). LPIC-3 is designed for the enterprise-level Linux professional and represents the highest level of professional, distribution-neutral Linux certification within the industry. Four separate LPIC-3 specialty certifications are available. Passing any one of the four exams will grant the LPIC-3 certification for that specialty.

The LPIC-3 High Availability Systems and Storage certification covers the administration of Linux systems enterprise-wide with an emphasis on high availability systems and storage.

4. Open Technology



4.1. DevOps Tools Engineer

Businesses across the globe are increasingly implementing DevOps practices to optimize daily systems administration and software development tasks. As a result, businesses across industries are hiring IT professionals that can effectively apply DevOps to reduce delivery time and improve quality in the development of new software products.

To meet this growing need for qualified professionals, Linux Professional Institute (LPI) developed the Linux Professional Institute DevOps Tools Engineer certification which verifies the skills needed to use the tools that enhance collaboration in workflows throughout system administration and software development.

In developing the Linux Professional Institute DevOps Tools Engineer certification, LPI reviewed the DevOps tools landscape and defined a set of essential skills when applying DevOps. As such, the certification exam focuses on the practical skills required to work successfully in a DevOps environment – focusing on the skills needed to use the most prominent DevOps tools. The result is a certification that covers the intersection between development and operations, making it relevant for all IT professionals working in the field of DevOps.


4.2. BSD Specialist

The BSD Specialist certification is part of the Linux Professional Institute (LPI) Open Technology certification program.

The exam focuses on the practical skills required to work successfully in a FreeBSD, NetBSD or OpenBSD environment and tests the knowledge and skills needed to administer BSD operating systems.

The typical BSD Specialist certification holder is a system administrator of BSD operating systems. The certification holder has an understanding of the architecture of the BSD operating systems. This includes the ability to manage various aspects of a BSD installation, including the management of user accounts and groups, processes, file systems, installed software, and client networking configuration. The candidate is experienced in using standard BSD and Unix tools on the command line. 

Source: lpi.org

Tuesday 14 December 2021

The LPIC-3 Mixed Environments Certification

LPIC-3, LPIC-3 Certifications, LPIC-3 Mixed Environments, LPI Exam, LPI Tutorial and Material, LPI Certification, LPI Prep, LPI Prepariton, LPI Career

The LPIC-3 certification is the culmination of the multi-level professional certification program of the Linux Professional Institute (LPI). LPIC-3 is designed for the enterprise-level Linux professional and represents the highest level of professional, distribution-neutral Linux certification within the industry. Four separate LPIC-3 specialty certifications are available. Passing any one of the four exams will grant the LPIC-3 certification for that specialty.

The LPIC-3 Mixed Environments certification covers the administration of Linux systems enterprise-wide in mixed environments.

Current version: 3.0 (Exam code 300-300)

Previous version: 1.0 (Exam code 300-100)

Available until February 23rd, 2022

Objectives: 300-300

Prerequisites: The candidate must have an active LPIC-2 certification to receive the LPIC-3 certification.

Requirements: Passing the 300 exam. The 90-minute exam is 60 multiple-choice and fill-in-the-blank questions.

Validity period: 5 years

Cost: Click here for exam pricing in your country.

Languages for exam available in VUE test centers: English, Japanese

Source: lpi.org

Thursday 9 December 2021

The LPIC-3 Security Certification

LPIC-3 Security Certification, LPIC-3, LPIC-3 Certifications, LPIC-3 Security, Linux Professional Institute (LPI)

The LPIC-3 certification is the culmination of the multi-level professional certification program of the Linux Professional Institute (LPI). LPIC-3 is designed for the enterprise-level Linux professional and represents the highest level of professional, distribution-neutral Linux certification within the industry. Three separate LPIC-3 specialty certifications are available. Passing any one of the three exams will grant the LPIC-3 certification for that specialty.

The LPIC-3 Security certification covers the administration of Linux systems enterprise-wide with an emphasis on security.

Current version: 3.0 (Exam code 303-300)

Previous version: 2.0 (Exam code 303-200)

Available until April 4th, 2022

Objectives: 303-300

Prerequisites: The candidate must have an active LPIC-2 certification to receive the LPIC-3 certification.

Requirements: Passing the 303 exam. The 90-minute exam is 60 multiple-choice and fill-in-the-blank questions.

Validity period: 5 years

Cost: Click here for exam pricing in your country.

Languages for exam available in VUE test centers: English (Japanese coming soon)

Languages for exam available online via OnVUE: English

Source: lpi.org

Tuesday 7 December 2021

The LPIC-3 Virtualization and High Availability Certification

LPIC-3 Virtualization and High Availability Certification, LPIC-3 Exam Prep, LPIC-3 Certification, LPIC-3 Guides, LPI Prep, LPI Preparation, LPI Career

The LPIC-3 certification is the culmination of the multi-level professional certification program of the Linux Professional Institute (LPI). LPIC-3 is designed for the enterprise-level Linux professional and represents the highest level of professional, distribution-neutral Linux certification within the industry. Three separate LPIC-3 specialty certifications are available. Passing any one of the three exams will grant the LPIC-3 certification for that specialty.

The LPIC-3 Virtualization and High Availability certification covers the administration of Linux systems enterprise-wide with an emphasis on Virtualization & High Availability.

Current version: 2.0 (Exam code 304-200)

Objectives: 304-200

Prerequisites: The candidate must have an active LPIC-2 certification to receive the LPIC-3 certification.

Requirements: Passing the 304 exam. The 90-minute exam is 60 multiple-choice and fill-in-the-blank questions.

Validity period: 5 years

Cost: Click here for exam pricing in your country.

Languages for exam available in VUE test centers: English, Japanese

Source: lpi.org

Saturday 4 December 2021

Climbing the Pyramid

Just a few weeks ago, Linux Professional Institute announced a complete refresh of its Level 3 exams, the second since its founding. As with our other certifications, the LPIC-3 program is under constant review to ensure that what we test for is what employers want. This time around it means retiring the 304 “Virtualization and High Availability” exam and replacing it with two new specializations: exams 305 “Virtualization and Containerization” and 306 “High Availability and Storage Clusters”.

As always, you need to pass only one of these exams to achieve LPI Level 3 certification, provided that you’ve already completed LPIC-1 and LPIC-2 as prerequisites. More technical details on the certification programs themselves may be found here on the LPI website.

LPI, LPI Exam, LPI Exam Prep, LPI Exam Preparation, LPI Certification, LPI Learning, LPI Guides, LPI Career

To me, the work done by Fabian Thorns (our Director of Product Development) and his team to keep the LPIC-3 program current and relevant is a source of pride within LPI. Ours is the only vendor-neutral open source program doing this kind of high-level specialization, crowning the four levels of our Linux-focused certificates and certifications.

Rather than being tacked-on, this multi-level design has been a fundamental part of LPI from its inception in 1999. (I know because I was there in the room.) The LPI community understood, even in the earliest days of the organization, that we must accommodate the needs of not only entry-level open source workers but also those needing advanced and ultimately enterprise-level skills. We also knew then that -- much like in university -- everyone should know the same basics at the lower levels, but as students progress they come to specialize.

Doing a multi-tier certification program to evaluate peoples’ mastery of these many skill levels isn’t easy. The top-level exams are the hardest to craft because their subject matter is so advanced. That difficulty is combined with the awareness that the top levels have the least volume because students need to achieve the lower levels first.

I recall clearly a conversation with someone at a different certification body whose major emphasis was entry-level programs. “It’s the bottom of the pyramid”, I was told, “with the least cost to produce and the highest potential audience.”

Who wants the problem of making the highest-cost program with the lowest potential revenue? We do.

Although we need to earn enough to keep the organization fiscally sound, LPI does not exist to maximize revenue. We are mission-driven to serve our community of open source professionals, and that means paying attention to all parts of the pyramid. This mission has led us to many projects that cost more than they earn, such as our student subsidy and sponsorship programs, and participation in events such as Software Freedom Day (September 18 this year). Plus there are some exciting new initiatives coming near the end of 2021.

We’re happy to be part of an active, vibrant open source ecosystem with multiple paths to both personal fulfilment and professional success. But we are also proud to be one of the few to be able to see, react to, and serve the changing character of this community from the rare vantage point at the top of the pyramid.

Source: lpi.org

Thursday 2 December 2021

Open Access Flips Hundreds of Years of Scientific Research, Part 2

LPI Exam Prep, LPI Exam, LPI Exam Preparation, LPI Tutorial and Materials, LPI Learning, LPI Certification, LPI Guide, LPI Career

A true revolution has hit academia over the past couple decades, changing how publications fund their work and in consequence the ways researchers share information. The previous article in this series introduced Open Access, describing its benefits and how it works. This final article shows examples and explains the relationship between Open Access and Creative Commons.

Examples of Adoption of Open Access in Computing

Michael Collins, Senior Computer Scientist at the Information Sciences Institute at the University of Southern California, comments: "While publish-or-perish is the rule for academia, where the researcher chooses to publish varies in different disciplines. Computer science is an outlier because the discipline emphasizes conference publications, whereas journal publication is the norm elsewhere. This practice is changing, as top-tier CS conferences move from a single PC meeting to a continuous submission and review process, meaning that conferences are now becoming 'journalier.'  Still, that emphasis on conference publications means that CS researchers tend to have many more publications than other researchers."

I myself witnessed the astonishing evolution of one institution over about a decade, from a position of skepticism and resistance to Open Access business models (an attitude of, "This is a nice ideal but not for us") to a commitment toward a sustainable transition to Open Access over a well-defined period of time. The Association for Computing Machinery dates to 1947 (just about the dawn of digital computing) and now has nearly 100,000 members from more than 190 countries. ACM is fully engaged in a transition to Open Access—but as an organization that relies heavily on subscriptions to publications for a large percentage of its income, it needs to handle the transition extremely carefully.

Some 75-80% of ACM publications are conference proceedings, but the Association also publishes more than 60 journals, 7 tech magazines, nearly 40 newsletters, and research-oriented books. Collectively, these outlets publish from 20,000-25,000 research articles each year.

Even before Open Access, ACM made its publications freely available to institutions in many parts of the developing world where average incomes are significantly lower than in the developed world.

ACM has been experimenting with various forms of Open Access since the early 2010s. At a board meeting held virtually in the summer of 2020, the Association’s governing body, the ACM Council, made a formal commitment to transition all of the Association’s research publications to Open Access over the next five years—tempering the promise with a provision that the outcome must be financially sustainable.

According to ACM's Director of Publications, Scott Delman, "All other things being equal, Open Access is superior to subscription-based publication for readers and authors and fits better with ACM's mission to advance the field of Computer Science." At present, as a result of ACM’s introduction of the ACM Open model in January 2020, and factoring in Hybrid Open Access articles where the authors pay an APC, approximately 15% of ACM publications are made Open Access annually. The Association expects to hit a 20% milestone by year end and hopes to increase the trend by 10-15% each year until it covers all ACM research publications.

Specifically, ACM found that when its articles are published in front of the ACM Digital Library paywall, they are downloaded on average two to four times more often than articles behind ACM’s paywall. Most significantly, the number of citations for the articles in other publications has increased by a similar amount. Everyone in academia knows that the number of citations is a critical metric used in their fields to measure the value of an article and its author.

According to Delman, the ACM is recognized now as being at the forefront of Open Access. The Association is working with approximately 1,000 institutions around the world to make the transition to Open Access over the next 2-3 years affordable, sustainable, and permanent.

The transition to the ACM Open model upends its old financial model. Before the transition began last year, ACM relied on a long tail of almost 3,000 institutions to underwrite the cost of ACM publications. At the end of the transition, at least half of those institutions will pay significantly less than they currently pay for access to the ACM Digital Library. The transition involves complex negotiations with each institution, academic library consortium administrators, intermediary agents, funders, department heads, and deans of research, and will take years to fully implement.

With Open Access, most of the fees come from top-tier institutions whose faculty and students do the most publishing. In general, with Open Access models, the more articles you provide, the more you benefit from publication, and the more you pay. Institutions that are more peripheral get a much better deal than they do with traditional subscription-based publishing.

USENIX (whose name is a pun on UNIX, the dominant operating system for hackers and researchers when the organization was founded in 1975) was an early adopter of open access. The web page says "USENIX has brought together a community of engineers, system administrators, scientists, and technicians working on the cutting edge of the computing world." The conferences I attended in the 1990s and 2000s focused on improving networks at all layers. Thus, USENIX was a bridge between academic computer scientists and advanced practitioners in industry and government.

The Institute of Electrical and Electronics Engineers (IEEE), with hundreds of thousands of members in more than 160 countries, has also committed to a combination of Open Access and hybrid access. The IEEE Article Sharing and Posting Policies cover Green and Gold Open Access at various stages of publication.

Finally, the Learning Materials from Linux Professional Institute offer introductory and advanced educational information about a range of free and open source software.

Creative Commons

Parallel to the Open Access movement, Creative Commons was founded to promote sharing and collaborative development of all sorts of content: research articles, novels, music, films, games, etc.

The general understanding of Creative Commons, formed by law professor Lawrence Lessig, is that the idea was sparked by the landmark Eldred v Ashcroft copyright case that Lessig litigated before the U.S. Supreme Court in 2003.

But Creative Commons doesn't do anything to mitigate the outcome of Eldred, which effectively blessed the maneuvers by movie studios and other copyright holders to extend copyrights indefinitely into the future and keep their popular features from entering the public domain. Instead, Creative Commons looks toward a different constituency: content creators operating in a stew of new ideas from many quarters, grabbing an image here or a snatch of a musical riff there and assembling innovative artistic collages. These creative types use copyright to facilitate distribution and reuse, not to derive profit in the old way by restricting access.

Lessig was directly inspired by the GNU General Public License, designed for software. Like the GPL, Creative Commons supports the classic four freedoms underpinning the free software movement. The "no derivatives" clause mentioned earlier in this article is a carve-out for creative people who don't want unauthorized variants of their work to circulate. Another departure from the four freedoms is an option to prohibit commercial uses of a work.

Open Access is one of the many types of content facilitated by Creative Commons. Attitudes are changing about how to support the development of research and other content. The copyright model that served the pre-digital era fairly well is straining to reflect the public's new needs and expectations. Seeing all the major funders and publishing houses that are adopting Open Access, we can be assured that it will dominate future research.

Read More: Open Access Flips Hundreds of Years of Scientific Research

Source: lpi.org

Tuesday 30 November 2021

The DevOps Tools Engineer Certification

DevOps Tools Engineer Certification, LPI Exam Prep, LPI Exam, LPI Certification, LPI Guides, LPI Learning, LPI Certification, LPI Career

Businesses across the globe are increasingly implementing DevOps practices to optimize daily systems administration and software development tasks. As a result, businesses across industries are hiring IT professionals that can effectively apply DevOps to reduce delivery time and improve quality in the development of new software products.

To meet this growing need for qualified professionals, Linux Professional Institute (LPI) developed the Linux Professional Institute DevOps Tools Engineer certification which verifies the skills needed to use the tools that enhance collaboration in workflows throughout system administration and software development.

In developing the Linux Professional Institute DevOps Tools Engineer certification, LPI reviewed the DevOps tools landscape and defined a set of essential skills when applying DevOps. As such, the certification exam focuses on the practical skills required to work successfully in a DevOps environment – focusing on the skills needed to use the most prominent DevOps tools. The result is a certification that covers the intersection between development and operations, making it relevant for all IT professionals working in the field of DevOps.

Current version: 1.0 (Exam code 701-100)

Objectives: 701-100

Prerequisites: There are no prerequisites for this certification. However, an additional certification in the candidate’s primary area of expertise, such as LPIC-1 or a developer certification, is strongly recommended.

Requirements: Passing the DevOps Tools Engineer exam. The 90-minute exam is 60 multiple-choice and fill-in-the-blank questions.

Validity period: 5 years

Cost: Click here for exam pricing in your country.

Languages for exam available in VUE test centers: English, Japanese

To receive the Linux Professional Institute DevOps Tools Engineer certification the candidate must:

◉ Have a working knowledge of DevOps-related domains such as Software Engineering and Architecture, Container and Machine Deployment, Configuration Management and Monitoring.

◉ Have proficiency in prominent free and open source utilities such as Docker, Vagrant, Ansible, Puppet, Git, and Jenkins.
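Of the tools listed above, Git is the one most candidates will touch daily. The following sketch is purely illustrative (the repository name `demo-app` and the `config.yml` contents are hypothetical, not from the exam objectives); it shows the version-control step that underlies the collaboration workflows the certification describes:

```shell
# Minimal sketch: put a configuration file under version control with Git.
set -e
workdir=$(mktemp -d)        # scratch directory so nothing pollutes $HOME
cd "$workdir"
git init -q demo-app        # create an empty repository
cd demo-app
echo "app: v1" > config.yml # a hypothetical config file to track
git add config.yml
# identity passed inline so the example runs on a fresh machine
git -c user.name=demo -c user.email=demo@example.com \
    commit -qm "Add initial config"
git log --oneline           # shows the single recorded commit
```

In a real DevOps pipeline this repository would then be wired to a CI server such as Jenkins, which rebuilds and tests the project on every commit.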

Source: lpi.org

Thursday 25 November 2021

The BSD Specialist Certification

BSD Specialist Certification, BSD Certification, LPI Certification, LPI Exam Prep, LPI Tutorial and Material, LPI Career, LPI Materials, LPI Preparation

The BSD Specialist certification is part of the Linux Professional Institute (LPI) Open Technology certification program.

The exam focuses on the practical skills required to work successfully in a FreeBSD, NetBSD or OpenBSD environment and tests the knowledge and skills needed to administer BSD operating systems.

The typical BSD Specialist certification holder is a system administrator of BSD operating systems. The certification holder has an understanding of the architecture of the BSD operating systems. This includes the ability to manage various aspects of a BSD installation, including the management of user accounts and groups, processes, file systems, installed software, and client networking configuration. The candidate is experienced in using standard BSD and Unix tools on the command line. 

Current version: 1.0 (Exam code 702-100)

Objectives: 702-100

Prerequisites: There is no prerequisite certification for taking the BSD Specialist exam. However, it is strongly recommended that a candidate has more than a year of experience in administering BSD systems of various kinds.

Requirements: Passing the BSD Specialist exam. The 90-minute exam is 60 multiple-choice and fill-in-the-blank questions.

Validity period: 5 years

Cost: Click here for exam pricing in your country.

Languages for exam available in VUE test centers: English

To receive the BSD Specialist certification the candidate must:

◉ Have a working knowledge of BSD operating systems: FreeBSD, NetBSD, and OpenBSD 

◉ Be able to install, manage, and configure BSD operating systems

◉ Be able to configure hardware, set kernel parameters, and manage system security 

◉ Have basic knowledge in BSD system administration, job scheduling, and system automation

◉ Have basic network administration knowledge

Source: lpi.org

Tuesday 23 November 2021

Difference Between Fedora and Kali Linux

Fedora is a Linux-based, open-source operating system developed by the Fedora Project and sponsored by Red Hat. As it is Linux based, it is freely available and open source. It uses the DNF package manager, the GNOME desktop environment, and the Anaconda installer. It is delivered in three editions: Fedora Workstation, designed for personal computers; Fedora Server, designed for servers; and Fedora Atomic, designed for cloud computing.

Fedora, Kali Linux, LPI Exam, LPI Tutorial and Materials, LPI Certification, LPI Learning, LPI Preparation, LPI Guides, LPI Career

Kali Linux is a Linux-based, open-source operating system which is freely available for use. It belongs to the Debian family of Linux. It was developed by Offensive Security and first released in March 2013 as a replacement for BackTrack. Kali comes packed with 100+ penetration testing, security research, digital forensics, reverse engineering, and ethical hacking tools.

Difference between Fedora and Kali Linux

Fedora | Kali Linux
Developed by Red Hat. | Developed by Offensive Security.
Initially released in November 2003. | Initially released in March 2013.
Used for daily work, on servers, or in the cloud. | Used by security researchers and ethical hackers for security purposes.
Discussion forum: ask.fedoraproject.org. | Discussion forum: forums.kali.org.
Ships the GNOME environment by default, though it lets you change it. | Ships the Xfce environment by default, though it lets you change it.
Doesn't come packed with hacking and penetration testing tools. | Comes packed with hacking and penetration testing tools.
Comes with a user-friendly interface. | Comes with a less user-friendly interface compared to Fedora.
A good option for Linux beginners. | A good option for intermediate Linux users.

Source: geeksforgeeks.org

Saturday 20 November 2021

Open Access Flips Hundreds of Years of Scientific Research

LPI Exam Prep, LPI Exam, LPI Tutorial and Materials, LPI Guides, LPI Career, LPI Skills
We have viewed the spirit of openness from many angles—in free software, open government, and many other trends—in the Open Anniversary series published on this LPI site during 2021. No field has been more transformed by this spirit than academic research, represented by the Open Access movement. This article discusses the major aspects of Open Access, along with the role of Creative Commons licenses.

The first article in this two-part series lays out the concepts and concerns with Open Access.

A Seismic Shift in Academic Publishing

Since the invention of the printing press, researchers have been eager to share their insights with the world by publishing articles and books. The urge to open one's research to all has become a pillar of science, reaching the point where the phrase "publish or perish" characterizes the world of research.

Yet a kind of elite access to information grew up over time. Research journals became expensive to the degree where most people outside universities, major companies, or well-established research centers found the cost of journals a barrier to learning.

Of course, all that money has gone to useful things. Journals screen submissions and conduct peer review, playing the role of responsible gatekeeper. But they also introduce their own biases, preferring positive results over negative ones, big breakthroughs over modest advances, famous research centers over lesser-known institutions, and hot topics over obscure corners of research. Other prejudices reflecting the larger society, such as relating to gender, are also hard to root out.

The internet came along and presented radically new opportunities. Authors could publish anything they wanted at any time, and could crowdsource reviews among diverse viewers instead of depending on the three peer reviewers chosen by a journal. Cloud storage made it easy to publicize data sets so other researchers could combine them (after investing effort in harmonizing their differences) and mine them for new insights.

Finally, researchers sought out informal ways to trade articles in order to bypass paywalls. Some of my reviewers emphasized the challenge posed by the widespread availability of unauthorized (or if you insist, "pirated") copies of articles as a major factor pressuring conventional publishers to change. The reviewers highlight Sci-Hub as a particularly rich alternative to paid publications. Statistics show Sci-Hub growing and being popular in both affluent and developing countries.

Although I recognize the extent of unauthorized exchanges in research papers, I would like to point out that unauthorized exchanges of software are probably even more common. Although the major proprietary software companies support free software for solid business reasons, these companies show no indication of following the path research publishers are taking to open up proprietary offerings.

Open Access results from researchers’ intrepid leap into the new era of information for all. The movement is now a well-established process documented by organizations such as the Open Access Scholarly Publishing Association, Plan S, and the Open Scholarship Initiative.

Considerations in Open Access

Moving a journal to Open Access is far more complicated than just throwing articles onto a public web site. Authors and publishers have to deal with sponsor requirements, licenses, venue, and publishing costs.

Sponsor Requirements

For many years, public criticism has grown over the privatization of government-funded research. The public was paying taxes to support the research, but the results were tucked behind expensive paywalls. An ethical objection arose, insisting that research funded this way should be available to all. Ideally, the original data (which might be even more valuable than the published paper) would be shared publicly too, subject to privacy protections.

Now major government and private institutions are requiring Open Access for research they fund. Notable examples include the National Institutes of Health, which underpin much drug development and other health research, and the National Science Foundation. Researchers and publishers need to stay aware of the requirements imposed by funders.

Licenses

Here is where Creative Commons (discussed in the second part of this article) proves valuable. The legal team at Creative Commons have designed a set of elegant licenses that provide a range of interesting options to both copyright owners and readers. All the licenses allow the public to read, copy, and redistribute content. The licenses also require the original authors to be credited.

Additional choices face an author. The boldest among authors can allow other people to distribute updated or revised versions of the article, as with free software. One can see the value of this for providing updates or additional data to a research topic. But there is also an iron-clad tradition in academia of articles as a "version of record." The original article must be archived someplace to preserve the integrity of scientific inquiry. In order to prevent the risk of confusion, or the risk that their ideas will be taken in a direction that is repugnant to them, authors can choose a "no derivatives" clause in Creative Commons.

Venue

The perceived significance of an article is affected by the site where it is posted: an author's personal site, a university or research institution, a conference site, or the publisher's official web page.

Most publishers allow authors to post pre-publication versions or "preprints" on web sites maintained by the authors or their institutions. The implication is that this version is not as trustworthy as the article that has been through peer review and editing—thus becoming the version of record—but is still useful. The author also often posts the final version under this policy, which is called Green Open Access.

When the publisher puts the version of record on a public web site without a paywall, it is called Gold Open Access. Normally, the venue is the publisher's site, putting an additional stamp of approval on the article. The authors or their institutions usually pay an article processing charge (APC) to cover the publisher’s costs. There are other subtle stages in publication, covered in the site to which I already referred.

Finally, Diamond Open Access funds publication entirely from external sources such as grants, putting articles online for free and charging neither authors nor readers.

Publishing Costs

According to Scott Delman, director of publications at the nonprofit ACM, “Many of the largest publishers are for-profit corporations generating annual profit margins exceeding 30%, while there remains a very long tail consisting of hundreds or thousands of smaller society and privately owned nonprofit publishers.” But even nonprofits have editorial and publishing costs that are traditionally covered by subscriptions that can run into thousands of dollars a year. 

Deprived by Open Access of monopoly control over distribution, publishers fund their efforts through the APC. But publishers avoid charging individual authors, who would find the charge a high barrier to publication. Instead, publishers usually collect the APC from the research institutions, who benefit from the fame of their authors and who can afford a couple thousand dollars to publish each article.

Gold Open Access does bring the risk of "perverse incentives," according to Lorena Barba, an aeronautics engineer and editor-in-chief of the IEEE technical magazine Computing in Science and Engineering. Barba worries that, because each article published brings in revenue, journals may accept low-quality articles to bulk up their offerings. Possibly, Gold Open Access will exacerbate the familiar problem of low-quality journals that do little or no review but exploit authors by presenting the journals as legitimate research outlets. Barba herself cofounded two Diamond Open Access journals and is also a strong supporter of Green Open Access because it allows the public to see articles as soon as they are written.

Barba also points out that the normal funding model for Gold Open Access, shifting costs from readers to authors and their institutions, fails to reduce the gap between affluent and low-income regions. It is now the institutions who represent potential authors in low-income regions who are at a disadvantage. Furthermore, journals should make special dispensations for authors who are not represented by an institution.

RightsLink is popular among publishers for simplifying the collection of APCs. It allows sophisticated practices such as splitting fees among multiple institutions.

RightsLink was set up to support Open Access by the Copyright Clearance Center (CCC), an institution established in the 1970s before the internet was in widespread use. I remember, being in graduate school in the 1980s, that professors were assigning fewer books in class and turning more and more to photocopies of journal articles because they reflected the latest discoveries in a field. Massive amounts of photocopying presented a challenge to copyright law, stretching the concept of fair use. Copyright holders worked out a deal with colleges whereby the colleges paid bulk fees to allow photocopying, the whole system administered by the CCC. With Open Access, costs are shifting from consumers to producers of information, and the CCC is evolving with the times.

The second part of this article looks at the computer field in particular for examples of Open Access in action.


Source: lpi.org

Tuesday 16 November 2021

Grounding For Open Source Foundations: An Interview with Martin Michlmayr

Foundations play a crucial role in open source. Few free software projects can set up a non-profit corporation and legal protection for their code, organize a board of directors to handle all their administrative needs, or raise the necessary funds. So we urgently need the Apache Foundation, Eclipse Foundation, Linux Foundation, and others.

Martin Michlmayr, who has served stints as Debian Project Leader and as president of Software in the Public Interest, released a 58-page report titled Growing Open Source Projects with a Stable Foundation in April 2021. Its broad range of topics includes governance, stability, community growth, financial and legal considerations, and the pressures on the foundations themselves. The report is well worth a read for anyone running a free software project or working with a foundation in this area. People interested in the foundations themselves will also benefit from a research report by Michlmayr that examines them in some depth.

I interviewed Michlmayr to assemble some basic ideas that free software advocates should keep in mind when thinking about the stability of their projects.

You cover a very wide range of functions for foundations. Which are fulfilled well currently, and which need more focus or effort?

Martin Michlmayr: It depends on the organization. A lot of foundations suffer from a lack of resources, which determines the kinds of services they are able to provide.

Generally, the functions that are most important are fairly well covered. This includes accepting donations, paying for expenses, taking care of a lot of the administrative work that projects need (such as renewing trademarks and domain names), etc. A number of other functions, such as marketing, are generally not covered so well.

Some foundations (notably the Linux Foundation and Eclipse Foundation) focus on industry collaboration. They do a great job of providing a neutral venue where companies can collaborate. They also provide help to companies to get started with open source.

This role for a foundation, as a neutral venue for collaboration, is becoming increasingly important as open source evolves from a hobby into a way for companies to solve problems better, faster, and at a lower cost by collaborating with others who have a similar problem.

Suppose I've just started a free software project showing promise. At what point should I look for help from a foundation? What does my project need to have in place?

Michlmayr: A challenge for new projects is that most foundations don't cater to them. Many foundations expect projects to be well-established already, although a number of organizations offer incubation to new projects.

Generally, new projects should think ahead and consider how things will change as they grow. They should focus a lot on getting governance right.

Are there perhaps too many open source foundations now? Or should there be more? Are current ones evolving to meet new challenges, or do the challenges call for new foundations?

Michlmayr: Running a foundation is a lot of work, and we have seen cases in the past where people started a new organization without properly understanding how much work it would be -- especially work few people enjoy doing (such as the extensive paperwork). I believe there's a better understanding of the burden nowadays, and most organizations are created because they fill a specific need that couldn't be met as easily through an existing organization.

Are there too many? Possibly, and we've seen some organizations become virtual organizations within another foundation so that they don't need to do their own paperwork. The X.Org Foundation, for example, joined Software in the Public Interest and operates as a virtual organization now.

The Linux Foundation supports a "foundation-in-a-foundation" model that makes it easy to start new organizations by using the existing infrastructure and capabilities of the LF. I think we'll see more of this.

Open Collective is another interesting example: they provide infrastructure for receiving and spending funds; projects can easily sign up and make use of this infrastructure immediately. This meets the needs of many projects, in particular smaller ones.

Source: lpi.org