Tuesday 31 October 2023

Morrolinux’s Tips: Playing Games on Linux? It Works!

Many game enthusiasts believe that playing on GNU/Linux is impossible. In reality, support for gaming on Linux is constantly improving, thanks to various available software solutions. With the launch of Valve’s Steam Deck, interest in gaming on Linux is growing even more. Figure 1 shows the Cyberpunk 2077 RPG running on Linux.

Steam Deck…


Steam Deck (Figure 2) is a handheld gaming device developed by Valve Corporation that runs on a custom version of the Linux operating system called SteamOS, built on top of the Arch Linux distribution. Valve has made the source code for SteamOS and other components of the Steam Deck software available on GitHub, allowing developers to create their own custom builds and contribute to the platform’s ongoing development.

Steam Deck allows gamers to play their favorite Steam games on the go. The device features a custom AMD APU that includes a quad-core Zen 2 CPU, an RDNA 2 GPU, and 16GB of LPDDR5 RAM. The Steam Deck also has a 7-inch touchscreen display, a built-in controller, and a full-sized USB-C port for external peripherals.

Proton


Fig 1: Cyberpunk 2077 on Linux
There are several compatibility layers for Linux, such as Wine, CrossOver, and Lutris, that allow many Windows games to be run on our beloved open-source operating system. However, these programs require some configuration and do not always work perfectly. Additionally, not all games work with these solutions.

A few years ago, Valve developed a more advanced solution called Proton. Proton is open-source software based on Wine, offering better performance and game compatibility than previous solutions.

Fig 2: Steam Deck
Proton combines Wine with technologies developed by Valve and other contributors, such as DXVK and VKD3D, which implement various versions of Direct3D (the 3D graphics component of Microsoft DirectX) on top of Vulkan. The combination has visible benefits in terms of compatibility and game performance on Linux. Figure 3 shows Cyberpunk 2077 running on Wine, while Figure 4 shows it running with NVIDIA RTX ray tracing enabled.

Currently, numerous games are already supported by Proton without any additional configuration.

Most Linux distributions support Proton, but some are more suitable than others for gaming. For example, Arch Linux and derivatives such as Manjaro Linux are highly appreciated by Linux gamers for their ease of use and compatibility with many applications. To use Proton, you must have a reasonably recent graphics card supporting Vulkan and a Linux distribution with an updated kernel.

Supported (and Supporting) Marketplaces


All the major video game distribution platforms are also available on Linux. Some, like the aforementioned Valve’s Steam, have a native Linux client. Others, such as Epic Games, GOG, and itch.io, are accessible through open-source third-party clients like Lutris and Heroic Launcher (Figure 5).

Steam is the most popular among such platforms; it offers a vast catalog of Linux-compatible games (Figure 6). The native support for Proton simplifies the installation of Windows games on Linux, making it even more interesting for a Linux user.

How to Get There


Fig 3: Playing Cyberpunk 2077 on Wine
In summary, it is possible to play games on Linux thanks to the many solutions discussed in this article. If you want an optimal gaming experience on Linux, using a Linux distribution with an updated kernel and a graphics card compatible with Vulkan is advisable. Additionally, game distribution platforms like Steam offer a vast catalog of games compatible with Linux, making the gaming experience on our preferred open-source operating system more straightforward.

If you need a step-by-step guide, consider this roadmap:

1. Choose a Linux distribution:

Selecting the proper Linux distribution is crucial for a smooth gaming experience. Opt for a distribution known for its gaming compatibility and user-friendly experience. Arch Linux and Manjaro Linux are popular choices among gamers due to their extensive software repositories and excellent hardware support. You can find more information and download links for these distributions on their respective websites. If you want to dive deeper into the topic, check this Beebom article for a list of gaming-friendly distros and this ZDNET article for a review of PikaOS.

Fig 4: Playing Cyberpunk 2077 on RTX
2. Ensure your hardware’s compatibility:

Before diving into gaming on Linux, it’s essential to ensure that your hardware is compatible. Check whether your graphics card supports Vulkan, a graphics API that offers optimal performance in Linux gaming. Most modern graphics cards from AMD and NVIDIA support Vulkan. Refer to your graphics card manufacturer’s website for drivers and information on Vulkan support.
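
If you would rather verify Vulkan support from a terminal than from spec sheets, the short Python sketch below is one way to do it. This is only a minimal illustration, assuming the vulkaninfo utility (provided by the vulkan-tools package on many distributions) is installed and that your version accepts the --summary flag; the device-name parsing is purely an example, not an official check.

#!/usr/bin/env python3
"""Minimal sketch: check whether a working Vulkan setup is visible.

Assumes the `vulkaninfo` utility from the vulkan-tools package is
installed; adjust for your distribution as needed.
"""

import shutil
import subprocess
from typing import Optional


def vulkan_summary() -> Optional[str]:
    """Return the output of `vulkaninfo --summary`, or None if the tool
    is missing or no usable Vulkan driver is found."""
    if shutil.which("vulkaninfo") is None:
        return None
    try:
        result = subprocess.run(
            ["vulkaninfo", "--summary"],
            capture_output=True,
            text=True,
            check=True,
            timeout=30,
        )
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return None
    return result.stdout


if __name__ == "__main__":
    summary = vulkan_summary()
    if summary is None:
        print("No working Vulkan setup detected; check your GPU drivers.")
    else:
        print("Vulkan appears to be available. Detected devices:")
        # The summary output of recent vulkaninfo versions lists one
        # "deviceName" line per GPU; print just those lines.
        for line in summary.splitlines():
            if "deviceName" in line:
                print("   ", line.strip())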

3. Install Proton:

Before installing Proton, you must install Steam on your Linux system. Visit the Steam website to download and install the Steam client. Once installed, launch Steam and navigate to the “Steam” menu, then select “Settings.” In the settings window, click on the “Steam Play” tab, check the box that says “Enable Steam Play for supported titles,” and select the latest version of Proton from the drop-down menu. Click “OK” to apply the changes. Now you’re ready to enjoy a wide range of Windows games on Linux using Proton.
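
Once Steam Play is enabled, Steam downloads the Proton builds you select into your Steam library alongside your games. As a rough illustration only (not an official Steam tool), the Python sketch below lists directories that look like Proton builds under one commonly used default library path; that path is an assumption and will differ for Flatpak installs or custom library folders.

#!/usr/bin/env python3
"""Rough sketch: list Proton builds found in a default Steam library.

The library path below is an assumption; Flatpak installs and custom
library folders use different locations.
"""

from pathlib import Path

# Commonly used default Steam library location on many distributions.
STEAM_COMMON = Path.home() / ".steam" / "steam" / "steamapps" / "common"


def installed_proton_builds():
    """Return the names of directories whose names start with 'Proton'."""
    if not STEAM_COMMON.is_dir():
        return []
    return sorted(
        entry.name
        for entry in STEAM_COMMON.iterdir()
        if entry.is_dir() and entry.name.lower().startswith("proton")
    )


if __name__ == "__main__":
    builds = installed_proton_builds()
    if builds:
        print("Proton builds found in the Steam library:")
        for name in builds:
            print("  -", name)
    else:
        print("No Proton builds found; enable Steam Play in Steam's settings first.")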

Fig 5: Heroic Launcher
4. Access game marketplaces:

To gain access to a vast catalog of Linux-compatible games, you can utilize popular game distribution platforms such as Steam, Epic Games, GOG, itch.io, and more. Steam is particularly renowned for its extensive library of Linux-compatible games. Install the native Linux client for Steam by visiting the Steam for Linux page and following the instructions for your Linux distribution. For other game marketplaces, such as Epic Games, GOG, and itch.io, you can use open-source third-party clients like Lutris or Heroic Launcher. Visit their websites to download and install these clients.

5. Install and play games:

Fig 6: Steam’s compatibility menu
With Proton and your chosen game marketplace or client, you can now install and enjoy a wide variety of games on your Linux system. Simply browse the available games, select the ones you’re interested in, and follow the installation prompts. Proton will automatically handle the compatibility and necessary configurations to ensure a smooth gaming experience. Steam users can find a comprehensive list of Linux-compatible games on the Steam Store by selecting the “Linux” operating system filter.

By following these steps, you’ll be well equipped to dive into the exciting world of gaming on Linux. Enjoy exploring the vast collection of games and have fun gaming on your preferred open-source operating system! Do not forget that it is important to regularly update your Linux distribution, graphics drivers, and Proton to ensure optimal performance and compatibility with the latest games.

Source: lpi.org

Saturday 28 October 2023

A Journey With Linux and Open Source: Bruno Alves

Welcome to another edition of our Share Your Voice series! Today, we have the pleasure of chatting with Bruno Alves, a seasoned Senior Linux System Administrator and DevOps Engineer with an impressive 13 years of experience. Bruno holds the LPIC-3 Security certification, highlighting the significant value he places on LPI’s certifications in his career journey. Let’s delve into his expertise and insights in the world of Linux and open-source technology.

What was your initial introduction to Linux and open source software, and what ignited your fascination with them?

My first encounter with Linux was during a computer class back in 2006. Our teacher passionately explained the wonders of Linux, the vibrant open source community, and how people collaborated to maintain these projects. The idea of such a collaborative and powerful operating system intrigued me, sparking my curiosity to give it a try.

As a dedicated Windows user at the time, transitioning to Linux wasn’t without its challenges, but my determination drove me to install Linux on all my computers, preparing myself for this exciting new journey. Interestingly, my enthusiasm influenced my mom to learn Linux as well, and now she can perform simple tasks on the system too!

How has your career and personal life been affected by Linux and open source software, and what doors have they opened for you?

Talking about my career, I decided to work with Linux around seven years ago. In order to get into this position, I followed the LPI content path for my studies. Getting my LPIC-1 certification in 2016 opened doors for me to work as a Linux system administrator.

Throughout these years, maintaining a mindset to keep improving, I wanted to get my LPIC-2 and LPIC-3 Security certification. By choosing this speciality, I can satisfy a wider range of business needs on a daily basis. I could see a high demand for this skill on the job market too; that persuaded me to follow this path. In the near future, I would like to get the Virtualization and Containerization specialty too.

I am immensely grateful for the path I chose and the incredible impact Linux has had on my life. Thanks to Linux, I now enjoy a better quality of life, living in a small city and having the flexibility to work from home.

I want to share a valuable tip with everyone: Invest your time in LPI Learning Materials and obtain your certifications. It will significantly enhance your visibility in the job market, opening up numerous opportunities for you!

The knowledge gained through LPI certifications is directly applicable in daily work as a Linux system administrator, making it even more rewarding. Embrace this journey, and the rewards will be beyond your expectations.

Do you want to tell us a bit more about what motivated you to pursue an LPI certification, and the positive impact it has had on your career?

When I considered a certification that encompassed a wide range of distributions, without being tied to any specific one, LPI immediately came to mind. It was the perfect choice as I sought to become a certified professional and secure a Linux System Administrator position: I needed something “distro-independent.” The global reputation and high regard for LPI made its certifications the ideal credentials for me, ensuring recognition as a competent professional no matter where I wanted to work. LPI has truly opened doors to exciting opportunities in the tech industry for me.

What approach did you take to prepare for the LPI certification exams, and what tips do you have for those contemplating certification?

To prepare for the exams, I dedicated daily study sessions and set a clear deadline for the exam date. I made sure to track my progress and the specific topics I needed to review thoroughly to cover all exam content. LPI’s Wiki comes in very handy for that.

Success came through effective planning and consistent effort – a combination of planning and action. Equally important were maintaining a positive mindset and a strong determination to achieve the certification. These principles guided me to success, and they can do the same for you. Keep your goals in focus, stay positive, and embrace the journey towards certification.

Balancing stability and innovation is crucial in mission-critical systems. How do you navigate this challenge while actively participating in the open source community?

In navigating the challenge of balancing stability and innovation while actively participating in the open source community, we – as a team, and as the community itself – encounter various scenarios to implement, review, and fine-tune systems in our environment. Embracing best practices, we prioritize mission-critical systems with well-planned deployment methods, such as the blue-green or canary approach, minimizing or eliminating downtime.

The cloud and containerization have emerged as powerful allies, enabling us to maintain innovation and experimentation while modernizing our workload. Employing “Infrastructure as Code” tools, we deploy new infrastructures across regions or accounts with minimal effort, replicating Production workloads in Developer environments. The open source community provides invaluable assistance through extensive documentation and supportive forums, guiding us through challenging situations.

One recent project that exemplifies community collaboration involved deploying TheForeman to manage our Linux servers and automate the provisioning process. Engaging with TheForeman’s community, I gained insights into resolving issues and better understanding the application. Sharing solutions and helping others facing similar challenges reflects the true spirit of collaboration within the open source community, enhancing its collective strength and empowering its growth.

The rise of cloud computing and containerization has revolutionized the use of Linux and open source software. How are you adapting to these transformative changes?

As I mentioned before, the rise of cloud computing and containerization gave us the opportunity to improve the deployment process, bringing scalability and reliability. Linux usage increased together with cloud adoption, since containerization has been native to Linux for many years, going back to LXC. Whoever was using Linux and was familiar with containers at that time was at the top of the wave and could predict a high demand for professionals with this knowledge.

What are your main responsibilities in your current position, and which key tools are indispensable to your workflow?

Today I’m a Senior Linux System Administrator at Stoneridge, Inc., helping them manage their Linux systems by bringing DevOps tools and culture, automation, and security best practices to a high-scale cloud environment. I’m working with AWS as a cloud provider, Terraform to automate the cloud deployment process, Puppet and Ansible for configuration management, theForeman for provisioning and server management, a lot of bash scripts, and so on.

When nerds take a break from nerding, how do they unwind and enjoy their leisure time?

Beyond the computer world, I find joy in caring for my pet birds. Holding a national license to nurture wild species domestically, I contribute to preserving and growing their populations. This effort helps mitigate the decline caused by loss of natural habitats and illegal capture in the forest.

During my leisure time, I indulge in playing retro video games, relishing the 8-16-bit era (NES, SNES, Master System, Game Boy), and also exploring the 32-bit world with PlayStation 1 and 2. For current console gaming, I’m engrossed in the Nintendo Switch.

Amidst the pandemic days, I stumbled upon an intriguing hobby: listening to and exploring short-wave radio stations. Through a simple radio device, I can tune into music and news from diverse corners of the globe, a truly amazing experience. However, the reception varies with weather conditions and other factors.

With my family, I love venturing into nature, exploring quaint countryside towns over weekends alongside my wife, Bruna. These adventures allow us to delve into local culture, savor delightful gastronomy, and create cherished memories together.

Source: lpi.org

Thursday 26 October 2023

Unlocking Success: A Comprehensive Guide to the LPI Exam

The Linux Professional Institute (LPI) Certification is a vital credential for individuals aspiring to excel in the field of Linux administration. In this comprehensive guide, we will delve into every aspect of the LPI Exam, helping you understand its significance, structure, and offering valuable insights to ensure your success.

Introduction to the LPI Exam


What is the LPI Exam?

The LPI Exam, short for the Linux Professional Institute Certification Exam, is an internationally recognized examination designed to assess your proficiency in Linux system administration. It is a two-part examination that measures your competence in managing Linux systems and administering various aspects of a Linux-based network.

The Significance of LPI Certification

Obtaining the LPI certification holds immense importance in the world of IT. It not only demonstrates your expertise but also validates your skills, making you stand out in a highly competitive job market. Linux is the backbone of many organizations' IT infrastructure, and having this certification on your resume can open up a world of opportunities.

Understanding the LPI Exam Structure


To succeed in the LPI Exam, you must be well-acquainted with its structure. It consists of two parts, LPI 101 and LPI 102. Let's break down each part.

LPI 101: Linux Administrator

The LPI 101 exam assesses your ability to perform essential Linux tasks. This includes system architecture, package management, file and directory management, and more. Here are some key areas that this exam focuses on:

1. System Architecture

In this section, you'll be tested on your understanding of the Linux boot process, the initialization system, and the role of the kernel.

2. Linux Installation and Package Management

You'll need to demonstrate your proficiency in package management, including installing, updating, and configuring software packages.

3. GNU and Unix Commands

This part assesses your knowledge of basic command-line operations, file management, and working with text files.

LPI 102: Linux Network Administrator

The LPI 102 exam centers around Linux network administration. It covers topics like internet services, network configuration, system security, and more. Here are some of the key areas you'll need to master:

1. Networking Fundamentals

This section tests your knowledge of network protocols, ports, and the configuration of network services.

2. Security and System Administration

You'll need to understand system security, user and group management, and essential security practices.

3. Web Services

The LPI 102 exam evaluates your skills in configuring and managing web services and their security.

How to Prepare for the LPI Exam


Now that you understand the LPI Exam structure, it's time to explore how to prepare for this challenging certification.

1. Comprehensive Study Material

Start with acquiring the right study material. LPI provides official resources, including study guides and sample questions. Additionally, there are numerous books and online courses available to help you prepare.

2. Hands-On Practice

Linux administration requires hands-on experience. Set up a virtual lab or use Linux on your personal computer to practice the skills you'll be tested on.

3. Join a Community

Participating in Linux and LPI-related forums or communities can be immensely beneficial. You can exchange knowledge, seek advice, and get assistance from experienced professionals.

4. Take Practice Exams

Practice exams can help you assess your readiness for the LPI Exam. They simulate the actual test environment, allowing you to identify areas where you need improvement.

5. Time Management

Effective time management is crucial. Create a study schedule that allows you to cover all the necessary topics and allocate more time to areas where you face difficulties.

Success in LPI Exam: Your Gateway to a Bright Career


Acquiring the LPI certification not only validates your Linux administration skills but also opens the door to lucrative job opportunities. Organizations seek certified Linux professionals to manage their critical IT systems.

In conclusion, the LPI Exam is your ticket to a successful career in Linux administration. It's a rigorous test of your knowledge and skills, but with the right preparation and dedication, you can achieve this prestigious certification and set yourself on a path to professional excellence.

Saturday 21 October 2023

Contribute to Open Source: Daniele Scasciafratte, Part 2

We are back with Daniele Scasciafratte, author of the book Contribute to opensource: the right way, to dig deeper into the book and the best practices for making contributions to FOSS projects. This is part 2 of the interview with him. You can find part 1 here.

Many people may feel intimidated or overwhelmed by the idea of contributing to open source projects. How does your book address these concerns and provide guidance for aspiring contributors?

Every open source project has the same environmental architecture and tools. The first step is to explain how a project lives: the communication platforms/services and the differences between them, and to suggest that you are here to simplify the work for the maintainer. The next step is a personal question, asking you to understand your needs, like a feature or a documentation improvement. The final step concerns the various activities.

I divide these steps into first and second levels and categorize them as having a direct or non-direct impact on the project. Direct activities are tasks whose impact on the project can be seen immediately. In contrast, the impacts of non-direct tasks might be seen after finishing the job.

First-level activities can be done by everyone with zero experience in the project: reviewing, localization, support, testing, promotion, and evangelism. The second level requires more experience and is difficult for newcomers: documentation, community management, and development.

By categorizing contributions this way, I make it easier for everyone to understand how any FOSS project works and reuse the same approach for every project: it’s learning by example.

People are overwhelmed because every project is different, and they need clarification about where to start. Instead, looking for everyday things helps: it’s like when you are grocery shopping and have no idea about a new product, so you just read the label that explains the ingredients.

Your book covers various aspects of open source contribution, such as choosing a project, setting up a development environment, submitting patches, and dealing with issues and pull requests. Can you provide some practical tips or strategies for new readers to open source contributions?

The first thing I always suggest, whether you are skilled or a newbie, a developer or a localizer, is to just read the documentation, even if it’s just the Readme file.

Knowledge is more meaningful when it gets closer to your experience. While working on the first draft of the book, I asked various people to review it, and I saw that a lot of people were asking for explanations for some jargon words or for some stuff that I hadn’t explained and clarified well enough.

Open source is based on people’s feedback: a ticket, a code patch, a reply in the support forum, etc. Why not start by improving something that’s important for the project’s future? I remember a survey from Stack Overflow showing that one of the main selling points for a FOSS project is the quality of the documentation, followed by the community.

Another problem I saw with newcomers is that they want to contribute because they are thrilled by the project, but they need help figuring out where to start.

Explaining the various activities helps them understand where they feel more comfortable contributing. Participating in support is the ideal step; it prepares you to move quickly to other things.

Open source projects often have unique cultures, development processes, and coding standards. How does your book help readers navigate these differences and contribute effectively to different types of projects? What is the easiest way to start once the differences are addressed?

Explaining the various communication and platform tools is usually a way to understand the structure of the community: for example, whether it is run by a company or by independent contributors, whether it is an old community with old procedures (like mailing lists only) or a modern one with other communication mechanisms, and whether it is welcoming to new people.

Joining a project with an integralistic approach is something FOSS is (in)famous for. You will have a bias that doesn’t let you participate in the best way to improve a project. I focus on something other than whether the project uses GitHub or GitLab; those are just code hosting platforms with their particular services and policies. I don’t focus on the code of conduct either. It is crucial for sure, but not just when you are in a study phase or in the first steps of contributing to a project.

Open source projects can vary significantly in size, complexity, and maturity. How does your book address the different projects and guide readers to choose the right ones to contribute to based on their skills, interests, and goals?

Every project comes with different needs, so learn its priorities. For instance:

  • We need more active developers, so we need to simplify the dev environment or improve the documentation for them
  • The community is dying because there are gatekeepers who are not so friendly to newcomers
  • The project has a lot of users but is not famous outside that community, which means more advocacy to do.

I explain various areas and activities using external resources as well. The 134-page book contains 200 links to such resources as recaps, analyses, and journals where projects explain how they improved something with facts, not buzzwords.

Finally, what do you hope readers will take away from your book? What message or advice do you have for individuals interested in contributing to open source projects but who may need help knowing where to start?

The message is simple: don’t worry and be nice. FOSS is made by humans, and if you are professional and gracious when you are talking to them, even when reporting a bug that is very annoying for you, it will improve every future interaction.

Also, if they are employees paid to work on a project, being nice is always better. We shouldn’t forget about the motto “free as in beer” from the start of FOSS.

The book’s title includes the words “the right way”; that’s just a way of saying that some approaches work better. The book starts with my autobiography to give an example of a career in the open source environment.

Each new edition started with me gathering ideas at events, talking with people, and reading Twitter or Reddit to see what people are looking for. This way, I can check a “thermometer” of the FOSS world, like I started doing during the COVID pandemic.

I gather resources to improve the book if I see something that needs to be added.

Right now I’m tracking the rise of AI, but there isn’t anything new in the FOSS sphere here that affects the book (in my opinion), as we already saw with the GitHub Copilot case. Still, after all this time, there are a few new licenses with some specialized rules, for example.

So I am unsure whether there will be a new edition this year because I always decide that I have written everything to say about the “contribute to open source” topic. This is the task for the readers: Reach out to me to report something missing.

Also, an important thing is that just reading the book and writing to me to report a typo, something that’s not clear, or something missing about a topic is in itself contributing to Open Source, as the book is entirely on GitHub and licensed under the free GPLv3 license. People who contribute are mentioned in the book.

We are getting close to 1,000 downloads, and it will be a new record because the work is only self-promoted.

I think a relevant example from the 3rd edition is the “boy scout rule” I mention to explain my purpose and my approach to the book and to open source:

“Always leave the campground cleaner than you found it. If you find a mess on the ground, you clean it up regardless of who might have made it. You intentionally improve the environment for the next group of campers.”

Source: lpi.org

Thursday 19 October 2023

IBM, Red Hat and Free Software: An old maddog’s view

Several people have opined on the recent announcement by Red Hat that they would change the terms of sale for their software. Here are some thoughts from someone who has been around a long time, has been in the midst of a lot of what occurred, and has been on many sides of the fence.

This is a fairly long article. It goes back a long way. People who know me will realize that I am going to tell a lot of details that will fit sooner or later. Have patience. Or you can jump close to the bottom and read the section “Tying it all together” without knowing all the reasoning.

Ancient history for understanding


I started programming in 1969. I wrote my programs on punched cards and used FORTRAN as a university cooperative education student. I learned programming by reading a book and practicing. That first computer was an IBM 1130 and it was my first exposure to IBM or any computer company.

Back at the university I joined the Digital Equipment User’s Society (DECUS) which had a library of software written by DEC’s customers and distributed for the price of copying (sometimes on paper tape and sometimes on magnetic tape).

There were very few “professional programmers” in those days. In fact I had a professor who taught programming who told me I would NEVER be able to earn a living as a “professional programmer”. If you wrote code in those days you were a physicist, or a chemist, or an electrical engineer, or a university professor, and you needed the code to do your work or for research.

Once you had met your own need, you might have contributed the program to DECUS so they could distribute it…because selling software was hard, and that was not what you did for a living.

In fact, not only was selling software hard, but you could not copyright your software nor apply for patents on your software. The way you protected your code was through “Trade Secret” and “Contract Law”. This meant that you either had to create a contract with each and every user or you had to distribute your software in binary form. Distributing your software in binary form back in those days was “difficult” since there were not that many machines of one architecture, and if they did have an operating system (and many did not) there were many operating systems that ran on any given architecture.

Since there were so few computers of any given architecture, and since any given architecture might run one of many operating systems (the DEC PDP-11 had more than eleven), many companies distributed their software in source code form or even sent an engineer out to install it, run test suites, and prove it was working. Then, if the customer received the source code for the software, it was often put into escrow in case the supplier went out of business.

I remember negotiating a contract for an efficient COBOL compiler in 1975 where the license fee was 100,000 USD for one copy of the compiler that ran on one IBM mainframe and could be used to do one compile at a time. It took a couple of days for their engineer to get the compiler installed, working and running the acceptance tests. Yes, my company’s lawyers kept the source code tape in escrow.

Many other users/programmers distributed their code in “The Public Domain”, so other users could do anything they wanted with it.

The early 1980s changed all that with strong copyright laws being applied to binaries and source code. This was necessary for the ROMs that were being used in games and (later) the software that was being distributed for Intel-based CP/M and MS DOS systems.

Once the software had copyrights then software developers needed licenses to tell other users what their rights were in usage of that software.

For end users this was the infamous EULA (the “End User License Agreement” that no one reads), and for developers a source code agreement, which was issued and signed in much smaller numbers.

The origins and rise of Unix


Unix was started by Bell Labs in 1969. For years it was distributed only inside of Bell Labs and Western Electric, but eventually escaped to some RESEARCH universities such as University of California Berkeley, MIT, Stanford, CMU and others for professors and students to study and “play with”. These universities eventually were granted a campus-wide source code license for an extremely small amount of money, and the code was freely distributed among them.

Unique among these universities was the University of California, Berkeley. Nestled in the tall redwood trees of Berkeley, California, with a wonderful climate, close to the laid-back cosmopolitan life of San Francisco, it was one of the universities where Ken Thompson chose to take a magnetic tape of UNIX and use it to teach operating system design to eager young students. Eventually the students and staff, working with Ken, were able to create a version of UNIX that might conceivably be said to be better than the UNIX system from AT&T. BSD Unix had demand-paged virtual memory, while AT&T’s still used a swapping memory model. Eventually BSD Unix had native TCP/IP, while AT&T UNIX only had uucp. BSD Unix had a rich set of utilities, while AT&T had stripped down the utility base in the transition from System IV to System V.

This is why many early Unix companies, including Sun Microsystems (with SunOS), DEC (with Ultrix) and HP (with HP/UX) all went with a BSD base to their binary-only products.

Another interesting tidbit of history was John Lions. John was a professor at the University of New South Wales in Australia and he was very interested in what was happening in Bell Labs.

John took sabbatical in 1978 and traveled to Bell Labs. Working along with Ken Thompson, Dennis Ritchie, Doug McIlroy and others he wrote a book on Version 6 of Unix that commented all the source code for the Unix kernel and a commentary on why that code had been chosen and what it did. Unfortunately in 1979 the licensing for Unix changed and John was not able to publish his book for over twenty years.

Unfortunately for AT&T John had made photocopies of drafts of his book and gave those to his students for comments, questions and review. When John’s book was stopped from publication, the students made photocopies of his book, and photocopies of the photocopies, and photocopies of the photocopies of the photocopies, each one becoming slightly lighter and harder to read than the previous generation.

For years Unix programmers measured their “age” in the Unix community by the generation of John’s book which they owned. I am proud to say that I have a third generation of the photocopies.

John’s efforts educated thousands of programmers in how elements of the Unix kernel worked and the thought patterns of Ken and Dennis in developing the system.

[Eventually John’s book was released for publication, and you may purchase it and read it yourself. If you wish you can run a copy of Version 6 Unix on a simulator named SIMH which runs on Linux. You can see what an early version of Unix was like.]

Eventually some commercial companies also obtained source code licenses from AT&T under very expensive and restrictive contract law. This expensive license was also used with small schools that were not considered research universities. I know, since Hartford State Technical College was one of those schools, and I was not able to get Unix for my students in the period of 1977 to 1980. Not only did you have to pay an astronomical amount of money for the license, but you had to tell Bell Labs the serial number of the machine you were going to put the source code on. If that machine broke you had to call up Bell Labs and tell them the serial number of the machine where you were going to move the source code.

Eventually some companies, such as Sun Microsystems, negotiated a redistribution agreement with AT&T Bell Labs to sell binary-only copies of Unix-like systems at a much less restrictive and much less expensive licensing fee than getting the source code from AT&T Bell Labs directly.

Eventually these companies made the redistribution of Unix-like systems their normal way of doing business, since to distribute AT&T derived source code to their customers required that the customer have an AT&T source code license, which was still very expensive and very hard to get.

I should point out that these companies did not just take the AT&T code, re-compile the code and distribute them. They hired many engineers and made a lot of changes to the AT&T code and some of them decided to use code from the University of California Berkeley as the basis of their products, then went on to change the code with their own engineers. Often this not only meant changing items in the kernel, but changing the compilers to fit the architecture and other significant pieces of engineering work.

Then, in the early 1980s, Richard M. Stallman (RMS), a student at MIT, received a distribution of Unix in binary-only form. While MIT had a site-wide license for AT&T source code, the company that made that distribution for their hardware did not sell sources easily, and RMS was upset that he could not change the OS to make the changes he needed.

So RMS started the GNU (“GNU is not Unix”) project for the purpose of distributing a freedom operating system that would require people distributing binaries to make sure that the people receiving those binaries would receive the sources and the ability to fix bugs or make the revisions they needed.

RMS did not have a staff of people to help him do this, nor did he have millions of dollars to spend on the hardware and testing staff. So he created a community of people around the GNU project and (later) the Free Software Foundation. We will call this community the GNU community (or “GNU” for short) in the rest of this article.

RMS did come up with an interesting plan, one of creating software that was useful to the people who used it across a wide variety of operating systems.

The first piece of software was emacs, a powerful text editor that worked across operating systems, and as programmers used it they realized the value of using the same sub-commands and keystrokes across all the systems they worked on.

Then GNU worked on a compiler suite, then utilities. All projects were useful to programmers, who in turn made other pieces of code useful to them.

What didn’t GNU work on? An office package. Few programmers spent a lot of time working on office documents.

In the meantime another need was being addressed. Universities who were doing computer research were generating code that needed to be distributed.

MIT and the University of California Berkeley were generating code that they really did not want to sell. Ideally they wanted to give it away so other people could also use it in research. However the software was now copyrighted, so these universities needed a license that told people what they could do with that copyrighted code. More importantly, from the University’s perspective, the license also told the users of the software that there was no guarantee of any usefulness, and they should not expect support, nor could the university be held liable for any damages from the use of the software.

We joked at the time that the licenses did not even guarantee that the systems you put the software on would not catch fire and burst into flames. This is said tongue-in-cheek, but was a real consideration.

These licenses (and more) eventually became known as the “permissive” licenses of Open Source, as they made few demands on the users of the source code of the software known as “developers”. The developers were free to create binary-only distributions and pass on the binaries to the end users without having to make the source code (other than the code they originally received under the license) visible to the end user.

Only the “restrictive” license of the GPL forced the developer to make their changes visible to the end users who received their binaries.

Originally there was a lot of confusion around the different licenses.

Some people thought that the binaries created by the use of the GNU compilers were also covered by the GPL, even though the sources that generated those binaries were completely free of any licensing (i.e., created by the users themselves).

Some people thought that you could not sell GPL licensed code. RMS refuted that, but admitted that GPL licensed code typically meant that just selling the code for large amounts of money was “difficult” for many reasons.

However many people did sell the code. Companies such as Walnut Creek (Bob Bruce) and Prime Time Freeware for Unix (Richard Morin) sold compendiums of code organized on CD-ROMs and (later) DVDs for money. While the programs on these compendiums were covered by individual “Open Source” licenses, the entire CD or DVD might have had its own copyright and license. Even if it was “legal” to copy the entire ISO, produce your own CDs and DVDs, and sell them, the creators of the originals probably had harsh thoughts toward the resellers.

During all of this time the system vendors such as Digital Equipment Corporation, HP, Sun and IBM were all creating Unix-like operating systems based on either AT&T System V or part of the Berkeley Software Distribution (in many cases starting with BSD Unix 4.x). Each of these companies hired huge numbers of Unix software engineers, documentation people, quality assurance people, product managers and so forth. They had huge buildings, many lawyers, and sold their distributions for a lot of money. Many were “system companies” delivering the software bundled with their hardware. Some, like Santa Cruz Operations (SCO), created only a software distribution.

Originally these companies produced their own proprietary operating systems and sold them along with the hardware, sensing that the hardware without an operating system was fairly useless, but later they separated the hardware sales from the operating system sales to offer their customers more flexibility with their job mix to solve the customer’s problems.

However this typically meant more cost for both the hardware and the separate operating system. And it was difficult to differentiate your offerings from competitors both external and internal to your company. Probably the most famous of these internal conflicts was between DEC’s VMS operating system and its various Unix offerings… and even PDP-11 versus VAX.

DEC had well over 500 personnel (mostly engineers and documentation people) in the Digital Unix group along with peripheral engineering and product management to produce Digital Unix.

Roughly speaking, each company was spending in the neighborhood of 1-2 billion USD per year to sell their systems, investing in sophisticated computer science features to show that their Unix-like system was best.

The rise of Microsoft and the death of Unix


In the meantime a software company in Redmond, Washington was producing and selling the same operating system to run on the PC no matter whether you bought that PC from HP, IBM, or DEC, and this operating system was now moving up in the world, headed towards the lucrative hardware server market. While there were obviously fewer servers than there were desktop systems, the license price of a server operating system could be in the range of 30,000 USD or more.

The Unix Market was stuck between a rock and a hard place. It was becoming too expensive to keep engineering unique Unix-like systems and competing with not only other Unix-like vendors, but also to fight off Windows NT. Even O’Reilly Publishers, who had for years been producing books about Unix subsystems and commands, was switching over to producing books on Windows NT.

The rise of Linux


Then the Linux kernel project burst on the scene. The kernel project was enabled by six major considerations:

  • A large amount of software was available from GNU, MIT, BSD and independent software projects
  • A large amount of information about operating system internals was available on the Internet
  • High speed Internet was coming into the home, not just industry and academia
  • Low cost, powerful processors capable of demand-paged virtual memory were not only available on the market, but were being replaced by more powerful systems, and were therefore available to build a “hobby” kernel.
  • A lot of luck and opportunity
  • A uniquely stubborn project leader who had a lot of charisma.

Having started in late 1991, by late 1993 “the kernel project” had enabled many distribution creators such as “Soft Landing Systems”, “Yggdrasil”, “Debian”, “Slackware” and “Red Hat” to flourish.

Some of these were started as a “commercial” distribution, with the hope and dream of making money and some were started as a “community project” to benefit “the community”.

At the same time, distributions that were based on the Berkeley Software Distribution were still held up by the long-running “Unix Systems Labs Vs BSDi” lawsuit that was holding up the creation of “BSDlite” that would be used to start the various BSD distributions.

Linux (or GNU/Linux as some called it) started to take off, pushed by the many distributions and the press (including magazines and papers).

Linux was cute penguins


I will admit the following is my own thoughts on the popularity of Linux versus BSD, but from my perspective it was a combination of many factors.

As I said before, at the end of 1993 BSD was still being held up by the lawsuit, but the Linux companies were moving forward, and because of this the BSD companies (of which there were only one or two at the time) had nothing new to say to the press.

Another reason that the Linux distributions moved forward was the difference in the model. The GPL had a dynamic effect on the model by forcing the source code to go out with the binaries. Later on, many embedded systems people, or companies that wanted an inexpensive OS for their closed system, might choose software with an MIT or BSD license, since those licenses would not force them to ship all their source code to their customers; but the combination of the GPL for the kernel and the large amount of code from the Free Software Foundation caught the imagination of a lot of the press and customers.

People could start a distribution project without asking ANYONE’s permission, and eventually that sparked hundreds of distributions.

The X Window System and Project Athena


I should also mention Project Athena at MIT, which was originally a research project to create a light-weight client-server environment for Unix workstations.

Out of this project came Kerberos, a network-based authentication system, as well as the X Window System.

At this time Sun Microsystems had successfully made NFS a “standard” in the Unix industry and was trying to advocate for a PostScript-based windowing system named “NeWS”.

Other companies were looking for alternatives, and the client-server based X Window System showed promise. However X10.3, one release from Project Athena, needed some more development, which eventually led to X11.x; on top of that came the Intrinsics and Widgets (button boxes, radio boxes, scroll bars, etc.) that gave the “look and feel” people see in a modern desktop system.

These needs drove the movement of X Window System development out of MIT and Project Athena and into the X Consortium, where people were paid full time to coordinate the development. The X Consortium was funded by memberships from companies and people who felt they had something to gain from having X supported. The X Consortium opened in 1993 and closed its doors in 1996.

Some of these same companies decided to go against Unix System Labs, the consortium set up by Sun Microsystems and AT&T, so they formed the Open Software Foundation (OSF) and decided to set a source-code and API standard for Unix systems. Formed in 1988, it merged with X/Open in 1996 to form the Open Group. Today they maintain a series of formal standards and certifications.

There were many other consortia formed. The Common Desktop Environment (I still have lots of SWAG from that) was one of them. And it always seemed with consortia that they would start up, be well funded, then the companies funding them would look around and say “why should I pay for this, all the other companies will pay for it” and those companies would drop out to let the consortium’s funding dry up.

From the few to the many


At this point, dear reader, we have seen how software originally was written by the people who needed it, whereas “professional programmers” wrote code for other people and required funding to make it worthwhile for them. The “problem” with professional programmers is that they expect to earn a living by writing code. They have to buy food, pay for housing, and pay taxes. They may or may not even use the code they write in their daily life.

We also saw a time when operating systems, for the most part, were written either by computer companies, to make their systems usable, or by educational bodies as research projects. As Linux matures and as standards make the average “PC” from one vendor or another more and more electrically the same, the number of engineers needed to make each distribution of Linux work on a “PC” is minimal.

PCs have typically had difficulty differentiating themselves from one another, and “price” has more and more become the deciding issue. Having to pay for an operating system is something that no company wants to do, and few users expect to pay for it either. So the hardware vendors turn more and more to Linux… an operating system that they do not have to pay any money to put on their platform.

Recently I have been seeing some cracks in the dike. As more and more users of FOSS come on board, they put more and more demands on developers whose numbers are not growing fast enough to keep all the software working.

I hear from FOSS developers that too few, and sometimes no, developers are working on blocks of code. Of course this can also happen to closed-source code, but this shortage hits mostly in areas that are not considered “sexy”, such as quality assurance, release engineering, documentation and translations.

Funding the work


In the early days there were just a few people working on projects that had relatively few people using them. They were passionate about their work, and no one got paid.

One of the first times I heard any type of rumblings was when some people had figured out some ways of making money with Linux. One rumble was indignation from developers who did not want people to make money on code they had written and contributed for free.

I understood the feelings of these people, but I advocated the position that if you did not allow companies to make money from Linux, the movement would go forward slowly, like cold molasses. Allowing companies to make money would cause Linux to go forward quickly. While we lost some of the early developers who did not agree with this, most of the developers that really counted (including Linus) saw the logic in this.

About this time various companies were looking at “Open Source”. Netscape was in a battle with other companies that were creating browsers, and on the other side there were web servers, like Apache, that were needed to serve the pages.

At the same time Netscape decided to “Open Source” their code in an attempt to bring in more developers and lower the costs of producing a world-class browser and server.

The community


All through software history there were “communities” that came about. In the early days the communities revolved around user groups, or groups of people involved in some type of software project, working together for a common goal.

Sometimes these were formed around the systems companies (DECUS, IBM’s SHARE, Sun Microsystems’ Sun-sites, etc) and later bulletin boards, newsgroups, etc.

Over time the “community” expanded to include documentation people, translation people or even people just promoting Free Software and “Open Source” for various reasons.

However, in the later years it turned more and more into people using gratis software without understanding Freedom Software: the same people who would use pirated software, giving nothing back to the community or the developers.

Shiver me timbers….


One of the other issues of software is the concept of “Software Piracy”, the illegal copying and use of software against its license.

Over the years some people in the “FOSS Community” have downplayed the idea of Intellectual Property and even the existence of copyright, without acknowledging that without copyright they would have no control over their software whatsoever. Software in the public domain has no protection from people taking the software, making changes to it, creating a binary copy and selling it for whatever the customer would pay. However, some of these FOSS people condone software piracy and turn a blind eye to it.

I am not one of those people.

I remember the day I recognized the value of fighting software piracy. I was at a conference in Brazil when I told the audience that they should be using Free Software. They answered back and said:

“Oh, Mr. maddog, ALL of our software is free!”

At that time almost 90% of all desktop software in Brazil was pirated, and so with the ease of obtaining software for gratis, part of the usefulness of Free Software (its low cost) was obliterated.

An organization, the Business Software Alliance (BSA), was set up by companies like Oracle, Microsoft, Adobe and others to find and prosecute (typically) companies and government agencies that were using unlicensed or incorrectly licensed software.

If all the people using the Linux kernel would pay just one dollar for each hardware platform where it was running, we would be able to easily fund most FOSS development.

Enter IBM


One person at IBM, by the name of Daniel Frye, became my liaison to IBM. Dan had understood the model and the reasons for having Open Source.

Like many other computer companies (including Microsoft) there were people in IBM who believed in FOSS and were working on projects on their own time.

One of Daniel’s focuses was to find and organize some of these people into a FOSS unit inside of IBM to help move Linux forward.

From time to time I was invited to Austin, Texas to meet with IBM (which, as a DEC employee, felt very strange).

One time I was there and Dan asked me, as President of Linux International(TM), to speak to a meeting of these people in the “Linux group”. I gave my talk and was then ushered into a “green room” to wait while the rest of the meeting went on. After a little while I had to go to the restroom, and while looking for it I saw a letter being projected on the screen in front of all these IBM people. It was a letter from Lou Gerstner, then the president of IBM. The letter said, in effect, that in the past IBM had been a closed-source company unless business reasons existed for it being Open Source. In the future, the letter went on, IBM would be an Open Source company unless there were business reasons for being closed source.

This letter sent chills up my back because, working at DEC, I knew how difficult it was to take a piece of code written by DEC engineers and make it “free software”, even if DEC had no plans to sell that code… no plans to make it available to the public. After going through the process I had DEC engineers tell me “never again”. This statement by Gerstner reversed the process. It was now up to the business people to prove why they could not make it open source.

I know there will be a lot of people out there that will say to me “no way” that Gerstner said that. They will cite examples of IBM not being “Open”. I will tell you that it is one thing for a President and CEO to make a decision like that and another for a large company like IBM to implement it. It takes time and it takes a business plan for a company like IBM to change its business.

It was around this time that IBM made their famous announcement that they were going to invest a billion US dollars into “Linux”. They may have also said “Open Source”, but I have lost track of the timing of that. This announcement shocked the world: that such a large and staid computer company would make such a statement.

A month or two after this Dan met with me again, looked me right in the eye, and asked whether the Linux community might think IBM was trying to “take over Linux”: could they accept the “dancing elephant” coming into the Linux community, or would they be afraid that IBM would crush Linux?

I told Dan that I was sure the “people that counted” in the Linux community would see IBM as a partner.

Shortly after that I became aware of IBM hiring Linux developers so they could work full time on various parts of Linux, not just part time as before. I knew people paid by IBM who were working on parts of “Linux” as disparate as the Apache Web Server.

About a year later IBM made another statement. They had recovered that billion dollars of investment, and were going to invest another billion dollars.

I was at a Linux event in New York City when I heard of IBM selling their laptop and desktop division to Lenovo. I knew that while that division was still profitable, it was not profitable to the extent that it could support IBM. So IBM sold off that division, purchased PricewaterhouseCoopers’ consulting arm (doubling the size of their integration department), and shifted their efforts into creating business solutions, which WERE more profitable.

There was one more, subtler issue. Before that announcement, literally one day before the announcement, if an IBM salesman had used anything other than IBM hardware to create a solution, there might have been hell to pay. However, at that Linux event it was announced that IBM was giving away two Apple laptops as prizes in a contest. The implications of that prize giveaway were not lost on me. Two days before that announcement, if IBM marketing people had offered a prize of a non-IBM product, they probably would have been FIRED.

In the future a business solution by IBM might use ANY hardware and ANY software, not just IBM’s. This was amazing. And it showed that IBM was supporting Open Source, because Open Source allowed their solution providers to create better solutions at a lower cost. It is as simple as that.

Lenovo, with its lower overhead and focused business, could easily make a reasonable profit off those low-end systems, particularly when IBM might be a really good customer of theirs.

IBM was no longer a “computer company”. They were a business solutions company.

Later on IBM sold off their x86 server division to Lenovo, for much the same reason.

So when IBM wanted to be able to provide an Open Source solution for their enterprise solutions, which distribution were they going to purchase? Red Hat.

And then there was SCO


I mentioned “SCO” earlier as a distributor of Unix that was, in some ways, much like Microsoft. SCO created distributions, mostly based on AT&T code (instead of Berkeley), and even took over the distribution of Xenix from Microsoft when Microsoft did not want to distribute it anymore.

This was The Santa Cruz Operation, located in the Santa Cruz Mountains overlooking the beautiful Monterey Bay.

Started by the father-and-son team of Larry and Doug Michels, they had a great group of developers and probably distributed more licenses for Unix than any other vendor. They specialized in server systems that drove lots of hotels, restaurants, and so on, using character-cell terminals and later X-terminals and such.

Doug, in particular, is a great guy. It was Doug, when he was on the Board of Directors for Uniforum, who INSISTED that Linus be given a “Lifetime Achievement” award at the tender age of 27.

I worked with Doug on several projects, including the Common Desktop Environment (CDE) and enjoyed working with his employees.

Later Doug and Larry sold off SCO to the Caldera Group, creators of Caldera Linux. Based in Utah, the Caldera crew were a spin-off from Novell. From what I could see, Caldera was not so much interested in “FreeDOM” Linux as in having a “cheap Unix” free of AT&T royalties, but still using AT&T code. They continually pursued deals with closed-source software that they could bind into their Linux distribution to give it value.

This purchase formed the basis of what became known as “Bad SCO” (when Caldera changed their name to “SCO”), which soon took up the business tactic of suing Linux vendors because “SCO” said that Linux had AT&T source code in it and was a violation of their licensing terms.

This caused a massive uproar in the Linux marketplace, with people not knowing whether Linux could continue to be distributed.

Of course most of us in the Linux community knew these claims were false. One of the claims that SCO made was that they owned the copyrights to the AT&T code. I knew this was false because I had read the agreement between AT&T and Novell (DEC was a licensee of both, so they shared the contract with us), and I knew that, at most, The Santa Cruz Operation had the right to sub-license and collect royalties… but I will admit the contract was very confusing.

However no one knew who would fund the lawsuit that would shortly occur.

IBM bellied up to the bar (as did Novell, Red Hat and several others), and for the next several years the legal battle went on with SCO bringing charges to court and the “good guys” knocking them down. You can read more about this on Wikipedia.

In the end the courts found that at most SCO had an issue with IBM itself over a defunct contract, and Linux was in the clear.

But without IBM, the Linux community might have been in trouble. And “Big Blue” being in the battle gave a lot of vendors and users of Linux the confidence that things would turn out all right.

Red Hat and RHEL


Now we get down to Red Hat and its path.

I first came to know Red Hat about the time that Bob Young realized that the CDs his company, ACC Corp., sold the most of came from this little company in Raleigh, North Carolina.

Bob traveled there and found three developers who were great technically but were not the strongest in business and marketing.

Bob bought into the company and helped develop its policies. He advocated for larger servers and more Internet connectivity in order to give away more copies of Red Hat. It was Bob who pointed out that “Linux is catsup, and I will make Red Hat™ the same as Heinz™”.

Red Hat developed the business model of selling services, and became profitable doing that. Eventually Red Hat went public with one of the most profitable IPOs of that time.

Red Hat went through a series of Presidents, each one having the skills needed at the time, until eventually the needs of IBM matched the desires of the Red Hat stockholders.

It is no secret that Red Hat did not care about the desktop other than as a development platform for RHEL. They gave up their desktop development to Fedora. Red Hat cared about the enterprise, the companies that were willing to pay hefty price tags for the support that Red Hat was going to sell them with the assurance that the customers would have the source code in case they needed it.

These enterprise companies are serious about their need for computers, but do not want to make the investment in employees to give them the level of support they need. So they pay Red Hat. But most of those companies have Apple or Microsoft on the desktop and could not care less about having Fedora there. They want RHEL to be solid, they want someone ready to answer that support phone, and they are willing to pay for it.

The alternatives are to buy a closed-source solution and do battle to get the source code when you need it, or to settle, server by server, for something that is not the integrated hardware/software system solution that IBM needed to offer.

“Full Stack” systems companies versus others


A few years ago Oracle made the decision to buy Sun Microsystems and its intellectual property. Of course Oracle’s products ran on many different operating systems, but Oracle realized that if they had complete control of the hardware, the operating system, and the application base (in this case their premier Oracle database engine), they would create an “Unstoppable Oracle”.

Why is a full-stack systems company preferred? You can make changes and fixes to the full stack that benefit your applications without having to convince, cajole, or argue with other people to get them in. Likewise you can test the full stack for inefficiencies or weak points.

I have worked for “full-stack” companies. We supported our own hardware. The device drivers we wrote had diagnostics that the operating system could make visible to the systems administrators to tell them that devices were ABOUT to fail, and to allow those devices to be swapped out. We built features into the system that benefited our database products and our networking products. Things could be made more seamless.

IBM is a full-stack company. Apple is a full-stack company. Their products tend to be more expensive, but many serious people pay more for them.

Why would companies pay to use RHEL?


Certain companies (those we call “enterprises”) are not universities or hobbyists. Those companies (and governments) use terms like “mission critical” and “always on”. They typically do not measure their numbers of computers in the tens or hundreds, but in the thousands… and they need them to work well.

They talk about “Mean Time to Failure” (MTTF) and “Mean Time to Repair” (MTTR), and they want “Terms of Service Agreements” (TSAs) that guarantee so many hours of up-time (99.999% up-time), with penalties if the guarantees are not met. And as a rule of thumb, computer companies know that for every “9” to the right of the decimal point you need to put in 100 times more work and expense to get there.
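To give a sense of the scale involved, here is a rough back-of-the-envelope calculation of my own (not figures from any particular contract) of what those up-time percentages allow in downtime per year:

   one year ≈ 365.25 days × 24 hours ≈ 8,766 hours
   99.9%   up-time → 0.1%   of 8,766 hours ≈ 8.8 hours of downtime per year
   99.99%  up-time → 0.01%  of 8,766 hours ≈ 53 minutes of downtime per year
   99.999% up-time → 0.001% of 8,766 hours ≈ 5.3 minutes of downtime per year

Each additional “9” cuts the allowable downtime by a factor of ten, which is why the effort and expense climb so steeply.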

And typically in these “Terms of Service” you also specify how many “Points of Contact” there are between the customer and the service provider. The fewer the “Points of Contact”, the less the contract costs, because a customer-supplied “point of contact” will have more knowledge about the system and the problem than the average user.

Also on these contracts the customer does not call into what we in the industry call “first line support”. The customer has already applied all the patches, rebooted the system, and made sure the mouse is plugged in. So the customer calls a special number and gets the second or third line of support.

In other words, serious people. Really serious people. And those really serious people are ready to spend really serious money to get it.

I have worked both for those companies that want to buy those services and those companies that needed to provide those services.

Many people will understand that the greater the number of systems you have under contract, the more issues you will have. Likewise, the greater the number of systems under contract, the lower the per-system cost of providing service, since that cost is spread across all the customers and systems that need enterprise support.

IBM has typically been one of those companies that provided really serious support.

Tying it all together


IBM still had many operating systems and solutions that they used in their business solutions business, but IBM needed a Linux that they could use as a full-stack solution, just as Oracle did, giving IBM the ability to integrate the hardware, the operating system, and the solutions to fit the customer better.

Likewise Red Hat Software, with its RHEL solution, had the reputation and engineering behind it to provide an enterprise solution.

Red Hat had focused on enterprise servers, unlike other well-known distributions, with their community version “Fedora” acting as a trial base for new ideas to be folded into RHEL at a later time. RHEL, however, was Red Hat’s business focus.

It should also be pointed out that some pieces of the software came only from Red Hat; few “community people” worked on certain pieces of the distribution called “RHEL”. So while many of the pieces were copyrighted and then released under some version of the GPL, many of the contributions that made up RHEL came only from Red Hat.

Red Hat also had a good reputation in the Linux community, releasing all of their source code to the larger community and charging for support.

However, over time some customers developed a pattern of purchasing a small number of RHEL licenses and then running a “bug-for-bug” compatible rebuild of RHEL from some other distribution on the rest of their systems. This, of course, saved the customer money, but it also reduced the amount of revenue that Red Hat received for the same amount of work. This forced Red Hat to charge more for each license they sold, or lay off Red Hat employees, or not do projects they might otherwise have funded.

So recently Red Hat/IBM made a business decision to limit their customers to those who would buy a license for every single system that would run RHEL, and to distribute their source code and the information necessary to build the distribution only to those customers. Therefore the people who receive the binaries also receive the sources, so they can fix bugs and extend the operating system as they wish… this was, and is, the essence of the GPL.

Most, if not all, of the articles I have read have said something along the lines of “IBM/Red Hat seem to be following the GPL… but… but… but… the community!”

Which community? There are plenty of distributions for people who do not need the same level of engineering and support that IBM and Red Hat offer. Red Hat, and IBM, continue to send their changes for GPLed code “upstream” to flow down to all the other distributions. They continue to share ideas with the larger community.

In the early days of the DEC Linux/alpha port I used Red Hat because they were the one distribution who worked along with DEC to put the bits out. Later other distributions followed onto the Alpha from the work that Red Hat had done. Quite frankly, I have never used “RHEL” and have not used Fedora in a long time. Personal preference.

However I now see a lot of people coming out of the woodwork and beating their breasts and saying how they are going to protect the investment of people who want to use RHEL for free.

I have seen developers of various distributions make T-shirts declaring that they are not “Freeloaders”. I do not know who may have called any of the developers of CentOS or Rocky Linux, Alma or any other “clone” of any other distribution a “freeloader”. I have brought out enough distributions in my time to know that doing that is not “gratis”. It takes work.

However I will say that there are many people who use these clones and do not give back to the community in any way, shape or form who I consider to be “freeloaders”, and that would probably be the people who sign a business agreement with IBM/Red Hat and then do not want to live up to that agreement. For these freeloaders there are so many other distributions of Linux that would be “happy” to have them use their distributions.

/*
A personal note here:


As I have stated above, I have been in the “Open Source” community before there was Open Source, before there was the Free Software Foundation, before there was the GNU project.

I am 73 years old, and have spent more than 50 years in “the community”. I have whip marks up and down my back for promoting source code and giving out sources even when I might have been fired or taken to court for it, because the customer needed it. Most of the people who laughed at me for supporting Linux when I worked for the Digital Unix Group are now working for Linux companies. That is ok. I have a thick skin, but the whip marks are still there.

There are so many ways that people can help build this community that have nothing to do with the ability to write code, write documentation or even generate a reasonable bug report.

Simply promoting Free Software to your schools, companies, governments and understanding the community would go a long way. Starting up a Linux Club (lpi.org/clubs) in your school or helping others to Upgrade to Linux (upgradetolinux.com) are ways that Linux users (whether individuals, companies, universities or governments) can contribute to the community.

But many of the freeloaders will not even do that.
*/

So far I have seen four different distributions saying that they will continue the production of “not RHEL”, generating even more distributions for the average user to ask “Which one should I use?” If they really want to do this, why not just work together to produce one good one? Why not make their own distributions a RHEL competitor? How long will they keep beating their breasts when they find out that they can not make any money doing it?

SuSE said that they would invest ten million dollars in developing a competitor to RHEL. Fantastic! COMPETE. Create an enterprise competitor to Red Hat with the same business channels, world-wide support team, etc. etc. You will find it is not inexpensive to do that. Ten million may get you started.

My answer to all this? RHEL customers will have to decide what they want to do. I am sure that IBM and Red Hat hope that their customers will see the value of RHEL and the support that Red Hat/IBM and their channel partners provide for it.

As for the rest of the customers, who just want to buy one copy of RHEL and then run a “free” rebuild on all their other systems, no matter how it is created: it seems that IBM does not want to do business with them anymore, so they will have to go to other suppliers who have enterprise-capable distributions of Linux and who can tolerate that type of customer.

I will also point out that IBM and Red Hat have presented one set of business conditions to their customers, and their customers are free to accept or reject them. Then IBM and Red Hat are free to create another set of business conditions for another set of customers.

I want to make sure people know that I do not have any hate for people and companies who set business conditions as long as they do not violate the licenses they are under. Business is business.

However I will point out that, as “evil” as Red Hat and IBM have been portrayed in this business change, there is no mention at all of the companies that support Open Source “permissive licenses”, which do not guarantee the sources to their end users, or that offer only closed-source licenses and do not allow, and have never allowed, clones to be made. These people and companies do not have any right to throw stones (and you know who you are).

Red Hat and IBM are making their sources available to all those who receive their binaries under contract. That is the GPL.

For all the researchers, students, hobbyists and people with little or no money, there are literally hundreds of distributions that they can choose, and many that run across other interesting architectures that RHEL does not even address.

Source: lpi.org

Tuesday 17 October 2023

Web Development Must Be Considered as a Whole

“Visit our website” is the phrase found in every press release and at the end of articles and white papers. Websites have become the primary medium to inform the public about products, services, leisure, news, education, and just about everything. But to fulfill the websites’ mission, their developers need a fine understanding of their website visitors, their tools, and the deployment of their back-end applications.

In this article, I explain why it is important to understand usability and accessibility (the user experience, or UX) deeply, and, beyond that, to understand the impact of the tools you use to build your website and how to deploy your application efficiently on your servers.

Usability and Accessibility


Keep informed about innovations in the display of information that can make your website easier to view and navigate. It’s the responsibility of the web developer to create a structure for the website that allows visitors to navigate the contents in a reliable, agile, and efficient way.

An example of a successful web design element is the accordion, which displays brief summaries of the most popular questions about a topic. When a visitor clicks on a summary, it expands into a fuller explanation. This element concisely displays the content that interests most visitors and makes navigation friendlier. The accordion works better if an open entry collapses when the visitor opens another one. The accordion is particularly suited to the many visitors these days who view websites on their phones, because it reduces scrolling.
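As a minimal sketch of the idea (the markup, class name, and questions here are only illustrative), an accordion can be built from plain HTML details/summary elements, with a few lines of JavaScript to collapse the other entries when one is opened:

   <details class="faq">
     <summary>Does the site work on phones?</summary>
     <p>Yes, the layout adapts to small screens.</p>
   </details>
   <details class="faq">
     <summary>Which payment methods do you accept?</summary>
     <p>All major cards and bank transfers.</p>
   </details>

   <script>
     // When one entry opens, close the others so only one stays expanded.
     const entries = document.querySelectorAll("details.faq");
     entries.forEach((entry) => {
       entry.addEventListener("toggle", () => {
         if (!entry.open) return;
         entries.forEach((other) => {
           if (other !== entry) other.open = false;
         });
       });
     });
   </script>

Because details/summary is ordinary HTML, the content stays reachable for assistive technologies and even when JavaScript is unavailable.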

Most websites today are not accessible to all visitors, even though accessibility is required by regulations such as the Americans with Disabilities Act in the USA and the European Accessibility Act in the EU, and even though the World Wide Web Consortium (W3C) offers precise guidelines for accessibility (https://www.w3.org/WAI/standards-guidelines/wcag/).

Websites still use backgrounds that do not contrast enough with the text that appears on them, or provide important features through graphics that have no accompanying explanatory text. Such sites do not work well for visitors who require assistive technologies such as screen magnifiers, screen readers, or voice recognition. Many sites are also not responsive: that is, they do not adapt to the size of the visitor’s screen. Others fail to support all legitimate forms of payment.
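A few small, illustrative touches (the file names, colors, and selectors are examples, not a recipe) address several of these problems at once: descriptive alternative text for meaningful images, sufficient contrast between text and background, and a media query so the layout adapts to narrow screens:

   <img src="pay-now.png" alt="Pay now by credit card or bank transfer">

   <style>
     body { background: #ffffff; color: #1a1a1a; } /* strong contrast between text and background */

     nav ul { display: flex; }
     @media (max-width: 600px) {
       nav ul { flex-direction: column; } /* stack the menu vertically on small screens */
     }
   </style>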

These usability problems plague many sites even when they use content management systems such as Drupal or WordPress and depend on themes that shape the user experience. It should be noted that addressing the needs of disabled users (the blind, those who have trouble manipulating a mouse, and so forth) usually also provides a better experience for other visitors.

Performance is also your responsibility as a web developer. Websites often contain large graphics that do not respect the needs of people with slow Internet access. Some popular web modules also create performance problems by making features a priority and loading sites down with all kinds of programming code that your site might never use. Before you use a module, research its security record, its behavior, and its potential effects on performance.

But too many developers assume that performance is fine because pages load quickly into their browsers in the office, or at home on their high-speed Internet connections.

Project Integration


User experience is not the only topic calling for research. Web developers must also think about how to integrate their code into their organizations.

It’s important to deploy your application in a manner that doesn’t affect other projects. When I started web development, my work environment was on a virtual server with a Linux system. A single directory held all the projects we had developed, including those that had been retired.

These projects shared system resources. Therefore, when we needed to update a website, other projects were affected.

That’s when I saw the need to investigate and implement another solution. I chose the container management system Docker, which put my code in an independent container. This change allowed me to configure the precise requirements of each project, such as a specific version of the database engine, web server, or interpreter.
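As an example of what this looks like in practice, here is a minimal docker-compose.yml sketch for a single project; the image names and version numbers are placeholders and would differ from project to project:

   services:
     web:
       image: php:8.2-apache        # web server and interpreter pinned for this project
       volumes:
         - ./src:/var/www/html      # the project's code, mounted into the container
       ports:
         - "8080:80"
     db:
       image: mariadb:10.11         # database engine version specific to this project
       environment:
         MARIADB_ROOT_PASSWORD: example

Each project carries its own versions this way, so updating one website no longer affects the others.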

While exploring deployment options, I discovered other collaborative tools, such as the version control tool Git. Version control is very useful in web development, as it is in any other project that requires frequent updates. Using Git let me share my work with other developers.
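A minimal sharing workflow looks something like this (the repository URL and file names are, of course, placeholders):

   git clone https://example.com/team/website.git   # get the shared repository
   cd website
   git checkout -b contact-form                     # work on a separate branch
   git add index.html contact.js
   git commit -m "Add contact form validation"
   git push origin contact-form                     # publish the branch so other developers can review it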

Resources such as the Web Development Essentials program from the Linux Professional Institute (LPI) offer web developers the opportunity to acquire basic knowledge of critical tools such as HTML, JavaScript, CSS, and SQL.

The practical experience you obtain by completing the program allows you to step into the wonderful world of web-based software development, to understand the workings of the systems, applications, and websites that we use every day, and little by little to discover how you can create your own solutions for your organization or personal project.

In summary, our work as web developers, in addition to producing websites, is focused on researching, testing, innovating, and especially enjoying these other sides of programming. In addition to writing source code, a broad approach to our job allows us to improve the end user’s experience in accessing information.

Source: lpi.org