Saturday, 24 August 2024

Morrolinux: Linux on Apple Silicon – Surpassing Expectations


Linux’s adaptability is well known, and its ability to run on a myriad of architectures is a testament to that flexibility. The journey of porting Asahi Linux to Apple Silicon M1 highlights this adaptability. Initial reactions were mixed, with some questioning the logic behind installing Linux on a Mac. However, the combination of Apple Silicon M1’s hardware efficiency and relative affordability presented a compelling case for Linux enthusiasts.

The Beginnings: Growing Pains and The Role of Community


Initially, the compatibility of Linux with Apple Silicon was a work in progress. Key components such as Bluetooth, speakers, and GPU acceleration were missing, limiting the usability of Asahi Linux in everyday scenarios. Despite these challenges, the project, led by Hector Martin (AKA marcan), made significant progress, largely due to community support on platforms such as Patreon.

The community indeed played a crucial role in the project’s development. Notable contributors such as YouTuber Asahi Lina engaged in reverse engineering the GPU, sharing progress through live streams. This collaborative, open-source approach was pivotal in uncovering crucial traits of the hardware in the absence of official documentation from Apple.

Asahi Lina running the first basic GPU accelerated demo

Major Milestones: From GPU Acceleration to Enhanced Audio Quality


One of the project’s significant achievements was the implementation of GPU drivers supporting OpenGL 2.1 and OpenGL ES 2.0, along with OpenGL 3 and a (work-in-progress) Vulkan driver. This development enabled smoother operation of desktop environments and web browsers.

Portal (OpenGL 3) running on Steam under x86 emulation on an M1 machine

The collaboration between the Asahi Linux team and the PipeWire and WirePlumber projects not only achieved unparalleled audio quality through speaker calibration on Linux laptops but also made broader contributions to the Linux audio ecosystem. By adhering to an “upstream first” policy, these improvements offer benefits beyond the Linux on Apple Silicon project, enhancing audio experiences across various platforms. Notably, this partnership introduced automatic loading of audio DSP filters for different hardware models, addressing a gap in the Linux audio stack for improved sound quality across devices.

The Rise of Remix and Full-Scale Support


The release of Fedora Asahi Remix marked a milestone in offering a stable version of Linux for Apple Silicon. This version streamlined the installation process, facilitating a dual-boot setup with macOS. The release also boasted extensive hardware support, including novel features like the (still work-in-progress) Apple Neural Engine on M1 and M2 processors.

KDE about page on an M1 machine running Fedora

The Linuxified Apple Silicon: Progress and Prospects


Linux on Apple Silicon has shown remarkable progress, offering a user experience that rivals and, in some aspects, outshines macOS. Most functionalities, including the keyboard backlight and webcam, operate smoothly.

Although further development is needed for complete microphone support and for external display compatibility via USB-C and Thunderbolt, the overall performance is commendable. This rapid evolution highlights the strength of community-driven, open-source collaboration: just two years since its inception, the project underscores the cooperative spirit of the Linux community. Looking ahead, further improvements and wider adoption of Linux on Apple devices are expected, supported by continued development and an active community. And if you are wondering whether Linux on Apple Silicon is going to be better than Linux on x86… well, the answer is probably going to be yes, and soon.

Source: lpi.org

Saturday, 10 August 2024

The Evolution of Research in Computer Science


In developing products, the normal stages are typically research, advanced development, product development, manufacturing engineering, customer serviceability engineering, product release, product sustaining, and product retirement. In the modern day of agile programming and “DevOps” some or all of these steps are often blurred, but the usefulness of all of them is still recognized, or should be.

Today we will concentrate on research, which is the main reason why I gave my support to the Linux/DEC Alpha port in the years of 1994 to 1996, even though my paid job was to support and promote the 64-bit DEC Unix system and (to a lesser extent) the 64-bit VMS system. It is also why I continue to give my support to Free and Open Source Software, and especially Linux, after that.

In early 1994 there were few opportunities for a truly “open” operating system. Yes, research universities were able to do research because of the quite liberal university source-code licensing of Unix systems, as were governmental and industrial research labs. However, the implementation of that research was still under the control of commercial interests in computer science, and the speed of taking research through development to distribution was relatively slow. BSD-Lite was still not on the horizon, as the USL/BSDI lawsuit was still going on. MINIX was still hampered by its restriction to educational and research uses (not solved until the year 2000). When you took it all into consideration, the Linux kernel project was the only show in town, especially when you took into account that all of its libraries, utilities, and compilers were already 64-bit in order to run on Digital Unix.

Following close on the original distribution of GNU/Linux V1.0 (starting in late 1993 with distributions such as Soft Landing Systems, Yggdrasil, Debian, Red Hat, Slackware, and others) was the need for low-cost, flexible supercomputers, initially called Beowulf systems. Donald Becker and Dr. Thomas Sterling codified and publicized the use of commodity hardware (PCs) and Free Software (Linux) to replace uniquely designed and manufactured supercomputers, producing systems that could deliver the power of a supercomputer for approximately 1/40th of the price. In addition, when the initial funding job of these computers was finished, the computer could be re-deployed to other projects, either whole or broken apart into smaller clusters. This model eventually became known as “High Performance Computing” (HPC), and the world’s 500 fastest computers use this technology today.

Before we get started on the “why” of computer research and FOSS, we should take a look at how “research” originated in computer science. Research was originally done only by the entities that could afford impossibly expensive equipment, or that could design and produce their own hardware: originally research universities, governments, and very large electronics companies. Later on, smaller companies sprang up that also did research. Many times this research generated patents, which helped to fuel further research.

Eventually the area of software extended to entities that did not have the resources to purchase their own computers. Microsoft wrote some of their first software on machines owned by MIT. The GNU tools were often developed on computers that were not owned by the Free Software Foundation. Software did not necessarily require ownership of the very expensive hardware needed in the early days of computers. Today you could do many forms of computer science research on an 80 USD (or cheaper) Raspberry Pi.

Unfortunately today many companies have retired or greatly reduced their Research groups. Only a very few of them do “pure research” and even fewer license their research out to other companies on an equitable basis.

If you measure research today, using patents as a measure, more than 75% of patents are awarded to small and medium sized companies, and the number of patents awarded per employee is astonishing when you look at companies that have 1-9 employees. While it is true that large companies like Google and Apple apply and receive a lot of patents overall, the number of patents per employee is won by small to medium companies hands down. Of course many readers of this do not like patents, and particularly patents on software, but it is a way of measuring research and it can be shown that a lot of research is currently done today by small companies and even “lone wolves”.

By 1994 I had lived through all of the major upgrades to “address space” in the computer world. I started with a twelve-bit address space (4,096 twelve-bit words in a DEC PDP-8), to a 24-bit address space (16,777,216 bytes) in an IBM mainframe, to 16 bits (65,536 bytes) in the DEC PDP-11, to 32 bits (4,294,967,296 bytes) in the DEC VAX architecture. While many people never felt really constrained by the 32-bit architecture, I knew of many programmers and problems that were.
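Those jumps are easy to verify directly; a two-line Python loop reproduces the figures named above:

```python
# Maximum number of addressable units for each address-space
# width mentioned in the text (12, 16, 24, 32, and 64 bits).
for bits in (12, 16, 24, 32, 64):
    print(f"{bits:2d}-bit address space: {2**bits:,} addressable units")
```

Running it prints 4,096 for 12 bits, 65,536 for 16 bits, 16,777,216 for 24 bits, 4,294,967,296 for 32 bits, and 18,446,744,073,709,551,616 for 64 bits, which shows just how large the last jump was.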

The problem was with what we call “edge programming”, where the dataset you are working with is so big you cannot have it all in the same memory space. When this happens you start to “organize” or “break down” the data, then write programs to transfer results from address space to address space. Often this means you have to save the metadata (or partial results) from one address space, then apply it to the next address space. This frequently causes problems in getting the program correct.
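The “break down the data and carry partial results forward” pattern can be pictured with a minimal sketch: here a dataset too large to hold at once arrives in chunks, and only small partial results are carried from one chunk to the next. The function and variable names are hypothetical, purely for illustration:

```python
def running_mean_over_chunks(chunks):
    """Compute the mean of a dataset too large to hold in one address space.

    Each chunk is processed separately; only the small partial results
    (running sum and count) survive from one chunk to the next.
    """
    total, count = 0.0, 0
    for chunk in chunks:      # each chunk fits in memory; the whole dataset need not
        total += sum(chunk)
        count += len(chunk)
    return total / count if count else 0.0

# Simulate a large dataset delivered in pieces.
pieces = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
print(running_mean_over_chunks(pieces))  # mean of 1..9 = 5.0
```

The subtle bugs the author alludes to usually live exactly in that carried-over state: forget to reset it, or combine it incorrectly across chunks, and every subsequent result is wrong.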

What types of programs are these? Weather forecasting, climate study, genome research, digital movie production, emulating a wind tunnel, and modeling an atomic explosion.

Of course all of these are application level programs, and any implementation of a 64-bit operating system would probably serve the purpose of writing that application.

However, many of these problems are at the research level, and whether or not the finished application was FOSS, the tools used could make a difference.

One major researcher in genome studies was using the proprietary database of a well-known database vendor. That vendor’s licensing made it impossible for the researcher to simply image a disk with the database on it, and send the backup to another researcher who had the same proprietary database with the same license as the first researcher. Instead the first researcher had to unload their data, send the tapes to the second researcher and have the second researcher load the tapes into their database system.

This might have been acceptable for a gigabyte or two of data, but was brutal for the petabytes (one million gigabytes) of data that was used to do the research.

This issue was solved by using an open database like MySQL. The researchers could just image the disks and send the images.

While I was interested in 64-bit applications and what they could do for humanity, I was much more interested in 64-bit libraries, system calls, and the efficient implementation of both, which would allow application vendors to use data sizes almost without bound in applications.

Another example is the rendering of digital movies. With analog film you have historically had 8mm, 16mm, 35mm, and (most recently) 70mm film, and (of course) in color each “pixel” has, in effect, infinite color depth due to the analog qualities of film. With analog film there is also no concept of “compression” from frame to frame. Each frame is a separate “still image”, and our eye supplies the illusion of movement.

With digital movies there are so many considerations that it is difficult to say what the “average” size of a movie, or even one frame, is. Is the movie wide-screen? 3D? IMAX? Standard or high definition? What is the frame rate and the length of the video? What is the resolution of each frame?

We can get an idea of how big these video files can be for a one-hour digital movie: roughly 3 GB at 2K, 22 GB at 4K, and 40 GB at 8K. Since a 32-bit address space allows at most 2 GB or 4 GB of addresses (depending on the implementation), you can see that even a relatively short “low technology” film cannot fit into memory all at once.
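A quick back-of-the-envelope check makes the point concrete. Using the rough per-hour figures above, and assuming (my assumption, not the author's) a typical two-hour feature film:

```python
GIB = 1024**3                                  # bytes in one GiB
per_hour_gb = {"2K": 3, "4K": 22, "8K": 40}    # rough sizes from the text
film_hours = 2                                 # assumed feature-film length

for res, gb in per_hour_gb.items():
    size = gb * film_hours * GIB
    fits = size <= 4 * GIB                     # the larger 32-bit limit
    print(f"{res}: {gb * film_hours} GB for {film_hours} hours; fits in 4 GB: {fits}")
```

Even the 2K film needs about 6 GB, already past the most generous 32-bit limit, while 4K and 8K overshoot it by an order of magnitude.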

Why do you need the whole film? Why not just one frame at a time?

It has to do with compression. Films are not sent to movie theaters or put onto a physical medium like Blu-ray in a “raw” form. They are compressed with various compression techniques through the use of a “codec”, which uses a mathematical technique to compress, then later decompress, the images.

Many of these compression techniques store a particular frame as a base and then apply the differences over the next several (reduced-size) frames. If this were continued over the course of the entire movie, the problem would come when there is some glitch in the process: how far back in the file do you have to go in order to fix the “glitch”? The answer is to store another complete frame every so often, to “reset” the process and start the “diffs” all over again. There might be some small “glitch” in the viewing, but typically so small that no one would notice it.
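The keyframe-plus-differences scheme described above can be sketched in a few lines. This toy version treats each frame as a flat list of pixel values, stores a full frame every `keyframe_interval` frames, and only per-pixel differences in between; real codecs are vastly more sophisticated, so this is purely illustrative:

```python
def encode(frames, keyframe_interval=4):
    """Store a full frame periodically; otherwise store per-pixel diffs."""
    encoded, prev = [], None
    for i, frame in enumerate(frames):
        if i % keyframe_interval == 0:
            encoded.append(("key", list(frame)))    # full frame: the "reset" point
        else:
            diff = [a - b for a, b in zip(frame, prev)]
            encoded.append(("diff", diff))          # usually small and compressible
        prev = frame
    return encoded

def decode(encoded):
    frames, prev = [], None
    for kind, data in encoded:
        frame = list(data) if kind == "key" else [p + d for p, d in zip(prev, data)]
        frames.append(frame)
        prev = frame
    return frames

frames = [[10, 10, 10], [11, 10, 10], [12, 11, 10], [12, 12, 10], [50, 50, 50]]
assert decode(encode(frames)) == frames
```

Note how a corrupted “diff” would poison every decoded frame after it, but only until the next `"key"` entry, which is exactly the “reset” behavior the text describes.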

Throw in the coordination needed by something like 3D or IMAX, and you can see the huge size of a cinematic event today.

In investigating climate change, it is nice to be able to address, in 64-bit virtual memory, over 32,000 bytes for every square meter of the surface of the earth, including all the oceans.
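That claim is easy to sanity-check. Taking the Earth's total surface area as roughly 5.1 × 10^14 square meters (my approximation; the figure is not from the text), 32,000 bytes per square meter still fits inside a 64-bit address space:

```python
earth_surface_m2 = 5.1e14          # approx. total surface, land plus oceans (assumed)
bytes_per_m2 = 32_000
total = earth_surface_m2 * bytes_per_m2     # about 1.6e19 bytes

print(f"total: {total:.2e} bytes; 2**64 = {2**64:.2e}")
print("fits in a 64-bit address space:", total < 2**64)
```

The total comes to about 1.6 × 10^19 bytes against a 64-bit ceiling of about 1.8 × 10^19, so the dataset fits, with little room to spare.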

When choosing an operating system for doing research there were several options.

You could use a closed-source operating system. You *might* be able to get a source code license, sign a non-disclosure agreement (NDA), do your research, and publish the results. The results would be some type of white paper delivered at a conference (I have seen many of these white papers), but there would be no source code published, because that was proprietary. A collaborator would have to go through the same steps you did to get the sources (if they could), and then you could supply “diffs” to that source code. Finally, there was no guarantee that the research you had done would actually make it into the proprietary system; that would be up to the vendor of the operating system. Your research could be for nothing.

It was many years after Windows NT ran as a 32-bit operating system on the Alpha that Microsoft shipped a 64-bit address space in any of their operating systems. Unfortunately this was too late for Digital, a strong partner of Microsoft, to take advantage of the 64-bit address space that the Alpha facilitated.

We are entering an interesting point in computer science. Many of the “bottlenecks” of computing power are, for the most part, overcome. No longer do we struggle with single-core, 16-bit, monolithic, sub-megabyte-memory hardware running at sub-1 MHz clock speeds and supporting only 90 KB floppy disks. Today’s 64-bit, multi-core, multi-processor systems, with multiple gigabytes of memory, solid-state storage, and multi-Gbit/second LAN networking, fit into laptops, much less servers, and give a much more stable basic programming platform.

Personally I waited for a laptop that would support USB 40 Gbit per second and things like WiFi 7 before I purchased what might be the last laptop that I purchase in my lifetime.

At the same time, we are moving from an era when SIMD meant little more than GPUs that could paint screens very fast into MIMD programming hardware, with AI and quantum computing pushing the challenges of programming even further. All of these will take additional research into how to integrate them into everyday programming. My opinion is that any collaborative research, to facilitate a greater chance of follow-on collaborative advanced development and implementation, must be done with Free and Open Source Software.

Source: lpi.org

Saturday, 3 August 2024

Legal Linux: A Lawyer in Open Source


In the ever-evolving landscape of technology, the boundaries between disciplines are becoming increasingly blurred. One individual who exemplifies this intersection of diverse fields is Andrea Palumbo, a lawyer who has made his mark providing legal support to IT and open source technology.

As a Solution Provider Partner with the Linux Professional Institute (LPI), Andrea’s journey challenges conventional notions of what it means to be an IT professional.

His unique perspective sheds light on the expanding role of legal expertise in shaping the future of the IT industry, particularly within the open source community.

In this exclusive interview, we delve into Andrea’s motivations, experiences, and insights as a Solution Provider LPI partner. From his initial inspiration to integrate legal knowledge with open source technologies to his contributions in advocating for the new Open Source Essentials exam and certificate, Andrea’s story is one of innovation and collaboration.

Andrea, as a lawyer, what inspired you to become a partner with the Linux Professional Institute (LPI)?


The driving force behind everything was undoubtedly my passion for technology and the FOSS philosophy. Consequently, I consider it essential to become part of a community that shares my principles and to improve my skills in the open-source domain.

How do you see the intersection between law and open source technology shaping the future of the IT industry?


I’ve always regarded open source as a delightful anomaly in the IT landscape—a place where seemingly incompatible elements like business, innovation, and knowledge sharing can harmoniously coexist. In this reality, made possible by FOSS technologies, I firmly believe that law, when studied and applied correctly, can facilitate the widespread adoption and understanding of this approach to new technologies.

What motivated you to write an article about LPI’s new Open Source Essentials Exam and Certificate?


As soon as I learned about LPI’s new Open Source Essentials Exam, I recognized its significance. It represents an essential step for anyone seeking to enhance their preparation in FOSS technologies.

In your opinion, what makes the Open Source Essentials Exam and Certificate valuable for professionals outside the traditional IT realm?


Obviously, this certification is not for everyone, but those who work in the legal field and provide advice or assistance related to digital law cannot afford to be unaware of the fundamental aspects of Open Source. The certificate, in addition to specific skills, demonstrates a professional’s ability to delve into certain areas, even highly complex ones, and to stay constantly updated—an approach that our partners notice and appreciate.

How do you believe the Open Source Essentials Certification can benefit professionals in legal fields or other non-technical sectors?


Certainly, the certificate assures clients and partners that the consultant they rely on possesses specific expertise in a very particular domain. On the other hand, as I mentioned earlier, I believe that every legal professional dealing with digital law should be familiar with the legal foundations of Open Source.

How do you stay updated with the latest developments in open source technology, considering your legal background?


I’m an avid reader of online magazines that focus on IT, and specialized websites.

What challenges have you faced as a non-technical professional in the IT industry, and how have you overcome them?


Many times, there are comprehension issues between the digital and legal worlds because both use technical language that is not understandable to the other party. In my experience, when unnecessary formalities have been abandoned between these two worlds, all problems have always been overcome.

And, finally, what message would you like to convey to professionals from diverse backgrounds who may be interested in partnering with LPI and exploring opportunities in the open source community?


In my opinion, the Open Source world, based on the idea of sharing, finds its greatest expression in FOSS communities. It is in them that you can experience the true value of this philosophy and derive significant benefits, both in terms of knowledge and, why not, business.

Source: lpi.org