
Saturday, 24 August 2024

Morrolinux: Linux on Apple Silicon – Surpassing Expectations

Linux’s adaptability is well known, with its ability to run on a myriad of architectures forming a testament to its flexibility. The journey of porting Linux to Apple Silicon M1 highlights this adaptability. Initial reactions were mixed, with some questioning the logic behind installing Linux on a Mac. However, the combination of Apple Silicon M1’s hardware efficiency and relative affordability presented a compelling case for Linux enthusiasts.

The Beginnings: Growing Pains and The Role of Community


Initially, the compatibility of Linux with Apple Silicon was a work in progress. Key components such as Bluetooth, speakers, and GPU acceleration were missing, limiting the usability of Asahi Linux in everyday scenarios. Despite these challenges, the project, led by Hector Martin (AKA marcan), made significant progress, largely due to community support on platforms such as Patreon.

The community indeed played a crucial role in the project’s development. Notable contributors such as YouTuber Asahi Lina engaged in reverse engineering the GPU, sharing progress through live streams. This collaborative and open-source approach was pivotal in uncovering crucial details of the hardware in the absence of official documentation from Apple.

Asahi Lina running the first basic GPU-accelerated demo

Major Milestones: From GPU Acceleration to Enhanced Audio Quality


One of the project’s significant achievements was the implementation of GPU drivers, supporting OpenGL 2.1 and OpenGL ES 2.0, along with OpenGL 3 and (a work in progress) Vulkan. This development enabled smoother operation of desktop environments and web browsers.

Portal (OpenGL 3) running on Steam under x86 emulation on an M1 machine

The collaboration between the Asahi Linux team and the PipeWire and WirePlumber projects not only achieved unparalleled audio quality through speaker calibration on Linux laptops but also made broader contributions to the Linux audio ecosystem. By adhering to an “upstream first” policy, these improvements offer benefits beyond the Linux on Apple Silicon project, enhancing audio experiences across various platforms. Notably, this partnership introduced automatic loading of audio DSP filters for different hardware models, addressing a gap in the Linux audio stack for improved sound quality across devices.

The Rise of Remix and Full-Scale Support


The release of Fedora Asahi Remix marked a milestone in offering a stable version of Linux for Apple Silicon. This version streamlined the installation process, facilitating a dual-boot setup with macOS. The release also boasted extensive hardware support, including novel features like the (still a work in progress) Apple Neural Engine on M1 and M2 processors.

KDE About page on an M1 machine running Fedora

The Linuxified Apple Silicon: Progress and Prospects


Linux on Apple Silicon has shown remarkable progress, offering a user experience that rivals and, in some aspects, outshines macOS. Most functionalities, including the keyboard backlight and webcam, operate smoothly.

Although further development is needed for complete microphone support and external display compatibility via USB-C and Thunderbolt, the overall performance is commendable. This rapid evolution highlights the strength of community-driven, open-source collaboration. Just two years since its inception, the project underscores the cooperative spirit of the Linux community. Looking ahead, further improvements and wider adoption of Linux on Apple devices are expected, supported by continued development and an active community. And if you are wondering whether Linux on Apple Silicon is going to perform better than it does on x86, the answer is probably going to be yes, and soon.

Source: lpi.org

Saturday, 10 August 2024

The Evolution of Research in Computer Science

In developing products, the normal stages are typically research, advanced development, product development, manufacturing engineering, customer serviceability engineering, product release, product sustainability, and product retirement. In the modern era of agile programming and “DevOps”, some or all of these steps are often blurred, but the usefulness of all of them is still recognized, or should be.

Today we will concentrate on research, which is the main reason why I gave my support to the Linux/DEC Alpha port in the years 1994 to 1996, even though my paid job was to support and promote the 64-bit DEC Unix system and (to a lesser extent) the 64-bit VMS system. It is also why I have continued to support Free and Open Source Software, and especially Linux, ever since.

In early 1994 there were few opportunities for a truly “Open” operating system. Yes, research universities were able to do research because of the quite liberal university source code licensing of Unix systems, as were governmental and industrial research labs. However, the implementation of that research was still under the control of commercial interests in computer science, and the speed of taking research to development to distribution was relatively slow. BSD-lite was still not on the horizon, as the USL/BSDI lawsuit was still going on. MINIX was still hampered by its restriction to educational and research uses (a restriction not lifted until the year 2000). When you took it all into consideration, the Linux kernel project was the only show in town, especially when you took into account that all of its libraries, utilities, and compilers were already 64-bit in order to run on Digital Unix.

Following close on the original release of GNU/Linux V1.0 (starting in late 1993 with distributions such as Soft Landing Systems, Yggdrasil, Debian, Red Hat, Slackware, and others) was the need for low-cost, flexible supercomputers, initially called Beowulf systems. Donald Becker and Dr. Thomas Sterling codified and publicized the use of commodity hardware (PCs) and Free Software (Linux) to replace uniquely designed and manufactured supercomputers, producing systems that could deliver the power of a supercomputer for approximately 1/40th of the price. In addition, when the initially funded job of these computers was finished, they could be re-deployed to other projects, either in whole or by breaking them apart into smaller clusters. This model eventually became known as “High Performance Computing” (HPC), and the world’s 500 fastest computers use this technology today.

Before we get started on the “why” of computer research and FOSS, we should take a look at how “research” originated in computer science. In computer science, research was originally done only by entities that could afford impossibly expensive equipment or could design and produce their own hardware. These were originally research universities, governments, and very large electronics companies. Later on, smaller companies sprang up that also did research. Many times this research generated patents, which helped to fuel the further development of research.

Eventually the area of software extended to entities that did not have the resources to purchase their own computers. Microsoft wrote some of their first software on machines owned by MIT. The GNU tools were often developed on computers that were not owned by the Free Software Foundation. Software did not necessarily require ownership of the very expensive hardware needed in the early days of computers. Today you could do many forms of computer science research on an 80 USD (or cheaper) Raspberry Pi.

Unfortunately today many companies have retired or greatly reduced their Research groups. Only a very few of them do “pure research” and even fewer license their research out to other companies on an equitable basis.

If you use patents as a measure of research today, more than 75% of patents are awarded to small and medium-sized companies, and the number of patents awarded per employee is astonishing when you look at companies that have 1-9 employees. While it is true that large companies like Google and Apple apply for and receive a lot of patents overall, small to medium companies win the patents-per-employee count hands down. Of course, many readers do not like patents, and particularly patents on software, but they are one way of measuring research, and they show that a lot of research is currently done by small companies and even “lone wolves”.

By 1994 I had lived through all of the major upgrades to “address space” in the computer world. I went from a twelve-bit address space (4,096 twelve-bit words in a DEC PDP-8) to a 24-bit address space (16,777,216 bytes) in an IBM mainframe, to 16 bits (65,536 bytes) in the DEC PDP-11, to 32 bits (4,294,967,296 bytes) in the DEC VAX architecture. While many people never felt really constrained by the 32-bit architecture, I knew of many programmers and problems that were.
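
Those address-space sizes are simply powers of two; a quick, purely illustrative Python check reproduces each of the numbers quoted above:

    # Reproduce the address-space sizes mentioned above.
    # The PDP-8 addressed 4,096 twelve-bit words; the others are byte-addressed.
    for bits in (12, 16, 24, 32, 64):
        print(f"{bits}-bit address space: {2**bits:,} addressable units")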

The problem was with what we call “edge programming”, where the dataset that you are working with is so big that you cannot have it all in the same memory space. When this happens you start to “organize” or “break down” the data, then write programs to transfer results from address space to address space. Often this means you have to save the metadata (or partial results) from one address space and then apply it to the next address space, which frequently causes problems in getting the program correct.
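
To make that chunk-and-carry pattern concrete, here is a minimal Python sketch; the chunk size, the file argument, and the running total standing in for the carried “metadata” are all hypothetical:

    # Sketch of "edge programming": the dataset is too large to hold in memory
    # at once, so it is processed chunk by chunk while a partial result (the
    # carried "metadata") is passed from one chunk to the next.
    CHUNK_BYTES = 64 * 1024 * 1024  # hypothetical 64 MB working window

    def running_sum(path: str) -> int:
        total = 0                    # partial result carried across chunks
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_BYTES):
                total += sum(chunk)  # per-chunk work; real programs do far more
        return total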

What types of programs are these? Weather forecasting, climate study, genome research, digital movie production, emulating a wind tunnel, modeling an atomic explosion.

Of course all of these are application level programs, and any implementation of a 64-bit operating system would probably serve the purpose of writing that application.

However, many of these problems are at the research level, and whether or not the finished application was FOSS, the tools used could make a difference.

One major researcher in genome studies was using the proprietary database of a well-known database vendor. That vendor’s licensing made it impossible for the researcher to simply image a disk with the database on it and send the backup to another researcher who had the same proprietary database with the same license. Instead, the first researcher had to unload their data, send the tapes to the second researcher, and have the second researcher load the tapes into their own database system.

This might have been acceptable for a gigabyte or two of data, but was brutal for the petabytes (one million gigabytes) of data that was used to do the research.

This issue was solved by using an open database like MySQL. The researchers could just image the disks and send the images.

While I was interested in 64-bit applications and what they could do for humanity, I was much more interested in 64-bit libraries, system calls, and the efficient implementation of both, which would allow application vendors to use data sizes almost without bound in applications.

Another example is the rendering of digital movies. With analog film you have historically had 8mm, 16mm, 35mm, and (most recently) 70mm film, and (of course) in color each “pixel” has, in effect, infinite color depth due to the analog qualities of film. With analog film there is also no concept of “compression” from frame to frame. Each frame is a separate “still image”, and our eye supplies the illusion of movement.

With digital movies there are so many considerations that it is difficult to say what the “average” size of a movie, or even of one frame, is. Is the movie widescreen? 3D? IMAX? Standard or high definition? What is the frame rate and the length of the video? What is the resolution of each frame?

We can get an idea of how big these video files can be for a one-hour digital movie: roughly 3 GB at 2K, 22 GB at 4K, and 40 GB at 8K. Since a 32-bit address space allows at most either 2 GB or 4 GB of address space (depending on the implementation), you can see the problem of fitting even a relatively short, “low technology” film into memory at one time.
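
A back-of-the-envelope comparison, using the approximate per-hour sizes quoted above, makes the point (a minimal Python sketch):

    # Compare the quoted one-hour movie sizes with 32-bit address-space limits.
    sizes_gb = {"2K": 3, "4K": 22, "8K": 40}  # approximate figures from the text
    for limit_gb in (2, 4):                   # 32-bit limit, depending on the OS
        for res, gb in sizes_gb.items():
            verdict = "fits" if gb <= limit_gb else "does not fit"
            print(f"{res} ({gb} GB) {verdict} in a {limit_gb} GB address space")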

Why do you need the whole film? Why not just one frame at a time?

It has to do with compression. Films are not sent to movie theaters or put onto a physical medium like Blu-ray in a “raw” form. They are compressed with various compression techniques through the use of a “codec”, which uses a mathematical technique to compress, then later decompress, the images.

Many of these compression techniques store a particular frame as a base and then apply only the differences over the next several (much smaller) frames. If this were continued over the course of the entire movie, the problem would come when there is some glitch in the process: how far back in the file do you have to go in order to fix the “glitch”? The answer is to store another complete frame every so often to “reset” the process and start the “diffs” all over again. There might be some small “glitch” in the viewing, but it is typically so small that no one would notice it.
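
A toy Python sketch of this keyframe-plus-differences idea follows; frames here are just lists of integers and the keyframe interval is an arbitrary choice, whereas real codecs work on 2D images and compress the deltas much further:

    KEYFRAME_INTERVAL = 5  # hypothetical; real codecs choose this adaptively

    def encode(frames):
        encoded = []
        for i, frame in enumerate(frames):
            if i % KEYFRAME_INTERVAL == 0:
                encoded.append(("key", list(frame)))   # periodic complete frame
            else:
                delta = [a - b for a, b in zip(frame, frames[i - 1])]
                encoded.append(("delta", delta))       # store only the difference
        return encoded

    def decode(encoded):
        frames = []
        for kind, data in encoded:
            if kind == "key":
                frames.append(list(data))
            else:
                frames.append([p + d for p, d in zip(frames[-1], data)])
        return frames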

Throw in the coordination needed by something like 3D or IMAX, and you can see the huge size of a cinematic event today.

When investigating climate change, it is nice to be able to address, in 64-bit virtual memory, over 32,000 bytes for every square meter of the surface of the Earth, including all the oceans.
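
A rough check of that figure, taking the Earth’s surface area as approximately 510 million square kilometers (a minimal Python sketch):

    # How many bytes per square meter of Earth's surface fit in 64 bits?
    EARTH_SURFACE_M2 = 510e6 * 1e6   # ~5.1e14 square meters (approximate)
    ADDRESS_SPACE = 2**64            # bytes addressable with 64 bits
    print(f"{ADDRESS_SPACE / EARTH_SURFACE_M2:,.0f} bytes per square meter")
    # Prints roughly 36,000, comfortably above the 32,000 bytes mentioned above.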

When choosing an operating system for doing research there were several options.

You could use a closed-source operating system. You *might* be able to get a source code license, sign a non-disclosure agreement (NDA), do your research, and publish the results. The results would be some type of white paper delivered at a conference (I have seen many of these white papers), but there would be no source code published, because that was proprietary. A collaborator would have to go through the same steps you did to get the sources (if they could), and then you could supply “diffs” to that source code. Finally, there was no guarantee that the research you had done would actually make it into the proprietary system; that would be up to the vendor of the operating system. Your research could be for nothing.

It was many years after Windows NT ran as a 32-bit operating system on the Alpha that Microsoft released a 64-bit address space in any of their operating systems. Unfortunately this was too late for Digital, a strong partner of Microsoft, to take advantage of the 64-bit address space that the Alpha facilitated.

We are entering an interesting point in computer science. Many of the “bottlenecks” of computing power have, for the most part, been overcome. No longer do we struggle with single-core, 16-bit, monolithic, sub-megabyte-memory hardware running at sub-1 MHz clock speeds and supporting only 90 KB floppy disks. Today’s 64-bit, multi-core, multi-processor machines, with multiple gigabytes of memory, solid-state storage, and multi-Gbit/second LAN networking, fit into laptops, never mind server systems, and give us a much more stable basic programming platform.

Personally, I waited for a laptop that would support 40 Gbit/second USB and things like Wi-Fi 7 before I purchased what might be the last laptop I buy in my lifetime.

At the same time, we are moving beyond an era when SIMD meant little more than GPUs that could paint screens very fast, and into MIMD programming hardware, with AI and quantum computing pushing the challenges of programming even further. All of these will take additional research into how to integrate them into everyday programming. My opinion is that any collaborative research, to facilitate a greater chance of follow-on collaborative advanced development and implementation, must be done with Free and Open Source Software.

Source: lpi.org

Saturday, 3 August 2024

Legal Linux: A Lawyer in Open Source

In the ever-evolving landscape of technology, the boundaries between disciplines are becoming increasingly blurred. One individual who exemplifies this intersection of diverse fields is Andrea Palumbo, a lawyer who has made his mark providing legal support for IT and open source technology.

As a Solution Provider Partner with the Linux Professional Institute (LPI), Andrea’s journey challenges conventional notions of what it means to be an IT professional.

His unique perspective sheds light on the expanding role of legal expertise in shaping the future of the IT industry, particularly within the open source community.

In this exclusive interview, we delve into Andrea’s motivations, experiences, and insights as a Solution Provider LPI partner. From his initial inspiration to integrate legal knowledge with open source technologies to his contributions in advocating for the new Open Source Essentials exam and certificate, Andrea’s story is one of innovation and collaboration.

Andrea, as a lawyer, what inspired you to become a partner with the Linux Professional Institute (LPI)?


The driving force behind everything was undoubtedly my passion for technology and the FOSS philosophy. Consequently, I consider it essential to become part of a community that shares my principles and to improve my skills in the open-source domain.

How do you see the intersection between law and open source technology shaping the future of the IT industry?


I’ve always regarded open source as a delightful anomaly in the IT landscape—a place where seemingly incompatible elements like business, innovation, and knowledge sharing can harmoniously coexist. In this reality, made possible by FOSS technologies, I firmly believe that law, when studied and applied correctly, can facilitate the widespread adoption and understanding of this approach to new technologies.

What motivated you to write an article about LPI’s new Open Source Essentials Exam and Certificate?


As soon as I learned about LPI’s new Open Source Essentials Exam, I recognized its significance. It represents an essential step for anyone seeking to enhance their preparation in FOSS technologies.

In your opinion, what makes the Open Source Essentials Exam and Certificate valuable for professionals outside the traditional IT realm?


Obviously, this certification is not for everyone, but those who work in the legal field and provide advice or assistance related to digital law cannot afford to be unaware of the fundamental aspects of Open Source. The certificate, in addition to specific skills, demonstrates a professional’s ability to delve into certain areas, even highly complex ones, and to stay constantly updated—an approach that our partners notice and appreciate.

How do you believe the Open Source Essentials Certification can benefit professionals in legal fields or other non-technical sectors?


Certainly, the certificate assures clients and partners that the consultant they rely on possesses specific expertise in a very particular domain. On the other hand, as I mentioned earlier, I believe that every legal professional dealing with digital law should be familiar with the legal foundations of Open Source.

How do you stay updated with the latest developments in open source technology, considering your legal background?


I’m an avid reader of online magazines that focus on IT, and specialized websites.

What challenges have you faced as a non-technical professional in the IT industry, and how have you overcome them?


Many times, there are comprehension issues between the digital and legal worlds because both use technical language that is not understandable to the other party. In my experience, when unnecessary formalities have been abandoned between these two worlds, all problems have always been overcome.

And, finally, what message would you like to convey to professionals from diverse backgrounds who may be interested in partnering with LPI and exploring opportunities in the open source community?


The Open Source world, which in my opinion is based on the idea of sharing, finds its greatest expression in FOSS communities. It is in them that you can experience the true value of this philosophy and derive significant benefits, both in terms of knowledge and, why not, business.

Source: lpi.org

Saturday, 20 July 2024

Top 5 Reasons to Enroll in Linux Professional Institute’s Open Source Essentials Today

In the rapidly evolving landscape of technology, mastering open-source systems has become indispensable for IT professionals and enthusiasts alike. The Linux Professional Institute’s (LPI) Open Source Essentials course offers an unparalleled opportunity to gain foundational knowledge and hands-on skills in this domain. Here, we delve into the top five compelling reasons why enrolling in the LPI’s Open Source Essentials course should be your next career move.

1. Comprehensive Introduction to Open Source Technologies

Open source software is not just a trend but a fundamental aspect of modern technology. The LPI’s Open Source Essentials course provides a thorough introduction to key open-source technologies, including Linux, and various tools and applications essential for a career in IT. By understanding the principles behind open-source software, you gain insight into its development, deployment, and management.

The course covers:

◉ Core Linux concepts: Learn about Linux distributions, file systems, and the Linux command line.

◉ Open source software fundamentals: Understand the philosophy behind open-source and its advantages over proprietary software.

◉ Practical applications: Gain hands-on experience with essential open-source tools used in various IT roles.

2. Industry-Recognized Certification

Earning a certification from a globally recognized institution such as the Linux Professional Institute adds significant value to your professional profile. The Open Source Essentials course is designed to prepare you for certification that is respected and valued across the IT industry.

Certification benefits include:

◉ Enhanced credibility: Stand out in a competitive job market with a certification that demonstrates your commitment to mastering open-source technologies.

◉ Career advancement: Many organizations prefer or require certification for IT roles, making it easier to advance in your current job or find new opportunities.

◉ Global recognition: LPI’s certification is acknowledged worldwide, providing a gateway to international career prospects.

3. Hands-On Experience with Real-World Scenarios

The LPI’s Open Source Essentials course is not just about theory; it emphasizes practical experience with real-world scenarios. This hands-on approach ensures that you are not only familiar with the concepts but also capable of applying them effectively in professional settings.

The course includes:

◉ Lab exercises: Engage in practical labs that simulate real-world tasks and problem-solving scenarios.

◉ Case studies: Analyze and work through case studies that illustrate common challenges and solutions in open-source environments.

◉ Project work: Complete projects that require you to utilize the skills learned throughout the course, demonstrating your ability to manage and implement open-source technologies.

4. Access to Expert Instructors and Resources

Enrolling in the Open Source Essentials course provides access to a network of experienced instructors and valuable resources. The instructors are seasoned professionals who bring a wealth of knowledge and real-world experience to the course.

Resources include:

◉ Expert guidance: Benefit from the insights and tips provided by instructors who have extensive experience in open-source technologies.

◉ Learning materials: Access comprehensive learning materials, including textbooks, online resources, and interactive content that reinforce your understanding of the subject matter.

◉ Community support: Join a community of learners and professionals where you can exchange ideas, seek advice, and collaborate on projects.

5. Future-Proof Your Career

As technology continues to advance, open-source software is becoming increasingly integral to the IT landscape. Enrolling in the LPI’s Open Source Essentials course helps you future-proof your career by equipping you with skills that are relevant and in demand.

Long-term career benefits include:

◉ Adaptability: Gain skills that are transferable across various IT roles and industries, making you adaptable to changes in technology.

◉ Increased employability: Open-source skills are highly sought after, improving your chances of securing a desirable position.

◉ Continued growth: Stay updated with the latest developments in open-source technologies and trends, ensuring that your skills remain relevant in the evolving job market.

Conclusion

The Linux Professional Institute’s Open Source Essentials course offers a wealth of benefits that make it a valuable investment for anyone looking to advance their career in IT. With its comprehensive curriculum, industry-recognized certification, practical experience, expert instruction, and career longevity, the course provides everything you need to excel in the world of open-source technologies. Enroll today to unlock the full potential of your IT career and gain a competitive edge in the job market.

Saturday, 22 June 2024

University Academy 92 Partners with LPI for Disruptive Learning

“It’s my role to disrupt the status quo in education,” says Aaron Saxton. In fact, his job title at University Academy 92 in Manchester, England, is “Director of Disruptive Learning.” UA92 has been partnering with the Linux Professional Institute for many years, but most recently they have bumped their partnership up to the Gold Academic Partner level.

In contrast to the lecture and exam format of traditional education, UA92 mixes class time with a great deal of hands-on practice to “make learning more interactive, engaging, and adaptive.” A little over a year ago, they started up an apprenticeship programme with local employers. The focus is on DevOps, along with CyberOps and other similar disciplines. This apprenticeship is defined as DevOps Engineer Level 4.

Because LPI certifications test real-world knowledge and expect exam-takers to have practiced their skills in the field, the LPI philosophy melds well with UA92’s teaching practices.

UA92 is a “deliberately different” higher education institution that is enabling education for all, enrolling more than 20% of its students from the most disadvantaged backgrounds. All UA92 students take part in the “92 Programme” that teaches them life skills: how to excel in the workplace, manage projects, be resilient, and be a great team member. Furthermore, UA92 provides employers with an enriched curriculum that prepares students for the world of work.

Future LPI certifications for which UA92 is planning to train include the LPIC Linux series, DevOps, and Web Essentials.

Some 18 months ago, UA92 launched digital bootcamps that last between 10 to 16 weeks and teach cloud, web dev, cyber, and data analytics skills. UA92’s computer science and apprenticeship programmes also let students run computer labs on a variety of GNU/Linux distributions, launched through the AWS Academy. Skills include bash scripting, PHP, Python, security, networking, databases, containers, web technologies, CI/CD, and Infrastructure as Code (IaC).

“Linux drives life,” Saxton expounds enthusiastically. “In the real world, 96.3 percent of the top 1 million web servers are running Linux, and companies need Linux Engineers.” UA92 is focusing on LPI’s Linux Essentials certification. Twenty-one apprentices took the LPI Linux Essentials exam this Spring, and 95% passed.

UA92 is cofounded by Lancaster University and the “Class of 92.”

“We really want to celebrate our relationship and partnership with the Linux Professional Institute. We see LPI as a vehicle for innovation and change. It’s about enabling Education for All: through great partnerships like this, enabling everyone to excel in education regardless of their background.” – Aaron Saxton, Director of Disruptive Learning, University Academy 92

“The partnership with University Academy 92 is a splendid opportunity to accomplish LPI’s mission. By educating and training the less privileged young people, we can actively support them in building a future-oriented career in Linux and open source.” – Aida Rosenthal, Regional Customer Success Manager, Linux Professional Institute

Source: lpi.org

Thursday, 20 June 2024

Why I Joined the LPI Board of Directors — Uirá Ribeiro

As a technologist, author, and trainer, I have devoted my career to bringing technology to people who lack access. I’ll explain in this article how joining the LPI Board of Directors contributes to my goals.

Free and open source software has always been central to my work. In 1998, I established an Internet service provider in my town based on Linux. I have written 11 books covering open source software, published in English, Portuguese, Spanish, and Italian. One of these groundbreaking books, Linux Certification, published in 2005, initiated discussions about Linux certifications in Brazil.


I firmly believe that IT certifications can significantly improve people’s quality of life, and there is no greater joy for a teacher than to genuinely help others. I estimate that my training and books have empowered more than 14,000 people in Brazil to learn Linux, and helped thousands to achieve LPI certification.

I’ve also kept up with current computing trends during my 27 years of experience in Internet servers, software development, information security, cloud computing architecture, and IT service management. In addition to achieving all available Linux certifications, I am an AWS Certified Cloud Practitioner, an AWS Certified Architect, and a Certified Kubernetes Administrator (CKA).

Given LPI’s exceptional role in open source certification over the past 20 years, joining the LPI board of directors is a natural and exciting step for me. My extensive background in IT and education equips me to contribute significantly to LPI’s mission. I aim to expand LPI’s global reach and impact, particularly in emerging regions where digital inclusion can transform lives and communities.

Mark your calendars for the Linux Professional Institute Annual General Meeting, which will take place on Saturday, June 22, 2024, using an online platform. Members can vote at the meeting to choose the LPI Board of Directors.

Source: lpi.org

Tuesday, 18 June 2024

mundialis: Shaping OS Remote Sensing

Introducing mundialis, a company at the forefront of FOSS businesses, blending free geodata and open-source software in remote sensing. mundialis is known for its commitment to generating spatial information and developing tailored FOSS solutions. We spoke with Markus Neteler, Co-Founder & Senior Consultant, about mundialis’ open source journey and community involvement.

Can you share the genesis of mundialis and what initially drew you into the geospatial realm? What was the first geospatial project that set the path for your future endeavors?

My interest in geospatial analysis began during my geography studies, where I discovered the power of GIS in addressing environmental issues. Engaging with the open-source community further fueled my passion and introduced me to innovative tools and collaboration opportunities.

mundialis, founded in Bonn, Germany in 2015, stemmed from a desire to leverage open-source software and scientific methods for broader geospatial projects. We utilize Earth observation data to develop cutting-edge GIS and EO analysis methods, incorporating AI and ML. My early involvement with the GRASS GIS project during university laid the foundation for mundialis, driven by the community’s importance and the potential of open-source software.

What is mundialis’s core offering and how does it differentiate itself?

Our primary focus is on custom geospatial solutions tailored to diverse client needs, ranging from remote sensing data processing to specialized land use mapping. What sets us apart is our strong commitment to open-source principles and sustainability. We leverage robust open-source software like GRASS GIS, integrated with cutting-edge techniques in remote sensing and geoinformatics. Our cloud-based processing platform, actinia, enables scalable and distributed geographic data processing. Our expertise in analyzing and processing big data from satellite imagery, powered by machine learning and artificial intelligence, allows us to address complex challenges in environmental monitoring, agriculture, and urban planning.

How does mundialis’s business model support open source and the geospatial community?

mundialis combines geospatial analysis with a commitment to open source and community engagement, fostering innovation and sustainable tech development. We offer customized open source solutions across sectors such as environmental monitoring and urban planning, including:

  • Delivering bespoke geospatial solutions of high quality.
  • Contributing to open source geospatial software development and collaboration.
  • Sharing knowledge through workshops and conferences.

What developments can we expect from mundialis?

mundialis is advancing cloud-based geospatial analytics with AI and machine learning for precise environmental insights, vital for monitoring, sustainable agriculture, and urban planning. Our use of Convolutional Neural Networks (CNNs) enables tasks like detecting sealed surfaces and individual trees in urban areas. We’re also exploring neural networks and transformer models for time series analysis, such as predicting El Niño events from sea surface temperatures. Our future products will be open-source, fostering industry collaboration and adoption of our advancements.

Over the next five years, what are mundialis’s ambitions for growth, and how do you plan to achieve them?

Over the next five years, mundialis aims to:

  • Support the EU Green Deal by contributing to climate, environment, and sustainable finance sectors, including projects like ecosystem restoration in sub-Saharan Africa’s Great Green Wall.
  • Lead technology innovation through research and development in AI, machine learning, and big data analytics for geospatial data, ensuring we offer cutting-edge solutions.
  • Deepen engagement with the open-source community, enhancing our market position and fostering a culture of innovation, collaboration, and sustainability within our company and the wider community.

What changes do you foresee in the competitive landscape of your key markets in the next half-decade?

In the upcoming five years, our key markets are poised for significant transformation fueled by technological innovation, growing environmental awareness, and the momentum of the open-source movement.

Technological Innovation: Rapid advancements in artificial intelligence (AI), machine learning (ML), and cloud computing will fundamentally reshape geospatial data analysis, creating a demand for specialized solutions capable of enabling complex data analysis.

Growing Environmental Awareness: Increasing concerns regarding climate change and biodiversity loss will elevate the importance of sustainable solutions. This shift will lead to a surge in demand for geospatial solutions tailored to environmental monitoring, sustainable urban planning, and efficient resource management.

The Dynamics of the Open Source Movement: The continued expansion of the open-source movement will drive the development of innovative geospatial solutions. As open-source software gains wider acceptance in the business world, it will accelerate innovation and lower barriers to entry for new players, fostering a highly competitive market where adaptability and the rapid integration of new technologies will be essential for success.

The leadership team at mundialis brings diverse expertise and experiences that shape the company’s direction.

Markus Eichhorn, CEO, possesses a strong background in business development and geomarketing. With a Geoinformatics degree from the University of Muenster, Markus has led numerous large projects and provides consultancy on geodata utilization.

Elisabeth Panzenboeck (PhD), Lead Project Manager at mundialis, contributes extensive experience in satellite mission operations and space weather from her tenure at the German Aerospace Centre.

Hinrich Paulsen, co-founder and Senior Consultant, focuses on business development and Earth Sustainability Monitoring projects.

As co-founder and Senior Consultant, I bring over 25 years of experience in open-source GIS software development, specializing in GIS, remote sensing, and disease mapping. My contributions to OSGeo and FOSS4G underscore my commitment to advancing geospatial technology.

Can you discuss mundialis’s involvement with the GRASS GIS/OSGeo community and the benefits of such engagement?

At mundialis, our commitment to open source is ingrained in our core identity, demonstrated through our active involvement in the GRASS GIS/OSGeo community. We contribute to the development of GRASS GIS and actinia, continually enhancing features and ensuring their relevance. Our engagement extends to participation in events, contributions to mailing lists, and software reviews, strengthening our presence in the geospatial realm and fostering valuable connections and collaborations within the industry.

What social benefits have stemmed from mundialis’s use of FOSS/Linux?

Our embrace of FOSS and Linux propels social advancements alongside technological progress. Leveraging these open-source solutions democratizes technology, making advanced geospatial tools accessible and modifiable for all, especially benefiting NGOs, educational entities, and startups in developing regions. This practice aligns with our sustainability goals, optimizing resource use and minimizing e-waste.

How does the adoption of FOSS/Linux contribute to mundialis’s environmental sustainability efforts?

Linux and FOSS enhance energy efficiency, reducing CO2 emissions. Opting for Linux extends hardware lifespan and minimizes environmental impact. Our cloud-based geodata analysis allocates computing resources efficiently, reducing waste. FOSS’s open standards promote system interoperability, minimizing environmental footprint without specialized solutions.

Looking back, what are some of the pivotal projects that mundialis has undertaken, and what impact have they had?

mundialis has contributed to significant geospatial analysis and satellite data projects:

◉ GreenUr: Developed a QGIS plugin for the WHO to analyze urban green spaces’ impact on human health.
◉ Fiber Optic Cable Planning: Utilized AI for spatial analysis to plan fiber optic infrastructure in urban areas.
◉ INCORA Project: Used Copernicus satellite data for land cover classifications, aiding decision-making on climate change and biodiversity.
◉ HERMOSA Project: Established a platform for ecosystem restoration and biodiversity conservation monitoring and reporting.

Source: lpi.org

Thursday, 13 June 2024

Linux Professional Secrets: How to Excel in a Competitive Job Market

The job market for Linux professionals is increasingly competitive, driven by the widespread adoption of Linux in various industries. To excel as a Linux professional, one must not only master the technical skills but also understand the broader aspects of the role. In this comprehensive guide, we will uncover the secrets to standing out and thriving in this competitive field.

Understanding the Linux Ecosystem


Linux, an open-source operating system, has become a cornerstone of modern IT infrastructure. Its robustness, flexibility, and security make it the preferred choice for servers, cloud computing, and even desktop environments. Understanding the Linux ecosystem is crucial for anyone looking to excel as a Linux professional.

Core Components of Linux

Linux consists of several key components:

  1. Kernel: The core of the operating system, managing hardware resources and system calls.
  2. Distributions (Distros): Variants of Linux tailored for different use cases, such as Ubuntu, CentOS, and Red Hat Enterprise Linux (RHEL).
  3. Shell: The command-line interface for interacting with the system, with popular options including Bash and Zsh.
  4. Package Management: Tools for installing, updating, and managing software, such as APT for Debian-based systems and YUM/DNF for Red Hat-based systems.

Linux in Different Environments

Linux's versatility allows it to be used in various environments:

  • Servers: Linux dominates the server market due to its stability and performance.
  • Cloud Computing: Platforms like AWS, Azure, and Google Cloud extensively use Linux.
  • Embedded Systems: Linux is prevalent in devices ranging from routers to smart appliances.
  • Desktops: While less common, Linux desktops provide robust alternatives to Windows and macOS.

Essential Skills for Linux Professionals


To excel as a Linux professional, mastering the following skills is essential:

Proficiency in Shell Scripting

Shell scripting automates repetitive tasks and enhances productivity. Familiarity with Bash scripting, along with knowledge of other scripting languages like Python, can significantly boost your efficiency.
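
As a small illustration of the kind of repetitive task such scripts take over, here is a hedged Python sketch that reports oversized files under a log directory; the directory path and size threshold are hypothetical and would need adjusting for a real system:

    #!/usr/bin/env python3
    # Report files under LOG_DIR larger than THRESHOLD (illustrative only).
    import os

    LOG_DIR = "/var/log"            # assumed location; adjust for your system
    THRESHOLD = 100 * 1024 * 1024   # 100 MB, an arbitrary threshold

    for root, _dirs, files in os.walk(LOG_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue            # skip files we cannot stat
            if size > THRESHOLD:
                print(f"{size / (1024 * 1024):.1f} MB  {path}")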

System Administration

Understanding system administration is fundamental. Key areas include:

  • User and Group Management: Managing user accounts, permissions, and authentication.
  • File System Management: Knowledge of file systems (ext4, XFS) and tools for managing disk space.
  • Networking: Configuring network interfaces, firewalls (iptables, nftables), and troubleshooting network issues.
  • Service Management: Managing services with systemd, init, or other service managers (see the sketch after this list).
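
As a minimal illustration of the service-management item above, the following Python sketch asks systemd for its failed units; it assumes a systemd-based distribution and simply shells out to systemctl:

    #!/usr/bin/env python3
    # List failed systemd units (assumes a systemd-based distribution).
    import subprocess

    result = subprocess.run(
        ["systemctl", "--failed", "--no-legend", "--plain"],
        capture_output=True, text=True, check=False,
    )
    failed = [line.split()[0] for line in result.stdout.splitlines() if line.strip()]
    print("Failed units:", ", ".join(failed) if failed else "none")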

Security Best Practices

Security is paramount in any IT role. Linux professionals must be adept at:

  • Implementing Firewalls: Configuring and managing firewalls to protect the system.
  • System Hardening: Applying best practices to secure the system against vulnerabilities.
  • Monitoring and Auditing: Using tools like SELinux, AppArmor, and auditd to monitor and secure the system.

Understanding of Virtualization and Containerization

Modern IT environments rely heavily on virtualization and containerization technologies. Key technologies include:

  • Virtualization: Knowledge of hypervisors like KVM, VMware, and VirtualBox.
  • Containerization: Expertise in Docker, Kubernetes, and other container orchestration tools.

Familiarity with Cloud Platforms

As organizations migrate to the cloud, understanding cloud platforms is crucial. Familiarity with services like AWS EC2, S3, Lambda, and Azure Virtual Machines can set you apart.

Configuration Management

Automation and consistency are achieved through configuration management tools. Proficiency in tools like Ansible, Puppet, and Chef is highly desirable.

Certifications and Continuous Learning


Certifications validate your skills and enhance your credibility. Notable certifications include:

  • Red Hat Certified Engineer (RHCE)
  • Linux Professional Institute Certification (LPIC)
  • CompTIA Linux+

Staying Updated

The tech landscape is ever-evolving. Engage in continuous learning through:

  • Online Courses: Platforms like Coursera, Udemy, and edX offer relevant courses.
  • Technical Blogs and Forums: Follow blogs and participate in forums like Stack Overflow and Reddit.
  • Conferences and Meetups: Attend industry conferences and local meetups to network and learn from peers.

Building a Strong Professional Network


Networking is a vital aspect of career growth. Building a strong professional network can lead to job opportunities and professional development.

LinkedIn and Professional Profiles

Maintain an updated LinkedIn profile showcasing your skills, certifications, and projects. Engage with industry professionals and join relevant groups.

Contributing to Open Source Projects

Contributing to open source projects not only hones your skills but also demonstrates your commitment to the community. Platforms like GitHub and GitLab are excellent places to start.

Attending Industry Events

Participate in industry events such as Linux Foundation conferences, tech meetups, and hackathons. These events provide opportunities to learn, share knowledge, and connect with potential employers.

Crafting a Standout Resume and Cover Letter


A well-crafted resume and cover letter are essential for making a strong first impression.

Resume Tips

  • Highlight Relevant Skills: Focus on the skills and experiences most relevant to the job.
  • Certifications and Education: Clearly list your certifications and educational background.
  • Professional Experience: Detail your professional experience with a focus on achievements and responsibilities.

Cover Letter Tips

  • Personalization: Tailor each cover letter to the specific job and company.
  • Key Achievements: Highlight key achievements and how they align with the job requirements.
  • Enthusiasm: Express your enthusiasm for the role and the company.

Acing the Technical Interview


Preparation is key to succeeding in technical interviews. Focus on:

Common Interview Questions

  • System Administration Scenarios: Be prepared to discuss how you would handle various system administration tasks.
  • Problem-Solving Exercises: Practice solving common Linux-related problems and scenarios.
  • Hands-On Tasks: Be ready for hands-on tasks such as writing scripts or configuring services.

Practical Demonstrations

Some interviews may include practical demonstrations. Practice common tasks and be prepared to demonstrate your skills.

Soft Skills

Technical skills are crucial, but soft skills like communication, teamwork, and problem-solving are equally important. Demonstrate your ability to work effectively in a team and communicate complex ideas clearly.

Conclusion

Excelling as a Linux professional requires a combination of technical expertise, continuous learning, and strategic networking. By mastering the essential skills, obtaining relevant certifications, and building a strong professional network, you can stand out in the competitive job market and advance your career.

Thursday, 6 June 2024

Linux Shows What Computing Really Is: Jhenisson’s Journey

In the field of Information Technology, every professional has a unique journey that shapes their expertise and career path. In this blog post, we meet Jhenisson Brito, an LPI-certified professional from Brazil. His story unveils the challenges, motivations, and pivotal moments that steered him towards Linux and other free and open source (FOSS) technologies. From his initial encounters with Linux to his current role as a network analyst for a major Brazilian bank, his narrative offers valuable insights and inspiration for anyone interested in or currently navigating the IT landscape.

How did you first encounter Linux and open source software, and what sparked your interest in them?

My first contact with Linux, if I remember correctly, was around 2016 with Ubuntu, when I entered university. At the time I was 17 years old. My interest was driven by comments from colleagues that, for working with networks, Linux offered more tools and was cleaner than Windows, for example.

What role did computers play in your life when you were growing up, and did you have any exposure to Linux or other advanced tools at the time?

My contact with computers in childhood was almost entirely focused on the gaming world. Due to financial constraints, I felt compelled to learn more about computers to solve any problems my machine had, so I could play more.

What did you think of your formal education in school, and did it help prepare you for what you are doing at work now?

I am fully convinced that my formal education didn’t help much in my career. Brazilian public education teaches in an inadequate way. I have strong criticisms on this topic.

What inspired you to pursue a career in technology, and how did you get started in the field?

I received a 100% scholarship for the Computer Networks course when I was 17 years old, and due to financial difficulties at home, I dedicated myself intensively to IT to get a job in the following months, which happened quickly.

What do you do in your current position, and what are the major tools you work with?

I work as a network and data communication analyst for a traditional Brazilian bank with offices in all states of the country. My main task involves troubleshooting, using the SSH protocol for access to thousands of assets. My distinctive skill is an advanced knowledge of network administration and operating systems, which I employ to guide the development of scripts that can automate the tasks of the entire team and optimize services that I would find tedious.

Why did you decide to pursue an LPI certification, and how has it helped you in your career?

My LPIC certification was a requirement from my current company. Given the choice among CCNA, HCIA, and LPI, I chose LPI because I have a greater affinity with operating systems, and I was right in my choice, as the Linux path shows me what computing really is.

How did you prepare for the LPI certification exam, and what advice would you offer to others considering certification?

Study hard and learn for real. The knowledge tested in the exams is extremely useful for specialized careers in the technology field.

How do you think the rise of cloud computing and containerization is affecting the use of Linux and open source software, and how are you adapting to these changes?

This is a great and relevant question. Linux (LPI) is being widely required in job positions involving cloud and containers, such as Docker for example. I have worked in companies that have their own small data centers, and all of them are planning to migrate to the cloud. The cloud is the present and the future. Professionals who understand the concepts of open-source software and operating systems like Linux are highly sought after by companies. Indeed, this is one of the main reasons why I am on the Linux track.

Can you discuss any collaborations or partnerships you have been involved in within the open source community, and what you learned from them?

Currently, I am very active and consume a lot of content related to PowerShell. Many of my scripts use methods from other scripters to accomplish the goals. It’s a constant struggle to find information, and when things get tough, we turn to AI to clear up doubts about concepts that we can’t easily find in forums. I was very happy when I started studying Linux and realized that many of the shell commands are in PowerShell, and that we can install PS on Linux. This really makes my life a lot easier.

How do you think open source software can be used to promote diversity and inclusion in the tech industry, and have you been involved in any related initiatives?

I believe so, and I would certainly be happy to know that Brazilian schools teach Linux, although I think it’s a distant reality given the political scenario of our country. If it weren’t for the language barrier, Brazil would export many professionals to the whole world, as our professionals here have a lot of potential.

Can you talk about a specific project or accomplishment that you are particularly proud of, and how did you achieve it?

I feel good when I develop a script that helps my colleagues; they always praise me, although I don’t like receiving compliments. In my opinion, technology should optimize our time so we can make the most of our lives.

How do you stay up to date with the latest advancements in technology, and what resources do you rely on?

I enjoy following topics and people of interest on LinkedIn.

How do you approach problem-solving when working on a project or task, and what strategies do you find most effective?

The first step is to understand the problem and, with that insight, address the issues in the best possible way, with the most useful information and necessary support, always seeking improvement to achieve excellence.

Can you describe a time when you faced a difficult technical challenge, and how did you overcome it?

Years ago, I was asked to log into 300 computers to change the administrator user password in a company due to the dismissal of an IT colleague. I did it one by one, as requested, and lost sleep without getting paid extra for it. Weeks later they fired another person and asked me to do it again. I refused to do it manually. So I opened my browser and started researching a way to do it automatically. I discovered the WinRM protocol and developed my first shell script that imported the 300 computers from a list and changed the password. That’s where it all started.

What do you think are the most important qualities for success in a tech career, and how do you cultivate those qualities in yourself?

Curiosity. A curious person does not get bored when learning something new that they find interesting. The other would be creativity. But I believe these two qualities were born in me, so I don’t know how someone can develop them on their own.

What do nerds do when they are not nerding?

Here in Brazil, we watch football, hah 🙂

Source: lpi.org

Saturday, 1 June 2024

Why I Joined the LPI Board of Directors – Emmanuel Nguimbus

As a computer engineer with 12 years of experience in Africa, I joined the LPI Board to help LPI become the leader in the field of Linux certifications. I want to improve LPI’s marketing and increase the number of translations of LPI documents into French.

Since 2015, I have been a trainer for Linux and for open source in general. I teach operating systems, cloud computing, and security at a university. Of course, all the courses I teach are based on open source. I’m an LPI Member, LPIC-3 certified, and an LPI Approved Trainer.

I founded Backbone Corp, my first company, in 2013 in Cameroon, and I’m still managing it. I’m also the co-founder of two projects that use only open source software: Teledocta, a digital health platform, and OpenStudi, a Francophone eLearning platform for FOSS.

Whether in the company I run or in the projects I am associated with, I constantly promote Linux. I am a consultant on several private and governmental projects that operate with 100% open source. I am a member of private and public working groups, project teams, monitoring committees, and steering committees. In these contexts, I have been able to familiarize myself with issues of training, management, entrepreneurship, strategy, and governance. All these experiences can add value to the LPI Board.

My current objectives include getting French translations for the Linux Essentials, LPIC-101, and LPIC-102 course materials.

I want to improve LPI’s presence in Africa, especially in French-speaking Africa. My aim is to have more communications efforts there. The more IT professionals receive at least one LPI training course, the more they will be willing to promote LPI.


Mark your calendars for the Linux Professional Institute Annual General Meeting, which will take place on Saturday, June 22, 2024, using an online platform. Members can vote at the meeting to choose the LPI Board of Directors.

Source: lpi.org

Thursday, 30 May 2024

Learning Equality: Transforming Education through FOSS

In our series highlighting remarkable FOSS initiatives, we turn our attention to Learning Equality and their transformative platform, Kolibri. This innovative project tackles the critical issue of educational access, bridging the gap for learners in low-resource and disconnected environments. Through our interview, we delve into the vision that drives Learning Equality, the challenges and triumphs of developing Kolibri, and how this platform is reshaping the landscape of digital learning. Join us as we explore how Kolibri is making quality education accessible to all, fostering an equitable world where knowledge knows no boundaries. We asked a few questions to Richard Tibbles, Cofounder and Product Lead from Learning Equality.

Can you describe the inspiration behind Learning Equality and the genesis of the Kolibri platform?

Kolibri is the successor to our original KA Lite platform, the offline version of Khan Academy, which was created after my cofounder Jamie Alexandre did an internship at Khan Academy in the summer of 2012. He was inspired by the brand-new Raspberry Pi on his desk to see whether he could load all of Khan Academy onto it. When he returned to UC San Diego, where he and I were completing our PhDs in Cognitive Science, we took this prototype and turned it into an offline-first learning platform with videos, exercises, and teacher support tools.

With ⅔ of the world lacking access to the internet at the time, it became clear very quickly that providing digital resources offline was a critical enabler for education around the world. To serve far-flung and remote communities, we recognized two essential needs: to provide even more resources than those available from Khan Academy, and to align resources to national/regional curriculum standards. These needs led to Kolibri, the offline-first learning platform enabling access to a full range of Open Educational Resources (OERs), and Kolibri Studio, a platform to allow the alignment of these OERs to curriculum standards to better enable discovery and use of these resources.

How does Kolibri address the challenge of the digital divide in education across different regions?

Kolibri has been designed with lessons we have learned across many regions and contexts. Its emphasis on offline-first (rather than online with offline availability) allows all learners to access Kolibri even when no connectivity is available, while still taking advantage of any available connectivity through low-bandwidth data syncing. We also set up the physical transport of learning materials ("sneakernet") to bring large bundles of resources into areas with little or no connectivity.

Further, the Kolibri Library integrates resources in over 173 languages and covers a diverse set of K-12 subject areas, including STEM, public health, internet safety, teacher professional development, coding, and life skills. The library allows access to a broad base of resources across many different regions.

In addition, Kolibri Studio, our online curricular tool, allows curriculum experts to upload their own resources to supplement and complement those available in the Kolibri Library. These could be additional contextualized interactive or video resources, or regularly updated digital textbooks. Digital resources allow new versions to be easily distributed through the sneakernet rather than relying on paper textbooks that become outdated quickly and are expensive to update.

What are the core features of Kolibri that make it suitable for low-resource environments?

Kolibri has been architected and implemented to run on low-power, low-cost devices. With the Raspberry Pi as the original inspiration for our work, we have continued to maintain focus on lower spec’d devices to ensure that Kolibri can be used on low-cost and legacy hardware that is affordable or found in lower-resource contexts.

Kolibri also aims to break the often-assumed paradigm that “low resource equals low tech,” which leads to many low-resource digital learning solutions focused on passive consumption of digital resources or purely exploratory learning; these approaches benefit already high-achieving students but leave others behind. By contrast, from its inception, Kolibri has focused heavily on equitable design principles and interactive feedback. We integrated the Khan Academy exercises first, and then additional interactive exercises uploaded by other resource providers or generated by users of Kolibri Studio online.

The design and implementation of the platform are also aimed at students and teachers who come to the platform with a significantly lower level of digital literacy than might be expected of users of other Learning Management Systems. As a result, we have seen rapid and easy adoption of Kolibri by both learners and teachers, even when digital learning is entirely novel in their contexts.

How do you ensure that the content available on Kolibri is relevant and culturally appropriate for its users?

A large amount of our collective time and energy as a team goes into curating the Kolibri Library – a collection of publicly available resources from a huge range of content providers that we have aggregated, bundled, and quality checked to ensure that the resources that we make available are broadly useful, meet our community standards, and meet identified needs, both in terms of subject matter and language of instruction.

Ultimately, those using and installing Kolibri (administrators) have complete control over which resources they import and make available to their students and teachers. This control can happen either through selective alignment work on the Kolibri Studio platform, where they choose to include or exclude specific content and add their own more culturally relevant resources; or in Kolibri itself, where bundles of content (channels) can be imported in bulk, or as folders, subfolders, or resources, and can be individually selected for import (or removal) in the local Kolibri installation. This flexibility ensures that the end user has final control over which materials they are hosting on their Kolibri device.

Could you share a success story or a particular case where Kolibri made a significant impact on a community?

The nonprofit organization Shoulder to Shoulder has been using Kolibri (and before that, KA Lite) in the southern part of the Intibucá department of Honduras since 2015. Access to digital resources and the coaching tools has allowed students in schools using Kolibri to see significant improvements in the Honduran test scores (when analyzed year by year before and after adoption of Kolibri). The adoption has been particularly heartening, as due to Shoulder to Shoulder’s strong community-focused approach, municipal mayoral offices have taken on some of the cost and responsibility for implementing the project, allowing it to expand and grow across the region. Furthermore, the organization created a partnership with the Ministry of Education, which provides Shoulder to Shoulder access to their digitized curriculum, so that all materials in their Kolibri channel can be aligned to the Honduran curriculum.

In addition, Kolibri has been used in a Reading Program sponsored by Shoulder to Shoulder and institutionalized by the Intibucá Department of Education as part of the school calendar. The literacy and literature materials from the Kolibri Library are being harnessed to build literacy skills and foster a passion for reading in children and adolescents.

What role do Open Educational Resources (OERs) play in Kolibri’s strategy to provide quality education?

OERs are fundamental to the success of Kolibri. Openly licensed resources allow for offline distribution of digital content. Without open licenses, online-hosted resources could not legally be downloaded and copied to offline Kolibri instances, leaving learners in low-resource contexts completely cut off.

How does Kolibri facilitate teacher support and classroom integration in varied educational settings?

The Kolibri Edtech Toolkit is a set of training materials and guidance resources designed to equip organizations implementing Kolibri, support the training of trainers, and guide teachers using Kolibri to plan for and design the blended learning experience that works best for their context. The Toolkit includes resources such as a hardware guide and suggested hardware types, implementation models, training materials, and more. The Toolkit is openly licensed and has been translated into Spanish, French, Arabic, Brazilian Portuguese, Marathi, Hindi, and Swahili. It’s reused by many different organizations to facilitate the implementation of Kolibri in their contexts.

In addition, the Kolibri Learning Platform has a set of intuitive dashboards and planning tools to allow teachers to quickly assign relevant resources to their students, differentiate resources for different groups, and create formative assessments from existing exercise questions to gauge student learning across a range of concepts and skills.

In what ways does Learning Equality collaborate with local communities to implement and optimize Kolibri?

Learning Equality maintains an open community forum for technical support, implementation support, and feedback about our products. We have dedicated team members who work to understand how Kolibri is being used and what gaps still exist for our community. The Kolibri Virtual Learning Spaces seminar series allows organizations from around the world to convene and share insights with each other and with Learning Equality about effective blended learning in low-resource contexts. Finally, our community team facilitates WhatsApp-mediated regional Communities of Practice to share best practices more frequently and understand common pain points.

During virtual and in-person training workshops run by members of the Learning Equality team, we learn about new needs, barriers, and opportunities for blended learning. During site visits to established implementations, we have the opportunity to interview teachers and administrators working day to day with Kolibri, and to observe classrooms and learning spaces where Kolibri is being actively used by learners.

We use this feedback as a basis for need finding and hypothesis generation about the problems that are most pressing for our globally distributed user community, and then do additional research through virtual interviews, feedback surveys, and impressions of design prototypes to better understand and refine the problem space and potential solutions.

And talking about community again: we know that the “community” is what makes a FOSS service or product a living thing. Do you have a strong, active community? What added value does the community give to the project? How can individuals or organizations contribute to or support the mission of Learning Equality and Kolibri?

Our community is multifaceted and globally distributed. It is a mixture of teachers, learners, program administrators, curriculum experts, researchers, hardware distributors, software developers, translation volunteers, and more – all either using Kolibri to further their own learning or integrating it into their products and services. Our developer community has historically been restricted mostly to the Learning Equality team and those working in close partnership with us. However, thanks to our participation in the Google Summer of Code program (which we first joined in 2015 and have taken part in every year since 2021), we have seen steady and growing contributions from open-source developers looking to work on an interesting, engaging product with a strong prosocial impact focus.

We have a wide range of ways to contribute, from translating the platform into a new language to contributing code and updating documentation. Our developer documentation details this and more. We are also always on the lookout for new OERs that would be helpful to a global audience, and we are happy to collaborate to bring them into Kolibri using either the Kolibri Studio platform or our content integration automation tooling.

Looking ahead, what are the future developments or expansions planned for Kolibri?

There are two major themes ahead for Kolibri. Offline content creation will fulfill the promise of a truly offline-first platform by removing the online bottleneck of Kolibri Studio when bringing in new resources and when editing and remixing existing ones. Equity-focused AI will apply AI to the most critical needs in improving equity in education; our first focus is massively improving the speed and efficiency of curriculum alignment work, so that OERs can fulfill their promise of providing a free, open education for all.

Source: lpi.org

Saturday, 25 May 2024

Why I Joined the LPI Board of Directors – Jon “maddog” Hall

Why I Joined the LPI Board of Directors – Jon “maddog” Hall

Why am I active as a Board Director of the Linux Professional Institute?

It certainly is not the money that I am paid, as it is completely a voluntary position.

It is not the fame or stature that I receive, even having been the Board Chair since 2015, since I was already well known in the programming world and in the Open Source world long before I joined the Board of LPI.

It certainly is not because it is an “easy” job (it often is not), and most of the time it is not really as “fun” as solving an interesting programming problem or making a set of computers perform at their best.

It is not because I want to travel the world talking to company executives, leaders of countries, university professors and administrators… I had already visited over 100 countries (many more than once, some dozens of times) in my career… and at the age of 73, coach seats, big airports, and long immigration lines do not enthrall me.

Perhaps I am resigned to the fact that someone has to do the work that helps many people get better jobs, or even any job; certainly these are jobs that I have found very rewarding over the years.

Perhaps it is my way of paying back all the people that helped me along the way over this past half-century.

Maybe, as a Board member, I can use my technical expertise, my business training and experience, my drive (sometimes modified a bit to make my personality more tolerable) to help to make Free and Open Source Software, Hardware and Culture (FOSSHC) a way that professionals can make a living and move society forward.

I did not learn this all at once. I learned it over the aforementioned fifty-plus years of professional life.

Much of what I know I learned on my own and “certified” through working in the professional sphere. However, I also know that people can learn it much faster and much more easily if they have a road-map, and LPI’s Certification and Certificate objectives go a long way toward creating that road-map.

So the real reason I have been on the Board of Directors since 2015 (and helped create LPI in 1999) is to hear people say that FOSSHC has created a good living for them, and to hear them thank me for what I have done.

There is no greater feeling.

Unfortunately, at the age of 73, with only 30% of my heart capacity left (after two massive heart attacks in 2016), and staring at another three-year term on the board (ending when I am 76), I feel it is time to pass the gavel to another person. Projects that I need to finish (and that might cause a potential conflict of interest for a Board Chair or even a Board Director) call for less time with LPI and more time with those projects, which will also be useful to FOSSHC. Therefore I now take on the role of LPI “Returning Officer”, required by the LPI Bylaws not to hold a future board seat, but to help find other good candidates for LPI Director.

To not let the reader know, at this point, that I am “retiring” from the Board of LPI in 2024 would be deceitful, and I like to think I am not a deceitful type of person.

Instead I offer the reader the chance to step up and utilize your skills, your knowledge, your love, and your time for FOSSHC. Perhaps you are not a programmer or a systems administrator. It has been three decades since I last programmed an operating system kernel, and two decades since I have been a systems administrator for more than my five or six systems at home. However, I know how to manage people, how to balance a budget, and more about business and IP law than I really want to know.

If you have skills like these, and a desire to learn and be responsible for an organization that has bettered the lives of more than 300,000 people in over 180 countries, then LPI wants you for its Board of Directors.

I want you for the Board.

I will not be leaving LPI completely. I will still help LPI any way that I can (and my body will allow), but we need more new and progressive Directors for the future of the organization.

Carpe Diem!

Warmest regards,

Jon “maddog” Hall, Board Chair and Returning Officer for 2024


Mark your calendars for the Linux Professional Institute Annual General Meeting, which will take place on Saturday, June 22, 2024, using an online platform. Members can vote at the meeting to choose the LPI Board of Directors.

Source: lpi.org