Tuesday, 28 September 2021

New Learning Materials in Italian, Spanish, and Polish


Linux Professional Institute (LPI) recently released Italian- and Spanish-language versions of the Learning Materials for LPIC-1, as well as a Polish version of the Learning Materials for the Linux Essentials exam.

Andrea Polidori finalized the translation into Italian of the LPIC-1 102 Lessons, edited by Max Roveri. Andrea says, “Often the students in my courses ask me for material in Italian to improve their preparation for the LPIC-1 certification exams. After months of working with my colleague and friend Max Roveri, I can finally tell them that everything they need is on the LPI Learning portal.”

José Alvarez is the Spanish translator of the LPIC-1 102 Learning Materials, refined by the editors Yoel Torres and Juan Ibarra. José says, “When I first started working with the Learning Materials, I realized what excellent support they are for my colleagues and everyone who works with Linux. Translating them into Spanish therefore became a major goal for me and a great experience with the LPI team.”

A profitable and busy summer also saw the release of the Polish translation of the Learning Portal, along with the translation of the Linux Essentials Learning Materials into the same language. Krzysztof Lorenz, the translator, commented: “The translation of the Learning Materials into Polish is a good signal for the free and open source software (FOSS) movement in Poland, and I am very pleased to be able to contribute to this achievement. I really enjoyed working with the LPI team.”

Source: lpi.org

Saturday, 25 September 2021

LPIC-3 Mixed Environments 3.0 Introduction #04: 304: Samba Client Configuration


This blog post is the fourth in a series that helps you prepare for the new version 3.0 of the LPIC-3 Mixed Environments exam. In the previous post we set up an entire infrastructure, consisting of two Active Directory domain controllers, a file server, and a Windows client. This week we will learn how to add even more systems to our domain.

Sources for User Information

Active Directory holds information about user accounts, including the credentials used to authenticate each user. The overall idea is to maintain this information in the directory and then access it on each and every computer that is joined to the domain. We have already achieved this for the virtual Windows client, but we have not yet considered how to make the average Linux workstation recognize the domain users as well.

On Linux, there are multiple approaches to authenticating users against a remote repository of user information. All of them share a common pattern: they add extra sources of user data that Linux queries when it looks up a user. Once these lookups succeed, Linux works with a single, uniform set of user information, no matter where it came from.

Read More: LPIC-3 300: Linux Enterprise Professional Mixed Environment

One technology that merges many data sources is the Name Service Switch (NSS). It provides various databases, including those holding user and group information. NSS passes queries on to various associated modules, which retrieve the information from a specific source, such as a local file (think of /etc/passwd) or a remote service (like our Active Directory domain). Similarly, Pluggable Authentication Modules (PAM) execute a series of modules to authenticate a user, prepare their session, or change their password. Again, individual modules handle users from different sources, including Active Directory.
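
To make this concrete, here is how the chaining typically shows up on a system; the ldap entries are an illustrative sketch, and the exact module names depend on your distribution:

grep -E '^(passwd|group|shadow)' /etc/nsswitch.conf
# typical output on a host that also queries a directory:
# passwd: files ldap
# group:  files ldap
# shadow: files ldap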

User Identities and Authentication via LDAP

Each Active Directory domain controller runs an LDAP server which provides access to the directory’s contents. This allows the NSS and PAM modules (nss_ldap.so and pam_ldap.so, respectively) to make the directory’s users available to Linux. Configuration includes adjusting the nsswitch.conf file and the PAM configuration, as well as creating the ldap.conf file that defines the properties of the directory.
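
To give an idea of what this looks like, here is a minimal, illustrative ldap.conf; the domain and the domain controller’s hostname are assumptions you would adapt to your own lab:

# run as root; example.com and dc1 are placeholders for your lab domain
cat <<'EOF' > /etc/ldap.conf
base dc=example,dc=com
uri ldap://dc1.example.com/
ldap_version 3
EOF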

Several more PAM modules come in handy when authenticating remote users. These modules allow users to authenticate using Kerberos, create a home directory during their login, lock accounts after too many failed login attempts, and enforce a minimum complexity for new passwords. When experimenting with these modules, remember also to take a look at the chage command, which allows adjustments to the password expiry information for a specific user.
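
For instance, chage can both display and tighten a user’s password ageing policy (the user name jdoe is hypothetical):

chage -l jdoe      # show current password expiry information
chage -M 90 jdoe   # require a new password every 90 days
chage -d 0 jdoe    # force a password change at the next login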

Authentication via SSSD

A modern approach to configuring user databases and authentication on Linux is the System Security Services Daemon (SSSD). SSSD provides its own NSS and PAM modules for the various Linux databases. However, instead of querying, for example, Active Directory directly, these modules forward queries to the locally running SSSD. The SSSD documentation provides a comprehensive overview of SSSD’s architecture. SSSD itself has a configuration file that defines one or more sources of user information; one of the supported back ends is Active Directory.
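
A minimal sketch of such a configuration file, assuming a lab domain named example.com, might look like this; note that SSSD refuses to start if its configuration file is world readable:

# run as root; the domain name is an assumption for the lab setup
cat <<'EOF' > /etc/sssd/sssd.conf
[sssd]
services = nss, pam
domains = example.com

[domain/example.com]
id_provider = ad
access_provider = ad
EOF
chmod 600 /etc/sssd/sssd.conf
systemctl restart sssd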

Besides just forwarding queries, SSSD provides more features. Upon successful login of a user, SSSD can cache their credentials for a specific time to allow these users to log in again, even when there is no connection to the authentication back end. This is especially useful on mobile devices that might be offline from time to time.
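
This caching behavior is configured in sssd.conf; the option names below come from the SSSD documentation, while the values are just an illustrative sketch:

# in /etc/sssd/sssd.conf, inside the [domain/...] section:
#   cache_credentials = True
# and, to limit how long cached credentials stay usable, in the [pam] section:
#   offline_credentials_expiration = 2    # days; 0 means no expiry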

SSSD comes with a set of command-line tools. A special feature of these commands is the ability to override specific aspects of user accounts. This feature allows you, for example, to adjust the UID, the path of the home directory, or the login shell of a directory user. These adjustments are injected by the local SSSD and do not affect other computers, even if they query the same directory. SSSD also allows the management of purely local accounts, which do not appear in any remote repository.
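
For example, sss_override can change how a directory user appears on this machine only; the user name, UID, and shell below are hypothetical:

sss_override user-add jdoe -u 10001 -s /bin/zsh
systemctl restart sssd    # overrides become visible after a restart
id jdoe                   # should now report UID 10001 on this host only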

Accessing SMB File Shares

Once a user is logged in, they usually want to somehow process data. We have already set up a file server which can store data of any kind. The easiest way to connect to an SMB server is the smbclient command. It provides an interactive command line, similar to common FTP or SFTP clients. You should practice the basic operations, such as uploading and downloading single files as well as multiple files, creating directories, and moving files. Take a special look at smbclient’s command-line options, which allow you to enumerate the shares available on a specific server or adjust the SMB protocol used.
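
A short practice session might look like the following; the server and share names are assumptions based on your own lab:

smbclient -L //fileserver -U Administrator   # enumerate the server's shares
smbclient //fileserver/data -U jdoe          # open an interactive session
# inside the session, experiment with commands such as:
#   put report.txt     upload a single file
#   get notes.txt      download a single file
#   mkdir drafts       create a directory on the share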

Although smbclient is easy to use, it is inconvenient to download each file before using it and to upload it again once it is changed. The mount.cifs command mounts an SMB share into the Linux file system tree. Again, review all of the available mount options.
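
A typical invocation, with illustrative server, share, and user names, could be (run as root):

mkdir -p /mnt/data
mount -t cifs //fileserver/data /mnt/data -o username=jdoe,uid=jdoe,vers=3.0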

Keep in mind that each SMB connection is authenticated for one specific user. All operations performed over this connection are executed as the same user on the SMB server, no matter what user performs the change on the client. If multiple users need to access their respective data on the server, each user must mount this data on their own. The pam_mount module triggers the mount of a specific share whenever a user logs in.
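
pam_mount reads its volume definitions from /etc/security/pam_mount.conf.xml; the entry below is an illustrative sketch that mounts a per-user home share at login (server and share names are placeholders):

grep -A1 '<volume' /etc/security/pam_mount.conf.xml
# an illustrative entry:
#   <volume user="*" fstype="cifs" server="fileserver"
#           path="home-%(USER)" mountpoint="/home/%(USER)" />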

Besides smbclient, several more commands interact with SMB shares. The exam objectives explicitly mention smbget, smbtar, smbcquotas, getcifsacl and setcifsacl, as well as cifsiostat. You should try all of these and, as usual, review their individual options.
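
Two quick examples to get you started; the server and share names are again placeholders:

smbget -R smb://fileserver/data/reports          # recursively download a directory tree
smbtar -s fileserver -x data -t data-backup.tar  # stream the "data" share into a tar archive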

Two More Linux Playgrounds

To practise the setup of the various authentication approaches, two additional Linux virtual machines would be helpful. We will need these systems only this week; you can delete them once you’ve completed your experiments. Don’t use the file server for your experiments, as it is already a domain member.

Set up one of the new virtual machines to use nss_ldap.so and pam_ldap.so to allow directory users to sign in. This is also a great chance to get familiar with the most important Kerberos tools, such as kinit and klist. Create a Kerberos configuration, procure a Kerberos ticket for the file server, and confirm that you are able to log into the server using the Kerberos ticket. You could also use this virtual machine to test the various PAM modules and, for example, extend this system’s PAM configuration to mount the user’s home directory from an SMB share upon login.
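
A minimal Kerberos round trip could look like this; the realm, account, and hostnames are assumptions matching a lab domain:

kinit Administrator@EXAMPLE.COM   # obtain a ticket-granting ticket
klist                             # inspect the local ticket cache
smbclient //fileserver/data -k    # authenticate via Kerberos instead of a password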

In the other virtual machine, install SSSD and configure it to recognize Active Directory users. Test the various SSSD commands mentioned in the exam objectives and see how they affect the appearance of the users on the Linux side. Add and modify some users in the Active Directory and see how these changes become visible on the two virtual machines. Also test the override features of SSSD and create some local users in SSSD.

Handling Windows Domain Members

Many organizations run Windows on their desktop computers. We have already joined a Windows virtual machine to the domain. After the virtual machine is added, domain users are able to log into it. Using SMB shares is quite easy: after a user enters the UNC path to a share in Windows Explorer, a connection to the respective server is established and the share is opened.

When a Windows computer joins an Active Directory domain, it becomes subject to several management features of Active Directory. One such feature is logon scripts, which run on the client when a user logs in. Samba can host such logon scripts and instruct Windows clients to execute them.
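
On a Samba domain controller, logon scripts are typically placed on the NETLOGON share and assigned via the logon script parameter; a hedged sketch, with illustrative paths and script name:

# place the script on the provisioned NETLOGON share, e.g.:
#   /var/lib/samba/sysvol/example.com/scripts/logon.bat
# then, in the [global] section of smb.conf:
#   logon script = logon.bat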

A more complex approach to Windows management is Group Policy Objects (GPOs). GPOs can specify a vast number of properties of Windows systems, and you can use various criteria to define whether a GPO applies to a specific computer or a specific user. Microsoft provides a Group Policy for Beginners guide, which is a good first step into GPOs.

Samba Active Directory domain controllers can host GPOs. GPOs are stored on the SYSVOL share, which is replicated between the domain controllers. In Samba’s case, this replication may be unidirectional, originating from the domain controller that holds the PDC emulator FSMO role. In that case, make sure to run the GPO management utilities against that specific server.
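
Samba ships a gpo subcommand of samba-tool for exactly these tasks; a quick way to list the existing GPOs looks like this (the hostname is illustrative; point -H at the PDC emulator):

samba-tool gpo listall -H ldap://dc1.example.com -U Administrator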

To learn more about GPOs, try to define a GPO that mounts a CIFS share and another GPO that restricts access to the Control Panel. Try to assign these GPOs to some of your users and confirm that they are effective after logging into the Windows client as the respective users. Take some time to review the various options and become familiar with the handling of GPOs.

For each user, Windows creates a profile that stores configuration information as well as files, such as those placed on the user’s desktop. When a user uses multiple computers, it is beneficial to make sure all computers access the same profile. The Samba wiki explains how to configure the necessary profile folder redirections.

One More Step to Take

Today, we have learned a lot about the configuration of Samba clients. As usual, don’t forget to review the exam objectives and take a look at the aspects that weren’t explicitly mentioned in this post.

The next post in this series will complete the preparations for the LPIC-3 Mixed Environments exam. We will see how we can use FreeIPA to create a domain that allows the centralized management of Linux authentication clients and how to set up that domain to coexist with Active Directory. We will also review the NFS protocol, which is an alternative to SMB, especially when serving files to clients running an operating system other than Windows.

More Info: LPIC-3 Mixed Environments 3.0 Introduction #03: 303 Samba Share Configuration

Source: lpi.org

Thursday, 23 September 2021

What the “Glocal history of FOSS” project is and what you can do for it

As you may already know, the Italian Linux Society, fresh from its Linux Professional Institute (LPI) Community Partnership, teamed up with LPI and the broader Brazilian FOSS community on the “Glocal history of FOSS” project.

Read this post to learn how you can get involved with the project, and why doing so could turn out to be one of the nerdiest and coolest experiences of your life!

The story so far

This project is a spin-off of our collaboration with the 2021 Open Anniversary. It’s a brilliant initiative, and we started playing around with the idea of making a legacy of it:

Why not set up a framework that is “glocal”: with the help of local FOSS communities, we will write the global history of the FOSS movement as the outcome of the local histories of its development in various regions of the world?

Roberto Guido, the president of ILS (the Italian Linux Society), an LPI Community Partner, immediately joined this thread and started working on a .json timeline format derived from the format adopted by Timeline.JS, a popular JavaScript library for visualizing interactive sequences of events. The choice was to extend the existing format to include translations of the content into different languages, as required by a project of global and multicultural scope.

Meanwhile, Cesar Brod, LPI’s Director of Community Engagement for Spanish- and Portuguese-speaking regions, started injecting data from his own (long…) experience in the Brazilian FOSS landscape into the framework Roberto created. Cesar was a Linux user before the kernel reached version 1.0, and since then he has spanned several FOSS projects and entrepreneurial initiatives, mostly in partnership with universities. He is working with Diolinux, an LPI Community Partner, to organize the Brazilian community around the country’s timeline.


The “Glocal history of FOSS” project was born from a request by Nick Vidal of the Open Anniversary team, who asked LPI to help with a timeline of the Linux project to be portrayed on their web portal. LPI had joined Open Anniversary from the beginning and was already contributing content to the project, under the coordination of Kaitlin Edwards and with the participation of LPI’s Editorial Board. Cesar Brod experimented with an open source JavaScript library to build his own professional Linux timeline, and both he and Max thought it was a very good place to start and to get the broader FOSS community involved.

What’s next?

With this very post, we are taking the whole project a (huge) step further: following the Torvaldsian principle of “release early and often,” we are releasing the project and its framework to the FOSS community. “LPI will be pleased to host a project that belongs to the whole community, and we believe that, by exposing their local achievements, even more connections and new and exciting free knowledge-based projects will evolve,” says Max Roveri, chief editor of the project.

The Italian job

As this project has a few bits of Italian DNA, and as the Italian LinuxDay managed by the Italian Linux Society is just around the corner (Saturday, October 23rd), we decided to link GHOFOSS to the Italian celebration of Linux.

In the run-up to the Italian LinuxDay we will be gathering information: “atoms” of the Italian history of FOSS. Data will be gathered via this form.

That data will be used the day after for a hackathon in which the Italian Linux community (no worries: more will come for other geographical areas!) will work on the GHOFOSS mockup and backend.

Source: lpi.org

Tuesday, 21 September 2021

Kali Linux – Terminal and Shell

Generally, operating systems offer two interfaces, a GUI (Graphical User Interface) and a CLI (Command Line Interface), and the same is true of Linux-based operating systems. Linux distributions generally ship with terminal emulator packages for CLI-based work and desktop environment packages for GUI-based work. Some common ones are listed below:

Read More: 101-500: Linux Administrator - 101 (LPIC-1 101)

Terminals:

◉ QTerminal

◉ GNOME Terminal

◉ MATE Terminal

◉ xterm

◉ Terminator

◉ Konsole

Desktop Environments:

◉ Xfce

◉ GNOME 3

◉ KDE Plasma 5

◉ Cinnamon

◉ MATE

Being a Linux-based operating system, Kali comes packed with a few of these terminals and desktop environments. By default, the terminal of Kali Linux 2020.2 is QTerminal and the desktop environment is Xfce.

CLI(Command Line Interface) vs GUI(Graphical User Interface)

Now, most of us wonder: when we have a graphical user interface, what is the need for a command line interface? Our hardware understands instructions in the form of bits (0 or 1); these are processed by the kernel in the form of system calls, and those system calls are made by code or by commands. So to work at this level, it is necessary to have good hands-on experience with the command line interface. Moreover, when we host a server on Linux, we often have only a command line interface without any GUI environment. To work there, we need a good command of Linux commands, which is acquired with the help of Linux terminals.

Though in many cases a GUI is more convenient, on Linux the terminal and the command line interface play a vital role, because Linux has many tools that are command based and have no GUI at all.

So, in conclusion, it depends on the task to be performed. Some tasks are easily done with a GUI, while others are more feasible through the terminal.

Terminals vs Shells

Many people confuse a shell with a terminal emulator, but they are different things. Linux-based operating systems come pre-packaged with one or more shells. We enter commands into a shell, the shell has them executed, and the output is returned to the terminal. A terminal emulator package lets us input commands to the shell and displays the output the shell produces.

In simple words, the shell is the program responsible for executing an instruction and returning the output, while the terminal is responsible for sending instructions to the shell by taking input from the user and displaying the instruction’s output back to the user.
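
You can observe this separation from any running terminal; a small sketch:

echo $SHELL   # the current user's login shell
ps -p $$      # the shell process executing this very command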

Examples of shells:

◉ bash

◉ Bourne shell (sh)

◉ C shell (csh)

◉ Korn shell (ksh)

◉ POSIX shell

Working with Kali Linux Terminal

1. Customizing the terminal. To customize the Kali Linux terminal, go to the File menu and select the Preferences option. It offers a lot of options; customize the terminal to your convenience.


2. Executing a command through the terminal. To execute a command in the terminal, just enter the command and provide the appropriate input; the terminal will execute the command through the shell and return the output. Try typing the following lines in the terminal:

echo "This is a terminal"
pwd


3. Using comments in the terminal. To put a comment in the terminal we use the “#” character. The following is an example of a comment:

#this is a comment.


Source: geeksforgeeks.org

Saturday, 18 September 2021

The Many Meanings of Linux, Part 1 of 2


A startling title appeared in the prestigious Yale Law Journal in 2002. At that time, academics, governments, and companies were exploring an exciting and potentially liberating idea: to take input about policies and products not just from duly credentialed experts, but from the general public. Pushing forward that narrative, Harvard Law Professor Yochai Benkler published a Yale Law Journal paper with the title "Coase's Penguin, or, Linux and The Nature of the Firm."

How did Linux make it into a leading law professor's research? Why did Benkler feature Linux as a data point in "a much broader social-economic phenomenon" that could overturn centuries of corporate and government behavior?

More Info: 010-160: LPI Linux Essentials (Linux Essentials 010)

Benkler was not alone in elevating Linux to a principle and an exemplar. This article explores the many meanings that Linux has had since its emergence in the early 1990s. I explain how Linux altered history by:

◉ Providing a foundation for an entirely free computer system

◉ Proving to observers in all fields the viability of free and open source software

◉ Triggering a move of companies toward open standards and open source implementations of core parts of their software

◉ Bringing modern software within the reach of millions more people

◉ Restructuring the computer industry through virtualization, containers, and the cloud

◉ Sparking interest in the newly recognized phenomenon of crowdsourcing

◉ Accelerating development of new, low-cost hardware platforms

This part of the article will cover the first four points in the list, and an upcoming part will cover the other three.

Foundation for a Free Operating System

People have released free software since the beginning of computing, but few have comprehensively assessed user needs and addressed their wide range. Most free software developers were happy to contribute a library or tool that ran along with some vendor's computer system. The early free software project with the grandest vision was Berkeley Software Distribution (BSD), which started as a set of tweaks to Bell Labs' non-free Unix and evolved into an independent project with a broad mission. Although variants of BSD played important roles in many computing companies, it was a niche phenomenon compared to Microsoft and later Apple.

The GNU project was even more of a niche affair. This one also had a big scope: a band of developers methodically turned out one tool after another to recreate Unix from the ground up. Although the tools were all important to developers—particularly the impressive compiler and C/C++ libraries—few held any interest for the average computer user. End-user projects such as the GNU Cash accounting software were rare and difficult to use. The central selling point for the GNU tools was the GNU license, which guaranteed their freedom for all to use and share.

Whether because of their license, their quality, or their widespread use by C programmers, it was the GNU tools that Linus Torvalds used to create Linux. And as the importance of Linux grew, so did GNU. The development of GNU and Linux cannot be disentangled; that is why I agree with GNU proponents that full distributions of the operating system should be called GNU/Linux. (But I use the term Linux loosely throughout this article for convenience.)

What did a fully free software stack mean for the general public? It created an explosion of experimentation, especially in the areas of embedded systems, small Internet service providers and other Internet services, and cheap computers for underserved populations around the world. The free stack did even more for corporate data centers. We'll examine all these phenomena in later sections.

People running a completely free GNU/Linux stack would still need a proprietary computer and proprietary firmware. But eventually, both of these gaps were addressed as well. A growing open hardware movement, covered in another article, allows the distribution and customization of open designs. Many free firmware projects also exist.

Proving the Viability of Free and Open Source Software

Until the 1990s there was no debate among businesspeople: to bring an idea to life in software, you needed to form a company and hire a team of experts to code up the product. The waterfall model, where a team moved ponderously from requirements through development and testing to production, was almost universal. Until the dot-com boom (a bubble in the late 1990s created by irrational exuberance among investors), a software project couldn't get off the ground until the accountants and marketing staff had figured out how to make a profit from it.

Free software seemed to be taking place in a parallel universe. No one with money and institutional clout took it seriously.

Yes, Microsoft and AT&T and other companies shoved BSD innovations into their own proprietary software. And in the mid-1980s, the GNU C/C++ compiler jolted developers by outperforming all the commercial compilers. These isolated phenomena were hints that something powerful was going on with free software—but they were technically obscure enough to be ignored by policy-makers.

It was finally Linux that blew apart the complacency at the CxO level. Companies came to appreciate the perennial benefits of free software. Once these companies got used to depending on free software and found it robust and reliable, they became more ready to open their own software. By then, they might employ many developers who understand and love free software, and who urge their companies to contribute to it. I'll tell more of this story in the next section ("The Move Toward Open Standards and Open Source Implementations") and cover the importance of Linux to governments in the one that follows ("Bringing Software to Millions of People").

The Move Toward Open Standards and Open Source Implementations

Who would spend hard cash to develop software and give it away? This was the taunt aimed at free software by its critics for decades. But now it happens every day. Let's look at how the shift has taken place.

As explained in the section "Proving the Viability of Free and Open Source Software," businesses used to hide their source code and make sure no one else could derive benefit from their investment; no other course of action seemed rational. Linux taught them the opposite approach: if they share software, everyone moves ahead faster. Moving fast is critical to success in business during the twenty-first century, so free software becomes crucial.

Historically, computer companies were the first to learn the importance of collaborative development. Name a large, successful computer company—Intel, Amazon, Microsoft, Oracle, whatever—and you can find them working on free software projects. They may keep their core software proprietary (a trend I covered a few years ago under the term closed core), but they contribute a lot of their work to the community for a number of reasons, including the hope that it will be enhanced by other companies' and individuals' contributions. The demonstrable business value of free software propelled large corporate conferences such as LinuxWorld (Figure 1). Google's Android, an important but different kind of project, will be mentioned in a later section.

Figure 1: Golden penguin awarded for a trivia contest held at LinuxWorld 2004

Every company is a bit of a computing company nowadays, so free software appeals to them too. A good example is the automobile industry, which is loading new cars with software, and which has an alliance dedicated to free software in cars. Naturally, their output is based on Linux.

The open source movement has democratized hiring, to some extent. Aspiring developers contribute to free software projects and cite those contributions in job interviews. The popular GitHub and GitLab sites expose each person’s contributions to make them highly visible to employers.

Finally, this rush to open source drives the creation of professional organizations such as the Linux Professional Institute. When companies depend on skills, they want to see demonstrated proficiency among job applicants, hence the development of certification programs such as those offered by LPI.

Bringing Software to Millions of People

Like so many things affluent people take for granted—drinkable water, for instance—computer access is strongly associated with economics. Middle-class people in developed countries automatically license a copy of Windows for home use. But in less wealthy countries, access is much more difficult—even in government offices. That's why a 2004 study determined that, "For every two dollars' worth of software purchased legitimately, one dollar's worth was obtained illegally."

Yes, huge swaths of the world's population use software in violation of the software vendors' rules. At times this is tolerated (because the vendors hope the users will eventually turn into paying customers); at other times crackdowns occur. But there seemed to be no alternative until Linux came along.

Nobody has to feel guilty or furtive using Linux, because the whole system is free software. Many governments—particularly in Latin America—declared a preference for free software in the decade or so after Linux became well known. For various reasons, the most idealistic free software adoptions failed (Munich was one highly publicized case of a migration that underwent turmoil), but Linux makes freedom possible.

Special computer systems were designed for low-income and underprivileged areas. Nicholas Negroponte's One Laptop Per Child garnered the most hype, but it didn't live up to the promise. More relevant now are the many distributions for schools and schoolchildren, covered in another article.

The Linux Professional Institute works with dozens of companies, particularly professional training firms, who base their business models on Linux and free software. Located in places as different as Brazil, Bulgaria, and Japan, these companies know that Linux and free software provide a universal platform for education and advancement. The people who take these courses and obtain LPI certifications can build a better economy in their countries. (Many immigrate to more developed countries with higher salaries, but a good number stay at home.)

Source: lpi.org

Thursday, 16 September 2021

Difference between Kali Linux and Parrot OS


Kali Linux:

Kali Linux is an operating system used for penetration testing and digital forensics, with Linux at its core. It is developed according to the standards of Debian, a Linux distribution. It was first released in March 2013 with the aim of replacing BackTrack.

Parrot OS:

Parrot OS is similar to Kali Linux, and is an open-source Debian-based operating system. It is used for cloud pentesting, computer forensics, hacking and privacy/anonymity. It was first released in April 2013.

Read More: LPI Certifications

There are some similarities in these two operating systems:

◉ Both are useful for penetration testing.

◉ Both are developed on Debian standards.

◉ Both can support 32-bit and 64-bit architecture.

Let’s see the difference between Kali Linux and Parrot OS:

◉ RAM: Kali Linux needs more RAM, about 1 GB, while Parrot OS runs in about 320 MB.

◉ GPU: Kali Linux needs graphical acceleration and therefore requires a graphics card, while Parrot OS does not.

◉ Disk space: Kali Linux requires about 20 GB of free space for installation, while Parrot OS requires about 16 GB.

◉ Interface: Kali Linux follows the GNOME desktop interface, while Parrot OS is built on the MATE desktop environment.

◉ Development tools: Kali Linux does not come with pre-installed compilers and IDEs, while Parrot OS ships with a bunch of them.

◉ User interface: Kali Linux has a simpler user interface, while Parrot OS has a more polished one.

◉ Weight: Kali Linux has heavier requirements and can be a bit laggy, while Parrot OS is very lightweight and doesn’t lag much.

◉ Tools: Kali Linux has all the basic tools needed for penetration testing, while Parrot OS has all the tools available in Kali and adds its own, e.g. AnonSurf, Wifiphisher, Airgeddon.

Source: geeksforgeeks.org