Thursday, 20 January 2022

How to Handle Vulnerabilities in Third-Party Programming Libraries


Almost all software calls multiple layers of third-party libraries. Suppose, for instance, that a Java program invokes a function from a standard library to format a date. That function might in turn call a function from another library that understands the calendar. And that function calls another, and so on.

What if a security flaw in one of those deeply nested libraries is publicized? Your program is now at risk of compromise, and a malicious intruder can get into the server on which your program is running--even if you didn't introduce a bug of your own.

There are lots of scanners to help you find vulnerabilities in dependencies, but handling the vulnerabilities they report involves some subtleties. We'll look at the process in this article.

Sources of Information About Flaws

We are all protected by a far-flung network of security experts who put software through all kinds of punishing tests to reveal dangerous flaws, and who report these flaws to developers. Their tests may be as basic as throwing unusual input at a function to see whether the function gets confused and lets an intruder take over the program. An interesting discipline called “fuzzing” submits large quantities of randomly generated characters to programs, and is surprisingly effective at finding bugs and vulnerabilities. There are also comprehensive analysis tools that look for suspicious problems in stand-alone code (static analysis) or a running program (dynamic analysis).
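
If you're curious what fuzzing looks like, here is a toy sketch in Python. Real fuzzers such as AFL and libFuzzer are far more sophisticated; the target function here is a stand-in for whatever library call you want to stress:

    import random
    import string

    def naive_fuzz(target, trials=1000, max_len=200):
        """Feed random printable junk to a function and report what crashes it."""
        for _ in range(trials):
            payload = "".join(
                random.choices(string.printable, k=random.randint(0, max_len))
            )
            try:
                target(payload)
            except Exception as exc:  # a crash here may hint at a deeper flaw
                print(f"crashed on {payload!r}: {exc!r}")

    # Example: fuzz a hypothetical parser from a third-party library.
    # from somelib import parse_date
    # naive_fuzz(parse_date)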

Of course, less well-meaning researchers are also looking for such flaws, with the goal of creating malicious exploits for clients in governments and ransomware groups. Although flaws not yet known to the public (zero-day vulnerabilities) are dangerous, most attacks use flaws that are publicly known, and that victims have allowed to stay on their systems. Rest assured that malicious actors are reading the public lists of flaws.

Publicly known flaws are published in databases maintained by security organizations, notably the Common Vulnerabilities and Exposures (CVE) database. The National Institute of Standards and Technology (NIST), a leading U.S. government agency for standards in software and elsewhere, maintains the National Vulnerability Database, which adds more detail to the entries in the CVE database. Another recent effort to collect known flaws in free software libraries is the Open Source Vulnerabilities (OSV) database. And some projects offer specific collections of relevant vulnerabilities, such as the Python Packaging Advisory Database.
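
Several of these databases can be queried programmatically. For instance, OSV offers a simple web API; here is a minimal Python sketch that asks it about one pinned package (the package name and version are just placeholders):

    import json
    import urllib.request

    # Ask the OSV database about known vulnerabilities in one pinned package.
    query = {
        "package": {"name": "jinja2", "ecosystem": "PyPI"},
        "version": "2.4.1",
    }
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        vulns = json.load(resp).get("vulns", [])

    for v in vulns:
        print(v["id"], "-", v.get("summary", "(no summary)"))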

You don't have to obsessively read the ever-growing list of vulnerabilities; so many are discovered each day that you couldn’t even keep up with them. For every popular programming language, you can run a tool to automatically search the lists and tell you what has been discovered for all the libraries your application uses. See GitLab's site for a list of automatic tools. GitHub also offers automated checks through a service called Dependabot.

It's convenient to use the tools offered by GitLab and GitHub because, with a few clicks, you can have the check run at key points in your development cycle. But you don't have to be on GitLab or GitHub to run the tools. You can manually integrate them into your development cycle. Java and .NET programs can also use the OWASP Dependency-Check tool.

When should you run a tool? If you can tolerate the time it adds to a check-in, I suggest you run it on every check-in. First, you might have added a new package to your application during your most recent edits, and if the package is flawed, you would like to know right away so you can take the steps listed in this article to address the problem. Second, new vulnerabilities are discovered so often that you will regularly turn up a new problem in a package that was fine before.
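
One way to wire this in is a small gate script run from a pre-commit hook or CI job. This Python sketch assumes the pip-audit scanner is installed and relies on its non-zero exit status when it finds problems; substitute whatever scanner suits your language:

    import subprocess
    import sys

    # Run the dependency scanner and block the check-in if it reports anything.
    result = subprocess.run(["pip-audit"], capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout or result.stderr)
        print("Dependency scan reported problems; triage before checking in.")
        sys.exit(1)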

At the very least, run an automated vulnerability check before a major step in the life cycle, such as quality assurance or deployment. You don't want to go into a major phase of the life cycle with a vulnerability because fixing it becomes much more expensive.

Running vulnerability scanners on a regular basis is a central part of DevSecOps, a trending practice that integrates security into the application's life cycle. Some regulated environments, notably U.S. government agencies, require scans that follow the Security Content Automation Protocol (SCAP). SCAP was developed by NIST, and has an open source implementation called OpenSCAP.

Easy Fixes

You've turned up a vulnerability! Hopefully, the fix is quick and painless. If the developers of the package have released a new version with the fix, all you need to do is rebuild your application using the fixed version. Of course, any change to a package potentially introduces new problems, so you also need to run your regression tests after the upgrade.
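
One small extra safeguard, sketched here in Python, is to fail fast at startup if the build somehow pulled in a release you know to be vulnerable. The package name and version numbers are illustrative assumptions:

    from importlib.metadata import version  # Python 3.8+

    # Refuse to start if the known-vulnerable releases slipped into the build.
    BAD_VERSIONS = {"2.4.0", "2.4.1"}
    if version("example-lib") in BAD_VERSIONS:
        raise RuntimeError(
            "example-lib is a known-vulnerable release; rebuild with the fix"
        )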

One sophisticated trend in software builds is represented by Project Thoth, an open source tool developed by Red Hat for finding safe libraries for Python applications. (I freelance for Red Hat.) Thoth doesn't simply pull in the latest stable version of each library your application uses; it consults various public databases and tries to recommend a combination of packages that work together without flaws. The same approach is being copied by developers in other programming languages.

If no fixed version has been released yet, perhaps you can find an older version of the library that predates the vulnerability. Of course, if that older version has other vulnerabilities, it won't help you much. And if it looks like it will meet your needs, you have to make sure you don't depend on features added in newer versions, and again you have to run your regression tests.

Determining the Scope of a Flaw

Let's suppose the solutions suggested in the previous section aren't available. You're stuck building your program with a library that has an identified security flaw. Now some subtle research and reasoning is called for.

Look at the circumstances that can trigger a breach. Many exploits are theoretical when security researchers report them, but they can quickly become real. So read the vulnerability report to see what an attacker needs in order to pose a risk. Do they need physical access to your system? Do they need to be the superuser (root)? An attacker who is already root probably doesn't need your flaw to create havoc. You might decide that an attacker is unlikely to be able to run the exploit in your particular environment.

Some automated vulnerability scanners are overly sensitive. They might flag something as a problem that, after the kind of research described above, you decide poses no risk in your case.

You might also be able to insert more checks to guarantee that the flaw isn't exploited. Suppose that one argument passed to the vulnerable function is the length of a buffer, and the exploit will be a risk only if that argument is negative. Of course, the length of a buffer should always be zero or positive. Your program will never legitimately call the function with a negative value in that argument. You can tighten security by adding this before each call to the function:

    if (argument < 0)           /* a buffer length can never legitimately be negative */
        exit(EXIT_FAILURE);     /* from <stdlib.h>: refuse to reach the vulnerable call */

Other exploits work by injecting characters that should never appear in legitimate input, so you can check for those characters before passing input to functions. Some languages, following an innovation introduced many years ago in Perl, mark variables holding external input as “tainted” until you have explicitly checked them for security violations.
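
Here is a minimal sketch of that kind of allowlist check in Python. The pattern is an illustrative assumption; derive yours from what legitimate input actually looks like:

    import re

    # Accept only characters a legitimate filename could contain, and reject
    # everything else before the value reaches a vulnerable library call.
    SAFE_NAME = re.compile(r"[A-Za-z0-9._-]+")

    def validated(name: str) -> str:
        if not SAFE_NAME.fullmatch(name):
            raise ValueError(f"rejected suspicious input: {name!r}")
        return name

    # vulnerable_lib.open_file(validated(user_supplied_name))  # hypothetical call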

It might be easier to add a check for dangerous input in an application proxy or other wrapper, instead of inserting such checks throughout the application.

This workaround should be temporary, because the maintainers of the library should fix the bug soon.

If no one has shared this workaround with the community, add a comment to the issue that reported the flaw, and offer your solution to others.

By the way, you might determine that the reported flaw affects a function you're not calling. But be careful, because you might call some other function in the library that indirectly calls the insecure function. There are tracing and profiling tools that let you look at the entire hierarchy of function calls in your application, so you can see whether you're at risk.
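
If your application is in Python, a crude dynamic version of this check is to install a trace hook and watch for the suspect function being entered while you exercise the application; dedicated call-graph tools do this more thoroughly. The function name here is a hypothetical stand-in:

    import sys

    SUSPECT = "parse_header"  # hypothetical name of the flawed library function

    def tracer(frame, event, arg):
        """Report whenever the suspect function is entered, and by whom."""
        if event == "call" and frame.f_code.co_name == SUSPECT:
            caller = frame.f_back.f_code.co_name if frame.f_back else "<top level>"
            print(f"{SUSPECT} reached via {caller}")
        return tracer

    sys.settrace(tracer)
    # ... exercise your application's normal entry points here ...
    sys.settrace(None)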

Maybe you're squarely in the sights of attackers: you're using a function with a flaw you can't work around. So consider: do you really need that function? There are often alternative libraries that offer similar functionality. The particular use you're making of the function might even be simple enough to code up yourself. But be careful: writing your own version of a security-sensitive function is usually a bad idea, because you're more likely to introduce new bugs than to fix the problem. After all, you're probably less knowledgeable about secure coding than the maintainers of the library. (If you're more knowledgeable than they are, help them fix the library!)

There's also a possibility that you feel confident enough of your coding skills, and familiar enough with the package you're using, to offer a bug fix. This is an option only for open source packages, but I hope you're using open source packages wherever you can.

I don't want to end without a reminder that defense in depth is always important. For instance, if your application is for internal use, firewall rules and authentication should ensure that you're communicating only with legitimate users. On the other hand, even an internal user might be malicious, or might be compromised by an outsider using them as a hop on the way into your server. So a secure application is still needed.

Source: lpi.org
