This article is in no way affiliated with, sponsored, or endorsed by ScienceLogic, Inc. All graphics are displayed under fair use for the purposes of this article.
Just another Day
About a year ago, I came across a target during a pentest that piqued my interest. This led to independent research into a web application called ScienceLogic SL1. To facilitate this research, I spun up a local instance using the Azure Marketplace. The latest version of the software can be found here.
Once the VM was up, I navigated to the main web application to find the following endpoint.
After a brief investigation, I found a page that provided a clear overview of the web application’s potential function and its default credentials.
The default credentials for the phpMyAdmin server on port 8008 were also provided. This is an additional service that should definitely be checked during an assessment, as it grants the ability to directly modify records within the application database.
As the VM in Azure provides direct SSH access, I logged in using the credentials specified during setup.
Having access to the file system, I attempted to examine the web application’s source code. This would make identifying vulnerabilities simpler than black-box testing. However, when I tried to view the PHP files, they were indecipherable.
What about root?
Given the inability to read the web application’s source code, I shifted my focus towards pinpointing potential paths for privilege escalation to root. I uploaded and ran LinPEAS on the system to identify any potential privilege escalation vulnerabilities. While LinPEAS didn’t reveal anything of particular use, a copy of the sudoers file was located in one of the backup folders on the file system. It listed several applications that could be executed as root, without a password, that have known privilege escalation capabilities.
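The same finding can be triaged manually from a shell. The backup path below is hypothetical; substitute whatever LinPEAS or a manual search turns up:

```shell
# List sudo rules for the current user without prompting for a password;
# NOPASSWD entries are immediate privilege-escalation candidates.
sudo -nl 2>/dev/null | grep NOPASSWD || true

# The same check against a sudoers copy found in a backup directory
# (path is hypothetical):
grep -h "NOPASSWD" /var/backups/sudoers* 2>/dev/null || true
```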
The find command is one of my go-tos, ever since running across this article years ago. With find permitted to run as root without a password, a single invocation yields a root shell.
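The classic GTFOBins-style pattern for find is sketched below; the sudo form assumes find appears in a NOPASSWD sudoers entry:

```shell
# find's -exec action runs an arbitrary command for each match; under sudo,
# that command runs as root. -quit stops after the first match, so only one
# shell is spawned:
#
#   sudo find . -exec /bin/sh \; -quit
#
# The same -exec mechanism without sudo, shown here for illustration:
find /tmp -maxdepth 0 -exec echo "spawned via find -exec" \; -quit
```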
Having gained root privileges, I could now revisit the web application and hopefully find a way to review the source code for vulnerabilities.
Let’s briefly diverge to discuss the DMCA and the laws surrounding the circumvention of copyright protections. I bring this up because even basic encoding or encryption of source code might be viewed as a method to safeguard copyright. This could potentially hinder security researchers from examining the source code for vulnerabilities. Fortunately, recent updates to the regulations in Title 37 of the Code of Federal Regulations carve out an exemption for circumvention performed under the auspices of good-faith security research. Given that I am a security researcher performing good-faith security research, securing other customers against critical vulnerabilities, I clearly fall under this exemption.
I took a look at the PHP configuration and noticed a custom module was being used to load each source file. Analyzing the module in IDA Pro, it appears to implement a simple scheme: decrypt the file with a static AES-256 key, then decompress the output with zlib.
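The decode pipeline can be sketched in Python. The actual module uses AES-256 with a static key embedded in the binary; since Python’s standard library has no AES, the XOR function below is an explicitly labeled stand-in for the decrypt step, and the key is a placeholder:

```python
import zlib

KEY = bytes(range(32))  # placeholder -- the real 32-byte key is embedded in the module


def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Stand-in for the module's AES-256 decrypt (stdlib has no AES);
    # same shape: static key + blob in, bytes out. XOR is symmetric,
    # so the same function "encrypts" and "decrypts".
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def decode_source(blob: bytes) -> bytes:
    # The module's pipeline: decrypt with the static key, then zlib-inflate.
    return zlib.decompress(xor_cipher(blob, KEY))


# Round trip: "encrypt" a compressed PHP file, then decode it back.
plaintext = b"<?php echo 'recovered'; ?>"
blob = xor_cipher(zlib.compress(plaintext), KEY)
assert decode_source(blob) == plaintext
```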
Running this algorithm against the garbled source does the trick, and I end up with normal-looking PHP.
Beware what’s inside!!!
With access to the source code, I started scrutinizing it for more serious vulnerabilities. I had previously observed a command injection bug where the command was saved in the database and subsequently fetched and executed. This prompted me to search for potential SQL injection vulnerabilities, which might be escalated to remote code execution. As I examined the application endpoints, I discovered what seemed like systematic SQL injection issues. After pinpointing roughly 20 SQL injection vulnerabilities, I chose to conclude my search. A few examples are provided below.
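The underlying pattern behind such systematic SQL injection, untrusted input concatenated directly into a query string, is easy to demonstrate. This is a generic illustration in Python with sqlite3, not SL1’s actual code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")


def lookup_vulnerable(name: str):
    # Vulnerable: untrusted input is spliced into the SQL string,
    # so the payload is parsed as part of the query.
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()


def lookup_safe(name: str):
    # Fixed: a bound parameter is passed as data, never parsed as SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()


payload = "x' OR '1'='1"
assert lookup_vulnerable(payload) == [("alice",)]  # injection matches every row
assert lookup_safe(payload) == []                  # payload treated as a literal string
```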
Having identified a command injection and a significant number of SQL injection vulnerabilities I decided to stop bug hunting and reach out to the vendor to begin the disclosure process.
Vendor Disclosure & Patch
I’d love to say that responsibly disclosing the discovered vulnerabilities went smoothly, but it was easily the worst disclosure experience I’ve had. What follows is presented as a comical checklist of signs that a responsible disclosure is probably going to go bad. In reality, every item on it happened during this one disclosure.
The vendor has no public vulnerability disclosure policy.
The vendor has no security-related email contacts listed on their website.
The vendor Twitter account refuses to give you a security contact after you explain you want to disclose a security vulnerability.
After spamming vendor emails harvested from OSINT, the only response you get is from a random engineer. Fortunately, he forwards the email to the security director.
The security director refuses to accept your report, and instead points you to a portal to submit a ticket.
After signing up for the ticketing portal, you find that you can’t submit a ticket unless you are a customer.
When you notify the company that you can’t submit a ticket unless you are a customer, they tell you to have your customer submit the report.
When you send the report anyway, encrypted and hosted on a trusted website, they refuse to open it, claiming it could be a phish.
Individuals from the vendor reach out to arbitrary contacts in your customer’s organization to report you for unusual, possibly malicious behavior.
Upon verification of your identity by multiple individuals in your customer’s organization, they agree to open the results but go silent for weeks.
You receive an email from @zerodaylaw (no seriously) saying they will be representing the vendor going forward in the disclosure process.
The law firm has no technical questions about the vulnerabilities themselves, but instead about behavior surrounding post-exploitation and why this software was “targeted”.
After multiple unanswered follow-up emails to both the law firm and the vendor about coordinating with @MITREcorp to get CVEs reserved, you get an email asking to meet in person that very week.
In the follow-up phone call (after declining to meet in person), the vendor claims most of the bugs were “features” or in “dead code”.
The primary focus of the call with the vendor is how we “got” the company’s code, not the vulnerability details.
The vendor claims that meeting the 90-day public disclosure deadline is unlikely, and given their customer base, they have no estimate for when public disclosure could happen.
After the phone call, the vendor sends an email asking questions focused on exact times, people, authorizations, and details surrounding the vulnerability coordination with @MITREcorp.
Lawyers from the vendor contact your customer’s organization requesting copies of all correspondence with @MITREcorp.
From the list, it’s evident that the interaction ranged from challenging to outright hostile. However, there was a silver lining: Securifera opted to pursue status as a CVE Numbering Authority (CNA). CNA status enables an organization to reserve and disclose CVEs more conveniently.
While putting together this post, I came across this article about ScienceLogic describing similar behavior toward vulnerability researchers disclosing issues: consistent bad form.
In the last email correspondence with the vendor, nearly nine months ago, the security director asserted that the vulnerabilities had been addressed. However, they remained reluctant to proceed with CVE issuance. Given how much time has passed, I opted to proceed independently with CVE issuance and disclosure. As a result, the identified vulnerabilities are logged as CVE-2022-48584 through CVE-2022-48604. I hope the list above can act as a guide for vendors on practices to avoid in responsible vulnerability disclosure. For vendors looking for a good reference on how to run a coordinated vulnerability disclosure program, the smart people at Carnegie Mellon have put together the following guide to assist.