Performing Security Testing in the Cloud

Security testing is one aspect of a security program that is often overlooked. Organizations that take security seriously understand that testing systems and applications is just smart business. We felt that one way we could help our customers is to describe the process, and its nuances, that we go through during our testing. Since RightScale runs in the cloud, the information should help any RightScale customer accomplish the same tasks in their own environment.

Our process is basically broken down into the following steps:

  1. Identify instances and applications that will be tested
  2. Select tools and systems that will be used to perform the testing
  3. Coordinate with the cloud service provider to get authorization for testing
  4. Execute the test
  5. Communicate the results

Below I have outlined some of the practical details of each of these steps.

Identify Targets

Before we start testing, we identify what we want to test. For this particular test, we decided that we would include all of the systems that make up our platform, as well as the main dashboard application. Since we use RightScale to manage RightScale, and one of the main functions of our service is using ServerTemplates™ and RightScripts™ to ensure that systems are deployed consistently, there was a temptation to select a representative sample.

Since this was my first time testing RightScale since becoming the Director of Security and Compliance, we decided to test them all. We figured it was good practice, and it provided a “validation” of sorts that we were following the practices we champion. We did, however, decide to limit the testing to publicly addressable AWS IP addresses. (Note: Anyone trying to be PCI compliant in AWS will likely need to test private IPs as well.)

As for the application, we decided on the entire dashboard, and not just a portion (mostly because I wanted a good overview to have as a baseline).

Select Testing Tools

Along with determining which systems/instances and applications we were testing, we selected tools that would help us automate the testing. We had agreed that a primarily automated vulnerability test (with manual validation) was acceptable, but that the application scanning would require a more manual approach given the complexity of our application. To that end, we had the following basic selection criteria:

  • Vulnerability scanner: The number one criterion was its ability to appropriately identify vulnerabilities. We did not want a lot of false positives, but felt that false negatives would be much worse. A second criterion for the vulnerability scanner was the flexibility of its reporting mechanism.
  • Application testing: Number one criterion was our ability to use it, not what others think of it. A second criterion for the application testing tool was its ability to test against the framework of our application.

Given those requirements we chose three vulnerability scanners that we wanted to evaluate, in hopes of selecting one as the foundation for our ongoing testing program. Those were SAINT, NeXpose, and OpenVAS. Many will point out that there are other tools out there, and I agree, but these were tools I personally have history with, and one is free. We had to start somewhere.

As far as the application testing, I have used Burp Pro for a number of years and am a fan of it, and selected that as an application testing tool of choice. It should be noted that a number of other tools have recently come out that may rival Burp Pro in its functionality, but familiarity of use was important. We wanted to test the application, not the tool.

Where to Run Them?

Once we determined the tools that we wanted to use, we had to figure out where we wanted to run them:

  • SaaS
  • Instance in the same cloud
  • Instance in a different cloud
  • Traditional hosting environment
  • Physical system on our network

We chose the “Instance in the same cloud” for a couple of reasons:

  • Flexibility: We were able to install multiple tools to evaluate and test
  • Eating our own dog food: RightScale is all about configuring and managing systems, so what better way to help our customers deploy scanning systems than to do it ourselves?
  • Bandwidth cost: By using an instance within the same availability zones on AWS, bandwidth was not an issue
  • Access to internal IPs: By running in the same cloud (AWS region) we can test internal IP addresses

Once we decided to build our own, we downloaded a trial version of SAINT and the community version of NeXpose, and followed the Ubuntu installation directions for OpenVAS. Then we wrote some RightScripts to automate the majority of the install, and we were “cooking with gas,” so to speak.
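
To give a feel for what that automation looked like, here is a minimal sketch in Python. Our actual RightScripts differed, and the package names and feed-sync command here are assumptions for illustration, not a tested install recipe; follow the official OpenVAS instructions for your Ubuntu release.

    # Minimal sketch of the kind of install automation we scripted.
    # The package names below are illustrative assumptions -- consult
    # the official OpenVAS docs for your release rather than treating
    # this as gospel.
    import subprocess

    def run(cmd):
        """Run a command and fail loudly so a broken step is obvious."""
        print("+ " + " ".join(cmd))
        subprocess.run(cmd, check=True)

    def install_openvas():
        run(["apt-get", "update"])
        # Hypothetical package names for illustration only.
        run(["apt-get", "install", "-y", "openvas-server", "openvas-client"])
        # Pull down the current vulnerability test feed before scanning.
        run(["openvas-nvt-sync"])

    if __name__ == "__main__":
        install_openvas()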

Get Authorization from Your Cloud Provider

Once we identified all the instances we were going to test and had our testing sources (one, in our case), the AWS usage agreement required us to get authorization from AWS to perform the testing. AWS provides a form to request penetration testing of instances. We had to supply the AWS instance IDs and IPs that we obtained earlier, as well as the source of the testing. AWS uses this to create a ticket for its security team, which then whitelists the account so the IDS systems do not trigger alerts during the testing. This prevents nasty emails about policy violations, as well as port blocking that would skew the test results.

AWS security responded within a couple of days with approval for the scanning. It is interesting to note that this authorization appears to apply specifically to vulnerability scanning. For all intents and purposes you should make the request for application-based scanning as well, though in my experience testing the application does not cause abuse reports to be generated within AWS. During the testing, while launching and relaunching the scanner, we did accidentally perform a number of scans from an IP address other than the one we provided to AWS, and we received two abuse notices.

Probably the biggest point to note when testing instances running in AWS is that the instance size must be medium or greater. AWS policy does not allow pen testing, including port/service scanning, of small instances or below, presumably to keep the testing from degrading other VMs on the same host. Keep in mind that we were testing only in AWS; what you need to provide about your targets will vary depending on your cloud service provider. For AWS, we provided the instance IDs, the public IPs to be tested, and the source of the testing.

For AWS, the quickest way to get the list of all instance IDs and associated IPs is the rest_connection API. It can be used to programmatically generate a list of the instances and associated IP addresses that will be the targets of testing. We ignored the security groups in this test and hit all the “well known ports” that the tools scan. An alternative would be to test only the accessible ports.
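
For readers not using RightScale’s tooling, the same target list can be pulled straight from the EC2 API. Here is a minimal sketch using boto3; the region and filters are assumptions to adapt, and our actual script used rest_connection rather than boto3.

    # Minimal sketch: enumerate running instances and their public IPs
    # as scan targets. Shown with boto3 as a generic equivalent of the
    # rest_connection approach described above.
    import boto3

    def list_scan_targets(region="us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        targets = []
        paginator = ec2.get_paginator("describe_instances")
        for page in paginator.paginate(
                Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    # We limited this round to publicly addressable
                    # targets, so instances without a public IP are skipped.
                    ip = inst.get("PublicIpAddress")
                    if ip:
                        targets.append((inst["InstanceId"], ip))
        return targets

    if __name__ == "__main__":
        for instance_id, ip in list_scan_targets():
            print(instance_id, ip)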

Execute the Test

Once we obtained the authorization for the testing, we coordinated with the ops team to make sure they were ready for any potential problems. Once we got their “we are a go” signal, we commenced the testing. The general methodology looked something like this:

  1. A sequential vulnerability scan, using each of the scanners. For both SAINT and NeXpose, we utilized the “exploit” portion of the tools (when it existed) on any noted vulnerability. (Note that we performed multiple scans with each scanner over the course of our three weeks of testing.)
  2. General walkthrough and Burp Pro “passive” testing of the entire dashboard, to get an overall feel for how the tool handled the dashboard while performing a full manual spider of the site.
  3. Next we specifically performed testing of our session state mechanism, looking for entropy, manipulation, and injection flaws (a simple entropy check is sketched after this list).
  4. We then stepped through each of the dashboard’s main function areas, “Reports,” “Manage,” “Design,” “Clouds” and “Settings,” looking for well-known attack vectors. In particular, we focused on identifying Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF), injection, parameter manipulation, and other common web app exposures. See the OWASP testing guide for a good discussion of things that should be tested for in web applications.

Note that all testing we performed was done in both an authenticated state as well as an unauthenticated state.
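
To make the session-state step concrete, here is a minimal sketch of a crude entropy smoke test over a batch of captured session tokens. Burp’s Sequencer performs far more rigorous statistical analysis; the sample tokens below are hypothetical, and capturing real ones is left to the tester.

    # Minimal sketch: crude checks over captured session tokens. Low
    # per-character entropy, or positions that never vary, hint at
    # predictable tokens. Burp's Sequencer is far more rigorous.
    import math
    from collections import Counter

    def entropy_per_char(tokens):
        """Shannon entropy, in bits per character, across all tokens."""
        combined = "".join(tokens)
        counts = Counter(combined)
        total = len(combined)
        return -sum((n / total) * math.log2(n / total)
                    for n in counts.values())

    def static_positions(tokens):
        """Character positions that are identical in every token."""
        length = min(len(t) for t in tokens)
        return [i for i in range(length)
                if len({t[i] for t in tokens}) == 1]

    if __name__ == "__main__":
        # Hypothetical sample; in practice, capture a few hundred fresh
        # session IDs from the application under test.
        sample = ["a91f3c0d", "b02e4d1e", "c13f5e2f"]
        print("bits per char:", round(entropy_per_char(sample), 2))
        print("static positions:", static_positions(sample))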

As stated earlier, we made the decision that the vulnerability scanning portion of our testing would be mostly automated, and the application testing mostly manual. It took us approximately three weeks to identify the systems, get the authorization, and perform the testing. About two weeks of that was dedicated to the manual app testing.

A Bit More on the Application

It could be argued that the bulk of “cloud” security testing should revolve around the application. This is not to say that keeping supporting services like Apache and MySQL patched is unimportant (it is important; just ask Sony), but much of the exposure to your data will come through the application. Taking the time to assess the mechanisms protecting the application is critical. For example:

  • Are the security groups appropriate? (See the sketch below.)
  • Do you have appropriate controls on who can access API calls or make security related changes via the UI?
  • Does your authorization mechanism enforce appropriate controls via all interfaces?

Items like these will be critical for the long-term protection of information. Make sure that you include them in your testing regimen.
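
As one concrete take on the first question, here is a minimal sketch that flags security group ingress rules open to the entire internet. It assumes boto3, and the allowed-port set is an illustrative stand-in for your own policy.

    # Minimal sketch: surface security group ingress rules open to
    # 0.0.0.0/0. Whether a given open port is acceptable (say, 443 on
    # a load balancer) is a policy call; this only flags candidates.
    import boto3

    ALLOWED_OPEN_PORTS = {80, 443}  # illustrative policy assumption

    def audit_security_groups(region="us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        for sg in ec2.describe_security_groups()["SecurityGroups"]:
            for perm in sg["IpPermissions"]:
                world = any(r.get("CidrIp") == "0.0.0.0/0"
                            for r in perm.get("IpRanges", []))
                port = perm.get("FromPort")  # None means "all ports"
                if world and port not in ALLOWED_OPEN_PORTS:
                    print("%s (%s): port %s open to the world"
                          % (sg["GroupId"], sg["GroupName"], port))

    if __name__ == "__main__":
        audit_security_groups()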

Communicate Results

We are an agile shop, so frequent communication is part of our culture, and we leveraged that to provide feedback from the testing to the appropriate engineering or ops teams as we uncovered potential threats. This allowed us to create records of our testing results and provided timely information to feed into our sprint process. At the completion of the testing, we wrote a summary report and included details of the vulnerabilities from each of the tools as appendices. Even though the information had already been fed to the appropriate groups, including the details with the final report allowed stakeholders to review the overall testing methodology and findings, as well as dig down into the details of any vulnerabilities found.

Your process may vary, and you may have a much more formal reporting requirement. The most important part is to get the appropriate information to the people who can get the system services or applications fixed in a timely manner.

Summary

The process of identifying targets, maintaining testing tools, coordinating with cloud service providers, and communicating the results should be formalized within your organization. Security testing should become an integral part of the IT culture. There will always be issues, as nothing is absolutely secure, but trying to stay ahead of the curve is a worthy cause. With a formal process, you can make it a regular occurrence, thus enhancing your security program and likely meeting many practical as well as compliance requirements.

One side note about the testing: for all practical purposes, it used exactly the same methodology and tools that I have used previously in non-cloud environments. So I encourage you to roll up your sleeves and implement a testing program for your infrastructure and applications.