A great thread. Seems like the heavy hitters have already discussed some great points that I use as well. Here are some of my notes:
Approaching a Target at a High Level:
As mentioned before, *.acme.com scope is your friend. Subdomains are
notorious for not getting the same security focus as the primary
site. Subdomain enumeration has been discussed elsewhere in the
forum, so I won't go into it here.
Don't forget to portscan for obscure services on all hosts. Get
comfortable with nmap. Many high-severity issues have been found on
non-standard ports, either via service exploitation or by finding
even more hosted web servers. A great and hilarious example is
Shahmeer's IIS.net bug. Also, don't forget that there are a
multitude of DNS and mail server bugs that might be in scope.
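nmap is the right tool for real scans, but the core mechanic can be sketched in a few lines of Python if you want to see what a TCP connect sweep is doing under the hood (the host and port list here are placeholders; this is a toy, not a replacement for nmap's service/version detection):

```python
# Minimal TCP connect-scan sketch: a port is "open" if the connection
# attempt succeeds. Real scans should use nmap for speed and accuracy.
import socket

def scan_ports(host, ports, timeout=1.0):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports
```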
Not pertaining to Bugcrowd (I guess it could, though) is looking for
acquisitions the company has made recently. Brands like Google and
Facebook acquire other companies rapidly. These sites usually have a
"grace" period during which you cannot test them, or they are not
covered in the scope of the bounty program… at first. Keep a calendar
and a watchful eye on them: they will have undergone a whirlwind of
IT sec auditing from the Google/Facebook/etc. team, but they usually
still hold a higher percentage of bugs than normal. A good place to
watch would be something like the lists of Google's and Facebook's
acquisitions on Wikipedia.
Focus on site functionality that has been redesigned or changed
since a previous version of the target. Sometimes, having seen/used
a bounty product before, you will notice new functionality right
away. Other times you will read the bounty brief a few times and
realize that they are giving you a map. Developers often point out
the areas they think are weak. They (and we) want you to succeed. A
visual example would be new search functionality, role-based access,
etc. A bounty-brief example would be reading a brief and noticing a
lot of pointed references to the API or to a particular
page/function in the site.
If the scope allows (and you have the skillset), test the crap out
of the mobile apps. While client-side bugs continue to grow less
severe, the APIs/web endpoints the mobile apps talk to often touch
parts of the application you wouldn't have seen in a regular
workflow. This is not to say client-side bugs are not reportable;
they just become low-severity issues as the mobile OSes raise the
bar security-wise.
Lastly don’t forget to do OSINT on your target. I have found
many a quirky “bug” doing this.
- Comments or paths in a binary/site leading to a developer GitHub account
with creds and private certs.
- Parsed valid usernames of super users from metadata for brute force
- Discovered vulns already circulating the web that the client was
unaware of. On this one there are a number of sources to look at, but
there are some pretty public ones that you will want to check first,
like: Xssed.com, Reddit's /r/xss, PunkSpider, xss.cx, xssposed.org,
Twitter, and a plethora of forums that Google may or may not index.
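To illustrate the "parsed valid usernames from metadata" point above, here's a quick sketch of turning harvested real names (from document metadata, say) into candidate usernames for a brute-force list. The permutation patterns below are my own assumptions; real targets may use entirely different naming conventions:

```python
def username_candidates(first, last):
    """Generate common username permutations from a harvested real name.
    Patterns are illustrative guesses, not an authoritative list."""
    f, l = first.lower(), last.lower()
    return [
        f + l,        # e.g. johnsmith
        f + "." + l,  # e.g. john.smith
        f[0] + l,     # e.g. jsmith
        f + l[0],     # e.g. johns
        l + f[0],     # e.g. smithj
    ]
```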
Approaching a Target at an Application Level:
Fingerprinting and mapping are paramount when looking for bugs that aren't shallow. For fingerprinting, you need to understand (or at least identify) any frameworks you are testing against. Some quick Chrome extensions and tools can help with that. These are just some of the methods you can use; there are nmap scripts designed for this as well, but I find these to be mostly sufficient.
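As a toy illustration of what header-based fingerprinting does under the hood (the signature table below is a tiny, assumption-laden sample; the real extensions and tools ship far richer signature databases covering cookies, scripts, and markup):

```python
# Map response-header substrings to technology guesses. Purely
# illustrative signatures -- real fingerprinters check much more.
SIGNATURES = {
    "X-Powered-By": {
        "PHP": "PHP",
        "ASP.NET": "ASP.NET",
        "Express": "Express",
    },
    "Server": {
        "Apache": "Apache httpd",
        "nginx": "nginx",
        "IIS": "Microsoft IIS",
    },
}

def fingerprint(headers):
    """Return a list of guessed technologies from a dict of response headers."""
    found = []
    for header, sigs in SIGNATURES.items():
        value = headers.get(header, "")
        for needle, tech in sigs.items():
            if needle.lower() in value.lower():
                found.append(tech)
    return found
```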
When fingerprinting identifies some sort of COTS software/framework/etc., you need to go searching for that version's changelog and any CVEs or disclosures around it. Identifying a CMS is great because there are a few nifty tools that automate this whole process.
Mapping is the art of finding application paths. In large applications this art becomes a necessity. Conventional wisdom says your spider or scanner will give you a perfect site tree to inspect, but seasoned testers know this is simply not true. A full browse of the site while connected to an interception proxy is mandatory. Are there ways to speed this up or ensure completeness? Not 100%, but I do like utilizing something like Linkclump to drive exploration.
Besides "walking" the app, you need to discover unknown content. This is directory brute forcing. Many people will use DirBuster or Burp Suite's Discover Content here. The problem is that both have drawbacks: 1) buggy code and 2) poor directory lists. I prefer using wfuzz or patator with the vastly superior lists from the RAFT project (included in the fuzzdb and SecLists projects), SVNDigger, and GitDigger. Why are these better? The traditional lists were created by spidering the net and compiling common application paths. The RAFT lists were created by spidering the net's robots.txt files, giving you awesome quick hits on stuff site admins do not want you seeing or testing. With the Digger projects you can find in-depth paths because they were created by reversing source-code pathing.
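The mechanic behind wfuzz/patator-style content discovery can be sketched like this (a serial, stripped-down toy under my own assumptions; the real tools add threading, response filtering, and recursion, and you should feed them the RAFT/Digger lists rather than a hand-rolled wordlist):

```python
# Try each wordlist entry as a path and report anything that isn't a 404.
# Non-404 errors (401/403/500) are kept: they often mark interesting paths.
from urllib.request import urlopen
from urllib.error import HTTPError

def brute_dirs(base_url, wordlist):
    """Return (path, status) pairs for wordlist entries that don't 404."""
    hits = []
    for word in wordlist:
        url = base_url.rstrip("/") + "/" + word
        try:
            with urlopen(url, timeout=2) as resp:
                hits.append((word, resp.status))
        except HTTPError as e:          # must come before OSError (subclass)
            if e.code != 404:
                hits.append((word, e.code))
        except OSError:
            pass                        # connection trouble: skip entry
    return hits
```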
So after you have a thorough “feeling” for the site you need to mentally or physically keep a record of workflows in the application. You need to start asking yourself questions like these:
- Does the page functionality display something to the users? (XSS,
Content Spoofing, etc)
- Does the page look like it might need to call on stored data?
(Injections of all type, Indirect object references, client side storage)
- Does it (or can it) interact with the server file system? (File
upload vulns, LFI, etc)
- Is it a function worthy of securing? (CSRF, Mixed-mode)
- Is this function a privileged one? (logic flaws, IDORs, priv esc)
There are other things I look at, like in-depth analysis of session management and authorization, but those bugs are usually found pretty quickly.
If you've approached a target like the above and you truly understand it, you will find bugs most of the time. Your trouble will be the next tier of testing: understanding errors (always pay special attention to what triggered your errors! New testers often forget to analyze these), getting injections to work, creating PoCs, fixing broken exploit code/scripts, etc.
Anyways, hope that helps! Happy Hacking!