
HOW TO ACCESS SENSITIVE FILES IN A WEBSITE'S '403 FORBIDDEN' DIRECTORY

[Image: 403forbidden.png]
 

We have all come across 403 FORBIDDEN at some point while surfing the Internet. Although it is true that we cannot venture further into the website's directory, by thinking outside the box we can still gain access to files that lie beyond this point.

While doing some reconnaissance on a website, we noticed two interesting directory paths:

www.example.com/documents/ and www.example.com/wp-uploads/


First, we try the /documents/ path and notice that it is public; this is where the site keeps files for visitors to read.

Second, we try the /wp-uploads/ path, which greets us with:

[Image: Screenshot from 2017-03-18 14-18-33.png]


It seems we are out of luck, and there is no going forward beyond this point.

However, we can work around this problem. A 403 on a directory often means only that the server refuses to generate a directory listing; the files inside may still be served if we request them by name. By using Google dorks, we can see whether Google has indexed any files uploaded to our target website.
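Before we move on, a quick sketch can confirm that distinction, assuming Python 3 with the requests library installed; the file name report.pdf is purely hypothetical and stands in for any file we might discover later:

Quote:
import requests

BASE = "http://www.example.com/wp-uploads/"

# Asking for the directory index itself: the server refuses to list it.
index = requests.get(BASE)
print(BASE, "->", index.status_code)  # typically 403

# Asking for a file inside the directory by name (hypothetical file):
# many servers will happily return it if it exists.
direct = requests.get(BASE + "report.pdf")
print(BASE + "report.pdf", "->", direct.status_code)  # 200 if the file exists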

We can go to the Google search engine, then type:

Quote:
site:example.com filetype:pdf OR filetype:docx OR filetype:xlsx OR filetype:pptx OR filetype:doc OR filetype:xls OR filetype:ppt

We can keep adding filetypes until we're satisfied.
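If the list of file types grows long, building the dork programmatically keeps it manageable. Here is a minimal sketch using only Python 3's standard library; the domain and the extension list are just the examples from above:

Quote:
import webbrowser
from urllib.parse import quote_plus

DOMAIN = "example.com"
# Extend this list with any other extensions worth hunting for.
FILETYPES = ["pdf", "docx", "xlsx", "pptx", "doc", "xls", "ppt"]

# Builds: site:example.com filetype:pdf OR filetype:docx OR ...
dork = f"site:{DOMAIN} " + " OR ".join(f"filetype:{ft}" for ft in FILETYPES)
print(dork)

# Open the results page in the default browser.
webbrowser.open("https://www.google.com/search?q=" + quote_plus(dork))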

Another option we have is to use recon-ng. This would be more convenient, as we can keep all our reconnaissance work in one place.

Let's open up recon-ng in our terminal and set up the metacrawler module:
Quote:
recon-ng
show modules
use recon/domains-contacts/metacrawler
set SOURCE example.com

After typing run, we'll get some results.

[Image: ss.png]

Now we can open these links in our browser. We'll get access to the files without being greeted by 403 FORBIDDEN.
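Rather than clicking each link by hand, we can also grab everything in one pass. A small sketch, again assuming Python 3 with requests; the two URLs are hypothetical stand-ins for whatever metacrawler actually reports:

Quote:
import os
import requests

# Hypothetical results from metacrawler; substitute your own findings.
urls = [
    "http://www.example.com/wp-uploads/report.pdf",
    "http://www.example.com/wp-uploads/budget.xlsx",
]

os.makedirs("loot", exist_ok=True)

for url in urls:
    resp = requests.get(url)
    if resp.status_code == 200:
        # Save the file under its original name.
        path = os.path.join("loot", url.rsplit("/", 1)[-1])
        with open(path, "wb") as f:
            f.write(resp.content)
        print("saved", path)
    else:
        print(url, "->", resp.status_code)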