Crawpy: A powerful content discovery tool

What distinguishes this tool from similar tools:

1. Fully asynchronous operation, which lets it saturate the machine's concurrency limits, so it runs very fast (see the sketch after this list);

2. Provides an auto-calibration mode built on self-implemented response filters (a conceptual sketch follows the options list below);

3. Offers a rich set of parameters for fine-tuning tests;

4. Supports recursive scanning with configurable status codes and depths (illustrated after the usage examples);

5. Generates reports that can be reviewed at any time;

6. Supports scanning of multiple URLs;
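
To make point 1 concrete, here is a minimal sketch of the asynchronous pattern such a scanner typically uses, with a semaphore capping concurrency the way the -t option described below does. It assumes the aiohttp package and uses hypothetical names (probe, scan); it illustrates the technique, not crawpy's actual code.

# Minimal sketch of the async-plus-semaphore pattern (illustrative only,
# not crawpy's actual implementation). Assumes aiohttp is installed.
import asyncio
import aiohttp

async def probe(session, sem, url):
    # The semaphore caps in-flight requests, which is what a
    # "semaphore pool size" option like -t controls.
    async with sem:
        try:
            async with session.get(url, allow_redirects=False) as resp:
                body = await resp.read()
                return url, resp.status, len(body)
        except aiohttp.ClientError:
            return url, None, 0

async def scan(base, words, threads=50):
    sem = asyncio.Semaphore(threads)
    async with aiohttp.ClientSession() as session:
        # FUZZ in the base URL is replaced by each wordlist entry,
        # mirroring the command-line examples later in this article.
        tasks = [probe(session, sem, base.replace("FUZZ", w)) for w in words]
        for url, status, size in await asyncio.gather(*tasks):
            if status is not None:
                print(status, size, url)

# asyncio.run(scan("https://example.com/FUZZ", ["admin", "login", "robots.txt"]))

Because all requests share one event loop, throughput is bounded only by the semaphore and the host's socket limits, which is where the speed claim comes from.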

Tool Installation

Researchers can use the following commands to clone the source code of this project locally and install the relevant tool dependencies:

git clone https://github.com/morph3/crawpy

cd crawpy

pip3 install -r requirements.txt

Or:

python3 -m pip install -r requirements.txt

Tool Usage

morph3 ➜ crawpy/ [main✗] λ python3 crawpy.py --help

usage: crawpy.py [-h] [-u URL] [-w WORDLIST] [-t THREADS] [-rc RECURSIVE_CODES] [-rp RECURSIVE_PATHS] [-rd RECURSIVE_DEPTH] [-e EXTENSIONS] [-to TIMEOUT] [-follow] [-ac] [-fc FILTER_CODE] [-fs FILTER_SIZE] [-fw FILTER_WORD] [-fl FILTER_LINE] [-k] [-m MAX_RETRY]
                 [-H HEADERS] [-o OUTPUT_FILE] [-gr] [-l URL_LIST] [-lt LIST_THREADS] [-s] [-X HTTP_METHOD] [-p PROXY_SERVER]

optional arguments:

-h, --help		Display help information and exit

-u URL, --url URL	Target URL address

-w WORDLIST, --wordlist WORDLIST

Wordlist file to use

-t THREADS, --threads THREADS

Semaphore pool size

-rc RECURSIVE_CODES, --recursive-codes RECURSIVE_CODES

Recursive scan codes used, for example 301, 302, 307

-rp RECURSIVE_PATHS, --recursive-paths RECURSIVE_PATHS

Recursive scan paths; initially, only the given paths are scanned, for example admin, support, js, backup, etc.

-rd RECURSIVE_DEPTH, --recursive-depth RECURSIVE_DEPTH

Recursive scan depth, for example 2

-e EXTENSIONS, --extension EXTENSIONS

Extensions to append to each word, comma-separated, for example -e .php,.html,.txt

-to TIMEOUT, --timeout TIMEOUT

Request timeout; using this option is not recommended

-follow, --follow-redirects

Follow redirects

-ac, --auto-calibrate

Automatic calibration

-fc FILTER_CODE, --filter-code FILTER_CODE

Filter by status code

-fs FILTER_SIZE, --filter-size FILTER_SIZE

Filter by response size

-fw FILTER_WORD, --filter-word FILTER_WORD

Filter by word count

-fl FILTER_LINE, --filter-line FILTER_LINE

Filter by line count

-k, --ignore-ssl        Ignore untrusted SSL certificates

-m MAX_RETRY, --max-retry MAX_RETRY

Maximum retry value

-H HEADERS, --headers HEADERS

Set request headers

-o OUTPUT_FILE, --output OUTPUT_FILE

Output folder

-gr, --generate-report

If you want crawpy to generate a report; the default path is crawpy/reports/<url>.txt

-l URL_LIST, --list URL_LIST

Take a list of URLs as input and scan them in parallel via multiprocessing, for example -l urls.txt

-lt LIST_THREADS, --list-threads LIST_THREADS

Number of crawpy instances to run in parallel when a URL list is used

-s, --silent      Do not generate a report

-X HTTP_METHOD, --http-method HTTP_METHOD

HTTP request method

-p PROXY_SERVER, --proxy PROXY_SERVER

Proxy server, for example 'http://127.0.0.1:8080'
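
The auto-calibration mode (-ac) is worth a short illustration. A common way to implement calibration, and a plausible reading of how it relates to the -fc/-fs/-fw/-fl filters above, is to request a few random, almost certainly nonexistent paths, record the status/size/word/line signature of the server's "not found" response, and suppress any result matching that baseline. The sketch below uses only the Python standard library and hypothetical names; crawpy's real logic may differ.

# Conceptual sketch of auto-calibration; not crawpy's actual code.
import secrets
import urllib.request
import urllib.error

def fetch(url):
    # Return the (status, size, words, lines) signature of a response --
    # the same metrics the -fc/-fs/-fw/-fl filters operate on.
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status, body = resp.status, resp.read()
    except urllib.error.HTTPError as e:
        status, body = e.code, e.read()
    text = body.decode("utf-8", errors="replace")
    return status, len(body), len(text.split()), len(text.splitlines())

def calibrate(base, samples=3):
    # Probe random, almost certainly nonexistent paths to learn what
    # this server's "not found" response looks like.
    return {fetch(base.replace("FUZZ", secrets.token_hex(8)))
            for _ in range(samples)}

def is_interesting(base, word, baseline):
    # Keep a result only if its signature differs from the baseline.
    return fetch(base.replace("FUZZ", word)) not in baseline

# baseline = calibrate("https://example.com/FUZZ")
# print(is_interesting("https://example.com/FUZZ", "admin", baseline))

Calibration like this is what lets the tool stay useful against servers that answer every path with 200 OK and a boilerplate page: the boilerplate signature lands in the baseline and is filtered out automatically.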

Tool usage example

python3 crawpy.py -u https://facebook.com/FUZZ -w common.txt -k -ac -e .php,.html

python3 crawpy.py -u https://google.com/FUZZ -w common.txt -k -fw 9,83 -rc 301,302 -rd 2 -ac

python3 crawpy.py -u https://morph3sec.com/FUZZ -w common.txt -e .php,.html -t 20 -ac -k

python3 crawpy.py -u https://google.com/FUZZ -w common.txt -ac -gr

python3 crawpy.py -u https://google.com/FUZZ -w common.txt -ac -gr -o /tmp/test.txt

sudo python3 crawpy.py -l urls.txt -lt 20 -gr -w common.txt -t 20 -o custom_reports -k -ac -s

python3 crawpy.py -u https://google.com/FUZZ -w common.txt -ac -gr -rd 1 -rc 302,301 -rp admin,backup,support -k
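
The recursive examples above (-rc 301,302 -rd 2 and -rp admin,backup,support) follow a pattern worth spelling out: any path that answers with one of the given status codes becomes a new scan root until the depth budget is spent. The sketch below shows that loop with the requests library and hypothetical names; it is conceptual, not crawpy's implementation.

# Conceptual sketch of recursive scanning (the idea behind -rc/-rd/-rp).
# Assumes the requests package is installed.
import requests

def recursive_scan(base, words, codes, depth):
    # Each queue entry is (url prefix, remaining depth budget).
    queue = [(base.rstrip("/"), depth)]
    while queue:
        prefix, remaining = queue.pop()
        for word in words:
            url = prefix + "/" + word
            try:
                resp = requests.get(url, allow_redirects=False, timeout=10)
            except requests.RequestException:
                continue
            print(resp.status_code, url)
            # Paths answering with one of the given codes (e.g. 301/302)
            # become new scan roots until the depth budget runs out.
            if resp.status_code in codes and remaining > 0:
                queue.append((url, remaining - 1))

# recursive_scan("https://example.com", ["admin", "backup", "support"],
#                codes={301, 302}, depth=2)

Per the -rp description above, when recursive paths are given, the first round is seeded with those paths before the wordlist takes over at deeper levels.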

Report examples

The report examples generated by this tool can be obtained from the following addresses:

https://morph3sec.com/crawpy/example.html

https://morph3sec.com/crawpy/example.txt

Tool operation demonstration

Demo 1: Simple run

Demo 2: Running example with automatic calibration and recursive mode enabled

Project address

Crawpy: https://github.com/morph3/crawpy

