# The Crawling API in minutes

We've created an API that makes integrating ProxyCrawl into your crawling project easy.

# Free trial

The first 1,000 requests are free of charge.

Make sure to take full advantage of the free trial!

# Rate limit

The API is rate limited to a maximum of 20 requests per second, per token (the rate limit can be increased upon request).

This means you can send up to 20 requests every second, or over 51 million requests per month, regardless of the number of threads you use.

The API will respond with a 429 status code when the rate limit is exceeded.

Note: Some specific websites might have lower limits. If you require higher limits on those, please contact support.
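
A minimal sketch of handling the rate limit, assuming the Python requests library and a simple exponential backoff (both are illustration choices, not part of the API itself):

```python
import time
import requests

API_URL = "https://api.proxycrawl.com/"
TOKEN = "_USER_TOKEN_"  # your normal token

def fetch(url, max_retries=5):
    """GET a page through the Crawling API, backing off when rate limited."""
    for attempt in range(max_retries):
        response = requests.get(API_URL, params={"token": TOKEN, "url": url})
        if response.status_code != 429:
            return response
        # 429 means the 20 requests/second limit was exceeded; wait and retry.
        time.sleep(2 ** attempt)
    raise RuntimeError("Still rate limited after {} retries".format(max_retries))

page = fetch("https://www.amazon.com")
print(page.status_code)
```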

# API response times

The average API response time is between 4 and 10 seconds, but we recommend setting a timeout of at least 90 seconds for your calls.
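
For example, with the Python requests library (an illustration, not a requirement), the recommended timeout looks like this:

```python
import requests

# timeout=90 makes the call wait up to 90 seconds for a connection and for
# the server to start responding before raising a timeout error.
response = requests.get(
    "https://api.proxycrawl.com/",
    params={"token": "_USER_TOKEN_", "url": "https://www.amazon.com"},
    timeout=90,
)
print(response.status_code)
```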

# Success vs fail

We only charge for successful requests (see original status and pc status in the response parameters below).
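
As a hedged sketch only: the field names below (original_status, pc_status) are hypothetical stand-ins; check the response parameters section below for the exact names and whether they arrive as headers or in the response body.

```python
import requests

response = requests.get(
    "https://api.proxycrawl.com/",
    params={"token": "_USER_TOKEN_", "url": "https://www.amazon.com"},
    timeout=90,
)

# Hypothetical field names; see the response parameters section below for
# the exact ones returned by the API.
print("original status:", response.headers.get("original_status"))
print("pc status:", response.headers.get("pc_status"))
```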

# Other notes

  • If you prefer to use a library to integrate ProxyCrawl, you can see the available API libraries here.
  • Using the Accept-Encoding: gzip header is recommended (see the sketch after this list).
  • If you use Scrapy for Python, make sure to disable the DNS cache.
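
As an illustration of the gzip recommendation, assuming the Python requests library (which decompresses gzip responses transparently):

```python
import requests

# Ask the API to gzip the response body; requests decompresses it
# automatically when you access response.text.
response = requests.get(
    "https://api.proxycrawl.com/",
    params={"token": "_USER_TOKEN_", "url": "https://www.amazon.com"},
    headers={"Accept-Encoding": "gzip"},
    timeout=90,
)
print(len(response.text))
```

For the Scrapy note, the DNS cache is controlled by Scrapy's DNSCACHE_ENABLED setting, so disabling it means adding `DNSCACHE_ENABLED = False` to your project's settings.py.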

# Authentication

You will need authentication tokens to use the API.
You have two tokens: one for normal requests and another for JavaScript requests (real browsers).

Normal token

_USER_TOKEN_

JavaScript token

_JS_TOKEN_

Note: If you don't see your tokens, please log in first here and then refresh this page.
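
A minimal sketch of switching between the two tokens, assuming the Python requests library; the crawl helper is only an illustration, not part of the API:

```python
import requests

NORMAL_TOKEN = "_USER_TOKEN_"  # for normal requests
JS_TOKEN = "_JS_TOKEN_"        # for JavaScript requests (real browsers)

def crawl(url, javascript=False):
    """Fetch a URL through the Crawling API with the appropriate token."""
    token = JS_TOKEN if javascript else NORMAL_TOKEN
    return requests.get(
        "https://api.proxycrawl.com/",
        params={"token": token, "url": url},
        timeout=90,
    )

html = crawl("https://www.amazon.com", javascript=True).text
```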

# Your first API call

All API URLs start with the following base URL: https://api.proxycrawl.com

Therefore, making your first call is as easy as running the following line in your terminal.
Go ahead and try it!

curl 'https://api.proxycrawl.com/?token=_USER_TOKEN_&url=https%3A%2F%2Fwww.amazon.com'
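
If you prefer Python to curl, here is the same call as a sketch with the requests library; requests URL-encodes the url parameter for you, which is what the %3A%2F%2F escaping does in the curl example above:

```python
import requests

response = requests.get(
    "https://api.proxycrawl.com/",
    params={
        "token": "_USER_TOKEN_",
        "url": "https://www.amazon.com",  # encoded automatically by requests
    },
)
print(response.text)
```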

Note that POST requests are also supported. Please go here for more information or continue below with the API parameters.