To help you get started with Spider, we’ll give you $200 in credits when you spend $100.

Add credits

Add credits to start crawling any website, then pay per usage.

Credits: 50,000
Estimated pages: 6k - 48k

Pricing

| Cost Type | Amount |
| --- | --- |
| File Size (per request) | $0.05 / GB |
| Uptime Cost (per request) | $0.02 / s latency |

| Feature | Cost |
| --- | --- |
| Website Crawling (/crawl) | $0.03 / GB |
| Collecting Links (/links) | $0.02 / GB |
| Screenshot (/screenshot) | $0.04 / GB |
| Data Pipelines (/pipeline) | $0.04 / GB |

| Add-ons | Cost |
| --- | --- |
| Premium Proxies | $0.01 / GB |
| JavaScript Rendering | $0.01 / GB |
| AI Scraping | $0.01 / 1k tokens |
| Data Storage | $0.30 / GB per month |
| Metadata Processing | $0.01 / GB |

Uptime Cost

The cost of uptime is calculated from the duration of the request in milliseconds, using the formula (ms / 500) / 3. For example, a request that takes 1500ms to complete costs (1500 / 500) / 3 = 1 credit. Using webhooks costs 1 credit per sent hook.
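The formula above can be sketched as a small helper. Note this only reproduces the stated arithmetic; how Spider rounds fractional credits is not specified here, so no rounding is applied.

```python
def uptime_credits(duration_ms: float) -> float:
    """Credits charged for request uptime: (ms / 500) / 3."""
    return (duration_ms / 500) / 3

# A 1500ms request costs exactly 1 credit, per the example above.
print(uptime_credits(1500))  # → 1.0

# Shorter requests cost fractional credits.
print(uptime_credits(500))   # → 0.333...
```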

FAQ

Frequently asked questions about Spider

What is Spider?

Spider is a leading web crawling tool designed for speed and cost-effectiveness, supporting various data formats including LLM-ready markdown.

Why is my website not crawling?

Your crawl may fail if the target site requires JavaScript rendering. Try setting the request type to 'chrome' to render pages with a headless browser.
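As a rough sketch, enabling browser rendering might look like the payload below. The endpoint URL, field names (`request`, `limit`), and auth header are assumptions based on typical crawl-API usage, not confirmed parameters; consult Spider's API reference for the exact schema.

```python
import json

# Hypothetical crawl request payload (field names are assumptions).
payload = {
    "url": "https://example.com",
    "request": "chrome",  # render with a headless browser instead of plain HTTP
    "limit": 5,           # assumed parameter: cap the number of pages crawled
}

# The actual call (requires an API key) would look roughly like:
#   requests.post("https://api.spider.cloud/crawl",
#                 headers={"Authorization": "Bearer <API_KEY>"},
#                 json=payload)
print(json.dumps(payload, indent=2))
```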

Can you crawl all pages?

Yes, Spider accurately crawls all necessary content without needing a sitemap.

What formats can Spider convert web data into?

Spider outputs HTML, raw, text, and various markdown formats. It supports JSON, JSONL, CSV, and XML for API responses.

Is Spider suitable for large scraping projects?

Absolutely, Spider is ideal for large-scale data collection and offers a cost-effective dashboard for data management.

How can I try Spider?

Purchase credits for our cloud system or test the Open Source Spider engine to explore its capabilities.

Does it respect robots.txt?

Yes, compliance with robots.txt is enabled by default, but you can disable it if necessary.