# Fetch multiple URLs

Pass up to 10 URLs to fetch their content in parallel.

Current platform behavior: The response array order may differ from the request order. Match successful results by their `url` field, not by array index.

## Handle partial failures
When some URLs succeed and others fail, the request still returns `200`. Failed URLs have `success: false` with all other fields set to `null`.
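A minimal sketch of splitting such a mixed response. The result shape (a list of objects with `success` and `url` fields) is taken from the description above; the sample values are illustrative:

```python
# Example response with one success and one failure, as described above.
results = [
    {"success": True, "url": "https://example.com/a", "content": "<html>...</html>"},
    {"success": False, "url": None, "content": None},
]

successes = [r for r in results if r["success"]]
failures = [r for r in results if not r["success"]]

# Index successful results by URL, since array order is not guaranteed.
by_url = {r["url"]: r for r in successes}
```

Keying the successes by `url` gives constant-time lookup per input URL regardless of response order.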
## Correlating failures to input URLs

Failed entries have `url: null`, so you cannot directly identify which input URL failed. To correlate failures:

- Track the URLs you sent.
- Collect the `url` values from all successful entries.
- Any input URL not in the successful set is the one that failed.
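The steps above can be sketched as a set difference, assuming the response shape described (failed entries carry `url: None`):

```python
# URLs we sent in the request.
requested = [
    "https://example.com/a",
    "https://example.com/b",
    "https://example.com/c",
]

# Illustrative response: the entry for /b failed, so its url is None.
results = [
    {"success": True, "url": "https://example.com/a"},
    {"success": False, "url": None},
    {"success": True, "url": "https://example.com/c"},
]

# Collect the urls of successful entries, then take the difference.
succeeded = {r["url"] for r in results if r["success"]}
failed = [u for u in requested if u not in succeeded]
# failed == ["https://example.com/b"]
```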
## Bypass Cloudflare protection with human mode

Some websites use Cloudflare to block automated requests. Set `human_mode: true` to attempt a browser-like fetch path for these pages.
Current platform behavior: Cloudflare bypass is not guaranteed. Some
sites have additional protections that may still block the request.
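Since the browser-like path is slower, one common pattern is to try a normal fetch first and retry only the blocked URLs with human mode. The sketch below assumes a `fetch_urls(urls, human_mode=...)` callable standing in for the real API call; only the `human_mode` parameter comes from the docs above, the rest is hypothetical:

```python
def fetch_with_fallback(urls, fetch_urls):
    # First pass: normal fetch for all URLs.
    results = fetch_urls(urls, human_mode=False)

    # Any input URL missing from the successful set was blocked.
    succeeded = {r["url"] for r in results if r["success"]}
    blocked = [u for u in urls if u not in succeeded]

    if blocked:
        # Second pass: retry only the blocked URLs via the
        # browser-like fetch path. This may still fail on sites
        # with additional protections.
        results += fetch_urls(blocked, human_mode=True)
    return results
```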
## Processing fetched content

The `content` field returns raw HTML. Here are common next steps:
| Task | Approach |
|---|---|
| Extract text | Parse the HTML and strip tags (BeautifulSoup, Cheerio, etc.) |
| Extract links | Find all `<a>` tags and read their `href` attributes |
| Extract metadata | Parse `<meta>` tags for SEO data (description, `og:title`, etc.) |
| Detect changes | Fetch periodically and diff the `content` or `title` fields |
| Resolve relative URLs | Combine relative paths with the base `url` from the response |
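A sketch combining two rows of the table above: extracting `<a href>` links from the raw HTML and resolving relative paths against the page's `url`. This uses only the Python standard library; BeautifulSoup or Cheerio give the same result with less code:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                # urljoin leaves absolute URLs untouched and resolves
                # relative paths against the base URL.
                self.links.append(urljoin(self.base_url, href))

html = '<a href="/about">About</a> <a href="https://other.test/">Other</a>'
parser = LinkExtractor("https://example.com/page")
parser.feed(html)
# parser.links == ["https://example.com/about", "https://other.test/"]
```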
## Next steps
- Web Fetch — back to the main Fetch page.
- Reference — request parameters, error handling, and common gotchas.
- Web Search recipes — search-then-fetch workflow patterns.

