Firecrawl

FirecrawlTools enable an Agent to perform web crawling and scraping tasks.

Prerequisites

The following example requires the firecrawl-py library and an API key, which can be obtained from Firecrawl.

pip install -U firecrawl-py
export FIRECRAWL_API_KEY=***

Example

The following agent will crawl https://finance.yahoo.com/ and return a summary of the content:

cookbook/tools/firecrawl_tools.py

from bitca.agent import Agent
from bitca.tools.firecrawl import FirecrawlTools

agent = Agent(tools=[FirecrawlTools(scrape=False, crawl=True)], show_tool_calls=True, markdown=True)
agent.print_response("Summarize this https://finance.yahoo.com/")
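
Here scrape is disabled and crawl is enabled, so the agent crawls the site (up to the configured limit, 10 pages by default) rather than scraping a single page; the relevant parameters are described below.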

Toolkit Params

Parameter | Type      | Default | Description
api_key   | str       | None    | Optional API key for authentication purposes.
formats   | List[str] | None    | Optional list of formats to be used for the operation.
limit     | int       | 10      | Maximum number of items to retrieve.
scrape    | bool      | True    | Enables the scraping functionality.
crawl     | bool      | False   | Enables the crawling functionality.
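
As a quick illustration of how these parameters fit together (a minimal sketch, not from the cookbook; the "markdown" format string and the fallback to the FIRECRAWL_API_KEY environment variable are assumptions), the toolkit could be configured to crawl a few pages and return markdown:

from bitca.tools.firecrawl import FirecrawlTools

# api_key is omitted here on the assumption that the toolkit reads
# FIRECRAWL_API_KEY from the environment (set in the Prerequisites step).
firecrawl_tools = FirecrawlTools(
    formats=["markdown"],  # assumed Firecrawl output format
    limit=5,               # retrieve at most 5 pages
    scrape=False,          # disable single-page scraping
    crawl=True,            # enable crawling
)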

Toolkit Functions

Function       | Description
scrape_website | Scrapes a website using Firecrawl. Parameters include url to specify the URL to scrape. The function applies the optional formats if configured and returns the scraping results in JSON format.
crawl_website  | Crawls a website using Firecrawl. Parameters include url to specify the URL to crawl and an optional limit to define the maximum number of pages to crawl. The function applies the optional formats if configured and returns the crawling results in JSON format.
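
These functions can also be called directly, outside an agent, for quick testing (a minimal sketch; the exact shape of the returned JSON depends on Firecrawl):

from bitca.tools.firecrawl import FirecrawlTools

# Uses the default configuration: scrape=True, crawl=False.
# Assumes FIRECRAWL_API_KEY is exported as shown in the Prerequisites.
tools = FirecrawlTools()

# scrape_website takes the URL to scrape and returns the results as JSON.
print(tools.scrape_website("https://finance.yahoo.com/"))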