OpenAI


The GPT models are best-in-class LLMs and are used as the default LLM for Agents.

Authentication

Set your OPENAI_API_KEY environment variable. You can get one from OpenAI.

Mac

export OPENAI_API_KEY=sk-***
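
If you prefer not to rely on the environment variable alone, the api_key parameter (listed under Params below) can also be passed to OpenAIChat directly. A minimal sketch:

import os

from bitca.agent import Agent
from bitca.model.openai import OpenAIChat

# Pass the key explicitly; api_key is documented in the Params table below.
# If it is omitted, the underlying OpenAI client typically falls back to the
# OPENAI_API_KEY environment variable set above.
agent = Agent(
    model=OpenAIChat(id="gpt-4o", api_key=os.getenv("OPENAI_API_KEY")),
    markdown=True,
)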

Example

Use OpenAIChat with your Agent:

agent.py


from bitca.agent import Agent, RunResponse
from bitca.model.openai import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    markdown=True
)

# Get the response in a variable
# run: RunResponse = agent.run("Share a 2 sentence horror story.")
# print(run.content)

# Print the response in the terminal
agent.print_response("Share a 2 sentence horror story.")
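
print_response runs the agent and prints the response in the terminal; the commented lines above show the alternative pattern of calling agent.run to capture a RunResponse object and reading its content attribute yourself.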
Params

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| id | str | "gpt-4o" | The id of the OpenAI model to use. |
| name | str | "OpenAIChat" | The name of this chat model instance. |
| provider | str | "OpenAI " + id | The provider of the model. |
| store | Optional[bool] | None | Whether or not to store the output of this chat completion request for use in the model distillation or evals products. |
| frequency_penalty | Optional[float] | None | Penalizes new tokens based on their frequency in the text so far. |
| logit_bias | Optional[Any] | None | Modifies the likelihood of specified tokens appearing in the completion. |
| logprobs | Optional[bool] | None | Include the log probabilities on the logprobs most likely tokens. |
| max_tokens | Optional[int] | None | The maximum number of tokens to generate in the chat completion. |
| presence_penalty | Optional[float] | None | Penalizes new tokens based on whether they appear in the text so far. |
| response_format | Optional[Any] | None | An object specifying the format that the model must output. |
| seed | Optional[int] | None | A seed for deterministic sampling. |
| stop | Optional[Union[str, List[str]]] | None | Up to 4 sequences where the API will stop generating further tokens. |
| temperature | Optional[float] | None | Controls randomness in the model's output. |
| top_logprobs | Optional[int] | None | How many log probability results to return per token. |
| user | Optional[str] | None | A unique identifier representing your end-user. |
| top_p | Optional[float] | None | Controls diversity via nucleus sampling. |
| extra_headers | Optional[Any] | None | Additional headers to send with the request. |
| extra_query | Optional[Any] | None | Additional query parameters to send with the request. |
| request_params | Optional[Dict[str, Any]] | None | Additional parameters to include in the request. |
| api_key | Optional[str] | None | The API key for authenticating with OpenAI. |
| organization | Optional[str] | None | The organization to use for API requests. |
| base_url | Optional[Union[str, httpx.URL]] | None | The base URL for API requests. |
| timeout | Optional[float] | None | The timeout for API requests. |
| max_retries | Optional[int] | None | The maximum number of retries for failed requests. |
| default_headers | Optional[Any] | None | Default headers to include in all requests. |
| default_query | Optional[Any] | None | Default query parameters to include in all requests. |
| http_client | Optional[httpx.Client] | None | An optional pre-configured HTTP client. |
| client_params | Optional[Dict[str, Any]] | None | Additional parameters for client configuration. |
| client | Optional[OpenAIClient] | None | The OpenAI client instance. |
| async_client | Optional[AsyncOpenAIClient] | None | The asynchronous OpenAI client instance. |
| structured_outputs | bool | False | Whether to use the structured outputs from the Model. |
| supports_structured_outputs | bool | True | Whether the Model supports structured outputs. |
| add_images_to_message_content | bool | True | Whether to add images to the message content. |
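As a quick sketch of how a few of these parameters fit together (the values shown here are illustrative, not recommendations):

from bitca.agent import Agent
from bitca.model.openai import OpenAIChat

# Illustrative configuration using parameters from the table above.
model = OpenAIChat(
    id="gpt-4o",
    temperature=0.2,   # lower randomness in the model's output
    max_tokens=512,    # cap the number of tokens generated per completion
    seed=42,           # request deterministic sampling where supported
)

agent = Agent(model=model, markdown=True)
agent.print_response("Share a 2 sentence horror story.")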

For more information, please refer to the OpenAI docs as well.