
Available Models

List of supported AI models and pricing


Aize Platform provides access to multiple AI models through a unified API. All models use the OpenAI-compatible interface at https://api.aize.dev/v1.
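Because the interface is OpenAI-compatible, any OpenAI-style client can target it by pointing at the base URL above. Below is a minimal sketch, using only the standard library, of how a chat-completion request to this endpoint is shaped; the model alias `gpt-4o` and the placeholder API key are illustrative assumptions, not confirmed values:

```python
import json
import urllib.request

API_BASE = "https://api.aize.dev/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,  # a model alias, e.g. "gpt-4o" (illustrative)
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("YOUR_API_KEY", "gpt-4o", "Hello!")
# urllib.request.urlopen(req) would send it; that requires a valid API key.
```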

Model Aliases

We use model aliases that map to the actual underlying models. This gives you:

  • Seamless Updates: underlying models can be updated without breaking your code
  • Consistent Naming: familiar model names work across providers
  • Easy Migration: you can switch between providers transparently
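The alias resolution happens server-side, but the idea can be illustrated with a small sketch; the mapping below is entirely hypothetical, not Aize's actual routing table:

```python
# Hypothetical illustration of how aliases decouple client code from
# concrete model versions. These pinned version strings are made up.
MODEL_ALIASES = {
    "gpt-4o": "gpt-4o-2024-08-06",
    "gpt-3.5-turbo": "gpt-3.5-turbo-0125",
}

def resolve_alias(name: str) -> str:
    """Return the pinned model a request would be routed to."""
    return MODEL_ALIASES.get(name, name)
```

Your code keeps using the stable alias; only the mapping changes when a model is upgraded.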

Supported Models

Below is a list of currently supported models and their pricing per 1M tokens.

(The model and pricing table is populated dynamically on the docs site.)

Pricing Details

Input vs Output Tokens

  • Input Tokens: The tokens you send in your request (prompt)
  • Output Tokens: The tokens generated in the response (completion)

Most models charge different rates for input and output tokens.
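A request's total cost is therefore the sum of both parts. A minimal sketch of the arithmetic, using hypothetical example rates (always check the pricing table above for real values):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float, output_rate: float) -> float:
    """Cost in USD; rates are USD per 1M tokens, as in the pricing table."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# e.g. a 10,000-token prompt at a hypothetical $5/1M, plus a 2,000-token
# completion at a hypothetical $15/1M:
cost = request_cost(10_000, 2_000, input_rate=5.0, output_rate=15.0)
# (10,000 * 5 + 2,000 * 15) / 1,000,000 = 0.08 USD
```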

Token Calculation

Approximate token counts:

  • 1 token ≈ 4 characters
  • 1 token ≈ 0.75 words
  • 100 tokens ≈ 75 words
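These rules of thumb are enough for rough budgeting; exact counts require the model's actual tokenizer. A quick estimator based on the ~4-characters-per-token heuristic:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token rule of thumb.

    This is an approximation for English text; use the model's real
    tokenizer when you need exact counts.
    """
    return max(1, round(len(text) / 4))

# A 400-character string is estimated at roughly 100 tokens.
```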

Model Selection Guide

GPT-4 Models

Best for:

  • Complex reasoning tasks
  • Long-form content generation
  • Code generation
  • Multi-step problem solving

GPT-4o (or GPT-4 Turbo) is recommended for most use cases, as both are faster and more cost-effective than the original GPT-4.

GPT-3.5 Models

Best for:

  • Quick responses
  • Simple queries
  • High-volume applications
  • Cost-sensitive workloads

GPT-3.5 Turbo offers the best balance of speed and cost.

Claude Models

Best for:

  • Long documents (up to 100K tokens)
  • Detailed analysis
  • Creative writing
  • Nuanced conversations

Gemini Models

Best for:

  • Multimodal tasks (text + images)
  • Real-time applications
  • Cost-effective alternatives

Model Features

Context Windows

Different models support different context window sizes:

  • GPT-4 Turbo: 128K tokens
  • GPT-3.5 Turbo: 16K tokens
  • Claude: 100K tokens
  • Gemini Pro: 32K tokens
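A prompt plus its requested completion must fit inside the model's window. A small sketch of that check, using the sizes listed above (the key names are illustrative, not confirmed API aliases):

```python
# Context window sizes (tokens) from the list above; key names are
# illustrative and may not match the platform's exact model aliases.
CONTEXT_WINDOWS = {
    "gpt-4-turbo": 128_000,
    "gpt-3.5-turbo": 16_000,
    "claude": 100_000,
    "gemini-pro": 32_000,
}

def fits_context(model: str, prompt_tokens: int, max_output_tokens: int) -> bool:
    """True if the prompt plus the requested completion fit in the window."""
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOWS[model]
```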

Function Calling

Most models support function calling (tools) for structured outputs. See the Quick Start for implementation details.
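In the OpenAI-compatible format, tools are described as JSON-schema function definitions attached to the request. A sketch of the shape (the `get_weather` function and its parameters are hypothetical examples, as is the `gpt-4o` alias):

```python
# A tool definition in the OpenAI-compatible "tools" format.
# The function name and parameters here are hypothetical examples.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

request_body = {
    "model": "gpt-4o",  # illustrative alias
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [get_weather_tool],
}
```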

Vision Capabilities

Some models support image inputs. See the Quick Start for implementation details.
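For models with vision support, the OpenAI-compatible format represents an image input as a content part alongside the text. A sketch of such a message (the image URL is a placeholder):

```python
# An OpenAI-compatible multimodal message: text plus an image URL.
vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url",
         "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder
    ],
}
```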
