Introducing Structured Outputs and ModelBox's Support for GPT-4o-2024-08-06

2024/08/08

By ModelBox Team

OpenAI has recently introduced structured outputs in their API, enhancing the reliability and consistency of responses by ensuring outputs adhere to developer-supplied JSON Schemas. This development is particularly beneficial for developers who require precise and structured responses for their applications.

GPT-4o-2024-08-06: What's New?

Historically, LLMs have excelled at generating human-like text but struggled with consistently producing structured data adhering to specific formats. GPT-4o-2024-08-06 addresses this challenge head-on with the introduction of Structured Outputs, ensuring model-generated outputs exactly match JSON Schemas provided by developers.

GPT-4o-2024-08-06 Is Awesome At JSON Schema

  • The performance improvement of GPT-4o-2024-08-06 with Structured Outputs is remarkable.

  • In OpenAI's evaluations of complex JSON schema following, this new model achieves a perfect score of 100%, compared to its predecessor, GPT-4-0613, which scored less than 40% on the same tests.

GPT-4o-2024-08-06 Is Better and Cheaper Than GPT-4o

Let's compare GPT-4o-2024-08-06 with its predecessors and the more compact GPT-4o-mini across four dimensions:

  • MMLU (Massive Multitask Language Understanding)

  • MMMU (Massive Multitask Multimodal Understanding)

  • Pricing (per million tokens)

  • Context window

Across these dimensions, GPT-4o-2024-08-06 maintains the high performance of GPT-4o while adding the new Structured Outputs feature. GPT-4o-mini, while less capable, offers a more cost-effective option for many applications.

Structured Outputs: The New Trick from OpenAI

OpenAI has introduced Structured Outputs in two primary forms within the API:

  1. Function Calling

  2. Response Format Parameter

Let's explore how to use these features with step-by-step guides and sample code.

Step-by-Step Guide: Using Structured Outputs

  1. Function Calling with Structured Outputs

Step 1: Define your function as a tool with a strict schema and call the model

import openai

# Define the function as a tool with a strict schema.
# With "strict": True, every property must appear in "required"
# and "additionalProperties" must be set to False.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "strict": True,  # This enables Structured Outputs
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                },
                "temperature_unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"]
                }
            },
            "required": ["location", "temperature_unit"],
            "additionalProperties": False
        }
    }
}

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "user", "content": "What's the weather like in Boston?"}
    ],
    tools=[weather_tool],
    tool_choice={"type": "function", "function": {"name": "get_current_weather"}}
)

print(response.choices[0].message.tool_calls[0].function.arguments)
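
Because strict mode guarantees that the arguments string conforms to the schema, it can be parsed and dispatched directly. Below is a minimal sketch, assuming a hypothetical local get_current_weather stub rather than a real weather service:

import json

# Hypothetical local implementation for illustration only;
# a real application would call an actual weather API.
def get_current_weather(location: str, temperature_unit: str) -> str:
    return f"22 degrees {temperature_unit} and sunny in {location}"

tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)  # guaranteed to match the schema
print(get_current_weather(**args))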
  2. Response Format Parameter with Structured Outputs

Step 1: Define your JSON schema and pass it via the response_format parameter

# A strict schema: every property is required and additionalProperties is False.
json_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "cities_visited": {
            "type": "array",
            "items": {"type": "string"}
        }
    },
    "required": ["name", "age", "cities_visited"],
    "additionalProperties": False
}

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "user", "content": "Generate a profile for a world traveler named John who is 30 years old."}
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "traveler_profile",  # a name is required for json_schema response formats
            "strict": True,              # This enables Structured Outputs
            "schema": json_schema
        }
    }
)

print(response.choices[0].message.content)
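
Since the content field is a JSON string that conforms to the schema, it can be loaded directly into a dictionary:

import json

profile = json.loads(response.choices[0].message.content)
print(profile["name"], profile["age"], profile["cities_visited"])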
  3. Using Structured Outputs with SDKs

OpenAI has updated its Python and Node SDKs with native support for Structured Outputs. Here's an example using the Python SDK with Pydantic:

from pydantic import BaseModel
from typing import List
from openai import OpenAI

# Pydantic model describing the expected response shape
class Traveler(BaseModel):
    name: str
    age: int
    cities_visited: List[str]

client = OpenAI()
# The parse helper converts the Pydantic model into a strict JSON Schema
# and returns an already validated object on message.parsed.
response = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "user", "content": "Generate a profile for a world traveler named Sarah who is 28 years old."}
    ],
    response_format=Traveler
)

traveler = response.choices[0].message.parsed
print(f"Name: {traveler.name}, Age: {traveler.age}, Cities visited: {', '.join(traveler.cities_visited)}")

Best Practices for Using Structured Outputs

  1. Define Clear Schemas: Ensure your JSON schemas are well-defined and cover all possible outputs.

  2. Handle Refusals: Implement logic to handle cases where the model refuses to generate output due to safety concerns (see the sketch after this list).

  3. Validate Outputs: Although Structured Outputs guarantees schema compliance, always validate the content for accuracy.

  4. Optimize for Performance: Cache preprocessed schemas to reduce latency on subsequent requests.

  5. Combine with Function Calling: Use Structured Outputs in conjunction with function calling for more complex applications.
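
For point 2 above, here is a minimal sketch of refusal handling, assuming the Chat Completions response shape used in the earlier examples:

import json

message = response.choices[0].message
if message.refusal:
    # The model declined the request; do not attempt to parse JSON.
    print(f"Request refused: {message.refusal}")
else:
    profile = json.loads(message.content)
    print(profile)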

Limitations and Considerations

Despite its advancements, GPT-4o-2024-08-06 and Structured Outputs have some limitations:

  • Only a subset of JSON Schema is supported.

  • The first API response with a new schema incurs additional latency.

  • While structure is guaranteed, content accuracy is not.

  • Structured Outputs is not compatible with parallel function calls (see the sketch after this list).

  • JSON Schemas used are not eligible for Zero Data Retention (ZDR).
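
For the parallel function call limitation, a minimal sketch that disables parallel tool calls on the request, reusing the weather tool defined earlier:

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "user", "content": "What's the weather like in Boston?"}
    ],
    tools=[weather_tool],
    parallel_tool_calls=False  # keep strict schema guarantees by disabling parallel calls
)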

GPT-4o-2024-08-06 and its Structured Outputs feature represent a significant advancement in AI-generated content reliability. By solving the challenge of consistently producing structured data, OpenAI has unlocked new possibilities for developers and businesses. As the AI landscape continues to evolve, GPT-4o-2024-08-06 sets a new standard for precision and structure in AI-powered applications, paving the way for more sophisticated and dependable AI systems across various industries.

ModelBox's Support for GPT-4o-2024-08-06

At ModelBox, we are excited to announce our support for the GPT-4o-2024-08-06 inference, enabling our users to leverage the latest advancements in AI from OpenAI. This integration brings several benefits:

  • Cost Efficiency: GPT-4o-2024-08-06 offers a 50% reduction in input costs and a 33% reduction in output costs, making it a more economical choice for developers (OpenAI Developer Forum).

  • Enhanced Capabilities: With support for up to 16,384 output tokens, the new model is ideal for applications requiring extensive outputs.

  • Structured Outputs: ModelBox now fully supports the structured outputs feature, allowing developers to enforce strict schema compliance in their applications, reducing errors and improving data reliability (see the sketch below).
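
As a rough illustration, here is a minimal sketch of calling GPT-4o-2024-08-06 through an OpenAI-compatible endpoint; the base URL and environment variable name below are assumptions for illustration, so check the ModelBox documentation for the actual values:

import os
from openai import OpenAI

# Assumed values for illustration only -- consult the ModelBox docs.
client = OpenAI(
    api_key=os.environ["MODELBOX_API_KEY"],  # assumed environment variable name
    base_url="https://api.model.box/v1"      # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "user", "content": "Summarize Structured Outputs in one sentence."}
    ]
)
print(response.choices[0].message.content)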

Why Choose ModelBox?

ModelBox provides a comprehensive platform for AI model integration, management, and optimization. By supporting the latest models like GPT-4o-2024-08-06, we ensure that our users have access to cutting-edge technology with the following advantages:

  • Unified API Key: Simplifies the integration of multiple LLMs, including mainstream models such as Claude 3.5 Sonnet, GPT-4o mini, and Mistral Large 2, streamlining the development process.

  • Prompt Management: Facilitates easier debugging and testing with structured outputs.

  • Analytics: Allows users to monitor usage and performance, ensuring optimal resource utilization.

  • Optimization: Enables experimentation and evaluation of different models to find the best fit for specific applications.

By integrating GPT-4o-2024-08-06, ModelBox continues to provide a robust and versatile platform for AI development, catering to the evolving needs of developers and businesses.

For more information on structured outputs and the capabilities of GPT-4o-2024-08-06, visit the OpenAI announcement and Simon Willison's Weblog.

Learn more about ModelBox

Official Website: https://www.model.box/

Models: https://app.model.box/models

Medium: https://medium.com/@modelbox

Discord: discord.gg/HCKfwFyF

Ship with ModelBox

Build, analyze and optimize your LLM workflow with the magic power of ModelBox
