OpenAI OSS 120B
gpt-oss-120b is a state-of-the-art open-weight language model from OpenAI with 117 billion total parameters built on a Mixture-of-Experts (MoE) architecture. It activates 5.1 billion parameters per token, which allows it to run efficiently on a single Nvidia H100 GPU. The model excels at complex reasoning tasks, including coding, mathematics, and health-related queries, and outperforms OpenAI's proprietary o4-mini model on several benchmarks. It supports chain-of-thought reasoning and tool use (such as web browsing and code execution), and can be fine-tuned for specific applications. Trained on a diverse text-only dataset with a focus on STEM and general knowledge, gpt-oss-120b is released under the Apache 2.0 license, permitting commercial use and customization. OpenAI emphasizes safety, reporting rigorous testing of the model against potential misuse.
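Since the model supports tool use via function calling, a request to it can declare tools in the widely used OpenAI function-calling schema. The sketch below only builds the request payload; the model name, endpoint path, and `get_weather` tool are illustrative assumptions, not details from the model card.

```python
import json

# Tool schema in the OpenAI function-calling format.
# get_weather is a hypothetical tool used for illustration.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Request body as it could be POSTed to an OpenAI-compatible
# /v1/chat/completions endpoint serving the open weights
# (e.g. via a local inference server).
request_body = {
    "model": "gpt-oss-120b",  # assumed served model name
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [weather_tool],
    "max_tokens": 32768,  # the model's maximum output tokens
}

print(json.dumps(request_body, indent=2))
```

The model can respond with a `tool_calls` message naming `get_weather` and its arguments, which the caller executes and feeds back as a `tool` role message.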
Tools: Function Calling
Community: Open Source
Context Window: 128,000 tokens
Max Output Tokens: 32,768