Meta

Llama 3.2 3B Instruct

meta-llama/llama-3.2-3b-instruct

Llama 3.2 is the latest iteration of Meta's open-source AI model family. The release includes models at four sizes: 1B, 3B, 11B, and 90B parameters. The 1B and 3B models are lightweight, multilingual, and text-only, designed for efficient deployment on mobile and edge devices. The larger 11B and 90B models are multimodal, capable of processing both text and high-resolution images.

Key features of Llama 3.2 include:

  1. Improved performance, evaluated on over 150 benchmark datasets spanning multiple languages.
  2. Multimodal capabilities in larger models for image understanding and visual reasoning.
  3. Integration with Llama Stack, providing a streamlined developer experience with support for multiple programming languages and deployment options.
  4. Enhanced support for agentic components, including tool calling, safety guardrails, and retrieval-augmented generation (a tool-calling sketch follows this list).
  5. Compatibility with various hardware platforms, including ARM, MediaTek, and Qualcomm for mobile and edge devices.
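
The tool-calling support mentioned in item 4 can be exercised through any OpenAI-compatible endpoint. The sketch below is illustrative only: it assumes the api.model.box endpoint used in the example further down forwards the standard tools parameter, and the get_weather function and its schema are hypothetical placeholders.

import json
import openai

client = openai.Client(
  api_key="{your_api_key}",
  base_url="https://api.model.box/v1",
)

# Hypothetical tool definition; any function described with JSON Schema works the same way.
tools = [
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Look up the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
      },
    },
  }
]

response = client.chat.completions.create(
  model="meta-llama/llama-3.2-3b-instruct",
  messages=[{"role": "user", "content": "What is the weather in Paris right now?"}],
  tools=tools,
)

# If the model chose to call the tool, the arguments arrive as a JSON string;
# otherwise tool_calls is None and the reply is plain text.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
  print(tool_calls[0].function.name)
  print(json.loads(tool_calls[0].function.arguments))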

Llama 3.2 has garnered significant attention, with over 350 million downloads on Hugging Face alone. It is used across industries for applications such as data privacy, productivity enhancement, contextual understanding, and addressing complex business needs. The ecosystem around Llama continues to grow, with partners like Dell, Zoom, DoorDash, and KPMG leveraging the technology for diverse use cases.

Community

Open Source

Context Window

128,000

Max Output Tokens

4,096

Using Llama 3.2 3B Instruct with the OpenAI-compatible Python API

import openai

client = openai.Client(
  api_key="{your_api_key}",
  base_url="https://api.model.box/v1",
)

response = client.chat.completions.create(
  model="meta-llama/llama-3.2-3b-instruct",
  messages=[
    {
      "role": "user",
      "content": "Introduce yourself",
    },
  ],
)

# The reply text lives in the first choice's message.
print(response.choices[0].message.content)
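
The call above returns the whole reply at once; for interactive use, a streaming variant can be more responsive. The sketch below is a minimal example assuming the api.model.box endpoint supports the standard stream and max_tokens parameters of the OpenAI API; the 4,096 cap mirrors the max output tokens listed above.

import openai

client = openai.Client(
  api_key="{your_api_key}",
  base_url="https://api.model.box/v1",
)

# Stream the reply token by token and cap it at the model's 4,096-token output limit.
stream = client.chat.completions.create(
  model="meta-llama/llama-3.2-3b-instruct",
  messages=[{"role": "user", "content": "Introduce yourself"}],
  max_tokens=4096,
  stream=True,
)

for chunk in stream:
  delta = chunk.choices[0].delta.content
  if delta:
    print(delta, end="", flush=True)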