JavaScript
    rest-api

    JavaScript OpenAI Chat Completions API Example with Fetch

    Step-by-step JavaScript OpenAI Chat Completions API example using fetch. Learn how to send chat prompts, handle errors, and test requests in Apicurl with environment-based API keys.

    Updated 2/24/2026

    Overview

    The OpenAI API powers chatbots, assistants, and content generation workflows. This example shows how to call the Chat Completions API from JavaScript using the fetch API in a secure, production-friendly way.

    You will learn how to:

    • Authenticate using an API key loaded from environment variables
    • Call the Chat Completions endpoint with a system and user message
    • Handle JSON responses and basic errors
    • Understand how streaming responses work
    • Reuse the exact same request inside Apicurl for testing and debugging

    This example uses a placeholder API key and a sample model name. Always use your own key and preferred model in practice.

    1. Prerequisites

    Before you start, make sure you have:

    • An OpenAI account and API key
    • Node.js or a browser environment with fetch
    • Apicurl open at /app if you want to test the HTTP request interactively

    2. Configure Your OpenAI API Key

    In production, you should read your OpenAI key from an environment variable. For example:

    export OPENAI_API_KEY="sk-your-openai-api-key"
    

    In Windows PowerShell:

    $Env:OPENAI_API_KEY="sk-your-openai-api-key"
    

    In JavaScript (Node.js), you can access it like this:

    const OPENAI_API_KEY = process.env.OPENAI_API_KEY || "sk-your-openai-api-key";
    

    Never commit real API keys to Git. Use environment variables or a secret manager, and treat your key like a password.
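
    Falling back to a placeholder string means a missing key only surfaces later as a 401 from the API. A small guard fails fast at startup instead. This is a minimal sketch; the requireEnv helper name is illustrative, not part of any library:

```javascript
// Minimal sketch: fail fast when a required environment variable is
// missing, instead of sending a request with a placeholder key that
// the API will reject with a 401.
// The helper name `requireEnv` is illustrative, not a library function.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage:
// const OPENAI_API_KEY = requireEnv("OPENAI_API_KEY");
```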

    3. Basic Chat Completions Request with Fetch

    The Chat Completions endpoint accepts a list of messages and returns a model-generated response. Here is a minimal JavaScript example using fetch:

    const OPENAI_API_KEY = process.env.OPENAI_API_KEY || "sk-your-openai-api-key";
    
    async function runChatCompletion() {
      const response = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${OPENAI_API_KEY}`,
          "Content-Type": "application/json"
        },
        body: JSON.stringify({
          model: "gpt-4o-mini",
          messages: [
            { role: "system", content: "You are a helpful assistant that writes concise answers." },
            { role: "user", content: "Give me three ideas for API test cases." }
          ]
        })
      });
    
      if (!response.ok) {
        console.error("OpenAI error", response.status, await response.text());
        throw new Error(`OpenAI API request failed with status ${response.status}`);
      }
    
      const data = await response.json();
      const message = data.choices?.[0]?.message?.content;
      console.log("Model reply:\n", message);
    }
    
    runChatCompletion().catch(console.error);
    

    This pattern works in Node.js and modern browsers. Be careful in the browser, though: any key shipped to an untrusted client is effectively public, so browser traffic should go through a backend you control.

    4. Handling Errors and Rate Limits

    The OpenAI API returns error information in the response body when something goes wrong. Typical issues include:

    • Missing or invalid API key (401)
    • Exceeded quota or rate limits (429)
    • Invalid model names or parameters (400)

    You can wrap your call in a small helper that throws structured errors:

    async function safeOpenAIRequest(payload) {
      const response = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${OPENAI_API_KEY}`,
          "Content-Type": "application/json"
        },
        body: JSON.stringify(payload)
      });
    
      const text = await response.text();
    
      if (!response.ok) {
        let error;
        try {
          error = JSON.parse(text);
        } catch {
          error = { message: text };
        }
    
        console.error("OpenAI request failed", {
          status: response.status,
          error
        });
    
        throw new Error(`OpenAI request failed with status ${response.status}`);
      }
    
      return JSON.parse(text);
    }
    

    This helper ensures you always see meaningful error details in logs.
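
    For 429 responses specifically, a common pattern is to retry with exponential backoff rather than fail immediately. The sketch below is a hypothetical helper, not part of the OpenAI API: it takes any async function that returns a fetch-style Response and retries while the status is 429.

```javascript
// Hedged sketch: retry a request on 429 (rate limited) with exponential
// backoff. `requestFn` is any async function returning a fetch-style
// Response object; the helper name and delay values are illustrative.
// In production you may prefer to honor the Retry-After response
// header when the API sends one.
async function withRetry(requestFn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const response = await requestFn();
    if (response.status !== 429 || attempt === retries) {
      return response;
    }
    // Exponential backoff: 500 ms, 1000 ms, 2000 ms, ...
    const delay = baseDelayMs * 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}

// Usage:
// const response = await withRetry(() => safeOpenAIRequest(payload));
```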

    5. Streaming Responses (Conceptual Overview)

    The OpenAI API also supports streaming responses via server-sent events (SSE). You enable this by setting stream: true in the request body and reading the response as a stream.

    In Node.js, you would typically:

    • Call the same endpoint with stream: true
    • Read chunks from response.body
    • Parse data: lines as they arrive

    The exact implementation varies between runtimes, but the underlying HTTP request (URL, headers, body shape) is the same. You can still design and test your base request in Apicurl before writing the streaming code.
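
    As a rough sketch of the parsing step: streamed chunks arrive as "data:" lines, each carrying either a JSON payload with a choices[0].delta.content fragment or the literal "[DONE]" terminator. The extractDeltas helper below is illustrative and assumes that shape; real stream handling also has to buffer partial lines across network chunks.

```javascript
// Hedged sketch: extract text deltas from Chat Completions SSE lines.
// Assumes each complete event is a line of the form
//   data: {"choices":[{"delta":{"content":"..."}}]}
// terminated by the sentinel line
//   data: [DONE]
// A production reader must also buffer partial lines across chunks
// read from response.body.
function extractDeltas(sseText) {
  const deltas = [];
  for (const line of sseText.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") break;
    const chunk = JSON.parse(payload);
    const content = chunk.choices?.[0]?.delta?.content;
    if (content) deltas.push(content);
  }
  return deltas;
}
```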

    6. Using This Example in Apicurl

    To test this request in Apicurl:

    1. Open /examples/javascript/rest-api/javascript-openai-chat-api.
    2. Click "Try It in Apicurl".
    3. The configured POST request will load into the Apicurl app.
    4. Replace the placeholder sk-your-openai-api-key with your real OpenAI API key.
    5. Send the request and inspect the JSON response, headers, and error cases.

    You can then:

    • Save the request into a collection for your AI integrations.
    • Duplicate it for different prompts, models, or system instructions.
    • Generate code for other languages directly from Apicurl.

    7. Best Practices Recap

    When calling the OpenAI Chat API from JavaScript:

    • Always load your API key from environment variables or a secret manager.
    • Use HTTPS and the official endpoint (https://api.openai.com/v1/chat/completions).
    • Log errors with both status code and body for easier debugging.
    • Design and test your base payload in Apicurl before wiring up streaming.
    • Avoid calling the API directly from untrusted frontends; proxy through a backend when possible.

    Following these patterns gives you a clean, composable OpenAI integration that you can evolve as your use cases grow.

    Related Topics

    javascript
    openai api
    chat completions
    llm
    api authentication
    bearer token
    streaming api
    api testing
    apicurl

    Ready to test APIs like a pro?

    Apicurl is a free, powerful API testing tool.