Extending GPT with External Tools: How AI Can Call APIs for Smarter Answers

When working with OpenAI models, you might face situations where the model cannot directly provide real-time or domain-specific answers — such as looking up the weather, querying a database, or controlling a device.

This is where tool integration (also known as function calling or tool use) comes into play. Instead of relying only on the model’s built-in knowledge, you can connect it with your own external programs or APIs. The model acts like the brain, while your tools act like the hands that execute real operations.

How it Works

  1. User asks a question
    For example: “What’s the weather in Taipei tomorrow?”
  2. Model decides it needs a tool
    The model identifies that answering requires calling a weather function, not just generating text.
  3. Your backend runs the tool
    The model outputs the function name and parameters, e.g., { "city": "Taipei", "date": "2025-09-17" }. Your system then executes get_weather(city, date) by calling a weather API.
  4. Return results to the model
    Suppose the API returns: { "temp": 28, "condition": "Partly Cloudy" }
  5. Model crafts the final response
    The model reformulates the raw tool output into natural language: “The forecast for Taipei tomorrow is partly cloudy with a high around 28°C.”
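Step 3 is the part you implement yourself. Here is a minimal sketch of what that backend function might look like; the actual weather-service call is stubbed out with a hardcoded result, since the real HTTP request depends on whichever weather API you use:

```python
# A minimal sketch of the backend tool from step 3.
# The weather lookup is stubbed; replace it with a real API call.
def fetch_forecast(city: str, date: str) -> dict:
    # Placeholder for an HTTP request to your weather provider (illustrative only)
    return {"temp": 28, "condition": "Partly Cloudy"}

def get_weather(city: str, date: str) -> dict:
    """Run the tool the model requested and return a JSON-serializable result."""
    forecast = fetch_forecast(city, date)
    return {"city": city, "date": date, **forecast}

result = get_weather("Taipei", "2025-09-17")
print(result)  # {'city': 'Taipei', 'date': '2025-09-17', 'temp': 28, 'condition': 'Partly Cloudy'}
```

Whatever the function returns must be serializable (typically to JSON), because the result is sent back to the model as plain text in step 4.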

Why It Matters

This approach turns OpenAI into a central reasoning engine that can:

  • Query live data sources (databases, APIs, logs)
  • Control hardware or IoT devices
  • Extend AI with custom business logic
  • Keep responses accurate and updated

Instead of hardcoding knowledge in prompts, you give the model tools it can use on demand.

Example: OpenAI Tool Integration with Python

import json

from openai import OpenAI

client = OpenAI()

# Define an external tool
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Retrieve weather information for a given city and date",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                    "date": {"type": "string", "description": "Date in YYYY-MM-DD format"}
                },
                "required": ["city", "date"]
            }
        }
    }
]

# User asks a question
user_message = {"role": "user", "content": "What's the weather in Taipei tomorrow?"}
print("🧑 User:", user_message["content"])

# Send the request to GPT
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[user_message],
    tools=tools
)

message = response.choices[0].message
print("🤖 GPT (raw):", message)

# Check if GPT requested a tool
tool_calls = message.tool_calls
if tool_calls:
    fn_name = tool_calls[0].function.name
    # Note: `arguments` arrives as a JSON string; parse it with
    # json.loads(args) before passing it to your real function.
    args = tool_calls[0].function.arguments
    print(f"🤖 GPT requested tool → {fn_name} with args {args}")

    # Simulate running the tool
    tool_result = {"temp": 28, "condition": "Partly Cloudy"}
    print("🔧 Tool result:", tool_result)

    # Send the tool result back to GPT as a "tool" message
    followup = client.chat.completions.create(
        model="gpt-4.1",
        messages=[
            user_message,
            message,  # the assistant message that contains the tool call
            {"role": "tool", "tool_call_id": tool_calls[0].id, "content": json.dumps(tool_result)}
        ]
    )

    final_msg = followup.choices[0].message
    print("🤖 GPT (final):", final_msg.content)

📝 How this works

  1. Define your tool (get_weather) so the model knows how it can be used.
  2. The user asks a question.
  3. The model responds with a request to call the tool.
  4. Your backend executes the function (calls an API, queries a DB, etc.).
  5. The tool result is returned to the model, which reformulates it into a human-friendly response.

👉 You can replace get_weather with your own functions — for example:

  • query_database(customer_id)
  • run_camera_inspection(image_path)
  • control_robotic_arm(position)
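When you register several tools, a common pattern is a dispatch table mapping each declared tool name to a Python callable, so the handler code stays generic. A minimal sketch, using the hypothetical functions above as stubs:

```python
import json

# Stub implementations for illustration; replace with your real business logic
def get_weather(city, date):
    return {"temp": 28, "condition": "Partly Cloudy"}

def query_database(customer_id):
    return {"customer_id": customer_id, "status": "active"}

# Dispatch table: tool name (as declared in `tools`) → Python callable
TOOL_REGISTRY = {
    "get_weather": get_weather,
    "query_database": query_database,
}

def run_tool(fn_name: str, raw_arguments: str) -> str:
    """Look up the requested tool, call it, and return a JSON string for the model."""
    fn = TOOL_REGISTRY[fn_name]
    args = json.loads(raw_arguments)  # the model sends arguments as a JSON string
    return json.dumps(fn(**args))

print(run_tool("get_weather", '{"city": "Taipei", "date": "2025-09-17"}'))
```

The string that run_tool returns is exactly what you would put in the "content" field of the "tool" message sent back to the model.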
Running the script produces output like this:

🧑 User: What's the weather in Taipei tomorrow?
🤖 GPT (raw): tool_calls=[{'id': 'call_abc123', 'function': {'name': 'get_weather', 'arguments': '{"city":"Taipei","date":"2025-09-17"}'}}]
🤖 GPT requested tool → get_weather with args {"city":"Taipei","date":"2025-09-17"}
🔧 Tool result: {'temp': 28, 'condition': 'Partly Cloudy'}
🤖 GPT (final): The forecast for Taipei tomorrow is partly cloudy with a high of about 28°C.

The full conversation, including the tool call and its result, looks like the following:

[
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "What's the weather in Taipei tomorrow?"},
  {"role": "assistant", "content": null, "tool_calls": [{"id": "call_abc123", "type": "function", "function": {"name": "get_weather", "arguments": "{\"city\": \"Taipei\", \"date\": \"2025-09-17\"}"}}]},
  {"role": "tool", "tool_call_id": "call_abc123", "content": "{\"temp\": 28, \"condition\": \"Partly Cloudy\"}"},
  {"role": "assistant", "content": "The forecast for Taipei tomorrow is partly cloudy with a high of about 28°C."}
]

Reference: OpenAI
