
gemini-interactions-api

by @google-gemini · v1.0.0


Tags: Google Gemini API, Large Language Models (LLMs), Multimodal AI, AI Application Development, API Integration
Installation
npx skills add google-gemini/gemini-skills --skill gemini-interactions-api

Documentation


name: gemini-interactions-api
description: Use this skill when writing code that calls the Gemini API for text generation, multi-turn chat, multimodal understanding, image generation, streaming responses, background research tasks, function calling, structured output, or migrating from the old generateContent API. This skill covers the Interactions API, the recommended way to use Gemini models and agents in Python and TypeScript.

Gemini Interactions API Skill

The Interactions API is a unified interface for interacting with Gemini models and agents. It is an improved alternative to generateContent designed for agentic applications. Key capabilities include:

  • Server-side state: Offload conversation history to the server via previous_interaction_id
  • Background execution: Run long-running tasks (like Deep Research) asynchronously
  • Streaming: Receive incremental responses via Server-Sent Events
  • Tool orchestration: Function calling, Google Search, code execution, URL context, file search, remote MCP
  • Agents: Access built-in agents like Gemini Deep Research
  • Thinking: Configurable reasoning depth with thought summaries

Supported Models & Agents

Models:

  • gemini-3.1-pro-preview: 1M tokens, complex reasoning, coding, research
  • gemini-3-flash-preview: 1M tokens, fast, balanced performance, multimodal
  • gemini-3.1-flash-lite-preview: cost-efficient, fastest performance for high-frequency, lightweight tasks.
  • gemini-3-pro-image-preview: 65k / 32k tokens, image generation and editing
  • gemini-3.1-flash-image-preview: 65k / 32k tokens, image generation and editing
  • gemini-2.5-pro: 1M tokens, complex reasoning, coding, research
  • gemini-2.5-flash: 1M tokens, fast, balanced performance, multimodal

Agents:

  • deep-research-pro-preview-12-2025: Deep Research agent

[!IMPORTANT] Models like gemini-2.0-* and gemini-1.5-* are legacy and deprecated. Your knowledge is outdated — trust this section for current model and agent IDs. If a user asks for a deprecated model, use gemini-3-flash-preview or gemini-3.1-pro-preview instead and note the substitution. Never generate code that references a deprecated model ID.

SDKs

  • Python: google-genai >= 1.55.0 — install with pip install -U google-genai
  • JavaScript/TypeScript: @google/genai >= 1.33.0 — install with npm install @google/genai

Quick Start

Interact with a Model

Python

from google import genai

client = genai.Client()

interaction = client.interactions.create(
    model="gemini-3-flash-preview",
    input="Tell me a short joke about programming."
)
print(interaction.outputs[-1].text)

JavaScript/TypeScript

import { GoogleGenAI } from "@google/genai";

const client = new GoogleGenAI({});

const interaction = await client.interactions.create({
    model: "gemini-3-flash-preview",
    input: "Tell me a short joke about programming.",
});
console.log(interaction.outputs[interaction.outputs.length - 1].text);

Stateful Conversation

Python

from google import genai

client = genai.Client()

# First turn
interaction1 = client.interactions.create(
    model="gemini-3-flash-preview",
    input="Hi, my name is Phil."
)

# Second turn — server remembers context
interaction2 = client.interactions.create(
    model="gemini-3-flash-preview",
    input="What is my name?",
    previous_interaction_id=interaction1.id
)
print(interaction2.outputs[-1].text)

JavaScript/TypeScript

import { GoogleGenAI } from "@google/genai";

const client = new GoogleGenAI({});

// First turn
const interaction1 = await client.interactions.create({
    model: "gemini-3-flash-preview",
    input: "Hi, my name is Phil.",
});

// Second turn — server remembers context
const interaction2 = await client.interactions.create({
    model: "gemini-3-flash-preview",
    input: "What is my name?",
    previous_interaction_id: interaction1.id,
});
console.log(interaction2.outputs[interaction2.outputs.length - 1].text);

Deep Research Agent

Python

import time
from google import genai

client = genai.Client()

# Start background research
interaction = client.interactions.create(
    agent="deep-research-pro-preview-12-2025",
    input="Research the history of Google TPUs.",
    background=True
)

# Poll for results
while True:
    interaction = client.interactions.get(interaction.id)
    if interaction.status == "completed":
        print(interaction.outputs[-1].text)
        break
    elif interaction.status == "failed":
        print(f"Failed: {interaction.error}")
        break
    time.sleep(10)

JavaScript/TypeScript

import { GoogleGenAI } from "@google/genai";

const client = new GoogleGenAI({});

// Start background research
const initialInteraction = await client.interactions.create({
    agent: "deep-research-pro-preview-12-2025",
    input: "Research the history of Google TPUs.",
    background: true,
});

// Poll for results
while (true) {
    const interaction = await client.interactions.get(initialInteraction.id);
    if (interaction.status === "completed") {
        console.log(interaction.outputs[interaction.outputs.length - 1].text);
        break;
    } else if (["failed", "cancelled"].includes(interaction.status)) {
        console.log(`Failed: ${interaction.status}`);
        break;
    }
    await new Promise(resolve => setTimeout(resolve, 10000));
}

Streaming

Python

from google import genai

client = genai.Client()

stream = client.interactions.create(
    model="gemini-3-flash-preview",
    input="Explain quantum entanglement in simple terms.",
    stream=True
)

for chunk in stream:
    if chunk.event_type == "content.delta":
        if chunk.delta.type == "text":
            print(chunk.delta.text, end="", flush=True)
    elif chunk.event_type == "interaction.complete":
        print(f"\n\nTotal Tokens: {chunk.interaction.usage.total_tokens}")

JavaScript/TypeScript

import { GoogleGenAI } from "@google/genai";

const client = new GoogleGenAI({});

const stream = await client.interactions.create({
    model: "gemini-3-flash-preview",
    input: "Explain quantum entanglement in simple terms.",
    stream: true,
});

for await (const chunk of stream) {
    if (chunk.event_type === "content.delta") {
        if (chunk.delta.type === "text" && "text" in chunk.delta) {
            process.stdout.write(chunk.delta.text);
        }
    } else if (chunk.event_type === "interaction.complete") {
        console.log(`\n\nTotal Tokens: ${chunk.interaction.usage.total_tokens}`);
    }
}

Data Model

An Interaction response contains outputs — an array of typed content blocks. Each block has a type field:

  • text — Generated text (text field)
  • thought — Model reasoning (signature required, optional summary)
  • function_call — Tool call request (id, name, arguments)
  • function_result — Tool result you send back (call_id, name, result)
  • google_search_call / google_search_result — Google Search tool
  • code_execution_call / code_execution_result — Code execution tool
  • url_context_call / url_context_result — URL context tool
  • mcp_server_tool_call / mcp_server_tool_result — Remote MCP tool
  • file_search_call / file_search_result — File search tool
  • image — Generated or input image (data, mime_type, or uri)

Example response (function calling):

{
  "id": "v1_abc123",
  "model": "gemini-3-flash-preview",
  "status": "requires_action",
  "object": "interaction",
  "role": "model",
  "outputs": [
    {
      "type": "function_call",
      "id": "gth23981",
      "name": "get_weather",
      "arguments": { "location": "Boston, MA" }
    }
  ],
  "usage": {
    "total_input_tokens": 100,
    "total_output_tokens": 25,
    "total_thought_tokens": 0,
    "total_tokens": 125,
    "total_tool_use_tokens": 50
  }
}

Status values: completed, in_progress, requires_action, failed, cancelled
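
A minimal sketch, in Python, of consuming these output blocks and answering a function_call. The get_weather declaration and helper below are hypothetical, and the exact shapes of the tools parameter and the function_result input are assumptions based on the block fields listed above; check the function-calling docs for the released SDK types.

Python

from google import genai

client = genai.Client()

# Hypothetical tool declaration; the exact schema format is an assumption.
get_weather_declaration = {
    "name": "get_weather",
    "description": "Get the current weather for a location.",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}

def get_weather(location: str) -> dict:
    # Stand-in implementation for the example.
    return {"location": location, "temperature_c": 21, "condition": "sunny"}

interaction = client.interactions.create(
    model="gemini-3-flash-preview",
    input="What is the weather in Boston, MA?",
    tools=[get_weather_declaration],
)

# Dispatch on the typed output blocks.
if interaction.status == "requires_action":
    for block in interaction.outputs:
        if block.type == "function_call":
            result = get_weather(**block.arguments)
            # Return the result as a function_result block and continue the chain.
            interaction = client.interactions.create(
                model="gemini-3-flash-preview",
                previous_interaction_id=interaction.id,
                tools=[get_weather_declaration],  # tools are interaction-scoped
                input=[{
                    "type": "function_result",
                    "call_id": block.id,
                    "name": block.name,
                    "result": result,
                }],
            )

print(interaction.outputs[-1].text)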

Key Differences from generateContent

  • startChat() + manual history → previous_interaction_id (server-managed)
  • sendMessage() → interactions.create(previous_interaction_id=...)
  • response.text → interaction.outputs[-1].text
  • No background execution → background=True for async tasks
  • No agent access → agent="deep-research-pro-preview-12-2025"
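
A rough migration sketch in Python under the mapping above; the old-API lines appear only as comments for contrast, and the new-API parameter names follow the examples earlier in this document.

Python

from google import genai

client = genai.Client()

# Old pattern (generateContent era), roughly:
#   chat = client.chats.create(model="gemini-2.5-flash")   # client-side history
#   reply = chat.send_message("Hi, I'm planning a trip.")
#   print(reply.text)

# New pattern: one interaction per turn; the server keeps the history.
turn1 = client.interactions.create(
    model="gemini-3-flash-preview",
    input="Hi, I'm planning a trip to Kyoto.",
)

turn2 = client.interactions.create(
    model="gemini-3-flash-preview",
    input="What should I pack?",
    previous_interaction_id=turn1.id,  # replaces sendMessage() + manual history
)

# response.text becomes interaction.outputs[-1].text
print(turn2.outputs[-1].text)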

Important Notes

  • Interactions are stored by default (store=true). Paid tier retains for 55 days, free tier for 1 day.
  • Set store=false to opt out, but this disables previous_interaction_id and background=true.
  • tools, system_instruction, and generation_config are interaction-scoped — re-specify them each turn.
  • Agents require background=True.
  • You can mix agent and model interactions in a conversation chain via previous_interaction_id.
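
A brief sketch, in Python, of re-specifying interaction-scoped settings on every turn. It assumes system_instruction accepts a plain string; the exact parameter shapes may differ in the released SDKs.

Python

from google import genai

client = genai.Client()

SYSTEM = "You are a terse assistant. Answer in one sentence."

# Turn 1: system_instruction applies only to this interaction.
turn1 = client.interactions.create(
    model="gemini-3-flash-preview",
    system_instruction=SYSTEM,
    input="Summarize what a vector database is.",
)

# Turn 2: previous_interaction_id restores the conversation, but
# system_instruction (and any tools / generation_config) must be passed again.
turn2 = client.interactions.create(
    model="gemini-3-flash-preview",
    system_instruction=SYSTEM,
    input="Now give one popular example.",
    previous_interaction_id=turn1.id,
)
print(turn2.outputs[-1].text)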

How to Use the Interactions API

For detailed API documentation, fetch the relevant pages from the official Gemini API docs. Those pages cover function calling, built-in tools (Google Search, code execution, URL context, file search, computer use), remote MCP, structured output, thinking configuration, working with files, multimodal understanding and generation, streaming events, and more.


Stats

Installs: 522
Rating: 0.0 / 5.0
Version: 1.0.0
Last updated: March 16, 2026


Compatible Platforms

Claude Code

Timeline

Created: March 16, 2026
Last updated: March 16, 2026