OpenAI compatibility

Gemini models are accessible using the OpenAI libraries (Python and TypeScript/JavaScript) along with the REST API, by updating three lines of code and using your Gemini API key. If you are not already using the OpenAI libraries, we recommend that you call the Gemini API directly.

Python

from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Explain to me how AI works"
        }
    ]
)

print(response.choices[0].message)

JavaScript

import OpenAI from "openai";

const openai = new OpenAI({
    apiKey: "GEMINI_API_KEY",
    baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});

const response = await openai.chat.completions.create({
    model: "gemini-2.0-flash",
    messages: [
        { role: "system", content: "You are a helpful assistant." },
        {
            role: "user",
            content: "Explain to me how AI works",
        },
    ],
});

console.log(response.choices[0].message);

REST

curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
    "model": "gemini-2.0-flash",
    "messages": [
        {"role": "user", "content": "Explain to me how AI works"}
    ]
}'

Only three lines change:

  • api_key="GEMINI_API_KEY": Replace "GEMINI_API_KEY" with your actual Gemini API key, which you can get in Google AI Studio.

  • base_url="https://generativelanguage.googleapis.com/v1beta/openai/": This tells the OpenAI library to send requests to the Gemini API endpoint instead of the default URL.

  • model="gemini-2.0-flash": Choose a compatible Gemini model.

Thinking

Gemini 2.5 models are trained to think through complex problems, leading to significantly improved reasoning. The Gemini API provides a "thinking budget" parameter that gives fine-grained control over how much the model thinks.

Unlike the Gemini API, the OpenAI API offers three levels of thinking control ("low", "medium", and "high"), which map to 1,024, 8,192, and 24,576 tokens, respectively.

To disable thinking, set reasoning_effort to "none" (note that reasoning cannot be turned off for 2.5 Pro models).

Python

from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

response = client.chat.completions.create(
    model="gemini-2.5-flash",
    reasoning_effort="low",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Explain to me how AI works"
        }
    ]
)

print(response.choices[0].message)

JavaScript

import OpenAI from "openai";

const openai = new OpenAI({
    apiKey: "GEMINI_API_KEY",
    baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});

const response = await openai.chat.completions.create({
    model: "gemini-2.5-flash",
    reasoning_effort: "low",
    messages: [
        { role: "system", content: "You are a helpful assistant." },
        {
            role: "user",
            content: "Explain to me how AI works",
        },
    ],
});

console.log(response.choices[0].message);

REST

curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
    "model": "gemini-2.5-flash",
    "reasoning_effort": "low",
    "messages": [
        {"role": "user", "content": "Explain to me how AI works"}
    ]
}'

Gemini thinking models also produce thought summaries and can use exact thinking budgets. You can use the extra_body field to include these fields in your request.

Note that reasoning_effort and thinking_budget overlap in functionality, so they can't be used at the same time.

Python

from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

response = client.chat.completions.create(
    model="gemini-2.5-flash",
    messages=[{"role": "user", "content": "Explain to me how AI works"}],
    extra_body={
      'extra_body': {
        "google": {
          "thinking_config": {
            "thinking_budget": 800,
            "include_thoughts": True
          }
        }
      }
    }
)

print(response.choices[0].message)

JavaScript

import OpenAI from "openai";

const openai = new OpenAI({
    apiKey: "GEMINI_API_KEY",
    baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});

const response = await openai.chat.completions.create({
    model: "gemini-2.5-flash",
    messages: [{ role: "user", content: "Explain to me how AI works" }],
    extra_body: {
      "google": {
        "thinking_config": {
          "thinking_budget": 800,
          "include_thoughts": true
        }
      }
    }
});

console.log(response.choices[0].message);

REST

curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
    "model": "gemini-2.5-flash",
    "messages": [{"role": "user", "content": "Explain to me how AI works"}],
    "extra_body": {
      "google": {
        "thinking_config": {
          "include_thoughts": true
        }
      }
    }
}'

Streaming

The Gemini API supports streaming responses.

Python

from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

response = client.chat.completions.create(
  model="gemini-2.0-flash",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  stream=True
)

for chunk in response:
    print(chunk.choices[0].delta)

JavaScript

import OpenAI from "openai";

const openai = new OpenAI({
    apiKey: "GEMINI_API_KEY",
    baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: "gemini-2.0-flash",
    messages: [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ],
    stream: true,
  });

  for await (const chunk of completion) {
    console.log(chunk.choices[0].delta.content);
  }
}

main();

REST

curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
    "model": "gemini-2.0-flash",
    "messages": [
        {"role": "user", "content": "Explain to me how AI works"}
    ],
    "stream": true
}'
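Each streamed chunk carries only a fragment of the reply in `delta.content`, so client code typically accumulates the fragments. A minimal sketch of that loop, using plain dicts in place of the SDK's chunk objects (the shapes here are illustrative):

```python
def accumulate(chunks):
    """Join the delta.content fragments from a stream into the full reply."""
    parts = []
    for chunk in chunks:
        # Some chunks (e.g. role or finish markers) carry no content.
        content = chunk["choices"][0]["delta"].get("content")
        if content:
            parts.append(content)
    return "".join(parts)

# Stand-ins for the chunks yielded by a streaming response.
fake_chunks = [
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {}}]},
    {"choices": [{"delta": {"content": "lo!"}}]},
]
print(accumulate(fake_chunks))  # Hello!
```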

Function calling

Function calling makes it easier for you to get structured data outputs from generative models, and it is supported in the Gemini API.

Python

from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

tools = [
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. Chicago, IL",
          },
          "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
      },
    }
  }
]

messages = [{"role": "user", "content": "What's the weather like in Chicago today?"}]
response = client.chat.completions.create(
  model="gemini-2.0-flash",
  messages=messages,
  tools=tools,
  tool_choice="auto"
)

print(response)

JavaScript

import OpenAI from "openai";

const openai = new OpenAI({
    apiKey: "GEMINI_API_KEY",
    baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});

async function main() {
  const messages = [{"role": "user", "content": "What's the weather like in Chicago today?"}];
  const tools = [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get the weather in a given location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state, e.g. Chicago, IL",
              },
              "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
          },
        }
      }
  ];

  const response = await openai.chat.completions.create({
    model: "gemini-2.0-flash",
    messages: messages,
    tools: tools,
    tool_choice: "auto",
  });

  console.log(response);
}

main();

REST

curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
  "model": "gemini-2.0-flash",
  "messages": [
    {
      "role": "user",
      "content": "What'\''s the weather like in Chicago today?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and state, e.g. Chicago, IL"
            },
            "unit": {
              "type": "string",
              "enum": ["celsius", "fahrenheit"]
            }
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}'
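The examples above stop once the model returns a tool call; completing the loop means running the named function yourself and sending the result back as a `tool`-role message in a follow-up request. A sketch under the assumption that tool calls follow the standard OpenAI shape (plain dicts are used for illustration; in the SDK they arrive as objects under `response.choices[0].message.tool_calls`, and `get_weather` here is a hypothetical local stub):

```python
import json

def get_weather(location, unit="celsius"):
    # Hypothetical stub; real code would query a weather service.
    return {"location": location, "temperature": 21, "unit": unit}

def handle_tool_calls(tool_calls, messages):
    """Execute each requested tool and append the 'tool' messages
    to send back in the next chat.completions.create call."""
    available = {"get_weather": get_weather}
    for call in tool_calls:
        fn = available[call["function"]["name"]]
        # The function arguments arrive as a JSON-encoded string.
        args = json.loads(call["function"]["arguments"])
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(fn(**args)),
        })
    return messages

# Illustrative tool call, mirroring the model's response shape.
fake_call = {
    "id": "call_1",
    "function": {"name": "get_weather",
                 "arguments": '{"location": "Chicago, IL"}'},
}
messages = handle_tool_calls([fake_call], [])
print(messages[0]["role"])  # tool
```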

Image understanding

Gemini models are natively multimodal and provide best-in-class performance on many common vision tasks.

Python

import base64
from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

# Function to encode the image
def encode_image(image_path):
  with open(image_path, "rb") as image_file:
    return base64.b64encode(image_file.read()).decode('utf-8')

# Getting the base64 string
base64_image = encode_image("Path/to/agi/image.jpeg")

response = client.chat.completions.create(
  model="gemini-2.0-flash",
  messages=[
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image?",
        },
        {
          "type": "image_url",
          "image_url": {
            "url": f"data:image/jpeg;base64,{base64_image}"
          },
        },
      ],
    }
  ],
)

print(response.choices[0])

JavaScript

import OpenAI from "openai";
import fs from 'fs/promises';

const openai = new OpenAI({
  apiKey: "GEMINI_API_KEY",
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});

async function encodeImage(imagePath) {
  try {
    const imageBuffer = await fs.readFile(imagePath);
    return imageBuffer.toString('base64');
  } catch (error) {
    console.error("Error encoding image:", error);
    return null;
  }
}

async function main() {
  const imagePath = "Path/to/agi/image.jpeg";
  const base64Image = await encodeImage(imagePath);

  const messages = [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image?",
        },
        {
          "type": "image_url",
          "image_url": {
            "url": `data:image/jpeg;base64,${base64Image}`
          },
        },
      ],
    }
  ];

  try {
    const response = await openai.chat.completions.create({
      model: "gemini-2.0-flash",
      messages: messages,
    });

    console.log(response.choices[0]);
  } catch (error) {
    console.error("Error calling Gemini API:", error);
  }
}

main();

REST

bash -c '
  base64_image=$(base64 -i "Path/to/agi/image.jpeg");
  curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer GEMINI_API_KEY" \
    -d "{
      \"model\": \"gemini-2.0-flash\",
      \"messages\": [
        {
          \"role\": \"user\",
          \"content\": [
            { \"type\": \"text\", \"text\": \"What is in this image?\" },
            {
              \"type\": \"image_url\",
              \"image_url\": { \"url\": \"data:image/jpeg;base64,${base64_image}\" }
            }
          ]
        }
      ]
    }"
'
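In all three examples the image travels inline as a base64 `data:` URL. A small helper, hypothetical but matching the format used above, makes the encoding step explicit:

```python
import base64

def to_data_url(image_bytes, mime_type="image/jpeg"):
    """Build the data: URL string expected by the image_url content part."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime_type};base64,{encoded}"

# Two illustrative bytes; in real use, pass the image file's bytes.
print(to_data_url(b"\x00\x01"))  # data:image/jpeg;base64,AAE=
```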

Image generation

Generate an image:

Python

import base64
from openai import OpenAI
from PIL import Image
from io import BytesIO

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

response = client.images.generate(
    model="imagen-3.0-generate-002",
    prompt="a portrait of a sheepadoodle wearing a cape",
    response_format='b64_json',
    n=1,
)

for image_data in response.data:
  image = Image.open(BytesIO(base64.b64decode(image_data.b64_json)))
  image.show()

JavaScript

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: "GEMINI_API_KEY",
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});

async function main() {
  const image = await openai.images.generate(
    {
      model: "imagen-3.0-generate-002",
      prompt: "a portrait of a sheepadoodle wearing a cape",
      response_format: "b64_json",
      n: 1,
    }
  );

  console.log(image.data);
}

main();

REST

curl "https://generativelanguage.googleapis.com/v1beta/openai/images/generations" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer GEMINI_API_KEY" \
  -d '{
        "model": "imagen-3.0-generate-002",
        "prompt": "a portrait of a sheepadoodle wearing a cape",
        "response_format": "b64_json",
        "n": 1
      }'

Audio understanding

Analyze audio input:

Python

import base64
from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

with open("/path/to/your/audio/file.wav", "rb") as audio_file:
  base64_audio = base64.b64encode(audio_file.read()).decode('utf-8')

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Transcribe this audio",
        },
        {
          "type": "input_audio",
          "input_audio": {
            "data": base64_audio,
            "format": "wav"
          }
        }
      ],
    }
  ],
)

print(response.choices[0].message.content)

JavaScript

import fs from "fs";
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "GEMINI_API_KEY",
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});

const audioFile = fs.readFileSync("/path/to/your/audio/file.wav");
const base64Audio = Buffer.from(audioFile).toString("base64");

async function main() {
  const response = await client.chat.completions.create({
    model: "gemini-2.0-flash",
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "Transcribe this audio",
          },
          {
            type: "input_audio",
            input_audio: {
              data: base64Audio,
              format: "wav",
            },
          },
        ],
      },
    ],
  });

  console.log(response.choices[0].message.content);
}

main();

REST

bash -c '
  base64_audio=$(base64 -i "/path/to/your/audio/file.wav");
  curl "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer GEMINI_API_KEY" \
    -d "{
      \"model\": \"gemini-2.0-flash\",
      \"messages\": [
        {
          \"role\": \"user\",
          \"content\": [
            { \"type\": \"text\", \"text\": \"Transcribe this audio file.\" },
            {
              \"type\": \"input_audio\",
              \"input_audio\": {
                \"data\": \"${base64_audio}\",
                \"format\": \"wav\"
              }
            }
          ]
        }
      ]
    }"
'

Structured output

Gemini models can output JSON objects in any structure you define.

Python

from pydantic import BaseModel
from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

completion = client.beta.chat.completions.parse(
    model="gemini-2.0-flash",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "John and Susan are going to an AI conference on Friday."},
    ],
    response_format=CalendarEvent,
)

print(completion.choices[0].message.parsed)

JavaScript

import OpenAI from "openai";
import { zodResponseFormat } from "openai/helpers/zod";
import { z } from "zod";

const openai = new OpenAI({
    apiKey: "GEMINI_API_KEY",
    baseURL: "https://generativelanguage.googleapis.com/v1beta/openai"
});

const CalendarEvent = z.object({
  name: z.string(),
  date: z.string(),
  participants: z.array(z.string()),
});

const completion = await openai.chat.completions.parse({
  model: "gemini-2.0-flash",
  messages: [
    { role: "system", content: "Extract the event information." },
    { role: "user", content: "John and Susan are going to an AI conference on Friday" },
  ],
  response_format: zodResponseFormat(CalendarEvent, "event"),
});

const event = completion.choices[0].message.parsed;
console.log(event);

Embeddings

Text embeddings measure the relatedness of text strings and can be generated using the Gemini API.

Python

from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

response = client.embeddings.create(
    input="Your text string goes here",
    model="gemini-embedding-001"
)

print(response.data[0].embedding)

JavaScript

import OpenAI from "openai";

const openai = new OpenAI({
    apiKey: "GEMINI_API_KEY",
    baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/"
});

async function main() {
  const embedding = await openai.embeddings.create({
    model: "gemini-embedding-001",
    input: "Your text string goes here",
  });

  console.log(embedding);
}

main();

REST

curl "https://generativelanguage.googleapis.com/v1beta/openai/embeddings" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer GEMINI_API_KEY" \
-d '{
    "input": "Your text string goes here",
    "model": "gemini-embedding-001"
  }'
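Relatedness between two strings is usually scored as the cosine similarity of their embedding vectors. A self-contained sketch; the short vectors here stand in for real `response.data[i].embedding` values:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Stand-ins for embedding vectors returned by client.embeddings.create.
a = [1.0, 0.0, 1.0]
b = [1.0, 1.0, 0.0]
print(round(cosine_similarity(a, b), 3))  # 0.5
```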

extra_body

There are several features supported by Gemini that are not available in OpenAI models but can be enabled using the extra_body field.

extra_body features

  • safety_settings: Corresponds to Gemini's SafetySetting.
  • cached_content: Corresponds to Gemini's GenerateContentRequest.cached_content.
  • thinking_config: Corresponds to Gemini's ThinkingConfig.
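For example, safety_settings takes the same category and threshold fields as Gemini's SafetySetting, nested under the "google" key like the cached_content example below. A hedged sketch of the payload shape (the category and threshold values shown are illustrative; check the Gemini safety-settings reference for the full list):

```python
# Built separately for clarity; passed as
# client.chat.completions.create(..., extra_body=safety_extra_body)
safety_extra_body = {
    "extra_body": {
        "google": {
            "safety_settings": [
                {
                    "category": "HARM_CATEGORY_HARASSMENT",
                    "threshold": "BLOCK_ONLY_HIGH",
                }
            ]
        }
    }
}
print(safety_extra_body["extra_body"]["google"]["safety_settings"][0]["threshold"])
```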

cached_content

Here is an example of using extra_body to set cached_content:

Python

from openai import OpenAI

client = OpenAI(
    api_key=MY_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

stream = client.chat.completions.create(
    model="gemini-2.5-pro",
    n=1,
    messages=[
        {
            "role": "user",
            "content": "Summarize the video"
        }
    ],
    stream=True,
    stream_options={'include_usage': True},
    extra_body={
        'extra_body':
        {
            'google': {
              'cached_content': "cachedContents/0000aaaa1111bbbb2222cccc3333dddd4444eeee"
            }
        }
    }
)

for chunk in stream:
    print(chunk)
    print(chunk.usage.to_dict())

List models

Get a list of available Gemini models:

Python

from openai import OpenAI

client = OpenAI(
  api_key="GEMINI_API_KEY",
  base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

models = client.models.list()
for model in models:
  print(model.id)

JavaScript

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: "GEMINI_API_KEY",
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});

async function main() {
  const list = await openai.models.list();

  for await (const model of list) {
    console.log(model);
  }
}
main();

REST

curl https://generativelanguage.googleapis.com/v1beta/openai/models \
-H "Authorization: Bearer GEMINI_API_KEY"

Retrieve a model

Retrieve a Gemini model:

Python

from openai import OpenAI

client = OpenAI(
  api_key="GEMINI_API_KEY",
  base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

model = client.models.retrieve("gemini-2.0-flash")
print(model.id)

JavaScript

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: "GEMINI_API_KEY",
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});

async function main() {
  const model = await openai.models.retrieve("gemini-2.0-flash");
  console.log(model.id);
}

main();

REST

curl https://generativelanguage.googleapis.com/v1beta/openai/models/gemini-2.0-flash \
-H "Authorization: Bearer GEMINI_API_KEY"

Current limitations

Support for the OpenAI libraries is still in beta while we extend feature support.

If you have questions about supported parameters, upcoming features, or run into any issues getting started with Gemini, join our developer forum.

What's next

Try our OpenAI compatibility Colab to work through more detailed examples.