
Live Gemini LLM Responses

The Live Gemini LLM Responses endpoint allows you to retrieve structured responses from a specific Gemini AI model based on the input parameters.

Instead of ‘login’ and ‘password’ use your credentials from https://app.dataforseo.com/api-access

<?php
// You can download this file from here https://cdn.dataforseo.com/v3/examples/php/php_RestClient.zip
require('RestClient.php');
$api_url = 'https://api.dataforseo.com/';
try {
   // Instead of 'login' and 'password' use your credentials from https://app.dataforseo.com/api-access
   $client = new RestClient($api_url, null, 'login', 'password');
} catch (RestClientException $e) {
   echo "\n";
   print "HTTP code: {$e->getHttpCode()}\n";
   print "Error code: {$e->getCode()}\n";
   print "Message: {$e->getMessage()}\n";
   print $e->getTraceAsString();
   echo "\n";
   exit();
}
$post_array = array();
// You can set only one task at a time
$post_array[] = array(
        "system_message" => "communicate as if we are in a business meeting",
        "message_chain" => [
            [
                "role"    => "user",
                "message" => "Hello, what's up?"
            ],
            [
                "role"    => "ai",
                "message" => "Hello! I’m doing well, thank you. How can I assist you today? Are there any specific topics or projects you’d like to discuss in our meeting?"
            ]
        ],
        "max_output_tokens" => 200,
        "temperature" => 0.3,
        "top_p" => 0.5,
        "model_name" => "gemini-2.5-flash",
        "web_search" => true,
        "user_prompt" => "provide information on how relevant the amusement park business is in France now"
);
if (count($post_array) > 0) {
try {
    // POST /v3/ai_optimization/gemini/llm_responses/live
    // the full list of possible parameters is available in the documentation
    $result = $client->post('/v3/ai_optimization/gemini/llm_responses/live', $post_array);
    print_r($result);
    // do something with post result
} catch (RestClientException $e) {
    echo "\n";
    print "HTTP code: {$e->getHttpCode()}\n";
    print "Error code: {$e->getCode()}\n";
    print "Message: {$e->getMessage()}\n";
    print $e->getTraceAsString();
    echo "\n";
}
$client = null;
?>

The above command returns JSON structured like this:

{
  "version": "0.1.20250526",
  "status_code": 20000,
  "status_message": "Ok.",
  "time": "14.6761 sec.",
  "cost": 0.0357548,
  "tasks_count": 1,
  "tasks_error": 0,
  "tasks": [
    {
      "id": "07021706-1535-0612-0000-e8acb319f072",
      "status_code": 20000,
      "status_message": "Ok.",
      "time": "14.3327 sec.",
      "cost": 0.0357548,
      "result_count": 1,
      "path": [
        "v3",
        "ai_optimization",
        "gemini",
        "llm_responses",
        "live"
      ],
      "data": {
        "api": "ai_optimization",
        "function": "llm_responses",
        "se": "gemini",
        "system_message": "communicate as if we are in a business meeting",
        "message_chain": [
          {
            "role": "user",
            "message": "Hello, what’s up?"
          },
          {
            "role": "ai",
            "message": "Hello! I’m doing well, thank you. How can I assist you today? Are there any specific topics or projects you’d like to discuss in our meeting?"
          }
        ],
        "max_output_tokens": 200,
        "temperature": 0.3,
        "top_p": 0.5,
        "model_name": "gemini-2.5-flash",
        "user_prompt": "provide information on how relevant the amusement park business is in France now"
      },
      "result": [
        {
          "model_name": "gemini-2.5-flash",
          "input_tokens": 68,
          "output_tokens": 241,
          "web_search": true,
          "money_spent": 0.0351548,
          "datetime": "2025-07-02 14:06:32 +00:00",
          "items": [
            {
              "type": "message",
              "sections": [
                {
                  "type": "text",
                  "text": "The amusement park business in France is highly relevant and a significant part of the country's tourism and leisure industry. Here's a breakdown of its current relevance:nn**1. Market Size and Growth:**n*   The French amusement parks market generated a revenue of USD 3,249.3 million in 2024.n*   It is projected to reach USD 4,274.3 million by 2030, demonstrating a compound annual growth rate (CAGR) of 4.4% from 2025 to 2030.n*   France accounted for 3.2% of the global amusement parks market in 2024.n*   In Europe, the French amusement parks market is expected to lead in terms of revenue by 2030 and is projected to be the fastest-growing regional market.nn**",
                  "annotations": [
                    {
                      "title": "grandviewresearch.com",
                      "url": "https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQE4kVSlqaUzZmAm6xWUASs0ppDa88LJ3WrthsLwppW3uzY6ROF9gQAeT1Q85e5W4etkjCovvSU8ygGEPgCcs0eC46cdz8IOjbyGJXbAvC5UPmsL2MWW5nCMa7JNk7rsMimpbiBDyXzpO_YZCecF-egFpoGFq3UN-GQ8wlYKgpZ7Z7kP8uHLWc2eOw=="
                    }
                  ]
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}

All POST data should be sent in the JSON format (UTF-8 encoding). The task setting is done using the POST method. When setting a task, you should send all task parameters in the task array of the generic POST array. You can send up to 2000 API calls per minute; each Live Gemini LLM Responses call can contain only one task.
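The request flow above can be sketched in Python. This is a minimal illustration, not official client code: the payload fields come from the example task in this document, the `login`/`password` values are placeholders, and the actual HTTP call (which would POST the body with the headers shown) is left as a comment.

```python
import base64
import json

# Placeholder credentials -- substitute your own from https://app.dataforseo.com/api-access
login, password = "login", "password"

# DataForSEO uses HTTP Basic authentication
auth = base64.b64encode(f"{login}:{password}".encode()).decode()
headers = {
    "Authorization": f"Basic {auth}",
    "Content-Type": "application/json",
}

# The generic POST array holds exactly one task for this Live endpoint
post_data = [{
    "user_prompt": "provide information on how relevant the amusement park business is in France now",
    "model_name": "gemini-2.5-flash",
    "max_output_tokens": 200,
    "temperature": 0.3,
    "top_p": 0.5,
    "web_search": True,
}]

body = json.dumps(post_data)  # UTF-8 JSON, as required
# An actual call would POST `body` with `headers` to:
# https://api.dataforseo.com/v3/ai_optimization/gemini/llm_responses/live
print(body)
```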

Execution time for tasks set with the Live Gemini LLM Responses endpoint is currently up to 120 seconds.

Below you will find a detailed description of the fields you can use for setting a task.

Description of the fields for setting a task:

Field name Type Description
user_prompt string prompt for the AI model
required field
the question or task you want to send to the AI model;
you can specify up to 500 characters in the user_prompt field
model_name string name of the AI model
required field
model_name consists of the actual model name and version name;
if the basic model name is specified, its latest version will be set by default;
for example, if gemini-1.5-pro is specified, gemini-1.5-pro-002 will be set as model_name automatically;
you can receive the list of available LLM models by making a separate request to https://api.dataforseo.com/v3/ai_optimization/gemini/llm_responses/models
max_output_tokens integer maximum number of tokens in the AI response
optional field
minimum value: 1
maximum value: 2048
default value: 2048
Note: when web_search is set to true, the output token count may exceed the specified max_output_tokens limit
temperature float randomness of the AI response
optional field
higher values make output more diverse
lower values make output more focused
minimum value: 0
maximum value: 2
default value: 1.3
top_p float diversity of the AI response
optional field
controls diversity of the response by limiting token selection
minimum value: 0
maximum value: 1
default value: 0.9
web_search boolean enable web search for current information
optional field
when enabled, the AI model can access and cite current web information;
Note: refer to the Models endpoint for a list of models that support web_search;
default value: false;

system_message string instructions for the AI behavior
optional field
defines the AI’s role, tone, or specific behavior
you can specify up to 500 characters in the system_message field
message_chain array conversation history
optional field
array of message objects representing previous conversation turns;
each object must contain role and message parameters:
role string with either user or ai role;
message string with message content (max 500 characters);
you can specify the maximum of 10 message objects in the array;
example:
"message_chain": [{"role":"user","message":"Hello, what’s up?"},{"role":"ai","message":"Hello! I’m doing well, thank you. How can I assist you today?"}]
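The message_chain constraints above (role values, 500-character message limit, at most 10 objects) can be checked client-side before sending a task. The following helper is a sketch of our own, not part of the API:

```python
# Client-side validation of message_chain, based on the documented limits.
# The helper name and error messages are illustrative, not part of the API.
def validate_message_chain(chain):
    if len(chain) > 10:
        raise ValueError("message_chain may contain at most 10 message objects")
    for msg in chain:
        if set(msg) != {"role", "message"}:
            raise ValueError("each object must contain exactly 'role' and 'message'")
        if msg["role"] not in ("user", "ai"):
            raise ValueError("role must be either 'user' or 'ai'")
        if len(msg["message"]) > 500:
            raise ValueError("message content is limited to 500 characters")
    return chain

chain = validate_message_chain([
    {"role": "user", "message": "Hello, what's up?"},
    {"role": "ai", "message": "Hello! I'm doing well, thank you. How can I assist you today?"},
])
```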
tag string user-defined task identifier
optional field
the character limit is 255
you can use this parameter to identify the task and match it with the result
you will find the specified tag value in the data object of the response


As a response from the API server, you will receive JSON-encoded data containing a tasks array with the information specific to the set tasks.

Description of the fields in the results array:

Field name Type Description
version string the current version of the API
status_code integer general status code
you can find the full list of the response codes here
Note: we strongly recommend designing a necessary system for handling related exceptional or error conditions
status_message string general informational message
you can find the full list of general informational messages here
time string execution time, seconds
cost float total tasks cost, USD
tasks_count integer the number of tasks in the tasks array
tasks_error integer the number of tasks in the tasks array returned with an error
tasks array array of tasks
        id string task identifier
unique task identifier in our system in the UUID format
        status_code integer status code of the task
generated by DataForSEO; can be within the following range: 10000-60000
you can find the full list of the response codes here
        status_message string informational message of the task
you can find the full list of general informational messages here
        time string execution time, seconds
        cost float cost of the task, USD
includes the base task price plus the money_spent value
        result_count integer number of elements in the result array
        path array URL path
        data object contains the same parameters that you specified in the POST request
        result array array of results
            model_name string name of the AI model used
            input_tokens integer number of tokens in the input
total count of tokens processed
            output_tokens integer number of tokens in the output
total count of tokens generated in the AI response
            web_search boolean indicates if web search was used
            money_spent float cost of AI tokens, USD
the price charged by the third-party AI model provider according to its pricing
            datetime string date and time when the result was received
in the UTC format: “yyyy-mm-dd hh:mm:ss +00:00”
example:
2019-11-15 12:57:46 +00:00
            items array array of response items
contains structured AI response data
                type string type of the element = ‘message’
                sections array array of content sections
contains different parts of the AI response
                    type string type of element = ‘text’
                    text string AI-generated text content
                    annotations array array of references used to generate the response
equals null if the web_search parameter is not set to true
Note: annotations may return empty even when web_search is true, as the AI will attempt to retrieve web information but may not find relevant results
                       title string the domain name or title of the quoted source
                       url string redirect URL to the quoted source
contains a Vertex AI redirect that leads to the original source
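The nested result structure above (tasks → result → items → sections → annotations) can be flattened into the generated text plus its cited sources. This is an illustrative sketch that follows the documented shape; the function name and sample dict are our own, and the `or []` guards handle the case where annotations equals null when web_search is not set to true:

```python
# Walk the documented response structure and collect text sections
# and (title, url) pairs from annotations. Helper name is illustrative.
def extract_text_and_sources(response):
    texts, sources = [], []
    for task in response.get("tasks", []):
        for result in task.get("result") or []:
            for item in result.get("items") or []:
                for section in item.get("sections") or []:
                    if section.get("type") == "text":
                        texts.append(section["text"])
                    # annotations is null unless web_search found results
                    for ann in section.get("annotations") or []:
                        sources.append((ann["title"], ann["url"]))
    return texts, sources

# Minimal sample mirroring the documented response shape (values are placeholders)
sample = {
    "tasks": [{
        "result": [{
            "items": [{
                "type": "message",
                "sections": [{
                    "type": "text",
                    "text": "The amusement park business in France is highly relevant...",
                    "annotations": [{
                        "title": "grandviewresearch.com",
                        "url": "https://example.com/redirect",
                    }],
                }],
            }],
        }],
    }],
}
texts, sources = extract_text_and_sources(sample)
```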
