{"id":21737,"date":"2025-07-10T16:55:30","date_gmt":"2025-07-10T16:55:30","guid":{"rendered":"https:\/\/docs.dataforseo.com\/v3\/?page_id=21737"},"modified":"2026-04-07T22:29:59","modified_gmt":"2026-04-07T22:29:59","slug":"ai_optimization-gemini-llm_responses-live","status":"publish","type":"page","link":"https:\/\/docs.dataforseo.com\/v3\/ai_optimization-gemini-llm_responses-live\/","title":{"rendered":"ai_optimization\/gemini\/llm_responses\/live"},"content":{"rendered":"<div class=\"wpb-content-wrapper\"><p>[vc_row][vc_column][vc_column_text]<\/p>\n<h2>Live Gemini LLM Responses<\/h2>\n<p>\u200c\u200c<br \/>\nLive Gemini LLM Responses endpoint allows you to retrieve structured responses from a specific Gemini AI model, based on the input parameters.<\/p>\n<p>[\/vc_column_text]    <div class=\"endpoint\">\n        <img decoding=\"async\" class=\"endpoint__icon\" src=\"https:\/\/docs.dataforseo.com\/v3\/wp-content\/themes\/dataforseo\/assets\/img\/icons\/checked-circle.svg\" alt=\"checked\">\n\n                    POST            <button class=\"btn-reset button-link copy-button\" data-href=\"https:\/\/api.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/live\">\n                https:\/\/api.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/live                <svg width=\"16\" height=\"16\" viewBox=\"0 0 16 16\">\n                    <use href=\"https:\/\/docs.dataforseo.com\/v3\/wp-content\/themes\/dataforseo\/assets\/img\/icons\/sprite.svg#layers\"><\/use>\n                <\/svg>\n            <\/button>\n            <\/div>\n    \t<article class=\"info-card info-card--yellow\">\n\t\t<header class=\"info-card__header\">\n\t\t\t<div class=\"info-card__icon\">\n\t\t\t\t<svg width=\"16\" height=\"16\" viewBox=\"0 0 16 16\">\n\t\t\t\t\t<use href=\"https:\/\/docs.dataforseo.com\/v3\/wp-content\/themes\/dataforseo\/assets\/img\/icons\/sprite.svg#label\"><\/use>\n\t\t\t\t<\/svg>\n\t\t\t<\/div>\n\t\t\t<div 
class=\"info-card__title\">Pricing<\/div>\n\t\t<\/header>\n\t\t<div class=\"info-card__content\">\n\t\t\t<p>The cost of the task can be calculated on the <a href=\"https:\/\/dataforseo.com\/pricing\/ai-optimization\/llm-responses\" target=\"_blank\">Pricing page<\/a>.<\/p>\n\t\t<\/div>\n\t<\/article>\n\t[vc_column_text]All POST data should be sent in the <a href=\"https:\/\/en.wikipedia.org\/wiki\/JSON\">JSON<\/a> format (UTF-8 encoding). The task setting is done using the POST method. When setting a task, you should send all task parameters in the task array of the generic POST array. You can send up to 2000 API calls per minute; each Live Gemini LLM Responses call can contain only one task.<\/p>\n<p><strong>The number of concurrent Live tasks is currently limited to 30 per account for each platform in the LLM Responses.<\/strong><\/p>\n<p><strong>Execution time for tasks set with the Live Gemini LLM Responses endpoint is currently up to 120 seconds.<\/strong><\/p>\n<p>Below you will find a detailed description of the fields you can use for setting a task.<\/p>\n<p><strong>Description of the fields for setting a task:<\/strong><br \/>\n<div class=\"dfs-doc-container dfs-doc-request\"><table><thead><tr><th>Field name<\/th><th>Type<\/th><th>Description<\/th><\/tr><\/thead><tbody><tr data-doc-id=\"user_prompt\"><td><code>user_prompt<\/code><\/td><td>string<\/td><td><p><em>prompt for the AI model<\/em><br><strong>required field<\/strong><br>the question or task you want to send to the AI model;<br>you can specify <strong>up to 500 characters<\/strong> in the <code>user_prompt<\/code> field<\/p><\/td><\/tr><tr data-doc-id=\"model_name\"><td><code>model_name<\/code><\/td><td>string<\/td><td><p><em>name of the AI model<\/em><br><strong>required field<\/strong><br><code>model_name<\/code> consists of the actual model name and version name;<br>if the base model name is specified, its latest version will be set by default;<br>for example, if 
<code>gemini-1.5-pro<\/code> is specified, the <code>gemini-1.5-pro-002<\/code> will be set as <code>model_name<\/code> automatically;<br>you can receive the list of available LLM models by making a separate request to the <code><a href=\"https:\/\/api.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/models\">https:\/\/api.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/models<\/a><\/code><\/p><\/td><\/tr><tr data-doc-id=\"max_output_tokens\"><td><code>max_output_tokens<\/code><\/td><td>integer<\/td><td><p><em>maximum number of tokens in the AI response<\/em><br>optional field<br>minimum value: <code>1<\/code><br>maximum value: <code>4096<\/code>;<br>default value: <code>2048<\/code>;<br><strong>Note:<\/strong> if <code>web_search<\/code> is set to <code>true<\/code> or the reasoning model is specified in the request, the output token count may exceed the specified <code>max_output_tokens<\/code> limit<br><strong>Note #2:<\/strong> if <code>use_reasoning<\/code> is set to <code>true<\/code>, the minimum value for <code>max_output_tokens<\/code> is <code>1024<\/code><\/p><\/td><\/tr><tr data-doc-id=\"temperature\"><td><code>temperature<\/code><\/td><td>float<\/td><td><p><em>randomness of the AI response<\/em><br>optional field<br>higher values make output more diverse <br>lower values make output more focused<br>minimum value: <code>0<\/code><br>maximum value: <code>2<\/code><br>default value: <code>1.3<\/code><\/p><\/td><\/tr><tr data-doc-id=\"top_p\"><td><code>top_p<\/code><\/td><td>float<\/td><td><p><em>diversity of the AI response<\/em><br>optional field <br>controls diversity of the response by limiting token selection<br>minimum value: <code>0<\/code><br>maximum value: <code>1<\/code> <br>default value: <code>0.9<\/code><\/p><\/td><\/tr><tr data-doc-id=\"web_search\"><td><code>web_search<\/code><\/td><td>boolean<\/td><td><p><em>enable web search for current information<\/em><br>optional field<br>when enabled, the AI model can access and 
cite current web information;<br><strong>Note:<\/strong> refer to the <a href=\"https:\/\/docs.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/models\/\">Models endpoint<\/a> for a list of models that support <code>web_search<\/code>;<br>default value: <code>false<\/code>;<br>the cost of the parameter can be calculated on the <a title=\"Gemini API Pricing\" href=\"https:\/\/ai.google.dev\/gemini-api\/docs\/pricing\" target=\"_blank\" rel=\"noopener noreferrer\">Pricing<\/a> page<\/p><\/td><\/tr><tr data-doc-id=\"system_message\"><td><code>system_message<\/code><\/td><td>string<\/td><td><p><em>instructions for the AI behavior<\/em><br>optional field<br>defines the AI's role, tone, or specific behavior<br>you can specify <strong>up to 500 characters<\/strong> in the <code>system_message<\/code> field<\/p><\/td><\/tr><tr data-doc-id=\"message_chain\"><td><code>message_chain<\/code><\/td><td>array<\/td><td><p><em>conversation history<\/em><br>optional field<br>array of message objects representing previous conversation turns;<br>each object must contain <code>role<\/code> and <code>message<\/code> parameters:<br><code>role<\/code> string with either <code>user<\/code> or <code>ai<\/code> role;<br><code>message<\/code> string with message content (max 500 characters);<br>you can specify <strong>a maximum of 10 message objects<\/strong> in the array;<br>example:<br><code>\"message_chain\": [{\"role\":\"user\",\"message\":\"Hello, what\u2019s up?\"},{\"role\":\"ai\",\"message\":\"Hello! I\u2019m doing well, thank you. 
How can I assist you today?\"}]<\/code><\/p><\/td><\/tr><tr data-doc-id=\"use_reasoning\"><td><code>use_reasoning<\/code><\/td><td>boolean<\/td><td><p><em>enable reasoning for the AI model<\/em><br>optional field<br>when enabled, the model will perform reasoning before generating a response<br>refer to the <a href=\"https:\/\/docs.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/models\/\" target=\"_blank\">Models endpoint<\/a> for a list of models that support <code>reasoning<\/code><br>default value: <code>false<\/code><br><strong>Note:<\/strong> if set to <code>true<\/code>, the minimum value for <code>max_output_tokens<\/code> is <code>1024<\/code><br><strong>Note #2:<\/strong> for Gemini Pro models, the <code>use_reasoning<\/code> will automatically be set to <code>true<\/code><\/p><\/td><\/tr><tr data-doc-id=\"tag\"><td><code>tag<\/code><\/td><td>string<\/td><td><p><em>user-defined task identifier<\/em><br>optional field<br><em>the character limit is 255<\/em><br>you can use this parameter to identify the task and match it with the result<br>you will find the specified <code>tag<\/code> value in the <code>data<\/code> object of the response<\/p><\/td><\/tr><\/tbody><\/table><\/div><br \/>\n\u200c<br \/>\n\u200c\u200cAs a response of the API server, you will receive <a href=\"https:\/\/en.wikipedia.org\/wiki\/JSON\">JSON<\/a>-encoded data containing a <code>tasks<\/code> array with the information specific to the set tasks.<br \/>\n\u200c<br \/>\n<strong>Description of the fields in the results array:<\/strong><br \/>\n<div class=\"dfs-doc-container dfs-doc-response\"><div class=\"api-block-main\"><div class=\"api-section\"><table><thead><tr><th>Field name<\/th><th>Type<\/th><th>Description<\/th><\/tr><\/thead><tbody><tr data-doc-id=\"version\"><td><code>version<\/code><\/td><td>string<\/td><td><p><em>the current version of the API<\/em><\/p><\/td><\/tr><tr 
data-doc-id=\"status_code\"><td><code>status_code<\/code><\/td><td>integer<\/td><td><p><i>general status code<\/i><br>you can find the full list of the response codes <a href=\"\/v3\/appendix\/errors\">here<\/a><br><strong>Note:<\/strong> we strongly recommend designing a necessary system for handling related exceptional or error conditions<\/p><\/td><\/tr><tr data-doc-id=\"status_message\"><td><code>status_message<\/code><\/td><td>string<\/td><td><p><em>general informational message<\/em><br>you can find the full list of general informational messages <a href=\"\/v3\/appendix\/errors\">here<\/a><\/p><\/td><\/tr><tr data-doc-id=\"time\"><td><code>time<\/code><\/td><td>string<\/td><td><p><em>execution time, seconds<\/em><\/p><\/td><\/tr><tr data-doc-id=\"cost\"><td><code>cost<\/code><\/td><td>float<\/td><td><p><em>total tasks cost, USD<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks_count\"><td><code>tasks_count<\/code><\/td><td>integer<\/td><td><p><em>the number of tasks in the <strong><code>tasks<\/code><\/strong> array<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks_error\"><td><code>tasks_error<\/code><\/td><td>integer<\/td><td><p><em>the number of tasks in the <strong><code>tasks<\/code><\/strong> array returned with an error<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks\"><td><strong><code>tasks<\/code><\/strong><\/td><td>array<\/td><td><p><em>array of tasks<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-id\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>id<\/code><\/td><td>string<\/td><td><p><em>task identifier<\/em><br><strong>unique task identifier in our system in the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Universally_unique_identifier\">UUID<\/a> format<\/strong><\/p><\/td><\/tr><tr data-doc-id=\"tasks-status_code\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>status_code<\/code><\/td><td>integer<\/td><td><p><em>status code of the task<\/em><br>generated by DataForSEO; can be within the following range: 10000-60000<br>you can find the full list of the response codes <a 
href=\"\/v3\/appendix\/errors\">here<\/a><\/p><\/td><\/tr><tr data-doc-id=\"tasks-status_message\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>status_message<\/code><\/td><td>string<\/td><td><p><em>informational message of the task<\/em><br>you can find the full list of general informational messages <a href=\"\/v3\/appendix\/errors\">here<\/a><\/p><\/td><\/tr><tr data-doc-id=\"tasks-time\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>time<\/code><\/td><td>string<\/td><td><p><em>execution time, seconds<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-cost\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>cost<\/code><\/td><td>float<\/td><td><p><em>cost of the task, USD<\/em><br>includes the base task price plus the <code>money_spent<\/code> value<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result_count\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>result_count<\/code><\/td><td>integer<\/td><td><p><em>number of elements in the <code>result<\/code> array<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-path\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>path<\/code><\/td><td>array<\/td><td><p><em>URL path<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-data\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>data<\/code><\/td><td>object<\/td><td><p><em>contains the same parameters that you specified in the POST request<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>result<\/code><\/strong><\/td><td>array<\/td><td><p><em>array of results<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-model_name\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>model_name<\/code><\/td><td>string<\/td><td><p><em>name of the AI model used<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-input_tokens\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>input_tokens<\/code><\/td><td>integer<\/td><td><p><em>number of tokens in the input<\/em><br>total count of tokens processed<\/p><\/td><\/tr><tr 
data-doc-id=\"tasks-result-output_tokens\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>output_tokens<\/code><\/td><td>integer<\/td><td><p><em>number of tokens in the output<\/em><br>total count of tokens generated in the AI response<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-reasoning_tokens\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>reasoning_tokens<\/code><\/td><td>integer<\/td><td><p><em>number of reasoning tokens<\/em><br>total count of tokens used to generate reasoning content<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-web_search\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>web_search<\/code><\/td><td>boolean<\/td><td><p><em>indicates if web search was used<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-money_spent\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>money_spent<\/code><\/td><td>float<\/td><td><p><em>cost of AI tokens, USD<\/em><br>the price charged by the third-party AI model provider according to its <a href=\"https:\/\/ai.google.dev\/gemini-api\/docs\/pricing\" target=\"_blank\">Pricing<\/a><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-datetime\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>datetime<\/code><\/td><td>string<\/td><td><p><em>date and time when the result was received<\/em><br>in the UTC format: \u201cyyyy-mm-dd hh-mm-ss +00:00\u201d<br>example:<br><code class=\"long-string\">2019-11-15 12:57:46 +00:00<\/code><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>items<\/code><\/strong><\/td><td>array<\/td><td><p><em>array of response items<\/em><br>contains structured AI response data<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:reasoning\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>reasoning<\/code><\/strong><\/td><td>object<\/td><td><p><em>element in the response<\/em><\/p><\/td><\/tr><tr 
data-doc-id=\"tasks-result-items-type:reasoning-type\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>type<\/code><\/td><td>string<\/td><td><p><em>type of the element = <strong>'reasoning'<\/strong><\/em><br><strong>Note:<\/strong> this element is supported only in reasoning models and is not guaranteed to be returned<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:reasoning-sections\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>sections<\/code><\/strong><\/td><td>array<\/td><td><p><em>reasoning chain sections<\/em><br>array of objects containing the reasoning chain sections generated by the LLM<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:reasoning-sections-type\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>type<\/code><\/td><td>string<\/td><td><p><em>type of the element = <strong>'summary_text'<\/strong><\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:reasoning-sections-text\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>text<\/code><\/td><td>string<\/td><td><p><em>text of the reasoning chain section<\/em><br>text of the reasoning chain section summarizing the model's thought process<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:message\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>message<\/code><\/strong><\/td><td>object<\/td><td><p><em>element in the response<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:message-type\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>type<\/code><\/td><td>string<\/td><td><p><em>type of the element = <strong>'message'<\/strong><\/em><\/p><\/td><\/tr><tr 
data-doc-id=\"tasks-result-items-type:message-sections\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>sections<\/code><\/strong><\/td><td>array<\/td><td><p><em>array of content sections<\/em><br>contains different parts of the AI response<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:message-sections-type\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>type<\/code><\/td><td>string<\/td><td><p><em>type of the element = <strong>'text'<\/strong><\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:message-sections-text\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>text<\/code><\/td><td>string<\/td><td><p><em>AI-generated text content<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:message-sections-annotations\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>annotations<\/code><\/strong><\/td><td>array<\/td><td><p><em>array of references used to generate the response<\/em><br>equals <code>null<\/code> if the <code>web_search<\/code> parameter is not set to <code>true<\/code><br><strong>Note:<\/strong> <code>annotations<\/code> may return empty even when <code>web_search<\/code> is <code>true<\/code>, as the AI will attempt to retrieve web information but may not find relevant results<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:message-sections-annotations-title\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>title<\/code><\/td><td>string<\/td><td><p><em>the domain name or title of the quoted source<\/em><\/p><\/td><\/tr><tr 
data-doc-id=\"tasks-result-items-type:message-sections-annotations-url\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>url<\/code><\/td><td>string<\/td><td><p><em>redirect URL to the quoted source<\/em><br>contains a Vertex AI redirect that leads to the original source<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-fan_out_queries\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>fan_out_queries<\/code><\/td><td>array<\/td><td><p><em>array of fan-out queries<\/em><br>contains related search queries derived from the main query to provide a more comprehensive response<\/p><\/td><\/tr><\/tbody><\/table><\/div><\/div><\/div><br \/>\n\u200c\u200c[\/vc_column_text][\/vc_column][\/vc_row]<\/p>\n<blockquote><p>Instead of \u2018login\u2019 and \u2018password\u2019 use your credentials from https:\/\/app.dataforseo.com\/api-access<\/p><\/blockquote><div id=\"curl\" class=\"tab-content example__content\"><div class=\"example__code\"><pre><code class=\"language-bash hljs\"># Instead of &#039;login&#039; and &#039;password&#039; use your credentials from https:\/\/app.dataforseo.com\/api-access \r\nlogin=&quot;login&quot; \r\npassword=&quot;password&quot; \r\ncred=&quot;$(printf ${login}:${password} | base64)&quot; \r\ncurl --location --request POST &quot;https:\/\/api.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/live&quot; \\\r\n--header &quot;Authorization: Basic ${cred}&quot; \\\r\n--header &quot;Content-Type: application\/json&quot; \\\r\n--data-raw &#039;[\r\n  {\r\n    &quot;system_message&quot;: &quot;communicate as if we are in a business meeting&quot;,\r\n    &quot;message_chain&quot;: [\r\n      {\r\n        &quot;role&quot;: &quot;user&quot;,\r\n        &quot;message&quot;: &quot;Hello, what\u2019s up?&quot;\r\n      },\r\n      {\r\n        &quot;role&quot;: &quot;ai&quot;,\r\n        &quot;message&quot;: &quot;Hello! 
I\u2019m doing well, thank you. How can I assist you today? Are there any specific topics or projects you\u2019d like to discuss in our meeting?&quot;\r\n      }\r\n    ],\r\n    &quot;max_output_tokens&quot;: 200,\r\n    &quot;temperature&quot;: 0.3,\r\n    &quot;top_p&quot;: 0.5,\r\n    &quot;model_name&quot;: &quot;gemini-2.5-flash&quot;,\r\n    &quot;web_search&quot;: true,\r\n    &quot;user_prompt&quot;: &quot;provide information on how relevant the amusement park business is in France now&quot;\r\n  }\r\n]&#039;<\/code><\/pre><\/div><\/div><div id=\"php\" class=\"tab-content example__content\"><div class=\"example__code\"><pre><code class=\"language-php hljs\">&lt;?php\r\n\/\/ You can download this file from here https:\/\/cdn.dataforseo.com\/v3\/examples\/php\/php_RestClient.zip\r\nrequire(&#039;RestClient.php&#039;);\r\n$api_url = &#039;https:\/\/api.dataforseo.com\/&#039;;\r\ntry {\r\n   \/\/ Instead of &#039;login&#039; and &#039;password&#039; use your credentials from https:\/\/app.dataforseo.com\/api-access\r\n   $client = new RestClient($api_url, null, &#039;login&#039;, &#039;password&#039;);\r\n} catch (RestClientException $e) {\r\n   echo &quot;\\n&quot;;\r\n   print &quot;HTTP code: {$e-&gt;getHttpCode()}\\n&quot;;\r\n   print &quot;Error code: {$e-&gt;getCode()}\\n&quot;;\r\n   print &quot;Message: {$e-&gt;getMessage()}\\n&quot;;\r\n   print  $e-&gt;getTraceAsString();\r\n   echo &quot;\\n&quot;;\r\n   exit();\r\n}\r\n$post_array = array();\r\n\/\/ You can set only one task at a time\r\n$post_array[] = array(\r\n        &quot;system_message&quot; =&gt; &quot;communicate as if we are in a business meeting&quot;,\r\n        &quot;message_chain&quot; =&gt; [\r\n            [\r\n                &quot;role&quot;    =&gt; &quot;user&quot;,\r\n                &quot;message&quot; =&gt; &quot;Hello, what&#039;s up?&quot;\r\n            ],\r\n            [\r\n                &quot;role&quot;    =&gt; &quot;ai&quot;,\r\n                &quot;message&quot; =&gt; 
&quot;Hello! I\u2019m doing well, thank you. How can I assist you today? Are there any specific topics or projects you\u2019d like to discuss in our meeting?&quot;\r\n            ]\r\n        ],\r\n        &quot;max_output_tokens&quot; =&gt; 200,\r\n        &quot;temperature&quot; =&gt; 0.3,\r\n        &quot;top_p&quot; =&gt; 0.5,\r\n        &quot;model_name&quot; =&gt; &quot;gemini-2.5-flash&quot;,\r\n        &quot;web_search&quot; =&gt; true,\r\n        &quot;user_prompt&quot; =&gt; &quot;provide information on how relevant the amusement park business is in France now&quot;\r\n);\r\nif (count($post_array) &gt; 0) {\r\ntry {\r\n    \/\/ POST \/v3\/ai_optimization\/gemini\/llm_responses\/live\r\n    \/\/ the full list of possible parameters is available in documentation\r\n    $result = $client-&gt;post(&#039;\/v3\/ai_optimization\/gemini\/llm_responses\/live&#039;, $post_array);\r\n    print_r($result);\r\n    \/\/ do something with post result\r\n} catch (RestClientException $e) {\r\n    echo &quot;\\n&quot;;\r\n    print &quot;HTTP code: {$e-&gt;getHttpCode()}\\n&quot;;\r\n    print &quot;Error code: {$e-&gt;getCode()}\\n&quot;;\r\n    print &quot;Message: {$e-&gt;getMessage()}\\n&quot;;\r\n    print  $e-&gt;getTraceAsString();\r\n    echo &quot;\\n&quot;;\r\n}\r\n}\r\n$client = null;\r\n?&gt;<\/code><\/pre><\/div><\/div><div id=\"javascript\" class=\"tab-content example__content\"><div class=\"example__code\"><pre><code class=\"language-javascript hljs\">const axios = require(&#039;axios&#039;);\r\n\r\naxios({\r\n    method: &#039;post&#039;,\r\n    url: &#039;https:\/\/api.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/live&#039;,\r\n    auth: {\r\n        username: &#039;login&#039;,\r\n        password: &#039;password&#039;\r\n    },\r\n    data: [{\r\n    system_message: encodeURI(&quot;communicate as if we are in a business 
meeting&quot;),\r\n    message_chain: [\r\n      {\r\n        role: &quot;user&quot;,\r\n        message: &quot;Hello, what\u2019s up?&quot;\r\n      },\r\n      {\r\n        role: &quot;ai&quot;,\r\n        message: encodeURI(&quot;Hello! I\u2019m doing well, thank you. How can I assist you today? Are there any specific topics or projects you\u2019d like to discuss in our meeting?&quot;)\r\n      }\r\n    ],\r\n    max_output_tokens: 200,\r\n    temperature: 0.3,\r\n    top_p: 0.5,\r\n    model_name: &quot;gemini-2.5-flash&quot;,\r\n    web_search: true,\r\n    user_prompt: encodeURI(&quot;provide information on how relevant the amusement park business is in France now&quot;)\r\n    }],\r\n    headers: {\r\n        &#039;content-type&#039;: &#039;application\/json&#039;\r\n    }\r\n}).then(function (response) {\r\n    var result = response[&#039;data&#039;][&#039;tasks&#039;];\r\n    \/\/ Result data\r\n    console.log(result);\r\n}).catch(function (error) {\r\n    console.log(error);\r\n});<\/code><\/pre><\/div><\/div><div id=\"python\" class=\"tab-content example__content\"><div class=\"example__code\"><pre><code class=\"language-python hljs\">&quot;&quot;&quot;\r\nMethod: POST\r\nEndpoint: https:\/\/api.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/live\r\n@see https:\/\/docs.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/live\r\n&quot;&quot;&quot;\r\n\r\nimport sys\r\nimport os\r\nsys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), &#039;..\/..\/..\/..\/..\/&#039;)))\r\nfrom lib.client import RestClient\r\nfrom lib.config import username, password\r\nclient = RestClient(username, password)\r\n\r\npost_data = []\r\npost_data.append({\r\n        &#039;system_message&#039;: &#039;communicate as if we are in a business meeting&#039;,\r\n        &#039;message_chain&#039;: [\r\n            {\r\n                &#039;role&#039;: &#039;user&#039;,\r\n                &#039;message&#039;: &#039;Hello, what\\&#039;s 
up?&#039;\r\n            },\r\n            {\r\n                &#039;role&#039;: &#039;ai&#039;,\r\n                &#039;message&#039;: &#039;Hello! I\u2019m doing well, thank you. How can I assist you today? Are there any specific topics or projects you\u2019d like to discuss in our meeting?&#039;\r\n            }\r\n        ],\r\n        &#039;max_output_tokens&#039;: 200,\r\n        &#039;temperature&#039;: 0.3,\r\n        &#039;top_p&#039;: 0.5,\r\n        &#039;model_name&#039;: &#039;gemini-2.5-flash&#039;,\r\n        &#039;web_search&#039;: True,\r\n        &#039;user_prompt&#039;: &#039;provide information on how relevant the amusement park business is in France now&#039;\r\n    })\r\ntry:\r\n    response = client.post(&#039;\/v3\/ai_optimization\/gemini\/llm_responses\/live&#039;, post_data)\r\n    print(response)\r\n    # do something with post result\r\nexcept Exception as e:\r\n    print(f&#039;An error occurred: {e}&#039;)<\/code><\/pre><\/div><\/div><div id=\"csharp\" class=\"tab-content example__content\"><div class=\"example__code\"><pre><code class=\"language-csharp hljs\">using System;\r\nusing System.Linq;\r\nusing System.Net.Http;\r\nusing System.Net.Http.Headers;\r\nusing System.Text;\r\nusing System.Collections.Generic;\r\nusing System.Threading.Tasks;\r\nusing Newtonsoft.Json;\r\nnamespace DataForSeoSdk;\r\n\r\npublic class AiOptimization\r\n{\r\n\r\n    private static readonly HttpClient _httpClient;\r\n    \r\n    static AiOptimization()\r\n    {\r\n        _httpClient = new HttpClient\r\n        {\r\n            BaseAddress = new Uri(&quot;https:\/\/api.dataforseo.com\/&quot;)\r\n        };\r\n        _httpClient.DefaultRequestHeaders.Authorization =\r\n            new AuthenticationHeaderValue(&quot;Basic&quot;, ApiConfig.Base64Auth);\r\n    }\r\n\r\n    \/\/\/ &lt;summary&gt;\r\n    \/\/\/ Method: POST\r\n    \/\/\/ Endpoint: https:\/\/api.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/live\r\n    \/\/\/ 
&lt;\/summary&gt;\r\n    \/\/\/ &lt;see href=&quot;https:\/\/docs.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/live&quot;\/&gt;\r\n    \r\n    public static async Task GeminiLlmResponsesLive()\r\n    {\r\n        var postData = new List&lt;object&gt;();\r\n        \/\/ a simple way to set a task, the full list of possible parameters is available in documentation\r\n        postData.Add(new\r\n        {\r\n            system_message = &quot;communicate as if we are in a business meeting&quot;,\r\n            message_chain = new object[]\r\n            {\r\n                new\r\n                {\r\n                    role = &quot;user&quot;,\r\n                    message = &quot;Hello, what&#039;s up?&quot;\r\n                },\r\n                new\r\n                {\r\n                    role = &quot;ai&quot;,\r\n                    message = &quot;Hello! I\u2019m doing well, thank you. How can I assist you today? Are there any specific topics or projects you\u2019d like to discuss in our meeting?&quot;\r\n                }\r\n            },\r\n            max_output_tokens = 200,\r\n            temperature = 0.3,\r\n            top_p = 0.5,\r\n            model_name = &quot;gemini-2.5-flash&quot;,\r\n            web_search = true,\r\n            user_prompt = &quot;provide information on how relevant the amusement park business is in France now&quot;\r\n        });\r\n\r\n        var content = new StringContent(JsonConvert.SerializeObject(postData), Encoding.UTF8, &quot;application\/json&quot;);\r\n        using var response = await _httpClient.PostAsync(&quot;\/v3\/ai_optimization\/gemini\/llm_responses\/live&quot;, content);\r\n        var result = JsonConvert.DeserializeObject&lt;dynamic&gt;(await response.Content.ReadAsStringAsync());\r\n        \/\/ you can find the full list of the response codes here https:\/\/docs.dataforseo.com\/v3\/appendix\/errors\r\n        if (result.status_code == 20000)\r\n        {\r\n            \/\/ do 
something with result\r\n            Console.WriteLine(result);\r\n        }\r\n        else\r\n            Console.WriteLine($&quot;error. Code: {result.status_code} Message: {result.status_message}&quot;);\r\n    }<\/code><\/pre><\/div><\/div><blockquote><p>The above command returns JSON structured like this:<\/p><\/blockquote><div class=\"example example--json\"><div class=\"example__content\"><div class=\"example__code example__code-json\"><pre><code class=\"language-json hljs\">{\r\n  &quot;version&quot;: &quot;0.1.20251208&quot;,\r\n  &quot;status_code&quot;: 20000,\r\n  &quot;status_message&quot;: &quot;Ok.&quot;,\r\n  &quot;time&quot;: &quot;5.5958 sec.&quot;,\r\n  &quot;cost&quot;: 0.0376568,\r\n  &quot;tasks_count&quot;: 1,\r\n  &quot;tasks_error&quot;: 0,\r\n  &quot;tasks&quot;: [\r\n    {\r\n      &quot;id&quot;: &quot;12111456-1535-0612-0000-26ea2dc16c11&quot;,\r\n      &quot;status_code&quot;: 20000,\r\n      &quot;status_message&quot;: &quot;Ok.&quot;,\r\n      &quot;time&quot;: &quot;5.5290 sec.&quot;,\r\n      &quot;cost&quot;: 0.0376568,\r\n      &quot;result_count&quot;: 1,\r\n      &quot;path&quot;: [\r\n        &quot;v3&quot;,\r\n        &quot;ai_optimization&quot;,\r\n        &quot;gemini&quot;,\r\n        &quot;llm_responses&quot;,\r\n        &quot;live&quot;\r\n      ],\r\n      &quot;data&quot;: {\r\n        &quot;api&quot;: &quot;ai_optimization&quot;,\r\n        &quot;function&quot;: &quot;llm_responses&quot;,\r\n        &quot;se&quot;: &quot;gemini&quot;,\r\n        &quot;system_message&quot;: &quot;communicate as if we are in a business meeting&quot;,\r\n        &quot;message_chain&quot;: [\r\n          {\r\n            &quot;role&quot;: &quot;user&quot;,\r\n            &quot;message&quot;: &quot;Hello, what&#039;s up?&quot;\r\n          },\r\n          {\r\n            &quot;role&quot;: &quot;ai&quot;,\r\n            &quot;message&quot;: &quot;Hello! I\u2019m doing well, thank you. How can I assist you today? 
Are there any specific topics or projects you\u2019d like to discuss in our meeting?&quot;\r\n          }\r\n        ],\r\n        &quot;temperature&quot;: 0.3,\r\n        &quot;model_name&quot;: &quot;gemini-2.5-flash&quot;,\r\n        &quot;top_p&quot;: 0.5,\r\n        &quot;web_search&quot;: true,\r\n        &quot;user_prompt&quot;: &quot;provide information on how relevant the amusement park business is in France now&quot;\r\n      },\r\n      &quot;result&quot;: [\r\n        {\r\n          &quot;model_name&quot;: &quot;gemini-2.5-flash&quot;,\r\n          &quot;input_tokens&quot;: 206,\r\n          &quot;output_tokens&quot;: 798,\r\n          &quot;reasoning_tokens&quot;: 576,\r\n          &quot;web_search&quot;: true,\r\n          &quot;money_spent&quot;: 0.0370568,\r\n          &quot;datetime&quot;: &quot;2025-12-11 14:56:47 +00:00&quot;,\r\n          &quot;items&quot;: [\r\n            {\r\n              &quot;type&quot;: &quot;reasoning&quot;,\r\n              &quot;sections&quot;: [\r\n                {\r\n                  &quot;type&quot;: &quot;summary_text&quot;,\r\n                  &quot;text&quot;: &quot;**Assessing market relevance**\n\nThe user wants to know how relevant the amusement park business is in France right now. I should ground the answer in current figures: the market generated roughly USD 3,249.3 million in 2024 and is projected to keep growing, so the sector clearly remains significant within French tourism.&quot;\r\n                },\r\n                {\r\n                  &quot;type&quot;: &quot;summary_text&quot;,\r\n                  &quot;text&quot;: &quot;**Identifying drivers and players**\n\nBeyond market size, I should cover what is driving growth: strong tourism, rising disposable income, demand for experiential entertainment, and investment in new attractions and hospitality. 
The competitive landscape is shaped by Disneyland Paris, Parc Ast\u00e9rix, Puy du Fou, and Futuroscope.&quot;\r\n                },\r\n                {\r\n                  &quot;type&quot;: &quot;summary_text&quot;,\r\n                  &quot;text&quot;: &quot;**Structuring the answer**\n\nI\u2019ll organize the response into market size and growth, key drivers and trends, the competitive landscape, and post-pandemic recovery, then close with a short conclusion on the sector\u2019s overall relevance.&quot;\r\n                }\r\n              ]\r\n            },\r\n            {\r\n              &quot;type&quot;: &quot;message&quot;,\r\n              &quot;sections&quot;: [\r\n                {\r\n                  &quot;type&quot;: &quot;text&quot;,\r\n                  &quot;text&quot;: &quot;The amusement park business in France is currently a significant and growing market, demonstrating strong relevance within the leisure and tourism industry.\n\nHere&#039;s a breakdown of its current state:\n\n**Market Size and Growth:**\n*   The French amusement parks market generated an estimated revenue of USD 3,249.3 million in 2024.\n*   It is projected to reach USD 4,274.3 million by 2030, growing at a Compound Annual Growth Rate (CAGR) of 4.4% from 2025 to 2030.\n*   Another report indicates the France Theme Park Tourism Market size was USD 2033.82 million in 2024 and is projected to reach USD 3849.39 million by 2033, with a CAGR of 7.26% between 2025 and 2033.\n*   The broader leisure destinations sector in France, which includes amusement parks, 
has a turnover of 3 billion euros and employs over 50,000 people across 500 companies.\n\n**Key Drivers and Trends:**\n*   **Strong Tourism Industry:** France&#039;s robust tourism sector, attracting millions of visitors annually, significantly contributes to the amusement park market&#039;s growth.\n*   **Increasing Disposable Income:** Rising disposable income among consumers is leading to increased spending on leisure activities.\n*   **Changing Consumer Preferences:** There&#039;s a growing demand for experiential entertainment and family-oriented leisure activities.\n*   **Technological Integration:** Parks are investing in augmented reality (AR) and virtual reality (VR) attractions to offer immersive and interactive experiences.\n*   **Sustainability Focus:** Many parks are implementing eco-friendly practices and sustainability initiatives.\n*   **Innovation in Attractions:** Continuous expansion and innovation in rides and attractions cater to diverse audiences, including families and thrill-seekers.\n*   **Development of Hospitality:** Parks are increasingly focusing on developing their hospitality offerings, including themed hotels, to become comprehensive destinations.\n\n**Competitive Landscape:**\n*   The French amusement park market is becoming highly competitive.\n*   Major players like Disneyland Paris, Parc Ast\u00e9rix, Puy du Fou, and Futuroscope dominate the market, accounting for a significant portion of visits to the top European parks.\n*   Disneyland Paris, in particular, is a major driver of the European amusement park market due to its global popularity.\n*   In 2023, Disneyland Paris welcomed 10.4 million visitors, making it the #1 park in EMEA and #2 globally. 
Its Walt Disney Studios Park saw 5.7 million visitors in 2023.\n*   Puy du Fou is another highly popular park, ranking second in France by visitor numbers in 2023 with over 2.5 million visitors, and is top-rated for customer satisfaction.\n*   Regional parks often focus on a family-friendly and affordable approach to attract visitors.\n\n**Post-Pandemic Recovery:**\n*   After a challenging 2020 due to shutdowns, the market has seen a direct and continuous recovery.\n*   Average attendance at French parks in 2024 was 15-20% higher than in 2019.\n\nIn conclusion, the amusement park business in France is a dynamic and thriving sector, driven by strong tourist appeal, evolving consumer demands, and continuous innovation.&quot;,\r\n                  &quot;annotations&quot;: [\r\n                    {\r\n                      &quot;title&quot;: &quot;grandviewresearch.com&quot;,\r\n                      &quot;url&quot;: &quot;https:\/\/vertexaisearch.cloud.google.com\/grounding-api-redirect\/AUZIYQGh7v1fRFZwvtWdXLLSQs6g6xtu5ZeD-I1435mjDt4EoLLyCtmyTU8GimmxhM7zb76c-yxwlONcoDySHnQ2KFfPujgdAZXO8LFZk851Ur9WaLzoNfARNQ0JbA4ORvsX7OAnVSBxhN1WiS4fZw6XQMVIAEqvUxoERX7iywCW0FNrPDVnMtECr41aKB4=&quot;\r\n                    },\r\n                    {\r\n                      &quot;title&quot;: &quot;deepmarketinsights.com&quot;,\r\n                      &quot;url&quot;: &quot;https:\/\/vertexaisearch.cloud.google.com\/grounding-api-redirect\/AUZIYQGLM08MiqQsO4rzsofwSG-QEcEN226loOkdTd2CiPNlU0NJ4kRcu7X35D4mVVsCB6SukMRFI33LMqsH9kksFStFE-7nXyOr7YiZ7pMYSbFaMWvClA7eZWmLd0aPc4NdUHgwE7q9JFkpVAusy_YH-SWbOwSvGxtrJ15F9jHdwAPosdUi4z5ST7OUlw==&quot;\r\n                    },\r\n                    {\r\n                      &quot;title&quot;: &quot;puydufou.com&quot;,\r\n                      &quot;url&quot;: 
&quot;https:\/\/vertexaisearch.cloud.google.com\/grounding-api-redirect\/AUZIYQFUIfOcXCJdEHywKMrWXfgSSdRmodlUyxGUIRx2AKVojkvzugfs49u2T4MMLrpq-1z_tOYoq4ozWhi7aXtYjR2fPsTo4ARp_rBMeNSvKcp26jbTgxKTYNrmm7oeFQjTlE1rkug9LJwbRnz1Bwls5Q-O_Fh4eM4mvQk=&quot;\r\n                    },\r\n                    {\r\n                      &quot;title&quot;: &quot;6wresearch.com&quot;,\r\n                      &quot;url&quot;: &quot;https:\/\/vertexaisearch.cloud.google.com\/grounding-api-redirect\/AUZIYQEUw4Giv1pNf_-CZTpku8tItZcCZmCVpUikRFEqk43PAwYAtaDE48XXctm49tnHJSbhibv3JTr7vOX7rpRoSOTaw5hCJj8hL8kIj2wNnnzeRFP87O7J6wgpU8NVPSprSYky9dhmn1WhjCkRztCfoaefPe2TyhFWPeVNJ8R-ci56QlG-M2E-F3w61OM=&quot;\r\n                    },\r\n                    {\r\n                      &quot;title&quot;: &quot;kenresearch.com&quot;,\r\n                      &quot;url&quot;: &quot;https:\/\/vertexaisearch.cloud.google.com\/grounding-api-redirect\/AUZIYQFEjxBpDNE1QmFmYrzsYO2EGjyFIhKywCzcmy9THBnY8qiPZ8yFU3Vfx7KI-UJisPEs7HISfiv9MJ3ZzZ9wde0OZMeRcyoInsPpKWAjwM79IVnz_6XrhrQFlK9qPThZ3g1V-aaLUDuOaoimg78YXXqc4XFxTzJ1M42R1QMHsmZp2dWlszQ=&quot;\r\n                    },\r\n                    {\r\n                      &quot;title&quot;: &quot;businesscoot.com&quot;,\r\n                      &quot;url&quot;: &quot;https:\/\/vertexaisearch.cloud.google.com\/grounding-api-redirect\/AUZIYQEm74cgwvacFkZ6jMAeCmyDdxoryPJfaWoHSm5sLa3dMud3HImfssGP97BU1vOodJAnrzmw9BSida9dOX4HTf5KWFk_ZgQJXsT-jFc1R6zAwgW_74QUHiwibUKEimb_-fH9hOEF0vRuux4FojMlOwEPkK2AW1WPMmxtDINef_uV&quot;\r\n                    },\r\n                    {\r\n                      &quot;title&quot;: &quot;mousenotifier.com&quot;,\r\n                      &quot;url&quot;: &quot;https:\/\/vertexaisearch.cloud.google.com\/grounding-api-redirect\/AUZIYQFtKiKaW9PcrSRpKB-4UFqDZg47FfSpIKUHjMf-UhU2pWzG0gYW8lI7ePhBXI9EfYqT2nJze9YO2qLcMefbpFZoD19eCT44dZaKKg1IofNtAMS8vtTvhdpwWjEVc_NX1bfC_guo76Uduq2PB1HmJ-Yntvk=&quot;\r\n                    },\r\n                    
{\r\n                      &quot;title&quot;: &quot;parkworld-online.com&quot;,\r\n                      &quot;url&quot;: &quot;https:\/\/vertexaisearch.cloud.google.com\/grounding-api-redirect\/AUZIYQHB2rvlEFKahIlbzDy8gpMl9EKHySbc_sqUoiAJ_zqKlVXj4L7dVIdSArJxSK2WN1jzvpKcRYdSFXO94N34TY6rFJ5yIl3AsOVnSkVnfm7uwnnZxdOo-Da0sxH_jCu9v98pcV7xurzBPSXE1cH_e-InjDff50qFDJsiN-xLpgbEWbDQdbshwp6XC3L5FYWSvBYX3ZSWmOTFTzs=&quot;\r\n                    }\r\n                  ]\r\n                }\r\n              ]\r\n            }\r\n          ],\r\n          &quot;fan_out_queries&quot;: [\r\n            &quot;amusement park business France current relevance&quot;,\r\n            &quot;amusement park industry France market size 2023 2024&quot;,\r\n            &quot;number of amusement parks in France 2023&quot;,\r\n            &quot;amusement park attendance France 2023&quot;,\r\n            &quot;trends in French amusement park industry&quot;\r\n          ]\r\n        }\r\n      ]\r\n    }\r\n  ]\r\n}<\/code><\/pre><\/div><\/div><\/div><\/div>","protected":false},"excerpt":{"rendered":"<p>[vc_row][vc_column][vc_column_text] Live Gemini LLM Responses \u200c\u200c Live Gemini LLM Responses endpoint allows you to retrieve structured responses from a specific Gemini AI model, based on the input parameters. [\/vc_column_text] POST https:\/\/api.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/live Pricing The cost of the task can be calculated on the Pricing page. 
[vc_column_text]All POST data should be sent in the JSON format (UTF-8 [&hellip;]<\/p>\n","protected":false},"author":14,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"template.php","meta":{"apibase_doc_request_yaml":"parameters:\n  - name: user_prompt\n    type: string\n    description: |\n      <em>prompt for the AI model<\/em><br><strong>required field<\/strong><br>the question or task you want to send to the AI model;<br>you can specify <strong>up to 500 characters<\/strong> in the <code>user_prompt<\/code> field\n  - name: model_name\n    type: string\n    description: |\n      <em>name of the AI model<\/em><br><strong>required field<\/strong><br><code>model_name<\/code> consists of the actual model name and version name;<br>if the basic model name is specified, its latest version will be set by default;<br>for example, if <code>gemini-1.5-pro<\/code> is specified, the <code>gemini-1.5-pro-002<\/code> will be set as <code>model_name<\/code> automatically;<br>you can receive the list of available LLM models by making a separate request to the <code>https:\/\/api.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/models<\/code> endpoint\n  - name: max_output_tokens\n    type: integer\n    description: |\n      <em>maximum number of tokens in the AI response<\/em><br>optional field<br>minimum value: <code>1<\/code><br>maximum value: <code>4096<\/code>;<br>default value: <code>2048<\/code>;<br><strong>Note:<\/strong> if <code>web_search<\/code> is set to <code>true<\/code> or a reasoning model is specified in the request, the output token count may exceed the specified <code>max_output_tokens<\/code> limit<br><strong>Note #2:<\/strong> if <code>use_reasoning<\/code> is set to <code>true<\/code>, the minimum value for <code>max_output_tokens<\/code> is <code>1024<\/code>\n  - name: temperature\n    type: float\n    description: |\n      <em>randomness of the AI response<\/em><br>optional field<br>higher 
values make output more diverse <br>lower values make output more focused<br>minimum value: <code>0<\/code><br>maximum value: <code>2<\/code><br>default value: <code>1.3<\/code>\n  - name: top_p\n    type: float\n    description: |\n      <em>diversity of the AI response<\/em><br>optional field <br>controls diversity of the response by limiting token selection<br>minimum value: <code>0<\/code><br>maximum value: <code>1<\/code> <br>default value: <code>0.9<\/code>\n  - name: web_search\n    type: boolean\n    description: |\n      <em>enable web search for current information<\/em><br>optional field<br>when enabled, the AI model can access and cite current web information;<br><strong>Note:<\/strong> refer to the <a href=\"https:\/\/docs.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/models\/\">Models endpoint<\/a> for a list of models that support <code>web_search<\/code>; <br>default value: <code>false<\/code>;<br>The cost of using this parameter can be calculated on the <a title=\"Gemini API Pricing\" href=\"https:\/\/ai.google.dev\/gemini-api\/docs\/pricing\" target=\"_blank\" rel=\"noopener noreferrer\">Pricing<\/a> page\n  - name: system_message\n    type: string\n    description: |\n      <em>instructions for the AI behavior<\/em><br>optional field<br>defines the AI's role, tone, or specific behavior <br>you can specify <strong>up to 500 characters<\/strong> in the <code>system_message<\/code> field\n  - name: message_chain\n    type: array\n    description: |\n      <em>conversation history<\/em><br>optional field<br>array of message objects representing previous conversation turns;<br>each object must contain <code>role<\/code> and <code>message<\/code> parameters:<br><code>role<\/code> string with either <code>user<\/code> or <code>ai<\/code> role;<br><code>message<\/code> string with message content (max 500 characters);<br>you can specify <strong>a maximum of 10 message objects<\/strong> in the array;<br>example:<br><code>\"message_chain\": 
[{\"role\":\"user\",\"message\":\"Hello, what\u2019s up?\"},{\"role\":\"ai\",\"message\":\"Hello! I\u2019m doing well, thank you. How can I assist you today?\"}]<\/code>\n  - name: use_reasoning\n    type: boolean\n    description: |\n      <em>enable reasoning for the AI model<\/em><br>optional field<br>when enabled, the model will perform reasoning before generating a response<br>refer to the <a href=\"https:\/\/docs.dataforseo.com\/v3\/ai_optimization\/gemini\/llm_responses\/models\/\" target=\"_blank\">Models endpoint<\/a> for a list of models that support <code>reasoning<\/code><br>default value: <code>false<\/code><br><strong>Note:<\/strong> if set to <code>true<\/code>, the minimum value for <code>max_output_tokens<\/code> is <code>1024<\/code><br><strong>Note #2:<\/strong> for Gemini Pro models, <code>use_reasoning<\/code> will automatically be set to <code>true<\/code>\n  - name: tag\n    type: string\n    description: |\n      <em>user-defined task identifier<\/em><br>optional field<br><em>the character limit is 255<\/em><br>you can use this parameter to identify the task and match it with the result<br>you will find the specified <code>tag<\/code> value in the <code>data<\/code> object of the response","apibase_doc_request_additional_yaml":"","apibase_doc_response_yaml":"parameters:\n  - name: version\n    type: string\n    description: |\n      <em>the current version of the API<\/em>\n  - name: status_code\n    type: integer\n    description: |\n      <i>general status code<\/i><br>you can find the full list of the response codes <a href=\"\/v3\/appendix\/errors\">here<\/a><br><strong>Note:<\/strong> we strongly recommend designing a system for handling exceptional and error conditions\n  - name: status_message\n    type: string\n    description: |\n      <em>general informational message<\/em><br>you can find the full list of general informational messages <a href=\"\/v3\/appendix\/errors\">here<\/a>\n  - name: time\n    type: 
string\n    description: |\n      <em>execution time, seconds<\/em>\n  - name: cost\n    type: float\n    description: |\n      <em>total tasks cost, USD<\/em>\n  - name: tasks_count\n    type: integer\n    description: |\n      <em>the number of tasks in the <strong><code>tasks<\/code><\/strong> array<\/em>\n  - name: tasks_error\n    type: integer\n    description: |\n      <em>the number of tasks in the <strong><code>tasks<\/code><\/strong> array returned with an error<\/em>\n  - name: tasks\n    type: array\n    description: |\n      <em>array of tasks<\/em>\n    items:\n      children:\n        - name: id\n          type: string\n          description: |\n            <em>task identifier<\/em><br><strong>unique task identifier in our system in the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Universally_unique_identifier\">UUID<\/a> format<\/strong>\n        - name: status_code\n          type: integer\n          description: |\n            <em>status code of the task<\/em><br>generated by DataForSEO; can be within the following range: 10000-60000<br>you can find the full list of the response codes <a href=\"\/v3\/appendix\/errors\">here<\/a>\n        - name: status_message\n          type: string\n          description: |\n            <em>informational message of the task<\/em><br>you can find the full list of general informational messages <a href=\"\/v3\/appendix\/errors\">here<\/a>\n        - name: time\n          type: string\n          description: |\n            <em>execution time, seconds<\/em>\n        - name: cost\n          type: float\n          description: |\n            <em>cost of the task, USD<\/em><br>includes the base task price plus the <code>money_spent<\/code> value\n        - name: result_count\n          type: integer\n          description: |\n            <em>number of elements in the <code>result<\/code> array<\/em>\n        - name: path\n          type: array\n          description: |\n            <em>URL path<\/em>\n        - name: 
data\n          type: object\n          description: |\n            <em>contains the same parameters that you specified in the POST request<\/em>\n        - name: result\n          type: array\n          description: |\n            <em>array of results<\/em>\n          items:\n            children:\n              - name: model_name\n                type: string\n                description: |\n                  <em>name of the AI model used<\/em>\n              - name: input_tokens\n                type: integer\n                description: |\n                  <em>number of tokens in the input<\/em><br>total count of tokens processed\n              - name: output_tokens\n                type: integer\n                description: |\n                  <em>number of tokens in the output<\/em><br>total count of tokens generated in the AI response\n              - name: reasoning_tokens\n                type: integer\n                description: |\n                  <em>number of reasoning tokens<\/em><br>total count of tokens used to generate reasoning content\n              - name: web_search\n                type: boolean\n                description: |\n                  <em>indicates if web search was used<\/em>\n              - name: money_spent\n                type: float\n                description: |\n                  <em>cost of AI tokens, USD<\/em><br>the price charged by the third-party AI model provider according to its <a href=\"https:\/\/ai.google.dev\/gemini-api\/docs\/pricing\" target=\"_blank\">Pricing<\/a>\n              - name: datetime\n                type: string\n                description: |\n                  <em>date and time when the result was received<\/em><br>in the UTC format: \u201cyyyy-mm-dd hh:mm:ss +00:00\u201d<br>example:<br><code class=\"long-string\">2019-11-15 12:57:46 +00:00<\/code>\n              - name: items\n                type: array\n                description: |\n                  <em>array of response 
items<\/em><br>contains structured AI response data\n                children:\n                  - name: reasoning\n                    type: object\n                    description: |\n                      <em>element in the response<\/em>\n                    addIt: type:reasoning\n                    children:\n                      - name: type\n                        type: string\n                        description: |\n                          <em>type of the element = <strong>'reasoning'<\/strong><\/em><br><strong>Note:<\/strong> this element is supported only in reasoning models and is not guaranteed to be returned\n                      - name: sections\n                        type: array\n                        description: |\n                          <em>reasoning chain sections<\/em><br>array of objects containing the reasoning chain sections generated by the LLM\n                        items:\n                          children:\n                            - name: type\n                              type: string\n                              description: \"<em>type of the element = <strong>'summary_text'<\/strong><\/em>\"\n                            - name: text\n                              type: string\n                              description: |\n                                <em>text of the reasoning chain section<\/em><br>text of the reasoning chain section summarizing the model's thought process\n                  - name: message\n                    type: object\n                    description: |\n                      <em>element in the response<\/em>\n                    addIt: type:message\n                    children:\n                      - name: type\n                        type: string\n                        description: |\n                          <em>type of the element = <strong>'message'<\/strong><\/em>\n                      - name: sections\n                        type: array\n                        description: 
|\n                          <em>array of content sections<\/em><br>contains different parts of the AI response\n                        items:\n                          children:\n                            - name: type\n                              type: string\n                              description: \"<em>type of the element = <strong>'text'<\/strong><\/em>\"\n                            - name: text\n                              type: string\n                              description: |\n                                <em>AI-generated text content<\/em>\n                            - name: annotations\n                              type: array\n                              description: |\n                                <em>array of references used to generate the response<\/em><br>equals <code>null<\/code> if the <code>web_search<\/code> parameter is not set to <code>true<\/code><br><strong>Note:<\/strong> <code>annotations<\/code> may return empty even when <code>web_search<\/code> is <code>true<\/code>, as the AI will attempt to retrieve web information but may not find relevant results\n                              items:\n                                children:\n                                  - name: title\n                                    type: string\n                                    description: |\n                                      <em>the domain name or title of the quoted source<\/em>\n                                  - name: url\n                                    type: string\n                                    description: |\n                                      <em>redirect URL to the quoted source<\/em><br>contains a Vertex AI redirect that leads to the original source\n              - name: fan_out_queries\n                type: array\n                description: |\n                  <em>array of fan-out queries<\/em><br>contains related search queries derived from the main query to provide a more comprehensive 
response","footnotes":""},"class_list":["post-21737","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/pages\/21737","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/users\/14"}],"replies":[{"embeddable":true,"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/comments?post=21737"}],"version-history":[{"count":41,"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/pages\/21737\/revisions"}],"predecessor-version":[{"id":24260,"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/pages\/21737\/revisions\/24260"}],"wp:attachment":[{"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/media?parent=21737"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
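The page's own example is in C#; for readers working in Python, the same Live task can be sketched as below. This is a minimal, unofficial sketch: the helper names `build_task` and `post_live_task` are illustrative, the login/password values are placeholders for your DataForSEO Basic Auth credentials, and only the endpoint URL and field names (`user_prompt`, `model_name`, `message_chain`, etc.) come from the documentation above.

```python
# Unofficial sketch of setting a Live Gemini LLM Responses task.
# Endpoint and field names are taken from the documentation above;
# replace the placeholder credentials with your own DataForSEO login.
import base64
import json
from urllib import request

API_URL = "https://api.dataforseo.com/v3/ai_optimization/gemini/llm_responses/live"


def build_task(user_prompt, model_name, **optional):
    """Build a single task dict, enforcing the documented limits."""
    if len(user_prompt) > 500:
        raise ValueError("user_prompt is limited to 500 characters")
    for turn in optional.get("message_chain", []):
        if turn.get("role") not in ("user", "ai"):
            raise ValueError("each message_chain role must be 'user' or 'ai'")
        if len(turn.get("message", "")) > 500:
            raise ValueError("each message is limited to 500 characters")
    if len(optional.get("message_chain", [])) > 10:
        raise ValueError("message_chain allows at most 10 message objects")
    task = {"user_prompt": user_prompt, "model_name": model_name}
    task.update(optional)
    return task


def post_live_task(task, login, password):
    """Send one task (each Live call may contain only one task)."""
    payload = json.dumps([task]).encode("utf-8")  # tasks go in a generic POST array
    auth = base64.b64encode(f"{login}:{password}".encode()).decode()
    req = request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {auth}",
        },
    )
    # Live task execution may take up to 120 seconds
    with request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())


task = build_task(
    "provide information on how relevant the amusement park business is in France now",
    "gemini-2.5-flash",
    web_search=True,
    max_output_tokens=200,
    temperature=0.3,
)
print(task["model_name"])  # prints "gemini-2.5-flash"
# result = post_live_task(task, "your_login", "your_password")
```

When run, check `result["tasks"][0]["status_code"]` against 20000 before reading `result["tasks"][0]["result"]`, mirroring the error handling in the C# example.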