{"id":21755,"date":"2025-07-04T14:58:29","date_gmt":"2025-07-04T14:58:29","guid":{"rendered":"https:\/\/docs.dataforseo.com\/v3\/?page_id=21755"},"modified":"2026-04-06T17:19:11","modified_gmt":"2026-04-06T17:19:11","slug":"ai_optimization-chat_gpt-llm_responses-live","status":"publish","type":"page","link":"https:\/\/docs.dataforseo.com\/v3\/ai_optimization-chat_gpt-llm_responses-live\/","title":{"rendered":"ai_optimization\/chat_gpt\/llm_responses\/live"},"content":{"rendered":"<div class=\"wpb-content-wrapper\"><p>[vc_row][vc_column][vc_column_text]<\/p>\n<h2>Live ChatGPT LLM Responses<\/h2>\n<p>\u200c\u200c<br \/>\nThe Live ChatGPT LLM Responses endpoint allows you to retrieve structured responses from a specific ChatGPT AI model based on the input parameters.<\/p>\n<p>[\/vc_column_text]    <div class=\"endpoint\">\n        <img decoding=\"async\" class=\"endpoint__icon\" src=\"https:\/\/docs.dataforseo.com\/v3\/wp-content\/themes\/dataforseo\/assets\/img\/icons\/checked-circle.svg\" alt=\"checked\">\n\n                    POST            <button class=\"btn-reset button-link copy-button\" data-href=\"https:\/\/api.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/live\">\n                https:\/\/api.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/live                <svg width=\"16\" height=\"16\" viewBox=\"0 0 16 16\">\n                    <use href=\"https:\/\/docs.dataforseo.com\/v3\/wp-content\/themes\/dataforseo\/assets\/img\/icons\/sprite.svg#layers\"><\/use>\n                <\/svg>\n            <\/button>\n            <\/div>\n    \t<article class=\"info-card info-card--yellow\">\n\t\t<header class=\"info-card__header\">\n\t\t\t<div class=\"info-card__icon\">\n\t\t\t\t<svg width=\"16\" height=\"16\" viewBox=\"0 0 16 16\">\n\t\t\t\t\t<use href=\"https:\/\/docs.dataforseo.com\/v3\/wp-content\/themes\/dataforseo\/assets\/img\/icons\/sprite.svg#label\"><\/use>\n\t\t\t\t<\/svg>\n\t\t\t<\/div>\n\t\t\t<div 
class=\"info-card__title\">Pricing<\/div>\n\t\t<\/header>\n\t\t<div class=\"info-card__content\">\n\t\t\t<p>The cost of the task can be calculated on the <a href=\"https:\/\/dataforseo.com\/pricing\/ai-optimization\/llm-responses\" target=\"_blank\">Pricing page<\/a>. <\/p>\n\t\t<\/div>\n\t<\/article>\n\t[vc_column_text]All POST data should be sent in the <a href=\"https:\/\/en.wikipedia.org\/wiki\/JSON\">JSON<\/a> format (UTF-8 encoding). The task setting is done using the POST method. When setting a task, you should send all task parameters in the task array of the generic POST array. You can send up to 2000 API calls per minute; each Live ChatGPT LLM Responses call can contain only one task.<\/p>\n<p><strong>The number of concurrent Live tasks is currently limited to 30 per account for each platform in the LLM Responses API.<\/strong><\/p>\n<p><strong>Execution time for tasks set with the Live ChatGPT LLM Responses endpoint is currently up to 120 seconds.<\/strong><\/p>\n<p>Below you will find a detailed description of the fields you can use for setting a task.<\/p>\n<p><strong>Description of the fields for setting a task:<\/strong><br \/>\n<div class=\"dfs-doc-container dfs-doc-request\"><table><thead><tr><th>Field name<\/th><th>Type<\/th><th>Description<\/th><\/tr><\/thead><tbody><tr data-doc-id=\"user_prompt\"><td><code>user_prompt<\/code><\/td><td>string<\/td><td><p><em>prompt for the AI model<\/em><br><strong>required field<\/strong><br>the question or task you want to send to the AI model;<br>you can specify <strong>up to 500 characters<\/strong> in the <code>user_prompt<\/code> field<\/p><\/td><\/tr><tr data-doc-id=\"model_name\"><td><code>model_name<\/code><\/td><td>string<\/td><td><p><em>name of the AI model<\/em><br><strong>required field<\/strong><br><code>model_name<\/code> consists of the actual model name and version name;<br>if the base model name is specified, its latest version will be set by default;<br>for example, if <code>gpt-4.1<\/code> 
is specified, the <code>gpt-4.1-2025-04-14<\/code> will be set as <code>model_name<\/code> automatically;<br>you can receive the list of available LLM models by making a separate request to the <code><a href=\"https:\/\/api.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/models\">https:\/\/api.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/models<\/a><\/code><\/p><\/td><\/tr><tr data-doc-id=\"max_output_tokens\"><td><code>max_output_tokens<\/code><\/td><td>integer<\/td><td><p><em>maximum number of tokens in the AI response<\/em><br>optional field<br>minimum value for reasoning models (i.e., <code>reasoning<\/code> is <code>true<\/code> in the <a href=\"\/v3\/ai_optimization\/chat_gpt\/llm_responses\/models\/\" target=\"_blank\">Models endpoint<\/a>): <code>1024<\/code>;<br>minimum value for non-reasoning models: <code>16<\/code>;<br>maximum value: <code>4096<\/code>;<br>default value: <code>2048<\/code><br><strong>Note:<\/strong> if <code>web_search<\/code> is set to <code>true<\/code> or a reasoning model is specified in the request, the output token count may exceed the specified <code>max_output_tokens<\/code> limit<\/p><\/td><\/tr><tr data-doc-id=\"temperature\"><td><code>temperature<\/code><\/td><td>float<\/td><td><p><em>randomness of the AI response<\/em><br>optional field<br>higher values make output more diverse; <br>lower values make output more focused;<br>minimum value: <code>0<\/code><br>maximum value: <code>2<\/code><br>default value: <code>0.94<\/code><br><strong>Note:<\/strong> not supported in reasoning models<\/p><\/td><\/tr><tr data-doc-id=\"top_p\"><td><code>top_p<\/code><\/td><td>float<\/td><td><p><em>diversity of the AI response<\/em><br>optional field<br>nucleus sampling threshold: the model considers only the tokens comprising the top <code>top_p<\/code> probability mass;<br>minimum value: <code>0<\/code><br>maximum value: <code>1<\/code><\/p><\/td><\/tr><tr data-doc-id=\"web_search\"><td><code>web_search<\/code><\/td><td>boolean<\/td><td><p><em>enable web search<\/em><br>optional field<br>when enabled, the AI model can access and cite current web information;<br>default value: <code>false<\/code>;<br><strong>Note:<\/strong> refer to the <a 
href=\"https:\/\/docs.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/models\/\">Models endpoint<\/a> for a list of models that support <code>web_search<\/code>;<\/p><\/td><\/tr><tr data-doc-id=\"force_web_search\"><td><code>force_web_search<\/code><\/td><td>boolean<\/td><td><p><em>force AI agent to use web search<\/em><br>optional field<br>to enable this parameter, <code>web_search<\/code> must also be enabled;<br>when enabled, the AI model is forced to access and cite current web information;<br>default value: <code>false<\/code>;<br><strong>Note:<\/strong> even if the parameter is set to <code>true<\/code>, there is no guarantee web sources will be cited in the response <br><strong>Note #2:<\/strong> not supported in reasoning models<\/p><\/td><\/tr><tr data-doc-id=\"web_search_country_iso_code\"><td><code>web_search_country_iso_code<\/code><\/td><td>string<\/td><td><p><em>ISO country code of the location<\/em><br>optional field<br>required if <code>web_search_city<\/code> is specified;<br>to enable this parameter, <code>web_search<\/code> must also be enabled;<br>when enabled, the AI model will search the web from the country you specify;<br><strong>Note:<\/strong> not supported in <code>o3-mini<\/code>, <code>o1-pro<\/code>, <code>o1<\/code> models<\/p><\/td><\/tr><tr data-doc-id=\"web_search_city\"><td><code>web_search_city<\/code><\/td><td>string<\/td><td><p><em>city name of the location<\/em><br>optional field<br><strong>Note:<\/strong> specify <code>web_search_country_iso_code<\/code> to use this parameter<br><strong>Note #2:<\/strong> not supported in <code>o3-mini<\/code>, <code>o1-pro<\/code>, <code>o1<\/code> models<\/p><\/td><\/tr><tr data-doc-id=\"system_message\"><td><code>system_message<\/code><\/td><td>string<\/td><td><p><em>instructions for the AI behaviour<\/em><br>optional field<br>defines the AI's role, tone, or specific behavior <br>you can specify <strong>up to 500 characters<\/strong> in the <code>system_message<\/code> 
field<\/p><\/td><\/tr><tr data-doc-id=\"message_chain\"><td><code>message_chain<\/code><\/td><td>array<\/td><td><p><em>conversation history<\/em><br>optional field<br>array of message objects representing previous conversation turns;<br>each object must contain <code>role<\/code> and <code>message<\/code> parameters:<br><code>role<\/code> string with either <code>user<\/code> or <code>ai<\/code> role;<br><code>message<\/code> string with message content (max 500 characters);<br>you can specify <strong>a maximum of 10 message objects<\/strong> in the array;<br>example:<br><code>\"message_chain\": [{\"role\":\"user\",\"message\":\"Hello, what\u2019s up?\"},{\"role\":\"ai\",\"message\":\"Hello! I\u2019m doing well, thank you. How can I assist you today?\"}]<\/code><\/p><\/td><\/tr><tr data-doc-id=\"tag\"><td><code>tag<\/code><\/td><td>string<\/td><td><p><em>user-defined task identifier<\/em><br>optional field<br><em>the character limit is 255<\/em><br>you can use this parameter to identify the task and match it with the result<br>you will find the specified <code>tag<\/code> value in the <code>data<\/code> object of the response<\/p><\/td><\/tr><\/tbody><\/table><\/div><br \/>\n\u200c<br \/>\n\u200c\u200cAs a response of the API server, you will receive <a href=\"https:\/\/en.wikipedia.org\/wiki\/JSON\">JSON<\/a>-encoded data containing a <code>tasks<\/code> array with the information specific to the set tasks.<br \/>\n\u200c<br \/>\n<strong>Description of the fields in the results array:<\/strong><br \/>\n<div class=\"dfs-doc-container dfs-doc-response\"><div class=\"api-block-main\"><div class=\"api-section\"><table><thead><tr><th>Field name<\/th><th>Type<\/th><th>Description<\/th><\/tr><\/thead><tbody><tr data-doc-id=\"version\"><td><code>version<\/code><\/td><td>string<\/td><td><p><em>the current version of the API<\/em><\/p><\/td><\/tr><tr data-doc-id=\"status_code\"><td><code>status_code<\/code><\/td><td>integer<\/td><td><p><em>general status code<\/em><br>you 
can find the full list of the response codes <a href=\"\/v3\/appendix\/errors\">here<\/a><br><strong>Note:<\/strong> we strongly recommend designing a necessary system for handling related exceptional or error conditions<\/p><\/td><\/tr><tr data-doc-id=\"status_message\"><td><code>status_message<\/code><\/td><td>string<\/td><td><p><em>general informational message<\/em><br>you can find the full list of general informational messages <a href=\"\/v3\/appendix\/errors\">here<\/a><\/p><\/td><\/tr><tr data-doc-id=\"time\"><td><code>time<\/code><\/td><td>string<\/td><td><p><em>execution time, seconds<\/em><\/p><\/td><\/tr><tr data-doc-id=\"cost\"><td><code>cost<\/code><\/td><td>float<\/td><td><p><em>total tasks cost, USD<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks_count\"><td><code>tasks_count<\/code><\/td><td>integer<\/td><td><p><em>the number of tasks in the <strong><code>tasks<\/code><\/strong> array<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks_error\"><td><code>tasks_error<\/code><\/td><td>integer<\/td><td><p><em>the number of tasks in the <strong><code>tasks<\/code><\/strong> array returned with an error<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks\"><td><strong><code>tasks<\/code><\/strong><\/td><td>array<\/td><td><p><em>array of tasks<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-id\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>id<\/code><\/td><td>string<\/td><td><p><em>task identifier<\/em><br><strong>unique task identifier in our system in the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Universally_unique_identifier\">UUID<\/a> format<\/strong><\/p><\/td><\/tr><tr data-doc-id=\"tasks-status_code\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>status_code<\/code><\/td><td>integer<\/td><td><p><em>status code of the task<\/em><br>generated by DataForSEO; can be within the following range: 10000-60000<br>you can find the full list of the response codes <a href=\"\/v3\/appendix\/errors\">here<\/a><\/p><\/td><\/tr><tr 
data-doc-id=\"tasks-status_message\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>status_message<\/code><\/td><td>string<\/td><td><p><em>informational message of the task<\/em><br>you can find the full list of general informational messages <a href=\"\/v3\/appendix\/errors\">here<\/a><\/p><\/td><\/tr><tr data-doc-id=\"tasks-time\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>time<\/code><\/td><td>string<\/td><td><p><em>execution time, seconds<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-cost\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>cost<\/code><\/td><td>float<\/td><td><p><em>cost of the task, USD<\/em><br>includes the base task price plus the <code>money_spent<\/code> value<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result_count\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>result_count<\/code><\/td><td>integer<\/td><td><p><em>number of elements in the <code>result<\/code> array<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-path\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>path<\/code><\/td><td>array<\/td><td><p><em>URL path<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-data\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<code>data<\/code><\/td><td>object<\/td><td><p><em>contains the same parameters that you specified in the POST request<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result\"><td>&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>result<\/code><\/strong><\/td><td>array<\/td><td><p><em>array of results<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-model_name\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>model_name<\/code><\/td><td>string<\/td><td><p><em>name of the AI model used<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-input_tokens\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>input_tokens<\/code><\/td><td>integer<\/td><td><p><em>number of tokens in the input<\/em><br>total count of tokens processed<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-output_tokens\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>output_tokens<\/code><\/td><td>integer<\/td><td><p><em>number 
of tokens in the output<\/em><br>total count of tokens generated in the AI response<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-reasoning_tokens\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>reasoning_tokens<\/code><\/td><td>integer<\/td><td><p><em>number of reasoning tokens<\/em><br>total count of tokens used to generate reasoning content<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-web_search\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>web_search<\/code><\/td><td>boolean<\/td><td><p><em>indicates if web search was used<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-money_spent\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>money_spent<\/code><\/td><td>float<\/td><td><p><em>cost of AI tokens, USD<\/em><br>the price charged by the third-party AI model provider according to its <a href=\"https:\/\/platform.openai.com\/docs\/pricing\" target=\"_blank\">Pricing<\/a><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-datetime\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>datetime<\/code><\/td><td>string<\/td><td><p><em>date and time when the result was received<\/em><br>in the UTC format: \u201cyyyy-mm-dd hh:mm:ss +00:00\u201d<br>example:<br><code class=\"long-string\">2019-11-15 12:57:46 +00:00<\/code><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>items<\/code><\/strong><\/td><td>array<\/td><td><p><em>array of response items<\/em><br>contains structured AI response data<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:reasoning\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>reasoning<\/code><\/strong><\/td><td>object<\/td><td><p><em>element in the response<\/em><\/p><\/td><\/tr><tr 
data-doc-id=\"tasks-result-items-type:reasoning-type\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>type<\/code><\/td><td>string<\/td><td><p><em>type of the element = <strong>'reasoning'<\/strong><\/em><br><strong>Note:<\/strong> this element is supported only in reasoning models and is not guaranteed to be returned<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:reasoning-sections\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>sections<\/code><\/strong><\/td><td>array<\/td><td><p><em>reasoning chain sections<\/em><br>array of objects containing the reasoning chain sections generated by the LLM<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:reasoning-sections-type\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>type<\/code><\/td><td>string<\/td><td><p><em>type of the element = <strong>'summary_text'<\/strong><\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:reasoning-sections-text\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>text<\/code><\/td><td>string<\/td><td><p><em>text of the reasoning chain section<\/em><br>summarizes the model's thought process<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:message\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>message<\/code><\/strong><\/td><td>object<\/td><td><p><em>element in the response<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:message-type\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>type<\/code><\/td><td>string<\/td><td><p><em>type of the element = <strong>'message'<\/strong><\/em><\/p><\/td><\/tr><tr 
data-doc-id=\"tasks-result-items-type:message-sections\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>sections<\/code><\/strong><\/td><td>array<\/td><td><p><em>array of content sections<\/em><br>contains different parts of the AI response<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:message-sections-type\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>type<\/code><\/td><td>string<\/td><td><p><em>type of the element = <strong>'text'<\/strong><\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:message-sections-text\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>text<\/code><\/td><td>string<\/td><td><p><em>AI-generated text content<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:message-sections-annotations\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><code>annotations<\/code><\/strong><\/td><td>array<\/td><td><p><em>array of references used to generate the response<\/em><br>equals <code>null<\/code> if the <code>web_search<\/code> parameter is not set to <code>true<\/code><br><strong>Note:<\/strong> <code>annotations<\/code> may return empty even when <code>web_search<\/code> is <code>true<\/code>, as the AI will attempt to retrieve web information but may not find relevant results<\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-items-type:message-sections-annotations-title\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>title<\/code><\/td><td>string<\/td><td><p><em>the domain name or title of the quoted source<\/em><\/p><\/td><\/tr><tr 
data-doc-id=\"tasks-result-items-type:message-sections-annotations-url\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>url<\/code><\/td><td>string<\/td><td><p><em>URL of the quoted source<\/em><\/p><\/td><\/tr><tr data-doc-id=\"tasks-result-fan_out_queries\"><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<code>fan_out_queries<\/code><\/td><td>array<\/td><td><p><em>array of fan-out queries<\/em><br>contains related search queries derived from the main query to provide a more comprehensive response<\/p><\/td><\/tr><\/tbody><\/table><\/div><\/div><\/div><br \/>\n\u200c\u200c[\/vc_column_text][\/vc_column][\/vc_row]<\/p>\n<blockquote><p>Instead of \u2018login\u2019 and \u2018password\u2019 use your credentials from https:\/\/app.dataforseo.com\/api-access<\/p><\/blockquote><div id=\"curl\" class=\"tab-content example__content\"><div class=\"example__code\"><pre><code class=\"language-bash hljs\"># Instead of &#039;login&#039; and &#039;password&#039; use your credentials from https:\/\/app.dataforseo.com\/api-access \r\nlogin=&quot;login&quot; \r\npassword=&quot;password&quot; \r\ncred=&quot;$(printf ${login}:${password} | base64)&quot; \r\ncurl --location --request POST &quot;https:\/\/api.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/live&quot; \\\r\n--header &quot;Authorization: Basic ${cred}&quot; \\\r\n--header &quot;Content-Type: application\/json&quot; \\\r\n--data-raw &#039;[\r\n  {\r\n    &quot;system_message&quot;: &quot;communicate as if we are in a business meeting&quot;,\r\n    &quot;message_chain&quot;: [\r\n      {\r\n        &quot;role&quot;: &quot;user&quot;,\r\n        &quot;message&quot;: &quot;Hello, what\u2019s up?&quot;\r\n      },\r\n      {\r\n        &quot;role&quot;: &quot;ai&quot;,\r\n        &quot;message&quot;: &quot;Hello! I\u2019m doing well, thank you. How can I assist you today? 
Are there any specific topics or projects you\u2019d like to discuss in our meeting?&quot;\r\n      }\r\n    ],\r\n    &quot;max_output_tokens&quot;: 200,\r\n    &quot;temperature&quot;: 0.3,\r\n    &quot;top_p&quot;: 0.5,\r\n    &quot;model_name&quot;: &quot;gpt-4.1-mini&quot;,\r\n    &quot;web_search&quot;: true,\r\n    &quot;web_search_country_iso_code&quot;: &quot;FR&quot;,\r\n    &quot;web_search_city&quot;: &quot;Paris&quot;,\r\n    &quot;user_prompt&quot;: &quot;provide information on how relevant the amusement park business is in France now&quot;\r\n  }\r\n]&#039;<\/code><\/pre><\/div><\/div><div id=\"php\" class=\"tab-content example__content\"><div class=\"example__code\"><pre><code class=\"language-php hljs\">&lt;?php\r\n\/\/ You can download this file from here https:\/\/cdn.dataforseo.com\/v3\/examples\/php\/php_RestClient.zip\r\nrequire(&#039;RestClient.php&#039;);\r\n$api_url = &#039;https:\/\/api.dataforseo.com\/&#039;;\r\ntry {\r\n   \/\/ Instead of &#039;login&#039; and &#039;password&#039; use your credentials from https:\/\/app.dataforseo.com\/api-access\r\n   $client = new RestClient($api_url, null, &#039;login&#039;, &#039;password&#039;);\r\n} catch (RestClientException $e) {\r\n   echo &quot;\\n&quot;;\r\n   print &quot;HTTP code: {$e-&gt;getHttpCode()}\\n&quot;;\r\n   print &quot;Error code: {$e-&gt;getCode()}\\n&quot;;\r\n   print &quot;Message: {$e-&gt;getMessage()}\\n&quot;;\r\n   print  $e-&gt;getTraceAsString();\r\n   echo &quot;\\n&quot;;\r\n   exit();\r\n}\r\n$post_array = array();\r\n\/\/ You can set only one task at a time\r\n$post_array[] = array(\r\n        &quot;system_message&quot; =&gt; &quot;communicate as if we are in a business meeting&quot;,\r\n        &quot;message_chain&quot; =&gt; [\r\n            [\r\n                &quot;role&quot;    =&gt; &quot;user&quot;,\r\n                &quot;message&quot; =&gt; &quot;Hello, what\u2019s up?&quot;\r\n            ],\r\n            [\r\n                &quot;role&quot;    =&gt; 
&quot;ai&quot;,\r\n                &quot;message&quot; =&gt; &quot;Hello! I\u2019m doing well, thank you. How can I assist you today? Are there any specific topics or projects you\u2019d like to discuss in our meeting?&quot;\r\n            ]\r\n        ],\r\n        &quot;max_output_tokens&quot; =&gt; 200,\r\n        &quot;temperature&quot; =&gt; 0.3,\r\n        &quot;top_p&quot; =&gt; 0.5,\r\n        &quot;model_name&quot; =&gt; &quot;gpt-4.1-mini&quot;,\r\n        &quot;web_search&quot; =&gt; true,\r\n        &quot;web_search_country_iso_code&quot; =&gt; &quot;FR&quot;,\r\n        &quot;web_search_city&quot; =&gt; &quot;Paris&quot;,\r\n        &quot;user_prompt&quot; =&gt; &quot;provide information on how relevant the amusement park business is in France now&quot;\r\n);\r\nif (count($post_array) &gt; 0) {\r\ntry {\r\n    \/\/ POST \/v3\/ai_optimization\/chat_gpt\/llm_responses\/live\r\n    \/\/ the full list of possible parameters is available in documentation\r\n    $result = $client-&gt;post(&#039;\/v3\/ai_optimization\/chat_gpt\/llm_responses\/live&#039;, $post_array);\r\n    print_r($result);\r\n    \/\/ do something with post result\r\n} catch (RestClientException $e) {\r\n    echo &quot;\\n&quot;;\r\n    print &quot;HTTP code: {$e-&gt;getHttpCode()}\\n&quot;;\r\n    print &quot;Error code: {$e-&gt;getCode()}\\n&quot;;\r\n    print &quot;Message: {$e-&gt;getMessage()}\\n&quot;;\r\n    print  $e-&gt;getTraceAsString();\r\n    echo &quot;\\n&quot;;\r\n}\r\n$client = null;\r\n?&gt;<\/code><\/pre><\/div><\/div><div id=\"javascript\" class=\"tab-content example__content\"><div class=\"example__code\"><pre><code class=\"language-javascript hljs\">const axios = require(&#039;axios&#039;);\r\n\r\naxios({\r\n    method: &#039;post&#039;,\r\n    url: &#039;https:\/\/api.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/live&#039;,\r\n    auth: {\r\n  
      username: &#039;login&#039;,\r\n        password: &#039;password&#039;\r\n    },\r\n    data: [{\r\n    system_message: encodeURI(&quot;communicate as if we are in a business meeting&quot;),\r\n    message_chain: [\r\n      {\r\n        role: &quot;user&quot;,\r\n        message: &quot;Hello, what\u2019s up?&quot;\r\n      },\r\n      {\r\n        role: &quot;ai&quot;,\r\n        message: encodeURI(&quot;Hello! I\u2019m doing well, thank you. How can I assist you today? Are there any specific topics or projects you\u2019d like to discuss in our meeting?&quot;)\r\n      }\r\n    ],\r\n    max_output_tokens: 200,\r\n    temperature: 0.3,\r\n    top_p: 0.5,\r\n    model_name: &quot;gpt-4.1-mini&quot;,\r\n    web_search: true,\r\n    web_search_country_iso_code: &quot;FR&quot;,\r\n    web_search_city: &quot;Paris&quot;,\r\n    user_prompt: encodeURI(&quot;provide information on how relevant the amusement park business is in France now&quot;)\r\n    }],\r\n    headers: {\r\n        &#039;content-type&#039;: &#039;application\/json&#039;\r\n    }\r\n}).then(function (response) {\r\n    var result = response[&#039;data&#039;][&#039;tasks&#039;];\r\n    \/\/ Result data\r\n    console.log(result);\r\n}).catch(function (error) {\r\n    console.log(error);\r\n});<\/code><\/pre><\/div><\/div><div id=\"python\" class=\"tab-content example__content\"><div class=\"example__code\"><pre><code class=\"language-python hljs\">&quot;&quot;&quot;\r\nMethod: POST\r\nEndpoint: https:\/\/api.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/live\r\n@see https:\/\/docs.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/live\r\n&quot;&quot;&quot;\r\n\r\nimport sys\r\nimport os\r\nsys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), &#039;..\/..\/..\/..\/..\/&#039;)))\r\nfrom lib.client import RestClient\r\nfrom lib.config import username, password\r\nclient = RestClient(username, password)\r\n\r\npost_data = []\r\npost_data.append({\r\n        
&#039;system_message&#039;: &#039;communicate as if we are in a business meeting&#039;,\r\n        &#039;message_chain&#039;: [\r\n            {\r\n                &#039;role&#039;: &#039;user&#039;,\r\n                &#039;message&#039;: &#039;Hello, what\u2019s up?&#039;\r\n            },\r\n            {\r\n                &#039;role&#039;: &#039;ai&#039;,\r\n                &#039;message&#039;: &#039;Hello! I\u2019m doing well, thank you. How can I assist you today? Are there any specific topics or projects you\u2019d like to discuss in our meeting?&#039;\r\n            }\r\n        ],\r\n        &#039;max_output_tokens&#039;: 200,\r\n        &#039;web_search_country_iso_code&#039;: &#039;FR&#039;,\r\n        &#039;web_search_city&#039;: &#039;Paris&#039;,\r\n        &#039;model_name&#039;: &#039;gpt-4o&#039;,\r\n        &#039;web_search&#039;: True,\r\n        &#039;user_prompt&#039;: &#039;provide information on how relevant the amusement park business is in France now&#039;\r\n    })\r\ntry:\r\n    response = client.post(&#039;\/v3\/ai_optimization\/chat_gpt\/llm_responses\/live&#039;, post_data)\r\n    print(response)\r\n    # do something with post result\r\nexcept Exception as e:\r\n    print(f&#039;An error occurred: {e}&#039;)<\/code><\/pre><\/div><\/div><div id=\"csharp\" class=\"tab-content example__content\"><div class=\"example__code\"><pre><code class=\"language-csharp hljs\">using System;\r\nusing System.Linq;\r\nusing System.Net.Http;\r\nusing System.Net.Http.Headers;\r\nusing System.Text;\r\nusing System.Collections.Generic;\r\nusing System.Threading.Tasks;\r\nusing Newtonsoft.Json;\r\nnamespace DataForSeoSdk;\r\n\r\npublic class AiOptimization\r\n{\r\n\r\n    private static readonly HttpClient _httpClient;\r\n    \r\n    static AiOptimization()\r\n    {\r\n        _httpClient = new HttpClient\r\n        {\r\n            BaseAddress = new Uri(&quot;https:\/\/api.dataforseo.com\/&quot;)\r\n        };\r\n        
_httpClient.DefaultRequestHeaders.Authorization =\r\n            new AuthenticationHeaderValue(&quot;Basic&quot;, ApiConfig.Base64Auth);\r\n    }\r\n\r\n    \/\/\/ &lt;summary&gt;\r\n    \/\/\/ Method: POST\r\n    \/\/\/ Endpoint: https:\/\/api.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/live\r\n    \/\/\/ &lt;\/summary&gt;\r\n    \/\/\/ &lt;see href=&quot;https:\/\/docs.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/live&quot;\/&gt;\r\n    \r\n    public static async Task ChatGptLlmResponsesLive()\r\n    {\r\n        var postData = new List&lt;object&gt;();\r\n        \/\/ a simple way to set a task, the full list of possible parameters is available in documentation\r\n        postData.Add(new\r\n        {\r\n            system_message = &quot;communicate as if we are in a business meeting&quot;,\r\n            message_chain = new object[]\r\n            {\r\n                new\r\n                {\r\n                    role = &quot;user&quot;,\r\n                    message = &quot;Hello, what&#039;s up?&quot;\r\n                },\r\n                new\r\n                {\r\n                    role = &quot;ai&quot;,\r\n                    message = &quot;Hello! I\u2019m doing well, thank you. How can I assist you today? 
Are there any specific topics or projects you\u2019d like to discuss in our meeting?&quot;\r\n                }\r\n            },\r\n            max_output_tokens = 200,\r\n            web_search_country_iso_code = &quot;FR&quot;,\r\n            web_search_city = &quot;Paris&quot;,\r\n            model_name = &quot;gpt-4o&quot;,\r\n            web_search = true,\r\n            user_prompt = &quot;provide information on how relevant the amusement park business is in France now&quot;\r\n        });\r\n\r\n        var content = new StringContent(JsonConvert.SerializeObject(postData), Encoding.UTF8, &quot;application\/json&quot;);\r\n        using var response = await _httpClient.PostAsync(&quot;\/v3\/ai_optimization\/chat_gpt\/llm_responses\/live&quot;, content);\r\n        var result = JsonConvert.DeserializeObject&lt;dynamic&gt;(await response.Content.ReadAsStringAsync());\r\n        \/\/ you can find the full list of the response codes here https:\/\/docs.dataforseo.com\/v3\/appendix\/errors\r\n        if (result.status_code == 20000)\r\n        {\r\n            \/\/ do something with result\r\n            Console.WriteLine(result);\r\n        }\r\n        else\r\n            Console.WriteLine($&quot;error. 
Code: {result.status_code} Message: {result.status_message}&quot;);\r\n    }<\/code><\/pre><\/div><\/div><blockquote><p>The above command returns JSON structured like this:<\/p><\/blockquote><div class=\"example example--json\"><div class=\"example__content\"><div class=\"example__code example__code-json\"><pre><code class=\"language-json hljs\">{\r\n  &quot;version&quot;:&quot;0.1.20251208&quot;,\r\n  &quot;status_code&quot;:20000,\r\n  &quot;status_message&quot;:&quot;Ok.&quot;,\r\n  &quot;time&quot;:&quot;4.7042 sec.&quot;,\r\n  &quot;cost&quot;:0.0296424,\r\n  &quot;tasks_count&quot;:1,\r\n  &quot;tasks_error&quot;:0,\r\n  &quot;tasks&quot;:[\r\n    {\r\n      &quot;id&quot;:&quot;12111224-1535-0612-0000-0362a4b8fdb4&quot;,\r\n      &quot;status_code&quot;:20000,\r\n      &quot;status_message&quot;:&quot;Ok.&quot;,\r\n      &quot;time&quot;:&quot;4.6134 sec.&quot;,\r\n      &quot;cost&quot;:0.0296424,\r\n      &quot;result_count&quot;:1,\r\n      &quot;path&quot;:[\r\n        &quot;v3&quot;,\r\n        &quot;ai_optimization&quot;,\r\n        &quot;chat_gpt&quot;,\r\n        &quot;llm_responses&quot;,\r\n        &quot;live&quot;\r\n      ],\r\n      &quot;data&quot;:{\r\n        &quot;api&quot;:&quot;ai_optimization&quot;,\r\n        &quot;function&quot;:&quot;llm_responses&quot;,\r\n        &quot;se&quot;:&quot;chat_gpt&quot;,\r\n        &quot;system_message&quot;:&quot;communicate as if we are in a business meeting&quot;,\r\n        &quot;message_chain&quot;:[\r\n          {\r\n            &quot;role&quot;:&quot;user&quot;,\r\n            &quot;message&quot;:&quot;Hello, what&#039;s up?&quot;\r\n          },\r\n          {\r\n            &quot;role&quot;:&quot;ai&quot;,\r\n            &quot;message&quot;:&quot;Hello! I\u2019m doing well, thank you. How can I assist you today? 
Are there any specific topics or projects you\u2019d like to discuss in our meeting?&quot;\r\n          }\r\n        ],\r\n        &quot;temperature&quot;:0.3,\r\n        &quot;top_p&quot;:0.5,\r\n        &quot;web_search_country_iso_code&quot;:&quot;FR&quot;,\r\n        &quot;web_search_city&quot;:&quot;Paris&quot;,\r\n        &quot;model_name&quot;:&quot;gpt-4.1-mini&quot;,\r\n        &quot;web_search&quot;:true,\r\n        &quot;user_prompt&quot;:&quot;provide information on how relevant the amusement park business is in France now&quot;\r\n      },\r\n      &quot;result&quot;:[\r\n        {\r\n          &quot;model_name&quot;:&quot;gpt-4.1-mini-2025-04-14&quot;,\r\n          &quot;input_tokens&quot;:8174,\r\n          &quot;output_tokens&quot;:483,\r\n          &quot;reasoning_tokens&quot;:576,\r\n          &quot;web_search&quot;:true,\r\n          &quot;money_spent&quot;:0.0290424,\r\n          &quot;datetime&quot;:&quot;2025-12-11 12:24:51 +00:00&quot;,\r\n          &quot;items&quot;:[\r\n            {\r\n              &quot;type&quot;:&quot;reasoning&quot;,\r\n              &quot;sections&quot;:[\r\n                {\r\n                  &quot;type&quot;:&quot;summary_text&quot;,\r\n                  &quot;text&quot;:&quot;**Exploring a riddle**nnThis likely refers to a riddle or joke. The classic answer seems to be a cup that&#039;s closed at the top and bottom, making it essentially useless. Or, is it a trophy cup or even a cupcake? I need to think through this riddle: I have a cup with no bottom and a closed top. How can I drink from it? The punchline might simply be that you can&#039;t drink from it. 
Hm, I wonder if there might be other interpretations too!&quot;\r\n                },\r\n                {\r\n                  &quot;type&quot;:&quot;summary_text&quot;,\r\n                  &quot;text&quot;:&quot;**Pondering a riddle\u2019s meaning**nnI\u2019m considering if this cup could also mean something like hiccup \u2014 closed at the top and bottom? But if there\u2019s no bottom, anything liquid just falls out. A closed top means you can\u2019t pour anything in. So it may not be a traditional drinking cup. Maybe it\u2019s an acorn cup, although it has an open top. The joke could suggest inverting it, but that still leaves it open. I\u2019m not sure how it all ties back to drinking from it!&quot;\r\n                },\r\n                {\r\n                  &quot;type&quot;:&quot;summary_text&quot;,\r\n                  &quot;text&quot;:&quot;**Clarifying the riddle\u2019s punchline**nnSo, the answer seems to be that you&#039;re meant to drink from the rim, but since there\u2019s no bottom and the top is closed, it&#039;s impossible to do that. The correct response points to a thimble instead, which has an open bottom. The punchline is clear: You can&#039;t drink; it&#039;s a thimble. The riddle plays on the expectation of a witty response. To keep it light, I could say, You don&#039;t! That&#039;s a thimble! and add a playful tone.&quot;\r\n                }\r\n              ]\r\n            },\r\n            {\r\n              &quot;type&quot;:&quot;message&quot;,\r\n              &quot;sections&quot;:[\r\n                {\r\n                  &quot;type&quot;:&quot;text&quot;,\r\n                  &quot;text&quot;:&quot;The amusement park industry in France remains a significant and growing sector within the country&#039;s tourism and entertainment landscape. 
In 2024, the French amusement parks market generated approximately USD 3.25 billion in revenue and is projected to reach USD 4.27 billion by 2030, reflecting a compound annual growth rate (CAGR) of 4.4% from 2025 to 2030. ([grandviewresearch.com](https:\/\/www.grandviewresearch.com\/horizon\/outlook\/amusement-parks-market\/france?utm_source=openai))nnFrance&#039;s contribution to the European amusement parks market is notable, accounting for 26.6% of the total revenue in 2024. ([grandviewresearch.com](https:\/\/www.grandviewresearch.com\/horizon\/outlook\/amusement-parks-market\/europe?utm_source=openai)) This underscores the country&#039;s prominence in the regional market.nnMajor parks such as Disneyland Paris and Parc Ast\u00e9rix continue to attract millions of visitors annually. In 2023, Parc Ast\u00e9rix welcomed over 2.8 million visitors, making it the second most visited park in France after Disneyland Paris. ([en.wikipedia.org](https:\/\/en.wikipedia.org\/wiki\/Parc_Ast%C3%A9rix?utm_source=openai)) Additionally, the Compagnie des Alpes, a leading operator in the sector, reported a revenue of \u20ac525.9 million in the 2022\/2023 financial year, with over 10.6 million visitors across its parks. ([compagniedesalpes.com](https:\/\/www.compagniedesalpes.com\/sites\/default\/files\/documents\/2024-02\/CDA_DEU_2023_EN.pdf?utm_source=openai))nnThe industry is also witnessing a trend towards immersive and personalized attractions, catering to visitors seeking unique experiences. This shift includes the development of themed areas and interactive installations that appeal to various age groups, ensuring an inclusive environment for families. 
([statista.com](https:\/\/www.statista.com\/outlook\/amo\/entertainment\/amusement-parks\/france?utm_source=openai))nnIn summary, the amusement park business in France is not only relevant but also experiencing growth and innovation, solidifying its position as a key component of the country&#039;s entertainment and tourism sectors. &quot;,\r\n                  &quot;annotations&quot;:[\r\n                    {\r\n                      &quot;title&quot;:&quot;France Amusement Parks Market Size &amp; Outlook, 2030&quot;,\r\n                      &quot;url&quot;:&quot;https:\/\/www.grandviewresearch.com\/horizon\/outlook\/amusement-parks-market\/france?utm_source=openai&quot;\r\n                    },\r\n                    {\r\n                      &quot;title&quot;:&quot;Europe Amusement Parks Market Size &amp; Outlook, 2030&quot;,\r\n                      &quot;url&quot;:&quot;https:\/\/www.grandviewresearch.com\/horizon\/outlook\/amusement-parks-market\/europe?utm_source=openai&quot;\r\n                    },\r\n                    {\r\n                      &quot;title&quot;:&quot;Parc Ast\u00e9rix&quot;,\r\n                      &quot;url&quot;:&quot;https:\/\/en.wikipedia.org\/wiki\/Parc_Ast%C3%A9rix?utm_source=openai&quot;\r\n                    },\r\n                    {\r\n                      &quot;title&quot;:&quot;2023 UNIVERSAL&quot;,\r\n                      &quot;url&quot;:&quot;https:\/\/www.compagniedesalpes.com\/sites\/default\/files\/documents\/2024-02\/CDA_DEU_2023_EN.pdf?utm_source=openai&quot;\r\n                    },\r\n                    {\r\n                      &quot;title&quot;:&quot;Amusement Parks - France | Statista Market Forecast&quot;,\r\n                      &quot;url&quot;:&quot;https:\/\/www.statista.com\/outlook\/amo\/entertainment\/amusement-parks\/france?utm_source=openai&quot;\r\n                    }\r\n                  ]\r\n                }\r\n              ]\r\n            }\r\n          ],\r\n          
&quot;fan_out_queries&quot;:[\r\n            &quot;current relevance of amusement park business in France 2024&quot;\r\n          ]\r\n        }\r\n      ]\r\n    }\r\n  ]\r\n}<\/code><\/pre><\/div><\/div><\/div><\/div>","protected":false},"excerpt":{"rendered":"<p>[vc_row][vc_column][vc_column_text] Live ChatGPT LLM Responses \u200c\u200c Live ChatGPT LLM Responses endpoint allows you to retrieve structured responses from a specific ChatGPT AI model, based on the input parameters. [\/vc_column_text] POST https:\/\/api.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/live Pricing The cost of the task can be calculated on the Pricing page. [vc_column_text]All POST data should be sent in the JSON format (UTF-8 [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"template.php","meta":{"apibase_doc_request_yaml":"parameters:\n  - name: user_prompt\n    type: string\n    description: |\n      <em>prompt for the AI model<\/em><br><strong>required field<\/strong><br>the question or task you want to send to the AI model;<br>you can specify <strong>up to 500 characters<\/strong> in the <code>user_prompt<\/code> field\n  - name: model_name\n    type: string\n    description: |\n      <em>name of the AI model<\/em><br><strong>required field<\/strong><br><code>model_name<\/code >consists of the actual model name and version name;<br>if the basic model name is specified, its latest version will be set by default;<br>for example, if <code>gpt-4.1<\/code> is specified, the <code>gpt-4.1-2025-04-14<\/code> will be set as <code>model_name<\/code> automatically;<br>you can receive the list of available LLM models by making a separate request to the <code>https:\/\/api.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/models<\/code>\n  - name: max_output_tokens\n    type: integer\n    description: |\n      <em>maximum number of tokens in the AI 
response<\/em><br>optional field<br>minimum value for reasoning models (i.e., models with <code>reasoning<\/code> set to <code>true<\/code> in the <a href=\"\/v3\/ai_optimization\/chat_gpt\/llm_responses\/models\/\" target=\"_blank\">Models endpoint<\/a>): <code>1024<\/code>;<br>minimum value for non-reasoning models: <code>16<\/code>;<br>maximum value: <code>4096<\/code>;<br>default value: <code>2048<\/code><br><strong>Note:<\/strong> if <code>web_search<\/code> is set to <code>true<\/code> or a reasoning model is specified in the request, the output token count may exceed the specified <code>max_output_tokens<\/code> limit\n  - name: temperature\n    type: float\n    description: |\n      <em>randomness of the AI response<\/em><br>optional field<br>higher values make output more diverse;<br>lower values make output more focused;<br>minimum value: <code>0<\/code><br>maximum value: <code>2<\/code><br>default value: <code>0.94<\/code><br><strong>Note:<\/strong> not supported in reasoning models\n  - name: top_p\n    type: float\n    description: |\n      <em>diversity of the AI response<\/em><br>optional field<br>limits token selection to the smallest set of most likely tokens whose cumulative probability exceeds <code>top_p<\/code> (nucleus sampling);<br>minimum value: <code>0<\/code><br>maximum value: <code>1<\/code>\n  - name: web_search\n    type: boolean\n    description: |\n      <em>enable web search<\/em><br>optional field<br>when enabled, the AI model can access and cite current web information;<br>default value: <code>false<\/code>;<br><strong>Note:<\/strong> refer to the <a href=\"https:\/\/docs.dataforseo.com\/v3\/ai_optimization\/chat_gpt\/llm_responses\/models\/\">Models endpoint<\/a> for a list of models that support <code>web_search<\/code>;\n  - name: force_web_search\n    type: boolean\n    description: |\n      <em>force AI agent to use web search<\/em><br>optional field<br>to enable this parameter, <code>web_search<\/code> must also be enabled;<br>when enabled, the AI model is forced to access and cite current web information;<br>default value: <code>false<\/code>;<br><strong>Note:<\/strong> even if the parameter is set to <code>true<\/code>, there is no guarantee web sources will be cited in the response<br><strong>Note #2:<\/strong> not supported 
in reasoning models\n  - name: web_search_country_iso_code\n    type: string\n    description: |\n      <em>ISO country code of the location<\/em><br>optional field<br>required if <code>web_search_city<\/code> is specified;<br>to enable this parameter, <code>web_search<\/code> must also be enabled;<br>when enabled, the AI model will search the web from the country you specify;<br><strong>Note:<\/strong> not supported in <code>o3-mini<\/code>, <code>o1-pro<\/code>, <code>o1<\/code> models\n  - name: web_search_city\n    type: string\n    description: |\n      <em>city name of the location<\/em><br>optional field<br><strong>Note:<\/strong> specify <code>web_search_country_iso_code<\/code> to use this parameter<br><strong>Note #2:<\/strong> not supported in <code>o3-mini<\/code>, <code>o1-pro<\/code>, <code>o1<\/code> models\n  - name: system_message\n    type: string\n    description: |\n      <em>instructions for the AI behavior<\/em><br>optional field<br>defines the AI's role, tone, or specific behavior;<br>you can specify <strong>up to 500 characters<\/strong> in the <code>system_message<\/code> field\n  - name: message_chain\n    type: array\n    description: |\n      <em>conversation history<\/em><br>optional field<br>array of message objects representing previous conversation turns;<br>each object must contain <code>role<\/code> and <code>message<\/code> parameters:<br><code>role<\/code> string with either <code>user<\/code> or <code>ai<\/code> role;<br><code>message<\/code> string with message content (max 500 characters);<br>you can specify <strong>a maximum of 10 message objects<\/strong> in the array;<br>example:<br><code>\"message_chain\": [{\"role\":\"user\",\"message\":\"Hello, what\u2019s up?\"},{\"role\":\"ai\",\"message\":\"Hello! I\u2019m doing well, thank you. 
How can I assist you today?\"}]<\/code>\n  - name: tag\n    type: string\n    description: |\n      <em>user-defined task identifier<\/em><br>optional field<br><em>the character limit is 255<\/em><br>you can use this parameter to identify the task and match it with the result<br>you will find the specified <code>tag<\/code> value in the <code>data<\/code> object of the response","apibase_doc_request_additional_yaml":"","apibase_doc_response_yaml":"parameters:\n  - name: version\n    type: string\n    description: |\n      <em>the current version of the API<\/em>\n  - name: status_code\n    type: integer\n    description: |\n      <i>general status code<\/i><br>you can find the full list of the response codes <a href=\"\/v3\/appendix\/errors\">here<\/a><br><strong>Note:<\/strong> we strongly recommend designing a necessary system for handling related exceptional or error conditions\n  - name: status_message\n    type: string\n    description: |\n      <em>general informational message<\/em><br>you can find the full list of general informational messages <a href=\"\/v3\/appendix\/errors\">here<\/a>\n  - name: time\n    type: string\n    description: |\n      <em>execution time, seconds<\/em>\n  - name: cost\n    type: float\n    description: |\n      <em>total tasks cost, USD<\/em>\n  - name: tasks_count\n    type: integer\n    description: |\n      <em>the number of tasks in the <strong><code>tasks<\/code><\/strong> array<\/em>\n  - name: tasks_error\n    type: integer\n    description: |\n      <em>the number of tasks in the <strong><code>tasks<\/code><\/strong> array returned with an error<\/em>\n  - name: tasks\n    type: array\n    description: |\n      <em>array of tasks<\/em>\n    items:\n      children:\n        - name: id\n          type: string\n          description: |\n            <em>task identifier<\/em><br><strong>unique task identifier in our system in the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Universally_unique_identifier\">UUID<\/a> 
format<\/strong>\n        - name: status_code\n          type: integer\n          description: |\n            <em>status code of the task<\/em><br>generated by DataForSEO; can be within the following range: 10000-60000<br>you can find the full list of the response codes <a href=\"\/v3\/appendix\/errors\">here<\/a>\n        - name: status_message\n          type: string\n          description: |\n            <em>informational message of the task<\/em><br>you can find the full list of general informational messages <a href=\"\/v3\/appendix\/errors\">here<\/a>\n        - name: time\n          type: string\n          description: |\n            <em>execution time, seconds<\/em>\n        - name: cost\n          type: float\n          description: |\n            <em>cost of the task, USD<\/em><br>includes the base task price plus the <code>money_spent<\/code> value\n        - name: result_count\n          type: integer\n          description: |\n            <em>number of elements in the <code>result<\/code> array<\/em>\n        - name: path\n          type: array\n          description: |\n            <em>URL path<\/em>\n        - name: data\n          type: object\n          description: |\n            <em>contains the same parameters that you specified in the POST request<\/em>\n        - name: result\n          type: array\n          description: |\n            <em>array of results<\/em>\n          items:\n            children:\n              - name: model_name\n                type: string\n                description: |\n                  <em>name of the AI model used<\/em>\n              - name: input_tokens\n                type: integer\n                description: |\n                  <em>number of tokens in the input<\/em><br>total count of tokens processed\n              - name: output_tokens\n                type: integer\n                description: |\n                  <em>number of tokens in the output<\/em><br>total count of tokens generated in the AI 
response\n              - name: reasoning_tokens\n                type: integer\n                description: |\n                  <em>number of reasoning tokens<\/em><br>total count of tokens used to generate reasoning content\n              - name: web_search\n                type: boolean\n                description: |\n                  <em>indicates if web search was used<\/em>\n              - name: money_spent\n                type: float\n                description: |\n                  <em>cost of AI tokens, USD<\/em><br>the price charged by the third-party AI model provider according to its <a href=\"https:\/\/platform.openai.com\/docs\/pricing\" target=\"_blank\">Pricing<\/a>\n              - name: datetime\n                type: string\n                description: |\n                  <em>date and time when the result was received<\/em><br>in the UTC format: \u201cyyyy-mm-dd hh:mm:ss +00:00\u201d<br>example:<br><code class=\"long-string\">2019-11-15 12:57:46 +00:00<\/code>\n              - name: items\n                type: array\n                description: |\n                  <em>array of response items<\/em><br>contains structured AI response data\n                children:\n                  - name: reasoning\n                    description: |\n                      <em>element in the response<\/em>\n                    addIt: type:reasoning\n                    type: object\n                    children:\n                      - name: type\n                        type: string\n                        description: |\n                          <em>type of the element = <strong>'reasoning'<\/strong><\/em><br><strong>Note:<\/strong> this element is supported only in reasoning models and is not guaranteed to be returned\n                      - name: sections\n                        type: array\n                        description: |\n                          <em>reasoning chain sections<\/em><br>array of objects containing the reasoning 
chain sections generated by the LLM\n                        items:\n                          children:\n                            - name: type\n                              type: string\n                              description: \"<em>type of the element = <strong>'summary_text'<\/strong><\/em>\"\n                            - name: text\n                              type: string\n                              description: |\n                                <em>text of the reasoning chain section<\/em><br>text of the reasoning chain section summarizing the model's thought process\n                  - name: message\n                    description: |\n                      <em>element in the response<\/em>\n                    addIt: type:message\n                    type: object\n                    children:\n                      - name: type\n                        type: string\n                        description: |\n                          <em>type of the element = <strong>'message'<\/strong><\/em>\n                      - name: sections\n                        type: array\n                        description: |\n                          <em>array of content sections<\/em><br>contains different parts of the AI response\n                        items:\n                          children:\n                            - name: type\n                              type: string\n                              description: \"<em>type of the element = <strong>'text'<\/strong><\/em>\"\n                            - name: text\n                              type: string\n                              description: |\n                                <em>AI-generated text content<\/em>\n                            - name: annotations\n                              type: array\n                              description: |\n                                <em>array of references used to generate the response<\/em><br>equals <code>null<\/code> if the 
<code>web_search<\/code> parameter is not set to <code>true<\/code><br><strong>Note:<\/strong> <code>annotations<\/code> may return empty even when <code>web_search<\/code> is <code>true<\/code>, as the AI will attempt to retrieve web information but may not find relevant results\n                              items:\n                                children:\n                                  - name: title\n                                    type: string\n                                    description: |\n                                      <em>the domain name or title of the quoted source<\/em>\n                                  - name: url\n                                    type: string\n                                    description: |\n                                      <em>URL of the quoted source<\/em>\n              - name: fan_out_queries\n                type: array\n                description: |\n                    <em>array of fan-out queries<\/em><br>contains related search queries derived from the main query to provide a more comprehensive 
response","footnotes":""},"class_list":["post-21755","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/pages\/21755","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/comments?post=21755"}],"version-history":[{"count":44,"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/pages\/21755\/revisions"}],"predecessor-version":[{"id":24218,"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/pages\/21755\/revisions\/24218"}],"wp:attachment":[{"href":"https:\/\/docs.dataforseo.com\/v3\/wp-json\/wp\/v2\/media?parent=21755"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}