
LLM Responses API: Overview

LLM Responses API provides data for conversational search optimization. You can use it to analyze how different large language models (LLMs) respond to queries about your brand, competitors, or any other target keywords and topics.

Currently, you can collect LLM Responses from the following large language models:

ChatGPT
Claude
Gemini
Perplexity

For each supported LLM, you can use a dedicated Models endpoint to select specific versions for testing.

ChatGPT Models
Claude Models
Gemini Models
Perplexity Models

To find answers to common questions about DataForSEO APIs and guidance on their most efficient use, visit our Help Center.

Methods

 
The cost of using LLM Responses API endpoints depends on the selected method and priority of task execution. Available methods and priorities are described below.

DataForSEO has two main methods to deliver the results: Standard and Live.

If your system requires instant results, the Live method is the best solution for you. Unlike the Standard method, it doesn’t require making separate POST and GET requests to the corresponding endpoints.
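To illustrate the Live method, here is a minimal Python sketch that builds a single-request Live call. The base URL follows the usual DataForSEO host, but the endpoint path (`/v3/ai_optimization/chat_gpt/llm_responses/live`) and the task field names (`user_prompt`, `model_name`) are assumptions — verify them against the ChatGPT Live endpoint reference before use.

```python
# Sketch of a Live request; endpoint path and task field names are assumptions.
import base64
import json
from urllib.request import Request, urlopen

API_BASE = "https://api.dataforseo.com"  # standard DataForSEO host
LIVE_PATH = "/v3/ai_optimization/chat_gpt/llm_responses/live"  # assumed path

def auth_header(login: str, password: str) -> str:
    # DataForSEO uses HTTP Basic auth: base64("login:password")
    token = base64.b64encode(f"{login}:{password}".encode()).decode()
    return f"Basic {token}"

def build_live_task(prompt: str, model: str) -> list:
    # A task array with a single task; field names are assumptions.
    return [{"user_prompt": prompt, "model_name": model}]

def live_request(login: str, password: str, prompt: str, model: str) -> Request:
    body = json.dumps(build_live_task(prompt, model)).encode()
    return Request(
        API_BASE + LIVE_PATH,
        data=body,
        headers={
            "Authorization": auth_header(login, password),
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send the request (requires valid credentials):
# resp = urlopen(live_request("login", "password", "best crm software", "gpt-4o-mini"))
# print(json.load(resp))
```

A single POST returns the model's response in the same HTTP exchange, which is why no separate GET step is needed.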

If you don’t need to receive data in real-time, you can use the Standard method of data retrieval. This method requires making separate POST and GET requests, but it’s more affordable. Using this method, you can retrieve the results after our system collects them.

Alternatively, you can specify pingback_url or postback_url when setting a task: we will either notify you at pingback_url when the task is completed, or send the results to postback_url, respectively.
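As a sketch of the pingback side, the snippet below extracts a completed task's id from the callback URL. It assumes the service substitutes an `$id` placeholder in your pingback_url (e.g. `https://example.com/ping?id=$id`) before calling it — confirm the placeholder convention in the documentation.

```python
# Sketch of handling a pingback call, assuming the $id placeholder in your
# pingback_url is replaced with the completed task's id before the callback.
from urllib.parse import urlparse, parse_qs

def task_id_from_pingback(called_url: str) -> str:
    # Extract the completed task's id from the callback's query string.
    query = parse_qs(urlparse(called_url).query)
    return query["id"][0]
```

Once you have the id, you can fetch the results with the corresponding 'Task GET' endpoint.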

If you need to set several tasks, you can receive the list of ids for all completed tasks using the ‘Tasks Ready’ endpoint, and then collect the results of each task using the ‘Task GET’ endpoint.
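The Standard flow above can be sketched in Python as two small helpers: one that pulls completed ids out of a 'Tasks Ready' response, and one that builds the matching 'Task GET' path. The response field names (`tasks`, `result`, `id`) follow the usual DataForSEO response envelope, and the path is an assumption — check both against the endpoint reference.

```python
# Sketch of the Standard flow: POST tasks, poll 'Tasks Ready', fetch each
# result via 'Task GET'. Field names and the path below are assumptions.
def extract_ready_ids(tasks_ready_response: dict) -> list:
    # 'Tasks Ready' lists completed tasks; each result item carries a task id.
    ids = []
    for task in tasks_ready_response.get("tasks", []):
        for item in task.get("result") or []:
            ids.append(item["id"])
    return ids

def task_get_path(task_id: str) -> str:
    # Assumed 'Task GET' path for ChatGPT tasks; adjust per model.
    return f"/v3/ai_optimization/chat_gpt/llm_responses/task_get/{task_id}"
```

In practice you would poll 'Tasks Ready' on an interval (or rely on pingbacks instead) and issue one 'Task GET' request per returned id.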

ChatGPT and Claude in LLM Responses API support both Standard and Live methods:

ChatGPT Live
ChatGPT Task POST
ChatGPT Tasks Ready
ChatGPT Task GET

Claude Live
Claude Task POST
Claude Tasks Ready
Claude Task GET

Gemini and Perplexity in LLM Responses API support only the Live method of data retrieval:

Gemini Live
Perplexity Live

You can send up to 2000 API calls per minute. Contact us if you would like to raise the limit. Note that the maximum number of Live requests that can be sent simultaneously is limited to 30.
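To stay under the 30-request cap on simultaneous Live calls, a client can gate sends with a bounded semaphore. This is a minimal sketch assuming your requests run in threads; the `send` callable stands in for whatever function performs the actual HTTP call.

```python
# Client-side guard for the documented cap of 30 simultaneous Live requests.
import threading

MAX_CONCURRENT_LIVE = 30  # documented cap on in-flight Live requests

live_slots = threading.BoundedSemaphore(MAX_CONCURRENT_LIVE)

def call_live(send):
    # Blocks until one of the 30 slots is free, then runs the request.
    with live_slots:
        return send()
```

The overall 2000-calls-per-minute limit would need a separate rate limiter (e.g. a token bucket) on top of this concurrency gate.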

Cost

 
The cost can be calculated on the Pricing page. You can check your spending in your account dashboard or by making a separate call to the User Data endpoint.

You can test LLM Responses API for free using DataForSEO Sandbox.