#6 · Parveen · 3 weeks ago
Quote:
Originally posted by pgranados
Hi Dec,

What I don't understand is the input vs. output pricing: why does input cost 10 USD per 1M tokens while output costs 30 USD per 1M tokens?
When the AI generates text (output), it has to produce the response one token at a time: each new token requires a full forward pass through the model, weighing many possible continuations (different phrasings, structures, even code). All of that work on OpenAI's infrastructure adds up, resulting in a higher cost per token for the response. Processing your input prompt (the question) is a simpler job: the whole prompt can be handled in a single parallel pass, because the model only needs to build up the context and meaning you provide, not choose words one by one. That takes less compute, which leads to a lower cost per token.
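To make the pricing from the quoted question concrete, here is a minimal sketch of how a per-request bill works out. The prices are the ones pgranados quoted (10 USD and 30 USD per 1M tokens); the example token counts are made up for illustration.

```python
# Cost of one API call, using the per-token prices quoted above
# ($10 per 1M input tokens, $30 per 1M output tokens).
PRICE_PER_M_INPUT = 10.0   # USD per 1M input tokens
PRICE_PER_M_OUTPUT = 30.0  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Total USD cost for a single request."""
    return (input_tokens / 1_000_000) * PRICE_PER_M_INPUT \
         + (output_tokens / 1_000_000) * PRICE_PER_M_OUTPUT

# Example: a 2,000-token prompt that produces a 500-token answer.
# Input:  2,000 / 1M * $10 = $0.020
# Output:   500 / 1M * $30 = $0.015
print(request_cost(2_000, 500))  # 0.035
```

Notice that even though the answer here is four times shorter than the prompt, it accounts for almost half the bill, which is exactly the asymmetry the question is about.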

Large language models like OpenAI's use an "attention mechanism" during response generation. This mechanism lets the model focus on specific parts of the input prompt (tokens) while generating the response.
Here's where the cost difference comes in: to decide each next word of the response, the model must repeatedly look back at the input tokens and at the response tokens it has already generated. This ongoing analysis throughout the generation process is what makes output tokens more expensive than input tokens.
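A back-of-the-envelope way to see this (not OpenAI's actual pricing formula, just a sketch of the attention workload): the prompt can be processed in one pass, but each generated token has to attend to everything that came before it.

```python
# Rough count of attention "token comparisons" under this sketch.
def prompt_attention_ops(n_input: int) -> int:
    # Prefill: each input token attends to every input token, in one pass.
    return n_input * n_input

def generation_attention_ops(n_input: int, n_output: int) -> int:
    # Decode: output token i attends to the whole prompt plus the
    # i tokens already generated, one forward pass per token.
    return sum(n_input + i for i in range(n_output))

# 1,000-token prompt, 500-token answer:
print(prompt_attention_ops(1_000))           # 1000000  (one parallel pass)
print(generation_attention_ops(1_000, 500))  # 624750   (500 sequential passes)
```

The raw comparison counts aren't wildly different, but the generation side is spread over 500 sequential forward passes instead of one parallel one, and that serial, pass-per-token work is what drives the higher price per output token.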