Batch Processing: Optimizing Your AI Interactions

Definition: In machine learning, a batch is the set of examples processed together in one training iteration; the batch size is the number of examples in that set.

Batches in Training vs. Inference

  • Training: Updating weights based on N examples at once.
  • Inference (Vibe Coding): Sending N tasks to the AI at once.

The “Batch Prompting” Hack

You pay for the prompt tokens (system prompt, instructions, shared context) on every request. If you have 10 small questions, don’t send 10 separate requests.

  • Inefficient:
    1. “Fix func A.”
    2. “Fix func B.”
    3. “Fix func C.”
  • Efficient (Batching):
    “Here are 3 functions (A, B, C). Fix all of them and return the results in a JSON list.”
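A minimal sketch of the batched pattern, assuming a hypothetical `call_llm` helper (stubbed here with a canned reply; in practice it would wrap whatever LLM API you use):

```python
import json

# Hypothetical helper standing in for a real LLM API call.
def call_llm(prompt: str) -> str:
    # Simulated model reply for illustration: a JSON list of fixes.
    return json.dumps([
        {"name": "A", "fixed_code": "def a(): return 1"},
        {"name": "B", "fixed_code": "def b(): return 2"},
        {"name": "C", "fixed_code": "def c(): return 3"},
    ])

# Three buggy functions ('retrun' typos are the bugs to fix).
functions = {
    "A": "def a(): retrun 1",
    "B": "def b(): retrun 2",
    "C": "def c(): retrun 3",
}

# One batched prompt instead of three separate requests.
prompt = (
    "Fix the bug in each function below. Return a JSON list of "
    "objects with keys 'name' and 'fixed_code'.\n\n"
    + "\n\n".join(f"### {name}\n{code}" for name, code in functions.items())
)

# Asking for JSON makes the batched reply trivially machine-parseable.
fixes = json.loads(call_llm(prompt))
```

Requesting a structured format like a JSON list is what makes the batch practical: you can split the single response back into per-function results programmatically.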

Benefits

  1. Cost: You send the “System Prompt” and “Shared Context” only once.
  2. Consistency: The AI fixes all functions with the same “style” because they are in the same context window.
  3. Speed: You get all answers in one streaming response.
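The cost saving is easy to estimate. With illustrative, assumed token counts (500 tokens of shared context, 200 per task, 10 tasks), the arithmetic looks like this:

```python
# Illustrative, assumed numbers -- substitute your own measurements.
shared_context = 500   # tokens of system prompt + shared context
per_task = 200         # tokens per individual task
n_tasks = 10

# Separate requests re-send the shared context every time.
separate = n_tasks * (shared_context + per_task)   # 7000 tokens
# A batch sends the shared context once.
batched = shared_context + n_tasks * per_task      # 2500 tokens

savings = 1 - batched / separate  # roughly 64% fewer prompt tokens here
```

The larger the shared context relative to each task, the bigger the win.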

When NOT to Batch

  • Complex Chains: If Task B depends on the result of Task A, you cannot batch them. You must run them sequentially (prompt chaining).
  • Context Limits: If batching 10 files pushes you out of the context window, the AI will forget the beginning.
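For contrast, here is the sequential (chained) pattern that dependent tasks force on you, again with a hypothetical `call_llm` stub in place of a real API call:

```python
# Hypothetical helper standing in for a real LLM API call;
# it just echoes the prompt so the chain is visible.
def call_llm(prompt: str) -> str:
    return f"result of: {prompt}"

# Task B needs Task A's output, so the calls cannot be batched.
schema = call_llm("Design a DB schema for a todo app.")            # Task A
queries = call_llm(f"Write SQL queries for this schema:\n{schema}")  # Task B
```

Each round trip re-sends the growing context, which is exactly the cost that batching avoids for independent tasks.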

Expert Strategy

Group related tasks. “Review all UI components” is a good batch. “Review the DB schema and the CSS” is a bad batch (too disjointed).
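One way to sketch this grouping, assuming tasks tagged with a domain label (the tags and task names here are made up for illustration):

```python
from collections import defaultdict

# (domain, task) pairs -- hypothetical examples.
tasks = [
    ("ui", "Review Button.tsx"),
    ("db", "Review schema.sql"),
    ("ui", "Review Modal.tsx"),
    ("css", "Review theme.css"),
]

# One batch per domain keeps each prompt's context coherent.
batches = defaultdict(list)
for domain, task in tasks:
    batches[domain].append(task)
```

Each value in `batches` is a coherent group ("all UI components") that can be sent as a single batched prompt; disjoint domains stay in separate requests.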
