Perverse Incentives

Many AI coding assistants, including Claude Code, charge based on token count – essentially the amount of text processed and generated. This creates what economists call a “perverse incentive”: an incentive that produces behavior contrary to what is actually desired.

Let’s break down how this works:

  1. The AI generates verbose, procedural code for a given task
  2. This code becomes part of the context when you ask for further changes or additions (this is key)
  3. The AI now has to read (and you pay for) this verbose code in every subsequent interaction
  4. More tokens processed = more revenue for the company behind the AI
  5. The LLM developers have no incentive to “fix” the verbose code problem because doing so will meaningfully impact their bottom line
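
To make the compounding concrete, here is a toy cost model in Python. The token counts and the per-token price are invented for illustration and do not reflect real Claude Code pricing; the point is only that re-reading context multiplies whatever verbosity is already in the conversation.

```python
# Toy model: how verbose generated code compounds into context cost.
# The price and token counts below are made-up assumptions, not real
# Claude Code pricing.
PRICE_PER_1K_TOKENS = 0.01  # hypothetical blended input price, in USD


def conversation_cost(code_tokens: int, turns: int) -> float:
    """Rough cost of `turns` follow-up requests, each of which re-reads
    the previously generated code as context."""
    context_tokens = code_tokens * turns  # the code is re-sent every turn
    return context_tokens / 1000 * PRICE_PER_1K_TOKENS


concise = conversation_cost(code_tokens=400, turns=10)   # tight solution
verbose = conversation_cost(code_tokens=2000, turns=10)  # padded solution
print(f"concise: ${concise:.2f}  verbose: ${verbose:.2f}")
# concise: $0.04  verbose: $0.20 -- five times the spend for the same task,
# and the gap widens with every additional turn.
```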

It might be difficult for AI companies to prioritize code conciseness when their revenue depends on token count.

There is clearly a pattern where the more verbose the LLM is, the better it performs. This makes sense given the discovery that chain-of-thought reasoning improves accuracy, but the verbosity has begun to feel like a real tradeoff when working with these almost-magical systems.

The model produces more tokens to cover every possible edge case rather than thinking deeply about the elegant core solution or the root cause of the problem.
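
As a hypothetical illustration (both versions below are mine, not output captured from any particular model), this is the shape of the problem: an over-defensive function that branches on every imaginable input, next to the concise solution that addresses the actual task.

```python
from collections import Counter


# The kind of over-defensive output the verbosity incentive rewards:
# every imaginable edge case gets its own branch.
def most_common_verbose(items):
    if items is None:
        return None
    if not isinstance(items, (list, tuple)):
        raise TypeError("items must be a list or tuple")
    if len(items) == 0:
        return None
    counts = {}
    for item in items:
        if item in counts:
            counts[item] = counts[item] + 1
        else:
            counts[item] = 1
    best_item, best_count = None, -1
    for item, count in counts.items():
        if count > best_count:
            best_item, best_count = item, count
    return best_item


# The concise core solution: Counter already does the counting correctly.
def most_common(items):
    return Counter(items).most_common(1)[0][0] if items else None
```

Both behave the same on ordinary input, but only the first one keeps charging you rent in every later request that has to carry it as context.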

Some tricks to manage these perverse incentives

  1. Force planning before implementation
    Asking the model to outline its approach and wait for approval before writing code keeps it from defaulting to a sprawling first draft.
  2. Explicit permission protocol
    Enforcing this “ask before generating” boundary and repeatedly restating it (“remember, don’t write any code”) helps prevent the automatic generation of unwanted, verbose solutions.
  3. Git-based experimentation with ruthless pruning
    Let the model try an approach on a throwaway branch, keep the diff if it works, and delete the branch if it doesn’t (see the sketch after this list).
  4. Use a cheaper model
    Sometimes the simplest solution works best: using a smaller, cheaper model often results in more direct solutions.
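
For the git-based experimentation trick, here is a minimal sketch in Python. The branch name `main`, the `looks_good` check, and the spot where the AI-generated change gets applied are all placeholders you would supply yourself; only the git commands are standard.

```python
import subprocess
from typing import Callable


def git(*args: str) -> None:
    """Run a git command and fail loudly if it errors."""
    subprocess.run(["git", *args], check=True)


def try_ai_suggestion(branch: str, looks_good: Callable[[], bool]) -> None:
    """Let the AI's change live on a throwaway branch, then keep the diff
    only if it passes your own check; the branch is deleted either way."""
    git("checkout", "-b", branch)
    # ... let the AI apply its change here, then run your tests ...
    git("add", "-A")
    git("commit", "-m", "experiment: AI-suggested change")
    git("checkout", "main")               # assumes your trunk branch is `main`
    if looks_good():
        git("merge", "--squash", branch)  # stage the diff; commit it yourself
    git("branch", "-D", branch)           # ruthless pruning, whatever the outcome
```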
