{"id":2526,"date":"2025-07-10T08:30:29","date_gmt":"2025-07-09T23:30:29","guid":{"rendered":"https:\/\/skanto.co.kr\/?p=2526"},"modified":"2025-07-10T08:52:35","modified_gmt":"2025-07-09T23:52:35","slug":"perverse-incentives","status":"publish","type":"post","link":"https:\/\/skanto.co.kr\/?p=2526","title":{"rendered":"Perverse Incentives"},"content":{"rendered":"\n<p>Many AI coding assistants, including Claude Code, charge based on token count &#8211; essentially the amount of text processed and generated. This creates what economists would call a &#8220;<strong>perverse incentives<\/strong>(\uc090\ub6a4\uc5b4\uc9c4 \uc720\uc778\ucc45)&#8221; &#8211; an incentive that produces behavior contrary to what&#8217;s actually desired.<\/p>\n\n\n\n<p>Let&#8217;s break down how this works:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>The AI generates verbose, procedural code for a given task<\/li>\n\n\n\n<li>This code becomes part of the context when you ask for further changes or additions (this is key)<\/li>\n\n\n\n<li>The AI now ha stop read (and you pay for) this verbose code in every subsequent interaction<\/li>\n\n\n\n<li><strong>More tokens processed = more revenue for the company behind the AI<\/strong><\/li>\n\n\n\n<li>The LLM developers have no incentive to &#8220;fix&#8221; the verbose code problem because doing so will meaningfully impact their bottom line<\/li>\n<\/ol>\n\n\n\n<p><span style=\"text-decoration: underline;\">It might be difficult for AI companies to prioritize code conciseness when their revenue depends on token count.<\/span><\/p>\n\n\n\n<p>There&#8217;s clearly something going on where the more verse the LLM is, the better it does, This actually makes sense given the discovery that <strong>chain-of-thought reasoning<\/strong> improves accuracy, but this issue has begun to feel like a real tradeoff when it comes to these almost-magical systems.<\/p>\n\n\n\n<p>The model produces more tokens to cover all possible edge cases rather than thinking 
deeply about an elegant core solution or the root cause of the problem.<\/p>\n\n\n\n<p>Some tricks to manage these perverse incentives:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Force planning before implementation<br>Asking the model to outline its approach before writing any code helps keep the eventual implementation focused.<\/li>\n\n\n\n<li>Explicit permission protocol<br>Enforcing this &#8220;ask before generating&#8221; boundary and repeatedly restating it (&#8220;remember, don&#8217;t write any code&#8221;) helps prevent the automatic generation of unwanted, verbose solutions.<\/li>\n\n\n\n<li>Git-based experimentation with ruthless pruning<br>Creating experimental branches &#8211; and deleting the ones that don&#8217;t pan out &#8211; is very helpful.<\/li>\n\n\n\n<li>Use a cheaper model<br>Sometimes the simplest solution works best: a smaller, cheaper model often produces more direct solutions.<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Many AI coding assistants, including Claude Code, charge based on token count &#8211; essentially the amount of text processed and generated. This creates what economists would call a &#8220;perverse incentive(\uc090\ub6a4\uc5b4\uc9c4 \uc720\uc778\ucc45)&#8221; &#8211; an incentive that produces behavior contrary to what&#8217;s actually desired. Let&#8217;s break down how this works: It might be difficult for AI companies to prioritize code conciseness when their revenue depends on token count. 
There&#8217;s clearly something going on where the more verbose the LLM is, the better&#8230;<\/p>\n<p class=\"read-more\"><a class=\"btn btn-default\" href=\"https:\/\/skanto.co.kr\/?p=2526\"> Read More<span class=\"screen-reader-text\">  Read More<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_import_markdown_pro_load_document_selector":0,"_import_markdown_pro_submit_text_textarea":"","footnotes":""},"categories":[14],"tags":[48,241,242],"class_list":["post-2526","post","type-post","status-publish","format-standard","hentry","category-sw-development","tag-ai","tag-perverse-incentives","tag-vibe-coding"],"_links":{"self":[{"href":"https:\/\/skanto.co.kr\/index.php?rest_route=\/wp\/v2\/posts\/2526","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/skanto.co.kr\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/skanto.co.kr\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/skanto.co.kr\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/skanto.co.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2526"}],"version-history":[{"count":4,"href":"https:\/\/skanto.co.kr\/index.php?rest_route=\/wp\/v2\/posts\/2526\/revisions"}],"predecessor-version":[{"id":2530,"href":"https:\/\/skanto.co.kr\/index.php?rest_route=\/wp\/v2\/posts\/2526\/revisions\/2530"}],"wp:attachment":[{"href":"https:\/\/skanto.co.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2526"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/skanto.co.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2526"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/skanto.co.kr\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2526"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}