Coders spent more time prompting and reviewing AI generations than they saved on coding. On the surface, METR’s results seem to contradict other benchmarks and experiments that demonstrate increases in coding efficiency when AI tools are used. But those studies often measure productivity in terms of total lines of code or the number of discrete tasks, code commits, or pull requests completed, all of which can be poor proxies for actual coding efficiency. These factors lead the researchers to conclude that current AI coding tools may be particularly ill-suited to “settings with very high quality standards, or with many implicit requirements (e.g., relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn.” While those factors may not apply in “many realistic, economically relevant settings” involving simpler codebases, they could limit the impact of AI tools in this study and similar real-world situations.

  • tankfox@midwest.social · 1 day ago

    These are coders in the process of learning to do something new, as opposed to using the workflow they’ve been trained in and have a lot of experience with.

    Where was the sample of non-coders tasked with doing the same thing, either using AI to help or learning without assistance?

    Where was the sample of coders who were prohibited from looking anything up and had to rely solely on their prior knowledge to do the job?

    Comparison groups like those might help refine what’s actually being tested.