Given that LLMs seem to be able to automate so many small tasks, why don’t we see large productivity effects?
I drafted a short paper recently exploring the possibility that it’s for the same reason (or at least one of the reasons) that labor is typically bundled into multi-task jobs, instead of transacted by the task, in the first place: because performing a task increases one’s productivity not only at the task itself but at related tasks.
For example, say you used to spend half your time coding and half your time debugging, and the LLM can automate the coding but you still have to do the debugging. If you’re more productive at debugging code you write yourself, this (1) explains why “coder” and “debugger” aren’t separate jobs, and (2) predicts that the LLM won’t save half your time. If you’re half as productive at debugging code you didn’t write, or less, the LLM saves you no time at all.
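To make the arithmetic concrete, here is a minimal numeric sketch of that example. The numbers are illustrative assumptions of mine, not Trammell's: a 40-hour week split evenly between coding and debugging, an LLM that drives coding time to zero, and a parameter p for your relative productivity at debugging code you didn't write.

```python
# Minimal sketch of the coding/debugging example above (illustrative numbers).
# p = 1.0 means no penalty for debugging unfamiliar code; p = 0.5 means you
# are half as productive debugging code you didn't write yourself.

def hours_after_automation(total_hours=40.0, debug_share=0.5, p=1.0):
    """Hours needed for the same output once the LLM does all the coding."""
    debug_hours_before = total_hours * debug_share
    # Debugging LLM-written code takes 1/p times as long as debugging your own.
    return debug_hours_before / p

for p in (1.0, 0.75, 0.5):
    after = hours_after_automation(p=p)
    saved = 40.0 - after
    print(f"p = {p:.2f}: {after:5.1f} hours of debugging, {saved:5.1f} hours saved")

# p = 1.00: 20.0 hours of debugging, 20.0 hours saved  (the naive "half the job" estimate)
# p = 0.75: 26.7 hours of debugging, 13.3 hours saved
# p = 0.50: 40.0 hours of debugging,  0.0 hours saved  (the LLM saves no time at all)
```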
So I was excited to see @judyhshen and @alextamkin’s paper from a week or two ago finding basically just that!
At least the way I’m thinking about it, “cross-task learning” should make the productivity impacts of automating tasks more convex:

– Automating the second half of a job should be expected to have much more of an impact than automating the first half; and

– If the machines can learn from their own and each other’s experience, as a worker learns by doing from her own experience, then automating two jobs will have more than twice the impact of automating one.
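A toy sketch of the first bullet, reusing the coding/debugging split above with an assumed cross-task penalty of p = 0.75 (again my numbers, not from either paper): the second half of the job is worth more, in hours saved, than the first half was. The second bullet, machines learning across jobs, would push in the same direction.

```python
# Toy illustration of convexity: with a cross-task penalty p < 1, automating
# the second half of a two-task job frees up more time than automating the
# first half did. Numbers are illustrative assumptions only.

def weekly_hours_saved(automated_share, total_hours=40.0, p=0.75):
    """Hours saved per week when a given share of a two-task job is automated.

    automated_share is 0.0 (nothing), 0.5 (coding only), or 1.0 (coding and
    debugging). The penalty p only bites once the complementary task has
    been handed to the machine.
    """
    if automated_share == 0.0:
        return 0.0
    if automated_share == 0.5:
        # Debugging LLM-written code takes 1/p times as long as before.
        return total_hours - (total_hours / 2) / p
    return total_hours  # both tasks automated: the whole week is freed up

first_half = weekly_hours_saved(0.5)                              # ~13.3 hours
second_half = weekly_hours_saved(1.0) - weekly_hours_saved(0.5)   # ~26.7 hours
print(f"first half saves {first_half:.1f} h; second half adds {second_half:.1f} h")
# The second half saves twice as many hours as the first: the impact is convex.
```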
That is from Philip Trammell. Here is his short piece. Here is the Shen and Tamkin paper. This is all very important work on why the AI growth take-off will be much slower than the power of the models themselves might otherwise indicate. The phrase “…and then all at once” nonetheless applies. But when?
These short pieces and observations are likely among the most important outputs economists will produce this year. But are they being suitably rewarded?






