An excellent overview of the developments in the world of #LLMs over the last year, put together by @simon in his "Things we learned about LLMs in 2024": https://simonwillison.net/2024/Dec/31/llms-in-2024/. Remember the YouTube paradox, where engineers made the site faster, but overall load times globally went _up_ because suddenly more people could use it? I wonder if something similar could happen with LLMs and the environmental impact of prompts: individual prompts get cheaper, but overall energy consumption goes up.
@tomayac yeah, that's interesting - as the cost of running a prompt drops to almost nothing (seriously, $1.68 for 68,000 image captions!?), people will inevitably find all sorts of new uses for the models.
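For scale, a quick back-of-the-envelope on that caption figure (the $1.68 and 68,000 numbers are from the post above; everything else is just arithmetic):

```python
# Rough cost per image caption at the quoted price.
total_cost_usd = 1.68
num_captions = 68_000

per_caption_usd = total_cost_usd / num_captions
print(f"${per_caption_usd:.7f} per caption")  # → $0.0000247 per caption
```

At a few thousandths of a cent per caption, it's easy to see how usage could expand far faster than per-prompt costs fall.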