LLMs are unique among human technologies because they can have thoughts and make decisions.
You may think: this describes all software! When I try to log in to an online service, it decides whether or not to log me in based on the credentials I provide.
In one sense, this is correct. A common metaphor is to say that our login-bot is given instructions (code) about how to process a login request, which allows it to make a decision.
But consider: if the login-bot logs me in to an account that isn’t mine, is the login-bot at fault? No, of course not. The software engineer who wrote the instructions is at fault. That software engineer thought up the login instructions and made all the decisions about their implementation.
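To make the contrast concrete, here is a minimal sketch of what the login-bot’s “instructions” might look like. The credential store and function names are hypothetical, purely for illustration; the point is that every possible decision path was written, in advance, by a human engineer.

```python
# Hypothetical, minimal login-bot: every possible decision is spelled out
# ahead of time by the engineer who wrote these instructions.
STORED_PASSWORDS = {"alice": "correct-horse-battery-staple"}

def login(username: str, password: str) -> bool:
    # The "decision" is entirely determined by this rule.
    return STORED_PASSWORDS.get(username) == password

print(login("alice", "correct-horse-battery-staple"))  # True
print(login("alice", "wrong-password"))                # False
```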
Now, let’s switch out our login-bot for an LLM. I give the LLM some credentials, and it logs me in to an account that isn’t mine. Who is at fault now?
Clearly, the LLM has made a decision to log me in, independent of any human instruction. I could make the same request again and it might do something else.
Is the LLM at fault? Perhaps this could be sensible if it were alive, but it is not. It is not part of our community or our society, so judging it for its decisions serves no purpose: what would it achieve?
Maybe the person who trained the LLM is at fault. Training gave the model its ability to make decisions, and the matrix by which it makes them. This feels analogous to the login-bot’s instructions.
But the person who did the training had no way of knowing this would be the outcome: they could not shape all possible outcomes of the LLM’s decisions. They cannot be at fault for an outcome they could never have predicted.
No, the person at fault is the person who decided to use an LLM for this purpose in the first place. The blame lies with leadership.
As we scramble to cram LLMs into everything, from government to finance to healthcare, I wonder whether leadership will accept that blame.
(Spoiler alert: they won’t.)