
Random thought: perhaps modern LLM interfaces are oversimplified, which leads users to unnecessarily overestimate their capabilities (for example, ascribing "intelligence" or "sentience" to the models).

Perhaps a good LLM interface should expose its guts and details so it is obvious how it works.

A bit of deliberate friction, or even dizzying complexity, might be somewhat sobering for the end user.