Intelligence & expectations

Please note that I will simplify some complex subjects here.

What we call "Artificial Intelligence" should be called "algorithms" or "Large Language Models" (LLMs for short), because they are nothing close to "intelligent". They do what algorithms do, even if in a "smarter" way, by connecting things together depending on probabilities extracted from the original data set on which they have been trained. I won't attempt to define what "intelligence" is because, first, I'm not competent to do so, and secondly, even experts in fields closely related to the subject do NOT agree on a common definition. That alone should be a first good indication that we shouldn't be quick to call things "intelligent". But even in a colloquial use of the term, the label does not hold up very well.

From Wikiwand - "Intelligence":
"Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. More generally, it can be described as the ability to perceive or infer information…"

I'm not being picky about words for the fun of it. Just as an "Autopilot" that is not "auto-piloting" anything is disingenuous and harmful because it creates false expectations and narratives, "Artificial Intelligence" creates misplaced expectations in the public narrative, although clearly less intentionally.

Hallucination

We know current LLMs have an unreliable relationship with facts and transparency, simply because they are not designed for that. Their job is to create believable narratives. In human terms, they are basically "liars", and they are fairly good at it.
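To make this concrete, here is a deliberately tiny sketch in Python. Everything in it (the toy sentence, the function names) is made up for illustration, and it is nothing like a real LLM: no neural network, no attention, just word-pair counts. But it shows the basic move described above: the next word is picked because it is statistically plausible given the training text, not because it is true.

# A toy "language model": learn which word tends to follow which,
# then generate text by sampling from those observed continuations.
# (Hypothetical example for illustration only.)
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the cat ate the fish".split()

# Count the observed continuations of each word (a tiny bigram table).
following = defaultdict(list)
for current, nxt in zip(training_text, training_text[1:]):
    following[current].append(nxt)

def continue_text(start, length=6):
    """Extend `start` by repeatedly picking a statistically plausible next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # More frequent continuations are more likely to be picked;
        # nothing here checks whether the result is factually true.
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_text("the"))  # e.g. "the cat ate the fish" or "the cat sat on the mat and"

Real LLMs replace this crude counting with billions of learned parameters and much richer context, which is why their output is fluent and convincing; but the underlying objective is still "produce a plausible continuation", not "state a verified fact".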

Treating any conversation with LLMs as a huge "shared hallucination" (between you and the algorithm) removes the burden of de facto "trust" we instinctively place on any human social interaction. It also creates an intentional space for weirdness and messy connections, which has the potential to bring interesting outcomes.

Suspension of disbelief

On a similar note, we can borrow a principle from another space to approach interactions with LLMs. The concept of "suspension of disbelief" comes from storytelling and entertainment, especially movies, TV shows, and games, and refers to the fact that we approach such media knowing these are stories with (potentially) loose connections to reality.

For instance, we don't take The Lord of the Rings as an accurate historical depiction, nor do we take Star Wars or Mass Effect as factually true representations of space exploration. Instead, we set reasoning aside and allow ourselves to accept extraordinary elements, because they have the potential to benefit the story as a whole, and such media know how to play with this principle.

Extending this principle to interactions with LLMs makes a lot of sense too. Reliability and trust stop being issues or bugs; they become features in service of a temporary shared hallucination/story we are co-creating with the algorithm.

Note: this is not to say LLMs shouldn't improve on these aspects either.

Autonomy and context

In my two previous articles (see here and here), I discuss the importance of the autonomy of individuals as a strong design principle: “AIs (LLMs, Algorithms) should not replace human knowledge but rather sustain its autonomy”. I make the case for working with LLMs (AI, algorithms) in small groups of humans that create a localised context for discussions, interactions, and interpretations.

Context is key: it is what allows us to attribute meaning and expectations to things. As a group, we define what is relevant and appropriate given what we understand of the context in which we operate. Interacting with LLMs in such a way allows us to reinterpret the algorithm's output in a meaningful way, discard what is not relevant, and then intervene in our context on our own terms: this guarantees the group's autonomy.

This also removes another burden placed on LLMs and algorithms: dealing with the moral aspects of our social realities. Keeping this burden within a group is better because it matches how we already deal with such questions (i.e. we already know how to hold other humans accountable for their behaviours). We do NOT have to solve moral issues algorithmically at scale (at the algorithm level); we simply let them unfold in context (at the human/society level).

This perhaps has the potential to create space for genuine debate and discussion in the public sphere, rather than automated, harmful incompetence.

Previously in this series:

"Turbulences, loss of senses and disillusion", on the recent layoffs in Tech, the disillusion of some in the design communities, and the criticisms of our tools as instruments of “our own downfall”.

"Turbulences and AI: from a third perspective to design principles", on our loss of senses and disillusion in the face of recent turbulences: the layoffs, the rise of AI, and the rapid commoditization of design.

Thanks for reading!

Kevin from Design & Critical Thinking.