With the advent of ChatGPT, Artificial Intelligence (AI) is suddenly finding its way into the living rooms of “ordinary” citizens. AI has become mainstream. And the whole world is plunging into it. What does this mean for financial services? Will it change the way we invest? What impact will it have on customer contact? And: is our society actually ethically and legally ready for artificial intelligence? In this series, we explore the answers with experts from inside and outside APG.
In part 3: Eran Raviv, Expert Researcher in the Innovation and Digitalization team of APG Asset Management.
Recently, GPT-4 came out, which, unlike ChatGPT, can process not only text but also images. This pushes the boundaries of what AI can do, but it also increases the risk of inflated expectations, which were already running high.
After talking to Raviv about ChatGPT, two messages stick. The first: don't underestimate the development speed of this artificially intelligent chatbot. The second: don't overestimate ChatGPT. Yes, it is developing rapidly, but to benefit from it, we must not be blind to its flaws.
In the previous episode, Stefan Ochse already indicated that we should not consider ChatGPT as a self-thinking computer. How advanced is ChatGPT in your opinion?
"The artificial intelligence that drives ChatGPT is actually no different from the forms of AI we already knew. And at its core, it's not such complicated technology at all. The kernel is ultimately based on simple arithmetic operations: addition, subtraction, multiplication. You wrap that core in a certain numerical recipe: if this, then that. Then you let a computer run that recipe millions of times. And that is the point where the computer is superior to humans, who cannot do that a million times a second. This computing power allows a computer to make predictions and achieve results at a speed that humans are unable to achieve. We then give that ability the predicate ‘intelligence’, but basically these are simple building blocks."
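Raviv's "simple building blocks" point can be sketched in a few lines. The toy neuron below is purely illustrative (the function name, inputs, and weights are invented, not anything from the interview): it does nothing but multiply, add, and apply an if-this-then-that rule, which a computer can then repeat millions of times per second.

```python
# Illustrative sketch, not actual ChatGPT internals: one artificial
# "neuron" built from the simple operations Raviv describes.

def neuron(inputs, weights, bias):
    # Multiply and add: the simple arithmetic at the core.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The "numerical recipe": if this, then that.
    return 1 if total > 0 else 0

# Made-up inputs and weights, just to show the mechanics.
# 0.5*0.4 + (-1.0)*0.3 + 2.0*0.2 - 0.1 = 0.2 > 0, so the neuron fires.
result = neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.2], -0.1)
print(result)  # prints 1
```

The "intelligence" comes from repeating this recipe at enormous scale, not from any complexity inside the recipe itself.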
And yet ChatGPT is credited with more potential than earlier forms of AI. Why is that?
"Before ChatGPT was available, you had to be able to program to use AI to achieve a certain solution. The strength of ChatGPT is that it is an interface that makes artificial intelligence accessible to the masses in one fell swoop. Every time ChatGPT is put to work, it ‘learns’, making the results better for subsequent users. That learning effect is inherent in artificial intelligence; so far, there's nothing new under the sun. But because of ChatGPT's accessibility, this happens so incredibly often that the learning takes place at an extremely high pace. That is what makes it so powerful, and it is why we completely underestimate the speed of this development. As humans, we tend to estimate such a development linearly, but this is really about exponential growth."
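The gap between linear intuition and exponential growth can be made concrete with a toy comparison; the growth rates below are invented purely for illustration.

```python
# Made-up numbers: a quantity that grows by 1 per step versus one
# that doubles per step. Linear intuition tracks the first; the
# second is what Raviv means by exponential development.
linear = [1 + step for step in range(11)]        # 1, 2, 3, ... 11
exponential = [2 ** step for step in range(11)]  # 1, 2, 4, ... 1024

# After only ten steps, doubling has left steady addition far behind.
print(linear[10], exponential[10])  # prints 11 1024
```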
Can you illustrate that learning effect with an example?
"As a joke, I posed the question a few weeks ago: which is heavier, six grams of pillows or three grams of stones? The answer ChatGPT gave made no sense; it came down to the pillows being lighter than the stones. Only a few days earlier, you could still set up this joke with round numbers (100 kilos of this or 200 kilos of that). But because so many people were trying that version of the joke, a software update had already corrected for it. That is why I had to switch to grams and less frequently used numbers (like 3 and 6) to recreate the joke. This is a prime example of how human input is used to improve the algorithm driving the machine.
"Due to the speed of this development, we will soon be able to leave work or tasks to the computer that currently require human brainpower, just as we now only have to check a Google Translate translation for inaccuracies. I myself mainly use ChatGPT to provoke my thinking. For example, I used to tell my son a bedtime story from memory, one I made up myself. But it's really hard to keep coming up with new stories. One of those nights, I thought I could recycle a story from four months earlier, but I had underestimated my son's memory. At one point he said, ‘you've told me that before’. He knew exactly what would happen next. So now I ask ChatGPT for a children's story with a nice moral. The resulting story can be ludicrous, but I use it as inspiration: I adjust it and make it my own."
Doesn't that make us lazy in our thinking?
"In a way, yes, but that's not necessarily a bad thing. For example, it took me years to learn to write well. That skill is becoming less valuable, as AI-generated texts are rapidly getting better. But now I can shift my attention to other skills, such as listening and verbal proficiency. Another example of a skill that is now increasing in value is ideation. This is also how I believe we can deal with artificial intelligence at APG. The development is there; you can't ignore it, so you have to embrace it and ask the question: in which areas are we going to develop to remain competitive?"
So AI offers many opportunities. Is there a downside?
"In principle, I am skeptical about anything that has to do with AI. It offers many possibilities, but I cannot stress the importance of validation enough. In other words, you should never blindly rely on results generated by AI. When you ask ChatGPT a question, you will always get an answer, even if it is not correct. ‘I don't know’ would often be the better answer, but you never get it, because the machine simply returns the answer with the highest probability of being correct; and that highest probability can still be very low. The machine doesn't care, but you should.
"Moreover, AI works best in repetitive situations. When reality is very dynamic – for example with self-driving cars – mistakes are bound to happen. In such situations you have to be extra vigilant when using AI, because an exceptional situation can lead to accidents. Artificial intelligence also lacks the intuition that humans have. It does not recognize cynicism, for example. After all, communication involves much more than language, such as facial expression and intonation, and that is not picked up."
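Raviv's validation point can be sketched in a few lines: a model picks the candidate with the highest probability even when that probability is low. The candidate answers and numbers below are invented for illustration; this is not how ChatGPT actually represents answers.

```python
# Made-up probability distribution over candidate answers to the
# pillows-versus-stones question. None of these values come from
# a real model.
candidates = {
    "pillows are heavier": 0.22,
    "stones are heavier": 0.35,
    "they weigh the same": 0.25,
    "I don't know": 0.18,
}

# The machine returns the most probable candidate, even though
# that top probability (0.35) is far from certainty.
best = max(candidates, key=candidates.get)
print(best, candidates[best])  # prints: stones are heavier 0.35
```

This is why the top answer always arrives with confident-sounding prose, and why validating it remains the user's job.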
But isn't it only a matter of time before AI also registers and interprets facial expressions and intonation?
"Yes, and that is the Holy Grail in the world of artificial intelligence: combining content and form. It requires enormous algorithms to come together and integrate well. My feeling is that it will take a while before we get to that point. That said, it's a natural next step, and we already see at least some efforts in that direction."