prg.ai meetup: Tomáš Mikolov in a conversation with Rich Sutton

What does it mean to create (artificial) intelligence? Is studying intelligence strictly beneficial? Do we need to control the algorithms we create? Are we on the right path with deep learning and large language models? Rich Sutton, a leading figure in the field of reinforcement learning, gave an opinionated talk and discussed his views on the future of AI with Prague’s own Tomáš Mikolov, a researcher with a significant footprint in today’s language models.

As the Days of AI 2024, two weeks of AI events for everyone interested in the technology, draw to a close, many AI experts, talents, and enthusiasts gather in the Next Zone event space to hear the opinions of two distinguished AI scientists, Rich Sutton and Tomáš Mikolov. The first of the duo opens the event with an energetic talk on the substance of intelligence.

Intelligence, Cooperation, and Human Flourishing

Sutton begins by pointing out that understanding and creating intelligence have been among the oldest strivings and grand quests in human history. From biology and psychology to the philosophy of mind, people have long inquired about the capabilities and origins of our minds. Consequently, many inquisitive engineers have been “playing God”, trying to replicate (or even improve upon) human capabilities in inanimate objects. The reach of these experiments is confirmed by the fact that such efforts have even left their imprint on our mythologies (the legend of the Golem comes to mind in the Czech context). So far, improvements in our knowledge about intelligence have always resulted in prosperity and an enriched economy, at least on average (across particular years and nations). However, can we count on this trend continuing in the future?

Intelligence, The Ultimate Good?

Rich Sutton takes a highly (techno-)optimistic view. In his opinion, intelligence is not only the “most powerful phenomenon in the universe”, with the potential to “move stars around” in the future. It is also the ultimate good, towards which we should all strive to contribute. Provocatively, he sees possible superintelligent machinery (programs that would surpass humans in most intellectual tasks at once) as our progeny and a nobler successor to inefficient humans.

It should be noted that both of these remarks raised many questions among parts of the audience (including the author of this article). I ask the reader to think critically about the following: Firstly, is it clear that the sum total of studying and creating greater intelligence will be positive? Or would you require concrete risk-benefit evaluations to support this claim? Secondly, how should we measure nobleness? And to whose benefit should the technology be developed?

Success And Deficits of AI

Sutton then continues by explaining the reasons behind the current success of AI. He points to Moore’s law (the observation that the number of transistors on a chip, and thus available compute, roughly doubles every two years), which is backed by data showing the growing size of state-of-the-art AI models over the years. However, Sutton notes, unlike the improvements in hardware, there have been almost no groundbreaking improvements on the algorithmic side. The currently omnipresent large language models are based on technology that is at least seven years old (or older, depending on which part of the algorithm you stress). In his opinion, the current paradigm, which relies heavily on supervised and unsupervised learning, is bound to fail and will soon run into diminishing returns. As an alternative, he recommends reinforcement learning, since learning from feedback should more closely resemble how animals and people learn in real life.
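To give a flavour of what “learning from feedback” means in the reinforcement-learning sense, here is a minimal, self-contained toy sketch (my illustration, not an algorithm from the talk): an epsilon-greedy agent estimating the value of actions on a multi-armed bandit purely from noisy rewards, with no labeled dataset in sight.

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Learn action values purely from reward feedback (a toy RL setting)."""
    rng = random.Random(seed)
    n = len(true_means)
    q = [0.0] * n      # estimated value of each action
    counts = [0] * n   # how often each action was tried
    for _ in range(steps):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if rng.random() < epsilon:
            a = rng.randrange(n)
        else:
            a = max(range(n), key=lambda i: q[i])
        # Noisy reward from the environment; the agent never sees true_means.
        reward = rng.gauss(true_means[a], 1.0)
        counts[a] += 1
        # Incremental sample-average update: Q <- Q + (r - Q) / N
        q[a] += (reward - q[a]) / counts[a]
    return q, counts

q, counts = epsilon_greedy_bandit([0.2, 0.5, 1.0])
```

The agent ends up pulling the best arm most often and estimating its value close to the true mean, despite never being told which arm is correct; the only supervision is the scalar reward itself.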

Definition of Intelligence

But when we talk about the right and wrong directions of AI research, shouldn’t we first start with what intelligence is? Throughout history, scientists and engineers have used different working definitions. For example, Turing assumed that intelligent machines should “behave like humans”. Rich, on the other hand, wants the definition of intelligence to be less anthropomorphized. He goes with “[the] ability to achieve goals by adapting behavior”.

Control And Decentralization

But which goals should we program into our AI systems? Sutton imagines we should create a diverse society of interacting agents with many goals. In line with his libertarian views, adopting a decentralized setting, with agents helping each other out of pure mutual benefit, is the best way to achieve a safe and beneficial future with AI. On the other side of the barricade, he claims, are calls for centralized (safety) control of AI, which Rich compares to calls for restricting free speech and other freedoms. I’ll ask the reader to be the judge of the following questions: Are diverse goals more likely to lead to cooperation with mutual benefit, or to competition over scarce resources? Is the regulation of AI another restriction of our rights, or can it have a positive impact on the whole?

Tomáš Mikolov & Rich Sutton 

After the talk, Rich Sutton sits down with Tomáš Mikolov to discuss the topic in a one-on-one chat.

AI Fears, Machine Learning Paradigms 

Mikolov starts by reminding the audience of the recent unprecedented news of the first Nobel prizes awarded to AI researchers, specifically the prize in physics awarded to Geoffrey Hinton and John Hopfield for their long-term contributions to the now-flourishing field of deep learning. Sutton first speculates whether computer science shouldn’t have its own Nobel prize, as the choice of physics is a bit unfortunate. The former of the two laureates, Hinton, then becomes the subject of critique from both Sutton and Mikolov, first for his warnings about the dangers of AI and later for his inclination towards (un)supervised learning as opposed to other approaches.
As for the first critique, the two debaters wonder about the reasons behind fearing AI and whether the development of AI should be considered just a continuation of technological progress or whether it brings anything qualitatively different. Addressing the possibilities of different learning settings, Sutton argues that (supervised) training is a very limiting paradigm due to the bias introduced by the necessary selection of datasets. Further, he argues that continuous adaptability is necessary to reach the holy grail of intelligence. Unfortunately, the online or sequential learning setting is not the current AI mainstream. Mikolov, however, agrees with Sutton that supervised learning may still be a necessary component of any RL or evolutionary AI system.
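The contrast between a model trained once on a fixed dataset and one that adapts continuously can be made concrete with a toy sketch (again my illustration, not an example from the discussion): a one-parameter model updated per incoming example, which keeps tracking the target even when the data stream shifts midway.

```python
def online_sgd(stream, lr=0.05):
    """Fit y ~ w*x one example at a time, adapting continually (no fixed dataset)."""
    w = 0.0
    for x, y in stream:
        error = w * x - y
        # Gradient step on the squared error of this single example only.
        w -= lr * error * x
    return w

# Phase 1: the target relation is y = 2x; phase 2: it shifts to y = 5x.
shifting_stream = [(1.0, 2.0)] * 200 + [(1.0, 5.0)] * 200
w = online_sgd(shifting_stream)  # ends near 5.0, having adapted to the shift
```

A batch learner fit once on the first 200 examples would be stuck near 2.0 forever; the online learner simply keeps following the feedback, which is the kind of continuous adaptability Sutton has in mind.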

Computation And Local Ecosystems

After discussing different learning paradigms, Rich complains that many systems today are called intelligent for the pure purpose of product advertisement. In many of these systems, intelligence is equated with the sheer amount of computation, which is hardly a reasonable assumption. Taking the argument further, Sutton asks: should we even call LLMs intelligent?

When talking about computation, Mikolov raises the proverbial problem of funding. He complains about the lack of it in Europe, especially compared to the US and China. Sutton ends with a hopeful wish that the situation will improve, likening the Prague AI ecosystem to Alberta’s.

Q&A

The event ends with a short Q&A. One attendee raises the question of whose “brain children” we are creating and how the underrepresentation of certain cultures in AI development could affect future societal equilibria. Rich answers that more intelligent agents will in any case be more objective than us, regardless of their creators’ origins. Meanwhile, Mikolov raises a rhetorical question about the purpose of life.

Finally, we hear two contemplations from the audience: shouldn’t we set limitations on AI applications (in analogy to Kant’s limits on pure reason), and what are the differences between unrestricted collaboration and anarchy? These loaded questions nicely wrap up the courageous and sometimes provocative tone of the evening’s event.

The author of this text is Martin Krutský, an AI researcher at FEL ČVUT.

prg.ai meetup infobox

What a way to wrap up Prague Dny AI 2024! The fourth prg.ai meetup was a blast! We had the incredible opportunity to host Richard Sutton, a leading Canadian researcher, a pioneer in reinforcement learning and a true giant in the world of AI, and Tomáš Mikolov, whose work has revolutionized natural language processing. They led an amazing discussion – it was insightful, far-reaching, and even got philosophical at times! Huge shoutout to everyone who came, to our awesome speakers, and to our partners EquiLibre Technologies and MSD Czech Republic for making it happen! A recording of the talk is available below, on our YouTube channel.