Thoughts on “The Enigma of Reason”

Arthur Juliani
Jul 20, 2021 · 5 min read


Cicero denouncing Catiline at a meeting of the Roman Senate. A famous instance of the successful use of reason.

I recently read The Enigma of Reason by Hugo Mercier and Dan Sperber. As the title suggests, the book attempts to make sense of what seems to be a paradox in the human ability to reason: how a mechanism assumed to have evolved in humans to improve their problem-solving ability can be so poor at that job. While reading the book, it struck me that some of the ideas presented within are relevant not just to cognitive science, but also to the field of artificial intelligence (AI). As such, I decided to share some of my thoughts on the book.

Mercier and Sperber see reason as an evolved module within the brain which specializes in producing reasons for our behavior. These reasons serve to justify our behavior to ourselves and others. Reason also enables us to evaluate the reasons proposed by others for their behavior. As such, it serves a primarily social role, one of coordination and cooperation, rather than an intellectual one of abstract problem solving. The authors contrast their theory with the popular two-system hypothesis of decision making, which proposes a fast, biased, unconscious inference mechanism paired with a slow, accurate, conscious reasoning mechanism. They instead see reason as just one of many intuitive inferential mechanisms within the brain. These mechanisms may be performing Bayesian inference or some other probabilistic learning process, but the specifics are not seen as very important. For Mercier and Sperber, what matters is that there is nothing unique about reason per se; it simply operates on meta-level representations rather than on low-level primitives, as most other inference mechanisms, such as visual perception, do.

The authors propose that reason leads us astray in the pursuit of objective knowledge not because it is flawed, but because that is simply not what it evolved to do in the first place. They demonstrate this by walking the reader through a long series of psychology experiments conducted over the past century which show, again and again, that human reason is “biased” in all sorts of informative ways. Rather than reflecting some transcendent capacity for pure abstract logic, human reason is often messy, relying more on context and perceived relevance than on stone-cold rules of logical deduction. Chief among these biases is the so-called myside bias (a generalization of confirmation bias), whereby humans produce reasons which reinforce their previously held beliefs. Ironically, the myside bias often leads people to become more wrong after thinking deeply about a problem than if they had made the decision on the spur of the moment.

What does this have to do with AI? For one thing, it provides an additional perspective on how misguided the idea of artificial general intelligence (AGI) is. Proponents of AGI often appeal to the supposedly general nature of human reason when discussing its possibility in artificial agents. If the hypothesis put forward by Mercier and Sperber is correct, though, human reason isn’t even for problem solving, let alone for some hypothetical context-agnostic problem solving. Rather, it is for the justification and explanation of behavior and decisions reached implicitly within an embodied social context. In this view, the so-called flaws in human reasoning which a superhuman intelligence would supposedly correct are not flaws at all, but rather features of a mechanism working exactly as it evolved to. If we accept this view, then extending human reason into a general problem-solving system would be like extending a skyscraper in New York City into a means of getting to the moon.

All hope is not lost, however, as the authors also discuss the conditions under which reason does in fact aid in obtaining objective truth or knowledge, and those conditions are primarily social. For Mercier and Sperber, it is when reason is placed within its proper context, and humans are forced to justify and explain themselves to others, that reason does more than simply serve a social function. Critically, this is because humans are better at evaluating reasons than at producing them. As a result, our bias towards producing reasons which justify our implicit decisions and others’ ability to find flaws in those reasons end up working together. Together, these two abilities make open argumentation and debate possible, producing reasons which are better tuned to reality and can thus ultimately drive better problem solving. The authors support this conclusion with a wealth of cognitive psychology research showing the critical role of argumentation in reaching informed decisions, above and beyond what individuals in a group would have arrived at alone.

Within the context of AI, this perhaps suggests that rather than building a single system which is a “general reasoner,” a better approach to achieving superhuman problem solving would be to develop a kind of society of artificial agents, each with a unique perspective and with human-like reasoning abilities: specifically, the propensity to produce many reasons to justify its own behavior, and the ability to accurately discriminate between the good and bad reasons produced by other agents. These agents could then argue and debate their ideas in order to arrive at better-informed ones. We already see trends in this direction in the current literature on ensemble methods, where multiple models each come to a different decision and some higher-level process takes all of those decisions into account when producing a final output. A further step, however, is to allow each agent to exchange information with all the others through the bottleneck of reasons, which must be justified and explained. Coordination through learned multi-agent communication is another potentially promising avenue toward this kind of system.
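As a toy illustration of what such a debate loop might look like (all of the names and interfaces below are hypothetical, drawn neither from the book nor from any particular library), imagine that each agent proposes an answer together with a justifying reason, and the group keeps whichever answer best survives evaluation by the other agents:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Proposal:
    answer: str
    reason: str

@dataclass
class Agent:
    name: str
    propose: Callable[[str], Proposal]     # question -> (answer, reason)
    evaluate: Callable[[str, str], float]  # (question, reason) -> score in [0, 1]

def debate(question: str, agents: List[Agent]) -> Proposal:
    # Each agent produces an answer along with a reason justifying it.
    proposals = [agent.propose(question) for agent in agents]

    def peer_score(i: int) -> float:
        # A reason is scored by every agent except its producer, mirroring
        # the claim that we evaluate reasons better than we produce them.
        return sum(agent.evaluate(question, proposals[i].reason)
                   for j, agent in enumerate(agents) if j != i)

    # Keep the proposal whose reason best survives the group's scrutiny.
    return proposals[max(range(len(proposals)), key=peer_score)]
```

The asymmetry Mercier and Sperber describe lives in peer_score: no agent gets to grade its own reason, so a proposal only wins if its justification convinces the others.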

Reading The Enigma of Reason also encouraged me to think more deeply about what reasons really are, and how they might be instantiated within artificial intelligence systems today. One approach to operationalizing reason is to treat it as a kind of causal model. “The ground is wet because it rained last night” is a reason, but it is also a description of a causal link between rain and a wet ground. Importantly, these kinds of relationships can be learned from experience. “I brought an umbrella because it might rain. It might rain because there are dark clouds in the sky” is a chain of reasons which ultimately exists to justify one’s behavior. There is currently exciting work being done on causal discovery in deep learning models, and I believe that this work can help make possible agents which produce coherent reasons. This work is still in its early stages, but it is exciting to think about the implications. It likely will not lead to some general problem solver, but it may well lead to agents which better understand their worlds and, more importantly, can explain themselves to us and to the other agents with which they interact. That is not only a more achievable goal, but one which is necessary for such systems to truly become a beneficial and everyday part of our lives.
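To make the chain-of-reasons idea concrete, here is a deliberately tiny sketch (my own illustration, not a method from the book or from the causal discovery literature) in which a causal model is reduced to a table of effect-to-cause links, walked backwards from a behavior to produce a chain of “because” statements:

```python
# A toy causal model: each key is an event, each value its assumed cause.
# In a real system these links would be learned from experience,
# not hard-coded.
causes = {
    "I brought an umbrella": "it might rain",
    "it might rain": "there are dark clouds in the sky",
}

def explain(event: str) -> list:
    """Walk the causal links backwards, emitting one reason per step."""
    chain = []
    while event in causes:
        cause = causes[event]
        chain.append(f"{event[0].upper() + event[1:]} because {cause}.")
        event = cause
    return chain

for reason in explain("I brought an umbrella"):
    print(reason)
# I brought an umbrella because it might rain.
# It might rain because there are dark clouds in the sky.
```

In this framing, causal discovery is the problem of learning the causes table from data rather than writing it by hand.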

Written by Arthur Juliani

Interested in artificial intelligence, neuroscience, philosophy, psychedelics, and meditation. http://arthurjuliani.com/
