Grounding the current ethics of AI debate

Paul FMJ Verschure

The burgeoning field of neural network-driven AI is stimulating a vigorous and essential debate on the ethics of AI. To make this debate more accessible, Alan Robertson of the Global Council for Responsible AI has compiled a list of the leading voices in this debate, together with short profiles of the experts included and their positions. Providing the community with such an overview is very useful and timely. It is crucial, however, that such a list, its explanations, and its attributions are accurate. This is challenging indeed. What caught my attention was the remark in the profile of influencer 13, Gary Marcus:

Gary plays a relevant role in the AI debate, but it is not for posing this foundational question. I wonder whether he would describe his central message in this way. The question “Does it understand?” has been posed and addressed regularly in the history of ideas on machine intelligence. An example:

“[AI] has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.”

The word [AI] was actually “the Analytical Engine” in Note G, written in 1843 by the 27-year-old Ada, Lady Lovelace, in her notes to Menabrea’s “Sketch of the Analytical Engine invented by Charles Babbage Esq.” (in Richard Taylor, F.S.A. (Ed.), Scientific Memoirs, Vol. III).

In the few words she shared with posterity, the same author predicts that through AI:

“the relations and the nature of many subjects in … science are necessarily thrown into new lights, and more profoundly investigated.”

Indeed, if we ask descendants of the Analytical Engine, such as Gemini, ChatGPT, DeepSeek, or Claude, “Does AI actually understand what it’s doing?”, they will all provide a clear answer: “No.” Gary and the LLMs are in agreement on the matter! In addition, despite not knowing what the symbols mean, these LLMs will all point to a long history of this debate, which for the current period would start with Turing’s 1950 “Computing Machinery and Intelligence” and Searle’s 1980 “Minds, Brains, and Programs.” As a human exposed to this history, I could add Dreyfus, Harnad, Suchman, Clancey, Penrose, Edelman, and the list goes on (here is a short summary). Stevan Harnad dubbed the issue the symbol grounding problem in 1990; it was one of the fundamental stumbling blocks of Good Old-Fashioned AI and a challenge that the subsequent AI paradigms of connectionism, new or embodied AI, and contemporary connectionist AI have all failed to solve. I prefer to call it the Problem of Priors because it asks what assumptions one is willing to make to obtain a result.

We stand on the shoulders of giants, so to quote one of Gary’s most frequent statements: “I said it before!”. In particular, in my criticism of the connectionist revival of the 1980s, I already pointed out in 1990 that in the then much-hyped connectionist models like NETtalk, you get out what you put in, nothing more and possibly less. Qualitatively, not much has changed. My response to this Problem of Priors formed the foundation of the research program on the Distributed Adaptive Control of Mind and Brain, which takes as its central premise: “To understand cognition, the focus should not be on a predefined body of knowledge, but on how this can be acquired through system-environment interaction”. Currently, DAC is one of the very few theories of the embodied and situated mind/brain that is actively and continuously researched, if not the only one. That is not necessarily because its proponents are outsmarting the rest of the community but rather because it is a difficult and perilous endeavour given the collective proximal zone of science. But that is for another post.

The message with respect to the ethics of AI discussion is that if we want to make progress on any issue, we must avoid falling into an amnesic science where anything older than ±5 years has never existed. It is exciting to discover the new-new thing, but reliving this excitement by purposeful forgetting is not a way to advance the debate. Indeed, Ada Lovelace already warned us against AI hype:

“It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable.”

While I am at it, the second suggestion from the post, which I found intriguing, was: “he doesn’t just criticize—he proposes alternatives. He argues for hybrid systems that combine deep learning with symbolic reasoning.” This surprised me because that proposal is hardly an alternative, nor is it very new. One does not solve the symbol grounding problem by bolting the word “Symbol” onto the word “Neuro”. A bit more is needed. Effectively, Gary is proposing to merge a contemporary approach that has not solved the symbol grounding problem, LLM AI, as he has vigorously reminded us, with the symbol systems of Good Old-Fashioned AI, which also did not solve it. To be an alternative, it needs a bit more explanation, to say the least, because the symbols still have to be grounded in someone’s or something’s experience. To my understanding, there is no existence proof of such a neurosymbolic system. And that is another important contrast: building artificial systems with advanced human-level capabilities is easier said than done. Or, in this context, more easily criticized than understood and practically improved.

I am reminded of the misguided attempt in 2017 by the European Parliament, inspired by a combination of “overenthusiastic” philosophers and industry lobbyists, to vote on the resolution on “Civil Law Rules on Robotics,” which recommended that advanced autonomous robots should be given the status of “electronic person.” They did drink the Kool-Aid of dumb AI systems masquerading as the vanguard of an imagined AI future. This should give everybody pause and instil some humility in the ethics debate. The point is not to stifle the debate but to reduce the noise level: do we all really understand what we are talking about? The precautionary principle also holds here.

In our work on Distributed Adaptive Control, we focus on ethics by design, integrating mechanisms of moral decision-making, including its deontological and utilitarian variants. We are pursuing this complex challenge in the European Innovation Council project CAVAA, part of the Pathfinder Awareness Inside mission. At the heart of this project on Machine Morality is the hypothesis that the mind/brain virtualizes complex interactions with the social environment to represent and predict the future and to extract norms to include in the decision-making of single agents. In other words, brains virtualize to comprehend the invisible. This approach, called Collaborative Cybernetics, combines neuroscience, cognitive science, robotics, and artificial intelligence to realize a real-world cognitive architecture for embodied ethics by design. This moral DAC will have neuronal elements and construct symbols, but it has to do much more to serve as a functioning real-world architecture.
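To make the combination of deontological and utilitarian mechanisms a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not the DAC or CAVAA implementation, and all names, norms, and utility numbers are hypothetical: a deontological filter first vetoes actions that violate extracted norms, and a utilitarian score then ranks whatever remains.

```python
# Illustrative sketch only: a toy decision layer combining a deontological
# filter (hard rules that veto actions) with utilitarian ranking (expected
# utility). Names and numbers are hypothetical; this is not DAC/CAVAA code.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Action:
    name: str
    expected_utility: float                              # utilitarian score: higher is better
    violates: List[str] = field(default_factory=list)    # norms this action would break


def choose_action(actions: List[Action], norms: List[str]) -> Optional[Action]:
    """Deontological filter first, utilitarian ranking second."""
    # Hard constraints: any action that violates an extracted norm is vetoed outright.
    permitted = [a for a in actions if not any(n in norms for n in a.violates)]
    if not permitted:
        return None  # no morally permissible action is available
    # Among the permitted actions, maximize expected utility.
    return max(permitted, key=lambda a: a.expected_utility)


if __name__ == "__main__":
    norms = ["deceive", "harm"]  # stand-in for norms extracted from (simulated) social interaction
    options = [
        Action("mislead the user for engagement", 0.9, violates=["deceive"]),
        Action("give an honest but less engaging answer", 0.6),
        Action("defer and ask for clarification", 0.2),
    ]
    best = choose_action(options, norms)
    print(best.name if best else "no permissible action")
```

In a real architecture, of course, the norms would not be hand-coded: they would have to be extracted, and grounded, through the agent's own interaction with its social environment, which is precisely the hard part.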

As we worry about the ethics of AI, among other things, we should not forget to use it for what it is. By virtue of this technology, with all its limitations of hallucinations, bias, and self-referential model collapse, we can access vast domains of knowledge, as Ada Lovelace already foretold. Humans have a decent track record of adapting to and evolving with their inventions and of complementing the limitations of their technologies, especially by sticking to old-fashioned embodied reasoning and Enlightenment values. Looking at the world at large, this might be going out of style, as we analyzed in the Ernst Strüngmann Forum “How Collaboration Arises and Why It Fails.” We should guard against becoming an epoch that could have known it all but chose deliberate ignorance out of infatuation with illusory novelty and its necessary hyperbole.