New scientific understanding and engineering techniques have always impressed and frightened people, and no doubt they will continue to do so. OpenAI recently announced that it expects “superintelligence” – AI that surpasses human abilities – to emerge within this decade. It is building a new team accordingly, and devoting 20% of its computing resources to ensuring that the behaviour of such AI systems will be consistent with human values.
It seems they don’t want rogue artificial superintelligence to wage war on humanity, as happened in James Cameron’s 1984 science fiction thriller, The Terminator (ominously, Arnold Schwarzenegger’s Terminator is sent back in time from 2029). OpenAI is calling on top machine-learning researchers and engineers to help tackle this problem.
But do philosophers have anything to contribute? More generally, what can be expected from a centuries-old discipline in the new technologically advanced era that is now emerging?
To answer this, it is worth emphasizing that philosophy has been important to AI from its very beginnings. One of the first success stories of AI was a 1956 computer program called the Logic Theorist, created by Allen Newell and Herbert Simon. Its job was to prove theorems using the propositions of Principia Mathematica, a three-volume work by the philosophers Alfred North Whitehead and Bertrand Russell, published between 1910 and 1913, that aimed to reconstruct the whole of mathematics on a logical foundation.
In fact, AI’s early focus on logic was largely the result of foundational debates among mathematicians and philosophers.
An important step was the development of modern logic in the late 19th century by the German philosopher Gottlob Frege. Frege introduced quantified variables into logic, in place of names for particular objects such as people. His approach made it possible not only to say, for example, “Joe Biden is president”, but also to express systematically such general thoughts as “there exists an x such that x is president”, where “there exists” is a quantifier and “x” is a variable.
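In modern notation (a standard contemporary rendering, not Frege’s own two-dimensional script), the two claims can be written as follows:

```latex
% The particular claim: Joe Biden is president.
\mathrm{President}(\mathit{biden})

% The general claim: there exists an x such that x is president.
\exists x \, \mathrm{President}(x)
```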
Other important contributors in the 1930s were the Austrian-born logician Kurt Gödel, whose completeness and incompleteness theorems concern the limits of what can be proved, and the Polish logician Alfred Tarski, whose proof of the undefinability of truth showed that “truth” in any standard formal system cannot be defined within that system itself, so that arithmetical truth, for example, cannot be defined within the system of arithmetic.
Ultimately, the abstract concept of a computing machine, proposed by the British pioneer Alan Turing in 1936, built on such developments and had a huge influence on early AI.
However, even if such old-fashioned symbolic AI was indebted to high-level philosophy and logic, “second wave” AI, based on deep learning, derives more from engineering feats involving the processing of vast amounts of data.
Yet philosophy has played a role here too. Take large language models, such as the one that powers ChatGPT, which produces conversational text. These models, with billions or even trillions of parameters, are trained on vast datasets (typically comprising much of the internet). But at their core, they track – and exploit – statistical patterns of language use. Something very much like this idea was expressed by the Austrian philosopher Ludwig Wittgenstein in the mid-20th century: “the meaning of a word”, he said, “is its use in the language”.
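ChatGPT itself is a transformer neural network, but the core idea of exploiting statistical patterns of use can be conveyed with a deliberately tiny sketch (our own toy illustration, not OpenAI’s code): a bigram model that counts which words follow which in a small corpus and samples continuations from those counts.

```python
import random
from collections import Counter, defaultdict

# A toy corpus; real models are trained on much of the internet.
corpus = "the meaning of a word is its use in the language".split()

# For each word, count how often each other word follows it.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(start, length=8):
    """Sample a continuation word by word from the observed counts."""
    word, output = start, [start]
    for _ in range(length):
        options = successors.get(word)
        if not options:  # no observed successor for this word: stop
            break
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

In Wittgenstein’s spirit, such a model “knows” a word only through the company it keeps in the corpus: patterns of use stand in, in miniature, for meaning.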
But contemporary philosophy, not just its history, is relevant to AI and its development. Can an AI really understand the language it processes? Can it achieve consciousness? These are deep philosophical questions.
Science has so far been unable to fully explain how consciousness arises from the cells in the human brain. Some philosophers even believe that this is such a “hard problem” that it is beyond the scope of science, and may require the help of philosophy.
Similarly, we might ask whether image-generating AI can be truly creative. British cognitive scientist and philosopher of AI Margaret Boden argues that while AI will be capable of producing new ideas, it will struggle to evaluate them in the same way as creative people do.
She also predicts that only a hybrid (neural-symbolic) architecture – which uses both logical techniques and deep learning from data – will achieve artificial general intelligence.
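To make the hybrid pattern concrete, here is a minimal sketch (entirely our own illustration, with invented names and toy data; real neural-symbolic systems couple trained networks with theorem provers or knowledge bases): a statistical “proposer” stands in for a learned model, and a symbolic “checker” filters its guesses against explicit logical rules.

```python
# A minimal neural-symbolic sketch (illustrative only): a statistical
# component proposes candidate claims, and a symbolic component
# accepts only those consistent with explicit rules.

def neural_propose(query):
    # Stand-in for a trained network: returns scored candidate claims.
    return [("penguin can fly", 0.7), ("penguin can swim", 0.6)]

# Hand-written symbolic knowledge: (subject, verb) -> permitted?
KNOWLEDGE = {("penguin", "fly"): False, ("penguin", "swim"): True}

def symbolic_check(claim):
    subject, _, verb = claim.split()
    return KNOWLEDGE.get((subject, verb), True)

answers = [claim for claim, score in neural_propose("what can a penguin do?")
           if symbolic_check(claim)]
print(answers)  # ['penguin can swim']
```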
Human values
Returning to OpenAI’s announcement: when we asked ChatGPT about the role of philosophy in the age of AI, it suggested to us (among other things) that philosophy “helps ensure that the development and use of AI are consistent with human values”.
In this spirit, perhaps we can allow ourselves to propose that, if AI alignment is as serious an issue as OpenAI believes, it is not just a technical problem to be solved by engineers or tech companies, but also a social problem. It will require input from philosophers as well as social scientists, lawyers, policymakers, citizen users, and others.
Indeed, many people are concerned about the growing power and influence of tech companies and their impact on democracy. Some argue that we need an entirely new way of thinking about AI – taking into account the underlying systems that support the industry. For example, British barrister and author Jamie Susskind has argued that now is the time to create a “digital republic” – one that ultimately rejects the very political and economic systems that have given tech companies so much influence.
Finally, let’s briefly ask how AI will affect philosophy. Formal logic in philosophy actually dates back to the work of Aristotle in ancient times. In the 17th century, the German philosopher Gottfried Leibniz suggested that we might one day have a “calculus ratiocinator” – a calculating machine that would help us get answers to philosophical and scientific questions in a quasi-divine way.
Perhaps we are now beginning to realize that vision, with some authors advocating a “computational philosophy” that literally encodes assumptions and derives consequences from them, ultimately allowing factual and/or value-oriented assessments of those consequences.
For example, the Polygraphs project simulates the effects of information sharing on social media. This can then be used to computationally address questions about how we should form our opinions.
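We are not reproducing the Polygraphs codebase here; the sketch below is only a toy model in the same spirit, with every parameter invented for illustration. Agents hold credences in a claim, gather noisy private evidence, and average their views with randomly chosen neighbours – one can then ask how sharing affects how close the group gets to the truth.

```python
import random

random.seed(1)
N_AGENTS, ROUNDS = 20, 50
TRUTH = 0.7  # the objective chance that evidence supports the claim

# Each agent starts with a random credence (degree of belief) in the claim.
credence = [random.random() for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    for i in range(N_AGENTS):
        # Private evidence: a noisy binary signal correlated with the truth.
        evidence = 1.0 if random.random() < TRUTH else 0.0
        # Social sharing: average own view, a random neighbour's, and evidence.
        j = random.randrange(N_AGENTS)
        credence[i] = (credence[i] + credence[j] + evidence) / 3

print(f"mean credence after sharing: {sum(credence) / N_AGENTS:.2f}")
```

Varying the sharing rule (who talks to whom, and how much weight neighbours get) is exactly the kind of knob such simulations turn to probe how we ought to form our opinions.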
Undoubtedly, advances in AI have given philosophers a great deal to think about; arguably, they have even begun to provide some answers.
Authors: Anthony Grayling, Professor of Philosophy, Northeastern University London, and Brian Ball, Associate Professor of Philosophy, AI and Information Ethics, Northeastern University London
Disclosure statement: Brian Ball receives funding from the British Academy, and has previously been supported by the Royal Society, the Royal Academy of Engineering and the Leverhulme Trust. Anthony Grayling does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.
This article is republished from The Conversation under a Creative Commons license. Read the original article.