AI existed before computer science
The story of artificial intelligence usually starts in 1956, at a summer workshop on the campus of Dartmouth College. A small group of researchers coined the term and kicked off a new academic field. But the ideas behind AI, the dream of building thinking machines and encoding human reason into mechanical systems, are far older than that. They're older than computers. They're older than electricity. In many ways, they're older than science itself. The roots of AI stretch back thousands of years, through mythology, philosophy, logic, and mechanical invention. Understanding this deeper history doesn't just add color to the timeline. It reveals that AI isn't a product of computer science at all. It's a continuation of one of humanity's oldest ambitions: to externalize thought.
Thinking machines in myth and imagination
Long before anyone built a circuit, people imagined artificial beings that could think and act. The ancient Greeks were especially prolific in this regard. In Greek mythology, Hephaestus, the god of invention and blacksmithing, was a creator of automata. Homer describes golden tripods on wheels that moved on their own to serve the gods at banquets, and golden handmaidens "wrought in the semblance of living maids" who possessed understanding, speech, and strength. These weren't just decorative figures. They were imagined as functional, autonomous agents.

The most striking example is Talos, a giant bronze automaton first mentioned around 700 BC in the works of Hesiod. Zeus commissioned Talos to patrol the island of Crete, circling it three times daily and hurling boulders at approaching enemy ships. In the myth, Talos had a single vein running from neck to ankle, sealed by a nail at the heel, carrying ichor (the life-giving fluid of the gods). He was, in every functional sense, an artificial guardian programmed with a purpose.

As Stanford classicist Adrienne Mayor has argued, these myths represent the earliest conceptions of robots and artificial life. They show that the desire to create intelligent, self-moving machines predates any formal theory of computation by millennia.

Beyond Greece, similar ideas appeared across cultures. In Jewish tradition, the Golem legends describe artificial beings made from clay and animated through mystical words. In Chinese texts from the 3rd century BC, the Liezi describes a humanoid automaton presented to King Mu of Zhou. The impulse to create artificial minds was never confined to one civilization.
From philosophy to formal reasoning
If mythology gave us the dream of artificial beings, philosophy gave us the tools to start building them, at least conceptually. Aristotle, writing in the 4th century BC, developed the syllogism, the first formal system of deductive logic. A syllogism takes two premises and derives a necessary conclusion: "All men are mortal. Socrates is a man. Therefore, Socrates is mortal." This wasn't just an exercise in rhetoric. Aristotle was trying to identify the fundamental laws of valid reasoning, rules so precise they could be applied mechanically.

Aristotle was also the first logician to use variables, replacing specific terms with placeholders to express general patterns of inference. This level of abstraction, separating the structure of an argument from its content, is exactly what makes computation possible. His syllogistic logic dominated Western thought for over two thousand years and directly influenced the development of mathematical logic in the 19th century.

Thomas Hobbes took this further in the 17th century. In De Corpore (1655), Hobbes made a bold claim: "By reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract." This was a radical idea. Hobbes was saying that thinking is calculation, that the mind operates by manipulating internal symbols according to rules, much like an adding machine. Some scholars have called Hobbes "the grandfather of AI" for this insight. If reasoning is just computation, then in principle, a machine that computes could also reason.
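The mechanical character of the syllogism is easy to demonstrate: the pattern can be applied by blind rule-following, with no understanding of what the terms mean. Here is a minimal sketch in Python; the fact representation and the `infer` helper are illustrative assumptions for this article, not a historical reconstruction, and both premises are treated uniformly as "every A is B":

```python
# Two premises, each encoded as a pair (A, B) meaning "every A is B":
# "All men are mortal" and "Socrates is a man".
facts = {("man", "mortal"), ("socrates", "man")}

def infer(facts):
    """Apply the syllogistic pattern mechanically:
    from (A, B) and (B, C), derive (A, C), until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(derived):
            for (b2, c) in list(derived):
                if b == b2 and (a, c) not in derived:
                    derived.add((a, c))
                    changed = True
    return derived

conclusions = infer(facts)
print(("socrates", "mortal") in conclusions)  # the conclusion follows by rule alone
```

The point is not the code but the fact that no step requires knowing who Socrates is: the conclusion falls out of the structure of the premises, exactly the separation of form from content that Aristotle's variables made possible.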
Ramon Llull's thinking machine
While Hobbes theorized about reasoning as computation, a Catalan polymath named Ramon Llull had already tried to build a device that could do it. Around 1275, Llull designed what he called the Ars Magna ("The Great Art"), a system of nested rotating paper discs inscribed with symbols representing fundamental concepts, divine attributes, virtues, and vices. By rotating the discs, a user could generate combinations of these concepts and, Llull believed, arrive at truths through systematic exploration of all possible logical relationships.

Llull's original motivation was religious. He wanted to create a tool that could convince Muslims and Jews of Christian theological truths through pure logic, without relying on scripture or authority. The approach was naive, but the underlying ideas were remarkable. The Ars Magna embodied three principles that remain central to AI and computer science today: first, that a limited set of fundamental concepts could represent a vast domain of knowledge; second, that meaningful combinations of those concepts could be generated mechanically; and third, that the process of reasoning could be systematized and, in some sense, automated.

Llull likely drew inspiration from the zairja, a combinatorial device used by medieval Arab astrologers that used the 28 letters of the Arabic alphabet to represent categories of philosophical thought. By combining number values associated with letters, users could generate new paths of inquiry. The cross-pollination of ideas across cultures was already driving innovation in mechanical reasoning.
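In modern terms, rotating Llull's discs amounts to enumerating a Cartesian product of concept categories. A short sketch makes the idea concrete; the category names below are loosely inspired by Llull's figures but chosen for illustration, not taken from the Ars Magna itself:

```python
from itertools import product

# Hypothetical concept categories standing in for Llull's inscribed discs.
attributes = ["goodness", "greatness", "eternity"]
subjects = ["God", "angel", "man"]
relations = ["difference", "concordance", "contrariety"]

# Each disc rotation selects one symbol per disc; exhausting the rotations
# mechanically enumerates every possible combination of concepts.
combinations = list(product(attributes, subjects, relations))
print(len(combinations))   # 3 discs of 3 symbols each: 27 candidate propositions
print(combinations[0])     # ('goodness', 'God', 'difference')
```

Three discs of three symbols already yield 27 candidate propositions; Llull's actual figures, with more symbols per disc, generated far more. That combinatorial explosion, and the need to decide which combinations are meaningful, is a problem AI still wrestles with.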
Leibniz and the calculus of reason
Gottfried Wilhelm Leibniz, writing in the late 17th century, is perhaps the most direct intellectual ancestor of modern AI. Leibniz envisioned two complementary systems that, together, would make all human disagreement resolvable by calculation. The first was the characteristica universalis, a universal symbolic language capable of expressing all mathematical, scientific, and philosophical concepts in precise notation. The second was the calculus ratiocinator, a formal method for manipulating those symbols according to logical rules, essentially a reasoning engine. Leibniz imagined that with these tools, when two people disagreed, they could simply say: "Let us calculate."

The dream was that logic could be mechanized so thoroughly that truth and falsehood would become matters of computation rather than debate. Leibniz never fully realized either system, but his vision directly influenced the development of formal logic, symbolic mathematics, and eventually computer science. His work inspired later logicians like George Boole, Gottlob Frege, and Bertrand Russell, each of whom pushed the formalization of reasoning closer to something a machine could execute.
Boole and the algebra of thought
In 1854, George Boole published An Investigation of the Laws of Thought, a work that transformed logic from a branch of philosophy into a branch of mathematics. Boole showed that logical propositions could be expressed as algebraic equations, with variables representing truth values and operations like AND, OR, and NOT governing their combination.

Boolean algebra seemed abstract and philosophical at the time, interesting mainly to mathematicians and logicians. But a century later, Claude Shannon demonstrated that Boolean algebra could model electrical circuits, where current either flows or doesn't, on or off, true or false. This insight became the foundation of digital computing.

Boole's contribution is a perfect example of the pattern running through this entire history: ideas about the nature of thought, developed long before any practical computing technology existed, turned out to be exactly what was needed to build thinking machines.
Babbage, Lovelace, and the Analytical Engine
Charles Babbage's Analytical Engine, conceived in 1837, was the first design for a general-purpose programmable computer. Though it was never built during his lifetime, the design included all the essential components of a modern computer: an arithmetic unit, memory, conditional branching, and looping, all powered by steam and programmed with punch cards.

Ada Lovelace, working with Babbage, went further. In her famous 1843 notes on the Analytical Engine, she described what is now recognized as the first computer program, an algorithm for computing Bernoulli numbers. But more importantly, Lovelace grasped something about the machine that even Babbage may not have fully appreciated: its potential extended beyond mere number-crunching.

Lovelace understood that the Analytical Engine could manipulate symbols of any kind, not just numbers. She foresaw that it could compose music, produce graphics, and perform tasks that went far beyond arithmetic. At the same time, she was careful to note that the machine could only do what it was instructed to do: "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform."

Alan Turing, a century later, encountered Lovelace's notes and addressed her position directly, dubbing it "Lady Lovelace's Objection" in his famous 1950 paper on machine intelligence. The conversation between Lovelace and Turing, separated by a hundred years, shows how deeply the philosophical questions about artificial minds preceded the technology needed to explore them.
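For the curious: Bernoulli numbers can be computed from a standard recurrence, which gives a sense of the kind of looping, conditional calculation Lovelace's program laid out. The sketch below uses the usual modern recurrence, not Lovelace's exact operation tables or her numbering convention:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0 .. B_n (modern convention, B_1 = -1/2),
    via the recurrence sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(Fraction(-s, m + 1))
    return B

print(bernoulli(6))  # B_0=1, B_1=-1/2, B_2=1/6, B_3=0, B_4=-1/30, ...
```

The program is tiny today; on the Analytical Engine, expressing the same loop and its intermediate results required pages of carefully tabulated punch-card operations, which is precisely what made Lovelace's notes the first recognizable computer program.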
Automata across cultures
While European philosophers debated the nature of reasoning, engineers and inventors across the world were building actual mechanical automata. Ismail al-Jazari, a 12th-century polymath from Mesopotamia, documented over 50 mechanical devices in his Book of Knowledge of Ingenious Mechanical Devices (1206). Among them were programmable automata: a musical band of robot musicians that played on a boat, a humanoid servant that poured water for hand-washing rituals, and the famous Elephant Clock, a 22-foot-tall device that combined Indian, Persian, Arab, Egyptian, and Chinese design elements. Al-Jazari didn't just imagine these machines. He built them and provided detailed construction instructions.

In 18th-century Europe, Jacques de Vaucanson built a mechanical duck that could eat, digest, and excrete grain (or at least appear to), and a mechanical flute player that could perform twelve different tunes. Pierre Jaquet-Droz created "The Writer," an automaton that could be programmed to write any text up to 40 characters. These weren't computers in any modern sense, but they demonstrated that complex, seemingly intelligent behavior could be produced by purely mechanical means.

The existence of these devices across centuries and cultures reinforces a key point: the ambition to create artificial intelligence was never a product of computer science. It was a human impulse that computer science eventually inherited.
Why this history matters
When we trace the lineage of AI back through Turing and the Dartmouth workshop, we're only seeing the last chapter of a much longer story. The real history includes Greek myths about bronze guardians, a medieval mystic spinning paper discs to generate truths, a 17th-century philosopher declaring that reasoning is computation, a 19th-century mathematician turning logic into algebra, and a countess recognizing that a steam-powered machine could manipulate symbols of any kind.

This matters because it changes how we think about AI. It's not a sudden technological disruption that appeared out of nowhere. It's the latest expression of an intellectual tradition stretching back to antiquity, one that has always asked the same fundamental question: can thought be captured in a system of rules, and if so, can a machine follow those rules? The answer to that question has been pursued by mythmakers, philosophers, logicians, engineers, and mathematicians for thousands of years. Computer scientists are just the latest to pick up the thread.
References
- Adrienne Mayor, Gods and Robots: Myths, Machines, and Ancient Dreams of Technology (Princeton University Press, 2018). https://press.princeton.edu/books/hardcover/9780691183510/gods-and-robots
- Bruce MacLennan, "The History of Artificial Intelligence Before Computers," University of Tennessee (2009). https://web.eecs.utk.edu/~bmaclenn/papers/HistoryAIBeforeComputers.pdf
- Thomas Hobbes, De Corpore (1655), Chapter 1, Sections 1.2-1.3. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/fall2024/entries/hobbes/
- Ramon Llull and the Ars Magna, IIIA-CSIC. https://www.iiia.csic.es/~sierra/wp-content/uploads/2019/02/Llull.pdf
- Gottfried Wilhelm Leibniz, "Characteristica universalis," Wikipedia. https://en.wikipedia.org/wiki/Characteristica_universalis
- George Boole, An Investigation of the Laws of Thought (1854). https://archive.org/details/investigationofl00boolrich
- Ada Lovelace and the Analytical Engine, Bodleian Library, University of Oxford. https://blogs.bodleian.ox.ac.uk/adalovelace/2018/07/26/ada-lovelace-and-the-analytical-engine/
- Ismail al-Jazari, The Book of Knowledge of Ingenious Mechanical Devices (1206). Wikipedia. https://en.wikipedia.org/wiki/Ismail_al-Jazari
- "History of Artificial Intelligence," Wikipedia. https://en.wikipedia.org/wiki/History_of_artificial_intelligence
- NIST, "Ada Lovelace: The World's First Computer Programmer Who Predicted Artificial Intelligence." https://www.nist.gov/blogs/taking-measure/ada-lovelace-worlds-first-computer-programmer-who-predicted-artificial