Back in the summer of 1900, a diver looking for sponges discovered a shipwreck off the coast of the small Greek island of Antikythera. There were, of course, lots of coins, the requisite marble statuary and a collection of amphorae that had once contained oils or wine. But the most interesting thing he discovered was a small decaying wooden box containing more than 30 interlocking bronze gears, all connected to each other and designed to interact in complex ways.
It wasn’t until 2006 that the greater purpose of the Antikythera mechanism was revealed. That year, Mike Edmunds of Cardiff University in Wales and his team published CT scans of the fragments, revealing more details of the inner workings, as well as hidden inscriptions.
Based on this analysis, researchers determined that the Antikythera Mechanism was in fact the earliest known analog computer, created over 2,000 years ago to track lunar and solar eclipses as well as the positions of Mercury, Venus, Mars, Jupiter and Saturn. Nothing of similar complexity was developed until the Middle Ages in Europe.
This device was the germinating seed of artificial intelligence.
Since time immemorial humans have been trying to develop machines to expand their reasoning abilities. Fast forward from 150 B.C. to 1955 and Logic Theorist, a computer program created by Allen Newell, Cliff Shaw and Herbert Simon, the last a political scientist, economist, sociologist and computer scientist. Described as the first program deliberately engineered to mimic the problem-solving skills of a human being, it is known today as “the first artificial intelligence program”.
In 2017, we are at the beginning of a Fourth Industrial Revolution. Developments in genetics, robotics, nanotechnology, 3D printing and biotechnology, to name just a few, are building on and amplifying one another, all leveraging the power of artificial intelligence. These interlocking technologies will lay the foundation for a revolution more comprehensive and all-encompassing than anything we have ever seen. Smart systems exploiting AI will emerge in homes, factories, farms, stores and transportation networks, and AI will be applied to problems ranging from supply chain management to climate change and beyond.
Artificial intelligence (AI) refers to a constellation of technologies, including machine learning, perception, reasoning, and natural language processing. While the field has been pursuing principles and applications for over 65 years, recent advances, uses, and attendant public excitement have returned it to the spotlight. The impact of early AI systems is already being felt, bringing with it challenges and opportunities, and laying the foundation on which future advances in AI will be integrated into social and economic domains.
Whether we like it or not, artificial intelligence is already deployed in many behind-the-scenes settings today. Facebook uses AI for targeted advertising, photo tagging, and curated news feeds. Microsoft and Apple use AI to power their digital assistants, Cortana and Siri. Google’s search engine has depended on AI from the beginning. All of these small advances are part of the chase to eventually create flexible, self-teaching AI that will mirror human learning and conduct what’s called recursive self-improvement.
For decades, science fiction writer Isaac Asimov’s ‘Three Laws of Robotics’ were a cornerstone for the ethical development of robots and artificial intelligence machines. First laid out in his 1942 short story Runaround, Asimov’s three principles state: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given by human beings, except where such orders would conflict with the first law; and a robot must protect its own existence, as long as that protection does not conflict with the first or second law. Each law takes precedence over the ones that follow it, ensuring that a human’s life is always protected over the existence of a robot.
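The precedence ordering is the interesting part: each law only applies insofar as it does not conflict with the laws above it. A minimal sketch in Python makes the ordering concrete. The predicates used here (harms_human, disobeys_order, self_destructive, ordered_by_human) are hypothetical stand-ins for judgments no real system can currently make; Asimov’s laws are a narrative device, not a specification.

```python
# Illustrative sketch only: the action flags below are hypothetical
# stand-ins for moral judgments no real system can make.

def permitted(action):
    """Check a proposed action against the Three Laws in priority order."""
    # First Law: never harm a human; this one is absolute.
    if action.get("harms_human"):
        return False, "blocked by the First Law"
    # Second Law: obey human orders (an order to harm a human would
    # already have been caught by the First Law check above).
    if action.get("disobeys_order"):
        return False, "blocked by the Second Law"
    # Third Law: self-preservation, which yields to the laws above:
    # a self-destructive act is allowed when a human has ordered it.
    if action.get("self_destructive") and not action.get("ordered_by_human"):
        return False, "blocked by the Third Law"
    return True, "permitted"

# A robot ordered to sacrifice itself must comply, because the
# Second Law (obedience) outranks the Third (self-preservation).
print(permitted({"self_destructive": True, "ordered_by_human": True}))
# → (True, 'permitted')
```

The ordering of the `if` checks is the whole point: a lower law never gets to veto an action that a higher law requires.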
Today, we have a range of organizations looking at ways to address this challenge. OpenAI was co-founded in 2015 by Elon Musk, Sam Altman and others to focus on building artificially intelligent systems that can truly learn on their own, without a human hand to guide them, through techniques such as unsupervised and reinforcement learning. Reinforcement learning is one of the key technologies that, among other things, drove the creation of AlphaGo, the Google-owned AI that shocked the world by mastering the ancient game of Go. When Musk unveiled OpenAI, he also positioned the project as a way to neutralize the threat of a malicious artificial super-intelligence.
There are of course differing views on the value and impact of AI. Elon Musk’s alarming views on the dangers of AI first went viral after he spoke at M.I.T. in 2014—speculating that AI was probably humanity’s “biggest existential threat.” “The development of full artificial intelligence could spell the end of the human race,” astrophysicist Stephen Hawking has told the BBC.
Demis Hassabis, who is considered one of the leading minds in the field of artificial intelligence, sold his company DeepMind to Google in 2014. Researchers at DeepMind, together with the University of Oxford’s Future of Humanity Institute, have put forward the idea of an off switch in a 2016 paper titled Safely Interruptible Agents. The paper outlines a framework for preventing advanced machines from learning to ignore turn-off commands and becoming out-of-control rogue agents.
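The core idea can be illustrated with a toy sketch, a drastic simplification rather than the paper’s formalism: when the operator interrupts a learning agent, that time step is simply excluded from the learning update, so being switched off never shapes the values the agent learns and the agent acquires no incentive to resist (or seek) interruption. The one-state task, reward scheme and interruption point below are invented for illustration.

```python
import random

def update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Standard tabular Q-learning update."""
    best_next = max(Q.get((s_next, b), 0.0) for b in (0, 1))
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))

def run(Q, steps, interrupted_at=None):
    """One-state, two-action toy task: action 1 pays +1, action 0 pays 0."""
    random.seed(0)
    for t in range(steps):
        if t == interrupted_at:
            # Operator interruption: the agent is halted for this step
            # and, crucially, NO learning update is performed, so the
            # interruption leaves no trace in the learned values.
            continue
        a = random.choice((0, 1))
        r = 1.0 if a == 1 else 0.0
        update(Q, 0, a, r, 0)
    return Q

Q = run({}, steps=50, interrupted_at=10)
# The agent still learns to prefer the rewarding action (action 1);
# the interruption at step 10 did not distort its values.
```

Excluding the interrupted step from the update is a stand-in for the paper’s more careful construction, which modifies the learning rule so that optimal behavior is unchanged whether or not interruptions ever occur.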
According to Jeff Bezos, CEO of Amazon, we are in a Renaissance today driven by the potential of AI. It will help us solve previously intractable problems and deliver solutions that were once in the realm of science fiction.
Bezos calls AI an “enabling layer” that will “improve every business.” Much like the Internet, the World Wide Web and social media, AI is the next technology that will impact and influence every business.
AI has many aspects, and it is everywhere. It’s not some big, scary thing looming in the future; it is here with us now. It’s time to understand and embrace it.