Acceleration

In futures studies and the history of technology, accelerating change is a perceived increase in the rate of technological change throughout history, which may suggest faster and more profound change in the future and may or may not be accompanied by equally profound social and cultural change.

Big Idea: Technology Grows Exponentially. The roughly two-year doubling of transistor counts – and with it, computing power – known as Moore's Law is just one manifestation of the greater trend that technological change occurs at an exponential rate. In this session, we will look into artificial intelligence and its origins, examine what acceleration is and how it is sculpting our future in tech, and learn a little about some of the theories behind the Singularity.

Artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search engine to Apple’s Siri to autonomous machines.

Artificial intelligence makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Most AI examples that you hear about today – from chess-playing computers to self-driving cars – rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in the data.
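
To make that idea concrete, here is a minimal sketch (illustrative only, with made-up data and labels) of the simplest possible "learn from examples" approach: a classifier that labels a new input by finding the most similar pattern it has already seen.

```python
# Minimal sketch: a tiny "learn from examples" classifier (hypothetical data).

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_neighbor(train_data, query):
    """Predict the label of `query` by copying the label of the closest training example."""
    _, best_label = min(train_data, key=lambda item: distance(item[0], query))
    return best_label

# Hypothetical training set: (feature vector, label) pairs the machine "experiences".
train_data = [
    ((0.1, 0.2), "cat"),
    ((0.9, 0.8), "dog"),
    ((0.2, 0.1), "cat"),
    ((0.8, 0.9), "dog"),
]

print(nearest_neighbor(train_data, (0.15, 0.18)))  # -> "cat"
print(nearest_neighbor(train_data, (0.85, 0.80)))  # -> "dog"
```

Real systems use far richer models and millions of examples, but the principle is the same: the behavior comes from patterns in the data, not from hand-written rules.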

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (like only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

WHY IS IT IMPORTANT?
AI automates repetitive learning and discovery through data. But AI is different from hardware-driven, robotic automation. Instead of automating manual tasks, AI performs frequent, high-volume, computerized tasks reliably and without fatigue. For this type of automation, human inquiry is still essential to set up the system and ask the right questions.

It also adds intelligence to existing products. In most cases, AI will not be sold as an individual application. Rather, products you already use will be improved with AI capabilities, much like Siri was added as a feature to a new generation of Apple products. Automation, conversational platforms, bots and smart machines can be combined with large amounts of data to improve many technologies at home and in the workplace, from security intelligence to investment analysis.

Artificial intelligence adapts through progressive learning algorithms to let the data do the programming. AI finds structure and regularities in data so that the algorithm acquires a skill: the algorithm becomes a classifier or a predictor. So, just as an algorithm can teach itself how to play chess, it can teach itself what product to recommend next online. And the models adapt when given new data. Backpropagation is an AI technique that allows the model to adjust, through training and added data, when the first answer is not quite right.
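
As a rough illustration of that adjust-when-wrong idea, here is a minimal sketch (toy data, a single artificial "neuron") of the gradient-style weight update at the heart of backpropagation: the model compares its answer to the target and nudges each weight in the direction that reduces the error.

```python
import math

def predict(weights, bias, x):
    """Weighted sum of inputs squashed to a 0-1 score."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy task (invented data): output 1 when the second feature is high, 0 otherwise.
data = [((0.2, 0.9), 1), ((0.1, 0.8), 1), ((0.9, 0.1), 0), ((0.7, 0.2), 0)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.5
for epoch in range(200):
    for x, target in data:
        y = predict(weights, bias, x)
        error = y - target                      # the "first answer is not quite right"
        # Nudge each weight a little in the corrective direction (gradient step).
        weights = [w - lr * error * xi for w, xi in zip(weights, x)]
        bias -= lr * error

print(round(predict(weights, bias, (0.15, 0.85)), 2))  # close to 1
print(round(predict(weights, bias, (0.85, 0.15)), 2))  # close to 0
```

In a multi-layer network, backpropagation does the same thing for every layer by passing the error signal backward through the model.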

AI analyzes more and deeper data using neural networks that have many hidden layers. Building a fraud detection system with five hidden layers was almost impossible a few years ago. All that has changed with incredible computer power and big data. You need lots of data to train deep learning models because they learn directly from the data. The more data you can feed them, the more accurate they become.
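
To make "many hidden layers" concrete, here is a minimal sketch of a forward pass through a network with five hidden layers, the kind of depth described above. The layer sizes are hypothetical and the weights are random and untrained; a real fraud model would learn them from large amounts of labeled data.

```python
import numpy as np

rng = np.random.default_rng(0)
# 30 input features, five hidden layers, 1 output score (hypothetical sizes).
layer_sizes = [30, 64, 64, 32, 32, 16, 1]

weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate one transaction's features through every layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + b)          # ReLU hidden layers
    logit = h @ weights[-1] + biases[-1]
    return 1.0 / (1.0 + np.exp(-logit))         # fraud probability between 0 and 1

transaction = rng.normal(size=30)               # stand-in for 30 engineered features
print(forward(transaction))                     # ~0.5, since the weights are untrained
```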

We can achieve incredible accuracy with AI through deep neural networks – which was previously impossible. For example, your interactions with Alexa, Google Search and Google Photos are all based on deep learning – and they keep getting more accurate the more we use them. In the medical field, AI techniques from deep learning, image classification and object recognition can now be used to find cancer on MRIs with the same accuracy as highly trained radiologists.

AI gets the most out of data. When algorithms are self-learning, the data itself can become intellectual property. The answers are in the data; you just have to apply AI to get them out. Since the role of the data is now more important than ever before, it can create a competitive advantage. If you have the best data in a competitive industry, even if everyone is applying similar techniques, the best data will win.

WHAT ARE THE CHALLENGES?
Artificial intelligence is going to change every industry, but we have to understand its limits. The primary limitation of AI is that it learns from the data we provide it. There is no other way in which knowledge can be incorporated. That means any inaccuracies in the data will be reflected in the results. Any additional layers of prediction or analysis have to be added separately.

Today’s AI systems are trained to do a clearly defined task. The system that plays poker cannot play solitaire or chess. The system that detects fraud cannot drive a car or give you legal advice. In fact, an AI system that detects health care fraud cannot accurately detect tax fraud or warranty claims fraud. In other words, these systems are very, very specialized. They are focused on a single task and are far from behaving like humans.

Likewise, self-learning systems are not autonomous systems. The imagined AI technologies that you see in movies and TV are still science fiction. But computers that can probe complex data to learn and perfect specific tasks are becoming quite common.

This animated short by HubSpot does a wonderful job of explaining AI in an easy format. Please watch.

Scientists at Duke and Princeton have developed AI that can not only generate imagery, but can now generate video! This is interesting. Please read.

The Smithsonian magazine has a great article on the effects of AI in our society.

What about AI and artists? Adobe explores how AI can become a creative muse in their excellent blog series. Check it out!

As you should recall from our session on computing, Alan Turing was an instrumental mind behind the investigation of the powers of computers and their ability to rival human thought. In 1950, he published a philosophical paper including the idea of an ‘imitation game’ for comparing human and machine outputs, now called the Turing Test. If a machine could carry on a conversation via a teleprinter that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible and the paper answered all the most common objections to the proposition. The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

Also during the 1950s and '60s, AI for gaming was developed at the University of Manchester: Christopher Strachey wrote a program for playing checkers, and Dietrich Prinz wrote one for chess. Shortly after these two, Arthur Samuel developed a checkers program that achieved enough skill to play against an amateur – a remarkable breakthrough. This was the first time humans could engage in a relatively complex and thoughtful interaction with a computer. The game industry would continue to be a major player in the development of AI throughout its history.

However, these researchers and scientists still weren't calling these thinking machines "Artificial Intelligence." The 1956 Dartmouth Conference was the moment that AI gained its name, its mission, its first success and its major players, and it is widely considered the birth of AI. The conference was organized by Marvin Minsky and John McCarthy, together with Claude Shannon and IBM's Nathaniel Rochester. At the conference, McCarthy persuaded the attendees to accept "Artificial Intelligence" as the name of the field.

The real research and development in AI took place at four key universities – MIT, Carnegie Mellon, Stanford and the University of Edinburgh. The researchers and scientists there were given massive grants to ensure a steady focus on thinking machines and the programming behind them. What came of this was a lot of grand promises by very optimistic scientists – fully intelligent machines within 20 years, for example – that never came to fruition. Progress was nowhere near what the financial backers had been led to believe, and the money quickly vanished. This led to what is known as the first AI winter – a lull in development and a buzzkill for most of the people who had expressed interest and support.

The major reasons behind this inability to deliver are pretty obvious through today's eyes. There simply wasn't enough computing power: memory was extremely limited, processing speeds were very slow, and there wasn't a large enough pool of data for the machines to process in order to "think." These inadequacies, alongside similar pressures from other areas of tech such as gaming and telecommunications, became a major driver of the push to deliver these components in greater volumes, smaller sizes, faster speeds and at affordable prices – leading to the boom in electronics and digital tech that we are in the midst of today.

The success seen in these early developmental years of AI was mostly due to the introduction of logic to these computing machines. Logic was introduced into AI research in 1958 by John McCarthy, and his early version was a bit unwieldy, requiring a massive number of steps to perform rather simple problem solving. A better take on this logic-based approach arrived in the 1970s through Robert Kowalski at the University of Edinburgh and the French researchers Alain Colmerauer and Philippe Roussel, whose work produced the successful logic programming language Prolog. This is when clauses and rules became an integral part of the processing of data. Related work at Carnegie Mellon, building on the tradition of Allen Newell and Herbert Simon, later produced the Soar architecture. Soon after, something known as neural networks began to be an important part of a machine's ability to gain "intelligence."

It wasn't long before a jump was made from logic systems to knowledge systems. Logic wasn't left behind, as it is a critical part of problem solving, but as advances in tech were made, there was more room for base data – in this case, knowledge. Researchers could supply a pool of facts and rules – an approach that became known as expert systems – so that computers could work through problems faster and more accurately. These data sets and languages were distributed in a way that let other programmers and scientists build on the base data and adapt the computers to their own research areas – which reminds me a bit of the hacker culture and open-source movement we see today. Then, starting in the 1980s and lasting through 2010, the major advancement in AI was a new approach to "machine learning," which gave computers the ability to learn specific tasks from sets of data without being explicitly programmed.
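
As a toy illustration of that "pool of facts and rules" idea (not any historical expert system – the claim names and rules below are invented), here is a minimal forward-chaining sketch that keeps applying if-then rules to a fact pool until nothing new can be derived.

```python
# Minimal sketch of an expert-system-style rule engine (made-up claims-review rules).

facts = {"claim_amount_high", "provider_new", "duplicate_billing_code"}

# Each rule: if all conditions are in the fact pool, add the conclusion.
rules = [
    ({"claim_amount_high", "provider_new"}, "needs_manual_review"),
    ({"duplicate_billing_code"}, "billing_anomaly"),
    ({"needs_manual_review", "billing_anomaly"}, "flag_possible_fraud"),
]

changed = True
while changed:                       # keep chaining until no rule adds anything new
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))                 # includes the derived "flag_possible_fraud"
```

Knowledge lives entirely in the rules here; machine learning, by contrast, derives that knowledge from the data itself.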

So AI research looked like it was really picking up again, and it garnered a lot of interest and, of course, lots of money and funding. But yet again, the bubble burst in the late 1980s and there was a second AI winter. An interesting factor in this change of momentum was the arrival of desktop computing. Both Apple and IBM were delivering machines to people's homes that were more powerful and faster than the behemoths these research labs relied on – not to mention worlds cheaper. The now-outdated, expensive specialized-hardware industry was torn down almost overnight. The major funders shifted their interests, labs were shut down, drastically downsized or refocused on different emerging tech, and the dream of human-level intelligence that had captured the imagination of the world in the 1960s was nearly dead.

Despite this significant slowdown in the field of artificial intelligence, several of the main research hubs quietly kept chipping away at it, but in tightly focused areas – leading to a surprising boom that many people weren't anticipating. On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. The supercomputer was a specialized version of a framework produced by IBM.

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for over 130 miles along an unrehearsed desert trail. In 2007, a team from Carnegie Mellon won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to traffic laws. In February 2011, in a Jeopardy! quiz show exhibition match, IBM's Watson defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a remarkable margin.

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of today's computers. In fact, Deep Blue's hardware was roughly 10 million times faster than the machine Christopher Strachey used for his game-playing program in 1951. This dramatic increase is tracked by Moore's Law, which predicts that the speed and memory capacity of computers double every two years. The fundamental problem of "raw computer power" was slowly being overcome.
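
As a quick sanity check on those numbers, assuming an even doubling every two years from 1951 to 1997:

```python
# Rough arithmetic check of the "10 million times faster" figure under a
# two-year doubling assumption: 1951 to 1997 is 46 years, or 23 doublings.
doublings = (1997 - 1951) / 2
speedup = 2 ** doublings
print(doublings, f"{speedup:,.0f}")   # 23.0  8,388,608 -- on the order of 10 million
```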

With the redirection of resources and focus during the AI winter, major advances were being made in the worlds of mathematics and computing. These new insights were noticed by the remaining AI researchers, who found ways to begin melding the disciplines – leading to some valuable tech, such as data mining, industrial robotics, speech recognition and Google's search engine. The forms of artificial intelligence that are now a part of nearly everyone's lives include Apple's Siri, Google's voice assistant, IBM's Watson, Uber's self-driving cars and Amazon's Alexa, just to name a few. These current technologies are thriving on a programming approach known as "deep learning." The push for serious AI is on again, and it is gaining momentum as the tech components are much more a match for what the scientists and engineers behind it need them to be – and they are only getting better.

Please watch this documentary on AI by Fares Alahdab – it poses some interesting questions about how artificial intelligence might become a concern for our workplaces and livelihoods.

50 years ago Gordon Moore, co-founder of Intel, made a simple observation that has revolutionized the computing industry. Basically, it states that the number of transistors (the fundamental building blocks of the microprocessor and the digital age) incorporated on a computer chip will double every two years, resulting in increased computing power and devices that are faster, smaller and lower cost. This insight, known as Moore’s Law, became the golden rule for the electronics industry, and a springboard for innovation for decades to come. But now what?


If you think about it, anything doubling every two years is significant exponential growth. On a graph, it shows up as a gentle curve that quickly turns into a nearly vertical spike. When Moore first made his observation, a chip held only a tiny number of transistors. By 1971, the Intel 4004 contained 2,300 transistors. Today's chips have billions! Does that blow your mind? It should – it's an astonishing rate of advancement.

But like so many things, it may have plateaued. Moore's Law held true for 50 solid years, but it looks as if it may be ending, as the producers of these technologies (transistors and microchips) struggle to keep making breakthrough gains in speed, energy consumption and cooling. Not for lack of trying – chip companies like Intel and Samsung are hard at work researching new materials and compounds that could boost performance and keep temperatures down, as well as new architectures for building transistors. Up until now, transistors have mostly been constructed in a two-dimensional, essentially flat layout. What manufacturers are finding is that a three-dimensional form factor increases surface area, improving speed by roughly a third while halving power consumption.
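
A rough projection using only the numbers above shows how doubling carries you from thousands to billions: start with the 4004's 2,300 transistors in 1971 and double every two years.

```python
# Sketch: project transistor counts under Moore's Law (doubling every two years).
count, year = 2300, 1971
while year < 2020:           # an arbitrary near-present endpoint for the sketch
    count *= 2
    year += 2
print(year, f"{count:,}")    # 2021  77,175,193,600 -- i.e., tens of billions
```

The idealized projection lands in the tens of billions, broadly consistent with the "billions" found in today's chips.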

But this technology is not yet ready – and we have found ourselves sitting in a tech lull where end users aren't seeing the great leaps in our gear that we have become accustomed to. From desktop computers and laptops to phones and tablets, we are being offered nearly the same thing as the previous version, just in new colors, sizes and materials – and maybe with a better display. People have been complaining about this, but without any real understanding of how or why – just nasty comments about how we're getting screwed by the big companies and being forced to wait longer. The fact is, the big players are looking around a bit surprised too, stumbling to make meaningful advancements.

This has pushed some interesting solutions into the hands of other parts of the tech industry. Without the excuse that the next generation of chips will make everything faster and more efficient, software companies are looking at the hardware they currently have to work with, accepting that it will stay that way for a while, and working within their programming to make new gains in performance. And this is a great thing – by getting the software to parse out tasks more efficiently and handle some of the math and computing more cleverly, the processors are able to focus on other work and run more efficiently.

A GPU farm

Another resource that is garnering a lot of attention and use from developers and tech companies is the GPU (remember what that is!?). Google recently revealed a project at its research lab concerning machine learning and AI work in which it had simultaneously used as many as 800 of the most powerful and expensive graphics processors available. Engineers have kept GPU performance climbing because GPUs can be specialized to the particular math needed for graphics or machine learning. The good news for those betting on AI is that graphics chips have so far managed to defy gravity. At a recent conference of leading graphics chipmaker Nvidia, CEO Jensen Huang displayed a chart showing how his chips' performance has continued to accelerate exponentially while growth in the performance of general-purpose processors, or CPUs, has slowed.

But even with these alternative solutions, the fact is that transistor and processor development is seeing an unwanted plateau. This is going to be a time of challenging problem solving and creative infusions in order to keep a steady path of progress in our ever-dependent world of digital technology.

Please watch this short piece with Gordon Moore reflecting on the 50 years of acceleration since his original paper.

Moore's Law was fairly specific to the elements it referenced – transistors and processors. But many scientists and futurists have been actively working on more holistic concepts and predictions, still rooted in technology and AI. Ray Kurzweil, Google's Director of Engineering, is a well-known futurist with a strong track record of accurate predictions. Of his 147 predictions since the 1990s, Kurzweil claims an 86 percent accuracy rate. Kurzweil's Law of Accelerating Returns also extends to the exponential advancement of life (biology) on this planet. Looking at biological evolution on Earth, the first step was the emergence of DNA, which provided a digital method to record the results of evolutionary experiments. Then the evolution of cells, tissues, organs and a multitude of species that ultimately combined rational thought with an opposable appendage (i.e., the thumb) caused a fundamental paradigm shift from biology to technology.

The first technological steps – sharp edges, fire, the wheel – took tens of thousands of years. For people living in that era, there was little noticeable technological change in even a thousand years. By 1000 A.D., progress was much faster and a paradigm shift required only a century or two. In the 19th century, we saw more technological change than in the nine centuries preceding it. Then in the first 20 years of the 20th century, we saw more advancement than in all of the 19th century. Now, paradigm shifts occur in only a few years' time. The World Wide Web did not exist in anything like its present form just a decade ago, and didn't exist at all two decades before that. Kurzweil believes that as these exponential developments continue, we will begin to unlock unfathomably productive capabilities and learn how to solve the world's most challenging problems. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".

Kurzweil reiterated his bold prediction at Austin's South by Southwest (SXSW) festival in 2017 that machines will match human intelligence by 2029. And he doesn't stop there – he predicts "… a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself." This idea is known as the Singularity – a time when human consciousness and knowledge merge with advanced nanotechnologies, resulting in a new paradigm shift and essentially a new version of the human species. He has predicted that this will happen by 2045.

There are plenty of folks who have signed on to this reasoning and, appropriately, many who counter it. Paul Allen, co-founder of Microsoft and of the Allen Institute for Artificial Intelligence, among other ventures, has written that such a technological leap forward is still far in the future. "If the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress," he writes, referring to the idea that past rates of progress can predict future rates as well.

This interview of Kurzweil at the 2017 SXSW conference is an important read.

Watch this TED Talk with Ray Kurzweil where he discusses the impacts of technology changing our lives.

Today's A.I. is designed to perform a specific task (face recognition or autonomous driving) and is called: