The Senior Executive’s Guide To AI, Machine Learning, and Deep Learning


This is the first installment of a three-part series on AI, Machine Learning, and Deep Learning. Part Two outlines where AI and its associated technologies are today, and Part Three outlines the limitations of AI and where senior executives are investing in the technology.

A recent NewtonX survey of 70 senior-level executives on Artificial Intelligence (AI) and chatbots revealed that despite investing in these technologies, most executives do not understand the definition, real-life applications, or limitations of AI. Furthermore, 67% of senior executives said they did not distinguish between AI and machine learning.

AI has become commonplace in business vernacular, yet it remains a widely misunderstood term. As a result, many executives are making poor business decisions based on misconceptions about how to use AI and how to prepare for the costs associated with the technology.

To help remedy this pervasive problem, NewtonX developed a three-part series on AI, designed to give executives at medium-to-large businesses a comprehensive overview of AI, Machine Learning, and Deep Learning. From this guide, you will gain an understanding of each term, the differences between them, and the associated costs, limitations, applications, and potential of each technology.

Part 1, which we are sharing today, gives context and a framework for understanding how AI developed and what key terms such as neural networks and deep learning really mean. Later this week, we will also release Parts 2 and 3, which delve into machine learning, the state of AI today, and the enterprise costs and limitations associated with implementing AI.

Part 1

The History of AI: How We Got Here

AI refers to the ability of a machine to perform cognitive functions that have traditionally been reserved exclusively for the human mind. For instance, before the 1990s, language could only be produced and understood by humans: we could use machines to write and send messages, but the machines themselves could not produce or understand those messages. Today, Natural Language Processing (NLP), the ability of a machine to understand and respond to human language (spoken and written), is one of the most impressive examples of AI. Other cognitive functions that machines can now perform include the ability to reason, to predict, to interact with and respond to their environment, and to learn.

Applications of these abilities include autonomous vehicles, computer vision, and predictive analytics. It is important to note that AI spans different levels of complexity. Some algorithms are considered AI simply because they process big data rapidly and incorporate new data in real time to inform predictions or other outcomes. Other applications, such as image recognition or self-driving cars, are far more complex and often require years of training across many scenarios and massive training data sets.

Speaking of data, it was the explosion of big data beginning in the early '90s that allowed AI to develop in the first place. While the first deep learning models were developed as early as 1965, algorithms could not be adequately trained until the explosion of big data gave them ample training material. The timeline below outlines the factors that were necessary for the market penetration AI has today, and how each of them functioned.

1965 — Development of the first multilayer artificial neural network (ANN).

ANNs are so named because they are a very simple approximation of biological neural networks: artificial neurons are connected by "synapses", which are really just weighted values. A neuron ends up "positive", "negative", or "neutral" depending on the weighted values it receives through synapses from other neurons. For instance, if a single neuron needs a weight of 15 to be "positive", then the sum of its inputs (the weighted outputs of the other neurons connected to it through synapses) must be equal to or greater than 15. When you give an algorithm an input (say, a photo of a dog), that input passes through numerous layers of this kind of processing until the network spits out the word "dog".
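
The threshold logic described above can be expressed in a few lines of code. The following is a minimal, illustrative sketch in Python; the inputs, weights, and the threshold of 15 are made-up values for demonstration, not taken from any real trained network.

def neuron_activates(inputs, weights, threshold=15.0):
    # A neuron is "positive" when the weighted sum of its inputs
    # meets or exceeds the threshold.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum >= threshold

# Three upstream neurons feed this one through weighted "synapses".
upstream_outputs = [1.0, 1.0, 0.0]   # outputs of the connected neurons
synapse_weights = [10.0, 6.0, 4.0]   # strength of each connection

print(neuron_activates(upstream_outputs, synapse_weights))  # True: 10 + 6 = 16 >= 15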

This process is incredibly laborious. Think of it as learning to walk a path: you have never seen a lamp post, and you walk right into it. This hurts, and for the algorithm the equivalent of that pain is being sent backward through the layers instead of forward. You try again, and this time only your knee bumps into the lamp post. Ouch again, and you are sent back. An algorithm needs thousands, if not millions, of tries before it learns to walk in a manner that won't hurt it. But once it has learned, it will be able to walk so that it doesn't step on a dog, even if it has never seen a dog before. This is what we call deep learning today: an ANN that can respond to a previously unknown stimulus. While the first ANN was developed back in 1965, there was simply not enough data to effectively train an algorithm to the point where it could process unrecognized input.
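
To make the trial-and-error idea concrete, here is a simplified sketch in Python of a training loop for a single-neuron network (a perceptron) learning a toy task from made-up data. Modern deep learning adjusts weights across many layers using backpropagation, but the principle is the same: make a guess, measure the error, nudge the weights, and repeat many times.

def train_perceptron(examples, labels, learning_rate=0.1, epochs=1000):
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):                      # thousands of tries
        for x, target in zip(examples, labels):
            weighted_sum = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if weighted_sum >= 0 else 0
            error = target - prediction          # the "ouch" signal
            # Nudge each weight slightly in the direction that reduces the error.
            weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
            bias += learning_rate * error
    return weights, bias

# Toy task: learn the logical AND function from four labeled examples.
examples = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]
print(train_perceptron(examples, labels))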

1990s — The European Organization for Nuclear Research (CERN) opens up the World Wide Web to the public.

In 1990, Tim Berners-Lee developed the first web browser software at CERN.

[Image: the first web browser]

The technology was opened to the general public in 1991, but it would still be years before most people used, or even understood, the Internet.

Mid-2000s — Web 2.0 explodes, launching the era of user-generated data

Web 2.0 ushered in the era of user data. The term refers to the transition from passively viewing content on the Internet to actively producing content — be it through social media, blogs, wikis, video sharing sites, or hosted services.

In 2004, Facebook launched (then called "Thefacebook"), and by the end of 2005 it had close to six million data-generating users.

2007 — Launch of the iPhone propels smartphones into the mainstream

The launch of the iPhone, and the mainstream adoption of smartphones that followed, massively expanded the volume of user data. Suddenly, consumers were "always on", interacting with their devices 24/7, and AI finally had the abundant training data it had so long lacked.

Suddenly, the technology developed in 1965 had enough training data to rapidly and effectively teach an algorithm through an ANN. Say you want to teach an algorithm what a dog is: now you can simply type "dog" into Google Images and get a virtually unlimited supply of photos to train your algorithm with. This led to the profusion of AI we see today, which we will examine in Part 2 of this series.
