What is Artificial Intelligence?
Artificial Intelligence has always been a term that intrigues people all over the world. Artificial Intelligence (AI) refers to the ability of machines to perform cognitive tasks like thinking, perceiving, learning, problem-solving and decision-making; it is inspired by the ways people use their brains to perceive, learn, reason and decide on an action. (A Layman’s Guide to Artificial Intelligence)
Various organizations have coined their own definitions of Artificial Intelligence. Some of them are mentioned below:
NITI Aayog: National Strategy for Artificial Intelligence
AI refers to the ability of machines to perform cognitive tasks like thinking, perceiving, learning, problem solving and decision making. Initially conceived as a technology that could mimic human intelligence, AI has evolved in ways that far exceed its original conception. With incredible advances made in data collection, processing and computation power, intelligent systems can now be deployed to take over a variety of tasks, enable connectivity and enhance productivity.
World Economic Forum
Artificial intelligence (AI) is the software engine that drives the Fourth Industrial Revolution. Its impact can already be seen in homes, businesses and political processes. In its embodied form of robots, it will soon be driving cars, stocking warehouses and caring for the young and elderly. It holds the promise of solving some of the most pressing issues facing society, but also presents challenges such as inscrutable “black box” algorithms, unethical use of data and potential job displacement. As rapid advances in machine learning (ML) increase the scope and scale of AI’s deployment across all aspects of daily life, and as the technology itself can learn and change on its own, multi-stakeholder collaboration is required to optimize accountability, transparency, privacy and impartiality to create trust.
European Artificial Intelligence (AI) leadership, the path for an integrated vision
AI is not a well-defined technology and no universally agreed definition exists. It is rather a cover term for techniques associated with data analysis and pattern recognition. AI is not a new technology, having existed since the 1950s. While some markets, sectors and individual businesses are more advanced than others, AI is still at a relatively early stage of development, so that the range of potential applications, and the quality of most existing applications, have ample margins left for further development and improvement.
Encyclopedia Britannica
Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.
In other words, AI can be defined as:
AI is at once a form of intelligence, a type of technology and a field of study. It concerns the theory and development of computer systems (both machines and software) that are able to perform tasks normally requiring human intelligence. Artificial Intelligence covers a broad range of domains and applications and is expected to impact every field in the future. Overall, its core idea is building machines and algorithms which are capable of performing computational tasks that would otherwise require human-like brain functions.
History of AI – Live Science
The beginnings of modern AI can be traced to classical philosophers’ attempts to describe human thinking as a symbolic system. But the field of AI wasn’t formally founded until 1956, at a conference at Dartmouth College in Hanover, New Hampshire, where the term “Artificial Intelligence” was coined. The graphic below explains why AI is a live science, the ups and downs in the pace of the AI journey, and how the field progressed from 1930 to 2000. (Reference)
What do we understand by AI in EDUCATION?
An effective education system has the dual responsibility of developing the most critical resource of a nation, i.e. its human resource. First, the younger generations must be educated in a way that makes them ‘ready for life’ and positive contributors to the advancement and enrichment of their nation. Second, they must be exposed to learning environments equipped with updated tools and enlightened teachers, so that their learning outcomes are maximized and suited to the potential of every learner. For modern-day education to achieve its goal of making students ‘AI ready’, it is imperative to know what K-12 learners must experience and confront in their day-to-day life.
AI underlies a multitude of applications in the world; it encompasses and works on an array of capabilities which have universal application in different areas of study and operations. Some of the most important AI competencies, with significant commonalities and connections to other fields of study, are shown in the graphic below.
A careful study of the above graphic leads us to believe that many of these technologies, and the principles underlying each of them, have a strong correlation with the teaching-learning processes at both school and college levels. Hence AI should not only be introduced as a subject in the school curricula, but should also become a link for teaching other subjects at all levels. Many AI-based applications are now available to help a learner learn in their own unique way and at their own pace.
AI is NOT ALONE
AI does not operate in silos, nor is it a stand-alone field of study or practice. AI derives its knowledge from, and finds its applications across, other domains of knowledge. See below how the school domains of study (both formal and informal) interact with the concepts that Artificial Intelligence follows.
AI CROSS-BREEDS WITH OTHER SUBJECTS
| Subject Domain | What is Common with the AI Domain |
| --- | --- |
| Psychology | How people perceive information, process it and build knowledge; how they behave |
| Philosophy | Mind as a physical entity, methods of reasoning, basis of learning, foundations of language, rationality and logic |
| Neuroscience | How the basic information-processing units (neurons) process information |
| Mathematics | Algorithms, computability, proof, methods of representation, tractability & decidability |
| Statistics | Learning from data, uncertainty/certainty of modelling |
| Economics | Rational economic agents, usefulness of data & models, decision theory |
| Linguistics | Grammar, syntax, knowledge representations |
| Computer Science | Building computers |
| Cognitive Sciences | Processes & things in nature, interpretation of different phenomena & their impact |
AI-Related Terminologies
Story Speaker: It is an AI experiment available as an add-on to Google Docs. Story Speaker lets anyone create an interactive story with no coding required. It is easy to install and use, and comes in handy when the user wants to create a story that changes according to the user’s input.
Data Acquisition: Data acquisition refers to acquiring authentic data crucial for the AI model from reliable sources. The data acquired could then be divided into two categories: Training Data and Testing Data. The AI model gets trained on the basis of training data and is evaluated on the basis of testing data.
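To make the split concrete, here is a minimal Python sketch of dividing an acquired dataset into training and testing data. The fruit records and the 80/20 split ratio are assumptions made purely for illustration, not part of any particular AI model.

```python
import random

# Illustrative dataset: each record pairs measured features with a label.
# The fruit examples and the 80/20 split ratio are assumptions for this sketch.
dataset = [
    ({"weight": 150, "colour": "red"},    "apple"),
    ({"weight": 120, "colour": "yellow"}, "banana"),
    ({"weight": 160, "colour": "green"},  "apple"),
    ({"weight": 118, "colour": "yellow"}, "banana"),
    ({"weight": 155, "colour": "red"},    "apple"),
]

random.shuffle(dataset)                   # avoid any ordering bias
split_point = int(0.8 * len(dataset))     # 80% for training, 20% for testing

training_data = dataset[:split_point]     # used to train the AI model
testing_data = dataset[split_point:]      # held back to evaluate the model

print(f"{len(training_data)} training records, {len(testing_data)} testing records")
```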
Rock, Paper Scissors: This rock-paper-scissors game illustrates the basic principles of an adaptive artificial intelligence technology. Here, the artificially intelligent system learns to identify patterns of a person’s behavior by analyzing their decision strategies in order to predict future behavior. This game is based on the domain Data for AI where the machine collects and analyzes data to predict future outcomes.
Link to the game: https://www.afiniti.com/corporate/rock-paper-scissors
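As a rough illustration of the idea behind such a system, the sketch below counts how often a player has thrown each move and plays whatever beats the most frequent one. This simple frequency strategy is an assumption made for teaching purposes and is not a description of Afiniti’s actual algorithm.

```python
from collections import Counter

# Which move beats which (rock < paper < scissors < rock).
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def predict_counter_move(history):
    """Guess the player's next move from past frequencies and counter it."""
    if not history:
        return "rock"                       # no data yet: arbitrary default
    most_common_move, _ = Counter(history).most_common(1)[0]
    return BEATS[most_common_move]          # play whatever beats the likely move

# Example: a player who favours scissors.
past_moves = ["scissors", "rock", "scissors", "scissors", "paper"]
print(predict_counter_move(past_moves))     # -> "rock"
```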
Mystery Animal: Mystery Animal is an AI experiment developed by Google on an open-source platform, based on the Natural Language Processing domain. In this game, the computer pretends to be an animal and the player needs to guess it by asking 20 Yes/No questions. The player asks the machine questions with the help of earphones/headphones/a microphone, to which the machine responds with either Yes or No, and according to the answers the player modifies his/her questions to guess the animal. https://mysteryanimal.withgoogle.com/
Artificial Intelligence vs Machine Learning vs Deep Learning: AI has always fascinated people and, in recent years, with the growth of supporting technologies, it has gained significant importance in real-world applications. No industry is left behind; AI has made its mark in almost all sectors. With the emergence of AI, its subsets Machine Learning and Deep Learning have also come into the limelight. Machine Learning is a subset of AI in which machines learn patterns from data, and Deep Learning is in turn a subset of Machine Learning that uses multi-layered neural networks. (Reference)
Natural Language Processing: It is the ability of a program to understand human language. Human language data can be fed to the machine in the form of text or speech. Natural Language Processing is one of the sub-fields of Artificial Intelligence wherein the machine interprets human language and produces intelligent output.
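A toy sketch of the idea in Python: text comes in, the program breaks it into tokens and maps them to a structured, machine-usable meaning. The keyword lists and intent names here are invented for illustration; real NLP systems rely on statistical or neural language models rather than hand-written keyword rules.

```python
# Invented keyword rules for this sketch only.
INTENT_KEYWORDS = {
    "greeting": {"hello", "hi", "hey"},
    "weather_query": {"weather", "rain", "sunny", "temperature"},
}

def interpret(sentence: str) -> str:
    """Tokenize a sentence and map it to a simple 'intent' label."""
    tokens = sentence.lower().replace("?", "").replace("!", "").split()
    for intent, keywords in INTENT_KEYWORDS.items():
        if keywords & set(tokens):          # any keyword present?
            return intent
    return "unknown"

print(interpret("Hi there!"))               # -> greeting
print(interpret("Will it rain today?"))     # -> weather_query
```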
Autodraw.com: Autodraw.com is an AI-enabled tool based on the domain of Computer Vision, in which the machine identifies the pattern of your drawing and maps it to the most similar image. The tool shows various options, trying to predict what the user is trying to draw. For example, if a user is trying to draw a tent and starts by drawing a basic triangle, the machine would compare the drawing and show possible outcomes for it. The user can then select whichever option is the most appropriate for him/her.
Neural Networks: Neural networks are loosely modelled on how neurons in the human brain work. The key advantage of neural networks is that they are able to extract data features automatically, without needing input from the programmer. A neural network is essentially a system of organizing machine learning algorithms to perform certain tasks. It is a fast and efficient way to solve problems for which the dataset is very large, such as images.
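The sketch below shows only the structure of a tiny network: each artificial neuron computes a weighted sum of its inputs and passes it through an activation function, with two hidden neurons feeding one output neuron. The weights and inputs are arbitrary values chosen for illustration; in a real neural network they would be learned from data during training.

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then activation."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

def tiny_network(inputs):
    """A minimal two-layer network with hand-picked (not learned) weights."""
    hidden = [
        neuron(inputs, weights=[0.5, -0.6], bias=0.1),
        neuron(inputs, weights=[-0.3, 0.8], bias=0.0),
    ]
    return neuron(hidden, weights=[1.2, -0.7], bias=0.05)

print(tiny_network([0.9, 0.2]))   # a value between 0 and 1
```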
Infinite Drum Machine: The Infinite Drum Machine is an AI experiment developed by Google to help people understand how unsupervised learning works. In this experiment, thousands of sounds found in our surroundings have been fed to the machine for it to make sense of. The sounds are not labelled in any way, nor does the machine have any other information about them; all it knows is the sound clip itself. Using an unsupervised learning algorithm, the machine analyses the data fed to it and tries to cluster similar sounds together. These clusters are then shown in colour on the user’s screen. Each dot appearing on the screen is a sound clip, and the dots have been clustered on the basis of sound properties such as amplitude, frequency and pitch, with the help of which the machine is able to understand the similarity amongst different clips.
Link to Infinite Drum Machine: https://experiments.withgoogle.com/ai/drum-machine/view/
AI Model Training: An algorithm is said to be artificially intelligent if it can be trained and can then make decisions/predictions on its own. The intelligence a machine gains comes from training it with an appropriate dataset. For example, suppose a machine needs to classify an image as either an apple or a banana. To achieve this, the machine is trained with hundreds of images each of apples and bananas. While training, the machine extracts features from the apple images which help it classify any image of an apple as an apple. The same is done for the banana dataset. Finally, after training, the machine is tested by providing an image of either an apple or a banana. If the machine classifies it correctly, its efficiency is said to be good; otherwise it is re-trained on a better dataset.
Training an AI model requires two datasets: Training Data and Testing Data. The machine is first fed the training data from which it makes its own rules which help it to predict the output. Then the testing data is used to check the efficiency of the model. Once training and testing is done, the model is deployed for use.
Classification: Machine Learning algorithms can be broadly classified into three families: Supervised learning, Unsupervised learning and Reinforcement learning. Classification is a part of the Supervised learning family. Classification models work on labelled datasets and are used to predict the label of the testing dataset. For example, suppose 100 images of apples and 100 images of bananas are taken as the training dataset for an AI model. These 200 images are labelled as apples or bananas respectively. The labelled data is fed to the machine, which trains itself by extracting common features from the dataset and understanding which features come under the apple label and which ones come under the banana label. At the time of testing, the machine takes an input image and extracts features from it, which are then compared with the features marked under both labels. On the basis of the degree of similarity, the machine labels the testing image as either apple or banana. This process is known as Classification.
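Here is a minimal sketch of the apple/banana classification in Python, using a simple nearest-centroid approach. The numeric features (say, weight in grams and a colour score) and the tiny dataset are assumptions for illustration; a real model would extract features from hundreds of labelled images.

```python
# Labelled training data: each label maps to a list of feature vectors.
training_data = {
    "apple":  [(150, 0.9), (160, 0.8), (145, 0.85)],
    "banana": [(120, 0.2), (118, 0.15), (125, 0.25)],
}

def centroid(points):
    """Average feature vector for one label."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(sample, centroids):
    """Assign the label whose centroid is closest to the sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

centroids = {label: centroid(pts) for label, pts in training_data.items()}
print(classify((152, 0.75), centroids))   # -> "apple"
print(classify((119, 0.30), centroids))   # -> "banana"
```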
Unsupervised Learning: While there are many machine learning models, they can be broadly classified into three families: supervised learning, unsupervised learning and reinforcement learning. Unsupervised learning focuses on finding patterns or trends in the data fed to the machine. Every machine learning algorithm requires training data as a base to work upon. In unsupervised learning, the training data fed to the machine is unlabelled, i.e. nothing is known in advance about the data given to the machine. It has not been supervised, and is instead given to the machine to be processed in such a way that some meaningful information can be extracted from it. For example, if in a locality there are 1000 stray dogs, all randomly bred, and the pictures of all these dogs are fed into an unsupervised learning algorithm, it would automatically cluster the images according to the features observed and give clusters of images as output. These clusters could be based on any trend or pattern observed in the data. This helps in understanding the dataset better.
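A bare-bones clustering sketch in Python, in the spirit of the examples above: unlabelled two-dimensional points are grouped purely by similarity using a simple k-means procedure. The points and the choice of two clusters are illustrative assumptions; real data such as dog photos or sound clips would first have to be converted into numeric feature vectors.

```python
import random

def kmeans(points, k, iterations=10):
    """Group unlabelled 2-D points into k clusters by similarity."""
    centroids = random.sample(points, k)                    # initial guesses
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:                                    # assignment step
            nearest = min(range(k),
                          key=lambda i: (p[0] - centroids[i][0]) ** 2
                                      + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        for i, cluster in enumerate(clusters):              # update step
            if cluster:
                centroids[i] = (sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster))
    return clusters

# Two loose groups of points; no labels are provided to the algorithm.
data = [(1, 2), (1.5, 1.8), (1, 0.6), (8, 8), (9, 11), (8.5, 9)]
for cluster in kmeans(data, k=2):
    print(cluster)
```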
Problem Scoping: Problem Scoping refers to understanding a problem and finding out various factors which affect the problem. Under problem scoping, we use the framework of 4Ws problem canvas where we look into the Who, What, Where and Why of a problem. After observing these factors, students get clarity towards the issue to be solved which leads them towards data acquisition.
AI Project Cycle: AI Project cycle is a framework which is used to design an AI project taking all the crucial factors into consideration. The project cycle consists of 5 steps namely: problem scoping, data acquisition, data exploration, modelling and evaluation. Each of the stages holds importance in the framework.
AI has been an academic area of study for many years, with many dips along the way in its progress; in recent times it is increasingly becoming an enabler of a variety of technologies and appliances that impact our daily lives. Also, with ever-increasing computing power, the lower cost of data storage and the immense data available, there is a boom of technological innovations, which should make us believe that the ‘AI Spring’ has arrived. So, AI is marching ahead to become mainstream within the disciplines of study that it connects.