A Layman’s Guide to Artificial Intelligence (AI)

“The very concept of intelligence is like a stage magician’s trick. Like the concept of ‘the unexplored regions of Africa’. It disappears as soon as we discover it.”

 — Marvin Minsky (1927–2016), mathematician and AI pioneer.

What is Artificial Intelligence? 

According to the father of Artificial Intelligence, John McCarthy, it is “The science and engineering of making intelligent machines, especially intelligent computer programs”.

Basically, artificial intelligence (AI) is the ability of a machine or a computer program to think and learn. The concept of AI is based on the idea of building machines capable of thinking, acting, and learning like humans. 

AI is accomplished by studying how the human brain thinks, and how humans learn, decide, and work while solving problems, and then using the outcomes of this study as a basis for developing intelligent software and systems.

Goals of AI 

  • To Create Expert Systems − Systems that exhibit intelligent behavior, learn, demonstrate, explain, and advise their users.
  • To Implement Human Intelligence in Machines − Creating systems that understand, think, learn, and behave like humans.

Applications of AI

AI has become prominent in various fields, such as:

  • Gaming − AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc., where a machine can think through a large number of possible positions based on heuristic knowledge.
  • Natural Language Processing − It is possible to interact with a computer that understands natural language spoken by humans.
  • Expert Systems − Some applications integrate machine, software, and specialized information to impart reasoning and advice. They provide explanations and advice to their users.
  • Vision Systems − These systems understand, interpret, and comprehend visual input on the computer. For example,
    • A spy plane takes photographs, which are used to figure out spatial information or maps of the areas.
    • Doctors use a clinical expert system to diagnose patients.
    • Police use computer software that can match the face of a criminal with a stored portrait made by a forensic artist.
  • Speech Recognition − Some intelligent systems are capable of hearing and comprehending language in terms of sentences and their meanings while a human talks to them. They can handle different accents, slang words, background noise, changes in a human’s voice due to a cold, etc.
  • Handwriting Recognition − Handwriting recognition software reads text written on paper with a pen or on a screen with a stylus. It can recognize the shapes of the letters and convert them into editable text.
  • Intelligent Robots − Robots are able to perform tasks given by a human. They have sensors to detect physical data from the real world such as light, heat, temperature, movement, sound, bumps, and pressure. They have efficient processors, multiple sensors, and large memories to exhibit intelligence. In addition, they are capable of learning from their mistakes and can adapt to new environments.
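The game-playing idea above, a machine thinking through possible positions, can be sketched with a toy example: a minimax search that plays perfect tic-tac-toe by trying every move and assuming the opponent replies with their own best move. This is an illustrative sketch in plain Python, not the method used by any particular game engine, and the board encoding is invented for the example.

```python
# Toy game-playing AI: minimax search over tic-tac-toe positions.
# The board is a flat list of 9 cells: 'X', 'O', or ' ' (empty).

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w == player:
        return 1, None
    if w is not None:
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = -2, None
    opponent = 'O' if player == 'X' else 'X'
    for m in moves:
        board[m] = player               # try the move
        score, _ = minimax(board, opponent)
        board[m] = ' '                  # undo it
        if -score > best_score:         # the opponent's loss is our gain
            best_score, best_move = -score, m
    return best_score, best_move

# Example: X already has two in the top row, so minimax finds the win.
board = list('XX OO    ')
score, move = minimax(board, 'X')       # move == 2 completes the top row
```

Real chess and poker programs search far too many positions to enumerate exhaustively like this, which is why they prune the search tree and fall back on heuristic evaluations of positions they cannot search to the end.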

The difference between AI and “machine learning”

Chances are, if you’ve heard the term AI ballooning over the last few years, you’ve also heard “machine learning” as a buzzword.

Many have questions like “Is AI the same as machine learning?”

Not really. Although the two terms are often used interchangeably, they are not the same.

Artificial intelligence is a broader concept, while machine learning is the most common application of AI.

Here’s what that means: machine learning systems use large data sets to “learn” patterns, then apply what they have learned to recognize new, unseen examples.

AI and machine learning have a similar relationship to rectangles and squares. Just as all squares are rectangles, but not all rectangles are squares; machine learning is one application of AI, but AI is a broader concept that has other uses, too.
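The “learn patterns from data, then recognize the unknown” idea can be sketched with one of the simplest machine learning algorithms, a nearest-neighbor classifier: label a new example the same way as the most similar example seen during training. The fruit measurements and labels below are invented for illustration; a real system would learn from far more data and features.

```python
# Toy machine learning: a 1-nearest-neighbor classifier in plain Python.
# It "learns" by memorizing labeled examples, then labels a new example
# the same way as the closest one it has seen.

def nearest_neighbor(examples, query):
    """Return the label of the training example closest to `query`."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(examples, key=lambda ex: distance(ex[0], query))
    return best[1]

# Invented training data: (weight in grams, diameter in cm) -> fruit label.
training = [
    ((150, 7.0), 'apple'),
    ((170, 7.5), 'apple'),
    ((120, 6.0), 'orange'),
    ((130, 6.2), 'orange'),
]

label = nearest_neighbor(training, (160, 7.2))  # closest to the apples
```

The pattern here is the whole of machine learning in miniature: no one programs a rule like “apples weigh more than oranges”; the rule emerges from the data itself.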

What’s AI Like Today?

Computers haven’t taken over the world, but artificial intelligence is already part of our everyday lives.

Although most of us haven’t taken a ride in a self-driving car, we benefit from AI through apps like Uber and Lyft that use algorithms to connect drivers to passengers.

We don’t have robotic assistants yet, but we use AI-assisted software like Siri and Google Now.

AI is also used in e-commerce, customer service, and financial services.

IBM’s cognitive computing system, Watson, is best known as a Jeopardy! winner, but Watson is also used for day-to-day data analytics in marketing and for research and diagnostic assistance for physicians at hospitals.

Google’s AI made news by beating the world Go champion, but its computing is also being used to answer email in Inbox, identify photos in Google Photos, and schedule appointments in G Suite, formerly Google Apps for Work.



How AI Will Affect Humans

Experts predict that within the next decade AI will outperform humans at relatively simple tasks such as translating languages, writing school essays, and driving trucks. More complicated tasks, like writing a bestselling book or working as a surgeon, will take machines much longer to learn. AI is expected to master these two skills around 2049 and 2053, respectively.

It is obviously too soon to talk about AI-powered creatures like those from Westworld or Ex Machina stealing our jobs or, worse yet, rising against humanity, but we are certainly moving in that direction. Meanwhile, top tech professionals and scientists are getting increasingly concerned about our future and encourage further research on the potential impact of AI.

Potential for bias 

AI has an intrinsic potential for bias, stemming from the data used to train each algorithm to do what it’s supposed to. For example, Google Photos came under fire in 2015 for tagging African American users as gorillas, and in 2017 the developers of FaceApp “beautified” faces by lightening skin tones. That’s why it’s vital for AI companies to examine the data they’re using and make sure it’s engineered to reduce bias.

What’s next in Artificial Intelligence 

AI is on the rise in industries across the board. In fact, 30 percent of businesses are predicted to incorporate it by 2019, up from just 13 percent the year before, according to Spiceworks, an information technology company. Google, IBM, Amazon, Microsoft, Apple, and many more companies are making AI a priority.

Closing Thoughts 

Given the innate advantages AI machines have over us humans (accuracy, speed, etc.), an AI rebellion scenario is not something we should completely dismiss. Time will show whether AI is our greatest existential threat or a technological blessing that improves our quality of life in many different ways.

So far, one thing remains perfectly clear: creating AI is one of the most remarkable achievements of humankind. After all, AI is considered a major component of the Fourth Industrial Revolution, and its potential socioeconomic impact is believed to be as great as that of the invention of electricity.

In light of this, the smartest approach is to keep an eye on how the technology evolves, take advantage of the improvements it brings to our lives, and not get too nervous at the thought of a machine takeover.
