Why You Need to Appreciate the Human Brain

An Introduction To Artificial Intelligence and Machine Learning — Part I

Rania Hashim
12 min read · Dec 20, 2022

--

What did you do this morning?

You might say you decided what to eat for breakfast, brushed your teeth, texted your friend and so on.

What I want you to do is think over all of these activities and consider how dauntingly complex they would be had we not been gifted with our cognitive abilities.

Take deciding breakfast for example. In doing so, you were first able to consider the many options you had. For instance, you may have considered pancakes, eggs and bacon, french toast or more. Next, you looked into the pros and cons of each option. Then, perhaps, you prioritised what you wanted in a breakfast and arrived at a conclusion.

Of course, we don’t really think about our decision-making frameworks on a day-to-day basis, and so the complexity of what we are able to do so effortlessly never really hits us.

While we’re still marvelling at how effortlessly our brain processes complexity, take a look at this image:

from the MNIST dataset

You probably read that as a 5, 7 and 8.

To a computer, however, the above image is basically just an arrangement of black and white pixels. It has no way to figure out that it is a handwritten form of the numbers 5, 7 and 8.
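To make this concrete, here is a minimal sketch of what the computer actually “sees”. The tiny 5×5 grid below is hypothetical (a real MNIST image is 28×28), but the idea is the same: just numbers, with no built-in meaning.

```python
# To a computer, a grayscale image is just a grid of intensity values
# (0 = black, 255 = white). This 5x5 "image" is made up for illustration;
# nothing in the numbers says "this is the digit 1" -- that has to be learned.
image = [
    [  0,   0, 255,   0,   0],
    [  0, 255, 255,   0,   0],
    [  0,   0, 255,   0,   0],
    [  0,   0, 255,   0,   0],
    [  0, 255, 255, 255,   0],
]

# Render the raw numbers so we can see the shape a human would recognise.
for row in image:
    print("".join("#" if px > 0 else "." for px in row))
```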

At the same time, there are certain things computers do better than us. An example of this is their ability to analyse great amounts of data. We could feed them data that would take us days to analyse, and they could speedrun through it pretty easily.

This is where we join hands.

Yup, that’s right. Say hello to artificial intelligence.

Artificial Intelligence (or AI) is usually used in conjunction with other technical mumbo-jumbo like ‘deep learning’, ‘algorithms’ and, of course, machine learning.

Because of all the confusion that surrounds it, it is more important than ever to properly understand what AI is and what it’s not.

AI is one hell of a topic, covering so much ground. It also comes up in many different fields like marketing, healthcare and more (as you will see below), and is already making great strides in various industries today.

We’re generating more data than ever and we need to keep up. We need an efficient method of analysing all this data to better understand the world around us. AI can develop methods and algorithms to do so.

And this is just the beginning…

note — i got this idea from another writer whose article i had previously read → if anyone knows who it is, i’d greatly appreciate it!

Human civilization is at the edge of change. With exponential growth in technologies like AI, we are going to be witnessing a revolution. A colorful future we have ahead of us, indeed.

In a nutshell, AI is a collection of concepts, problems, methods and algorithms for solving them. Essentially, AI is a method of getting systems to work and behave like a human would.

To simplify things, we can say that a computer is powered by AI when it completes tasks that require human intelligence. This could include image recognition, speech recognition, decision making and many other tasks.

Thanks to sci-fi movies, when we hear the term AI, most of us envision unfeeling humanoids and massive, metallic robots. However, robots and humanoids are NOT AI; robots and humanoids are just a container for AI. The robot is to the AI what the body is to the brain.

Now, oftentimes, when we talk about AI, we look for two things: Autonomy and Adaptivity.

Autonomy is the ability of a computer to perform tasks in complex environments without the need for constant guidance by a user. This is an especially appealing aspect of AI. Think about it — autonomy could help in the development of self-driving cars, etc. It can also eliminate the need for humans in tedious, repetitive tasks.

Adaptivity is best understood with an example.

Think about the first time you tried to learn how to ride a cycle.

After being instructed on pushing the right pedals and keeping an eye out for bumps, you would have started off with mistakes. With greater practice, you would have made fewer mistakes. Eventually, you would be comfortable enough to confidently say that you know how to ride a cycle.

Over here, you are going from falling at every turn to confidently racing through the streets of your town. This improvement occurred as you gained more experience riding the cycle.

Similarly, adaptivity is the ability of a computer to improve its performance by learning from experience, just as you did above.

Autonomy and adaptivity are characteristic features of AI that enable it to potentially change the world.

The types of AI

In terms of its development and calibre, we can classify AI into three stages:

  1. Artificial Narrow Intelligence (ANI): Commonly referred to as weak AI, ANI is AI that specializes in one area. The learning algorithm is designed specifically to do one task; the knowledge gained from doing so cannot be applied in anything else. You can’t expect a chess champion AI to analyse your listening activity and give you music recommendations because that’s not what it has been trained to do.
  2. Artificial General Intelligence (AGI): Popularly known as Strong AI, AGI has broader applications and has the cognitive computing capabilities of a human. Take the example of a machine asked to process a million documents. With its high processing power, it could probably do this in a matter of seconds to minutes. However, if we were to ask the same machine to go to the kitchen and bring us food, it would probably stare at us blankly. AI today is weak — Strong AI is capable of executing human-level tasks and abilities, which is much harder to achieve. Right now, this doesn’t really exist outside of sci-fi, and it is pretty much the goal.
  3. Artificial Superintelligence (ASI): This stage of AI is what frightens people the most. All those posts you see rooted in the “AI Takeover” and “AI Apocalypse” describe this stage of AI. This is when AI achieves superintelligence — intelligence that greatly exceeds the cognitive performance of humans in virtually all domains of interest (Nick Bostrom, Superintelligence). In this stage, machines become self-aware and surpass human capacities. Dystopian science fiction sure does love this idea.

After reading about the different stages of AI, you might be tempted to learn more about current applications.

As aforementioned, we haven’t really gotten past weak AI due to the complexity of the idea of a machine emulating human behaviour. However, we are already doing so much with ANI — just take a look at a few of the examples below:

Example #1: Recommendation Systems

Everyday we consume content on the internet, be it on social media like Instagram and Facebook, music apps like Spotify and streaming services like Netflix and HBO.

You may have noticed that the content presented to you in these systems seems to be personalised for you. For instance, you may notice that you are getting more hip-hop recommendations as you listen to more such music.

This is based on the information collected from your activity in the application.

The front pages of the online versions of the New York Times and China Daily are different for each user, based on their reading activity. This is the magic of AI-based algorithms that determine what content makes it to your front page.

This also includes Google’s Predictive Search Engine.

Example #2: Image and Video Processing

Whenever I open my Photos app, I observe that there are people profiles and autogenerated videos made for specific people. While this is an incredibly wholesome way of reminding me of my loved ones, it is quite interesting how it organises my photos according to people and automatically tags them.

This is yet another application of AI — image recognition and video processing.

Face recognition is already being used in many customer, business and government applications.

Another interesting application of AI is AI-based art and style transfer (in which you can adapt your personal photos to a specific style). AI can be used to generate and alter visual content, and this is especially rising in popularity as people explore AI-generated visual art.

Experience it yourself: this website generates an art piece based on an art style and prompt of your choice!

guess the prompt 👀

Example #3: Personal Voice Assistants

Alexa, Siri, Cortana, Google Assistant… The list just goes on, doesn’t it?

We have virtual assistants literally at our fingertips. They can help us with so many things from knowing the weather to locating the nearest arcade, all without having to type a single word.

They do this by using voice recognition and natural language processing. This enables them to understand “natural language” as used by humans.

Example #4: Self-Driving Cars

Machine Learning algorithms are used in self-driving cars. These allow the car to detect objects and empower it to make decisions based on the environment it is in.

A common buzzword that the media loves to throw in is Machine Learning. It has been observed that ML is often interchangeably used with AI.

Well, spoiler alert: ML != AI

Machine Learning is a subset of AI → it focuses on feeding the machine as much data as possible in order to make it learn. While all ML comes under AI, the same cannot be said for the opposite. Artificial intelligence encompasses many other fields, like deep learning, NLP, etc…

As the amount of data generated increases, the need to develop better methods to extract meaning from it grows. Such insights could help identify trends and uncover patterns to solve complex problems.

ML provides machines the ability to learn automatically and improve from experience without being explicitly programmed to do so.

So, how do we make a machine learn?

It’s one thing to store a lot of data on a computer and another to be able to extract useful insights from it.

There are three ways to do so:

Supervised Learning

In supervised learning, we feed the machine LABELLED data.

Let’s say we want to train a model to correctly identify and differentiate between pandas and koalas. Just like a toddler learns the names of objects via a picture book, we feed the machine a lot of images of pandas and koalas. Keep in mind, the machine hasn’t been trained and so, it would spit out a random guess.

The machine’s guess is verified or falsified by looking at the labels. The main objective of the machine is obviously to guess correctly, and so it tweaks its decision-making model ever so slightly. As the machine goes through thousands of images, its decision-making model is refined based on the patterns it sees between koalas and pandas.

After the machine has been ‘trained’, it is ready to take on the challenges of the real world; feed it pics of the pandas you saw on your last Chinese expedition and watch with satisfaction as the machine guesses right.
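The guess-check-tweak loop above can be sketched in a few lines. This is a deliberately toy version: we invent a single numeric feature per animal (say, “fur darkness” on a 0–1 scale, labels 1 = panda, 0 = koala), and the “model” is just a threshold that gets nudged whenever a labelled example is guessed wrong. The feature and all the numbers are made up.

```python
# Toy supervised learning: labelled examples as (feature, label) pairs.
# Feature = invented "fur darkness" score; label: 1 = panda, 0 = koala.
data = [(0.9, 1), (0.8, 1), (0.85, 1), (0.3, 0), (0.2, 0), (0.35, 0)]

threshold = 0.0  # untrained model: effectively guesses "panda" for everything

# Training loop: compare each guess against the label and nudge the
# threshold slightly whenever the guess is wrong.
for _ in range(100):                      # several passes over the data
    for x, label in data:
        guess = 1 if x > threshold else 0
        threshold += 0.01 * (guess - label)   # moves only on mistakes

# The threshold settles between the two groups, so a new example
# (a very dark-furred animal) is classified as a panda.
print(1 if 0.95 > threshold else 0)
```

Real systems use far richer models than a single threshold, but the rhythm is the same: guess, check against the label, adjust, repeat.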

Except, a machine sorting out images of koalas and pandas has very little real-world relevance. I mean, sure, it is fun to watch a machine go through them, but what kind of problems can supervised learning really solve?

Supervised learning solves regression and classification problems.

In regression, the main aim is to forecast or predict. By establishing a relationship between variables, it fits a model to the data and tries to predict the outcome for new inputs.

The output here is usually a continuous quantity, i.e. can take on any value. It isn’t restricted to specific categories.

This is used in predicting the price of a house given its features, predicting gold rates, forecasting weather etc…

On the other hand, classification is more centred around computing the category of the data. The output here is a categorical quantity — just as was seen in the example above.

For instance, let’s say you want to see your chances of getting accepted to your dream college based on SAT scores and your class rank. For this, you might choose to draw up a graph that relates SAT scores and class ranks to acceptance.

You might notice a boundary between acceptance and rejection based on these two variables.

Based on this, you may classify yourself based on the class rank and SAT score you secured as accepted or rejected.

However, if the task were to fill in gaps in the data or to predict the impact of SAT scores in overall acceptance rates, the problem would be a regression problem.
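Here is a small sketch contrasting the two problem types. The admissions numbers are entirely invented, and the classification “boundary” is picked by eye rather than learned, purely to show the difference in output: regression returns a number, classification returns a category.

```python
# Regression: predict a continuous quantity (acceptance rate) from SAT
# score by fitting a least-squares line y = a*x + b. Data is made up.
sat = [1100, 1200, 1300, 1400, 1500]
accept_rate = [0.10, 0.20, 0.35, 0.55, 0.70]

n = len(sat)
mean_x = sum(sat) / n
mean_y = sum(accept_rate) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(sat, accept_rate)) \
    / sum((x - mean_x) ** 2 for x in sat)
b = mean_y - a * mean_x
print(a * 1350 + b)        # predicted (continuous) rate for an SAT of 1350

# Classification: predict a category. Here the decision boundary at
# SAT 1320 is chosen by hand, standing in for a learned one.
def admit(score):
    return "accepted" if score > 1320 else "rejected"

print(admit(1450))         # -> accepted
```

Same dataset, two different questions: “what rate?” is regression; “which side of the boundary?” is classification.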

Algorithms are series of steps or a set of rules used to identify patterns from data. It essentially defines the logic of an ML model, mapping out the decisions the model is supposed to take.

There are many types of algorithms, and they are used in various contexts depending on the need.

Some common algorithms used in supervised learning include linear regression, random forest, decision tree etc…

Unsupervised Learning

Unsupervised learning is what happens when there’s data — but no labels. The machine finds underlying patterns and structure in the data.

In this case, the machine doesn’t exactly understand the data it is looking at; all it does is find patterns and relations, and it categorises the data based on these. Kind of like how humans figured out constellations before we understood what stars were.

A common example of unsupervised learning is news aggregation. Google News groups articles on the same story from various outlets. The articles aren’t labelled by topic, but the algorithm finds patterns and clusters them together.

Unsupervised learning solves clustering problems. Clustering is essentially classification but with unlabelled data.

As you might guess, in clustering, the main objective is to find patterns to form “clusters” or groups of similar items. An example is where your device might sort images of people despite you not naming each one of them.

Some algorithms used in unsupervised learning include K-means clustering, principal and independent component analysis etc…
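Of those, K-means is simple enough to sketch in full. The 2D points below are made up to form two obvious blobs; the algorithm recovers the two groups without ever being told what they are, by alternating two steps: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points.

```python
# Toy K-means (k = 2) on unlabelled, made-up 2D points.
points = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),      # blob A
          (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]      # blob B

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

# Seed the two centroids from the data, then alternate assign/update.
centroids = [points[0], points[3]]
for _ in range(10):
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda k: dist2(p, centroids[k]))
        clusters[nearest].append(p)
    centroids = [
        (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
        for c in clusters
    ]

# The centroids land at the centres of the two blobs.
print(centroids)
```

No labels anywhere: the groups emerge purely from the geometry of the data, which is exactly what “finding structure” means here.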

Reinforcement Learning

Reinforcement learning switches things up — the agent learns not from pre-defined data as was the case in the above two examples, but by the experience it gains from interacting with the environment.

This is literally how humans learn. Take a toddler fascinated by fire for example. Its crackling noise, bright embers and the way it dances and sways along the wind lures them in to its warm embrace. As they extend their inexperienced hand to touch it… Ouch! That hurt! Touching the fire is not as good as they had anticipated; better avoid it next time, they decide.

In the same way, the agent gains both positive and negative feedback from its interactions. The main objective of the agent is to maximize rewards and minimize punishment. It does this by trial and error.

Decisions in reinforcement learning are made sequentially. This means that the output depends on the state of the current input. If the agent takes a desirable action, it gets positive feedback and learns to associate that action with reward; if the action is undesirable, it is punished. This, in turn, decides its next move.

A prominent example of reinforcement learning is in automated vehicles. Furthermore, reinforcement learning has also been used to achieve superhuman performance in many games like Go and chess.

In fact, AlphaGo, developed by Google DeepMind, was able to beat world Go champion Lee Sedol in the game.

Some popular algorithms used in reinforcement learning include Q-Learning, SARSA etc…

An exciting subfield of machine learning is deep learning. Deep Learning makes use of ‘neural networks’. These essentially model the brain and use its structure as inspiration.

I’ve always been fascinated by the working of neural networks, and I started off my AI journey by building a very simple neural network that classified handwritten digits into the numbers they represent.

We’ll dive further into neural networks (+how you can build your own) in the next 2 articles. Until then, here is some food for thought:

“Anything that could give rise to smarter-than-human intelligence — in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement — wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league.” — Eliezer Yudkowsky

Hey 👋, I’m Rania, a 16 y/o activator at the Knowledge Society. I’m a future of food researcher who focuses on acellular agriculture. Currently, I’m nerding out on using artificial intelligence for education. I’m always ready to learn, grow and inspire. I’d love to connect; reach out to me on any of my social media and let’s be friends!
