How to talk AI like an expert

By Marais Neethling, Synthesis AI Evangelist

Getting lost in the latest tech talk? Don’t know your AI from your ML? Then it’s time to learn how to talk AI like an expert.

The term AI/ML has become exceptionally popular, but what exactly do the experts mean when they refer to AI/ML? “AI/ML” is a field of computer science that tries to use computers to solve problems that were previously only “solvable” by humans. For years, computers excelled at number crunching, but certain tasks, such as voice recognition (hearing), object recognition in photos (vision) and predicting the behaviour of agents in unconstrained or uncertain environments, remained a challenge for programmers to overcome.

Times have changed dramatically. These days, computer algorithms can perform as well as, or better than, humans on a few narrow tasks that typically take a human between 0 and 3 seconds to perform (a useful rule of thumb). These tasks are usually perception tasks that don’t require deep, abstract thought. Of course, there are exceptions to this rule of thumb. Consider generative models that write semi-coherent paragraphs of English text, compose musical scores or even invent art. However, these models arguably don’t apply “creative thinking” in the human-like way we are used to; they are merely an extension of the algorithms used in perception-based problem solving.

So, what do the terms “Artificial Intelligence”, “Machine Learning”, “Deep Learning” and “Data Science” mean and what are their differences? To find the answers, we’ll delve into the history of Artificial Intelligence.

The term “Artificial Intelligence” was coined by scientists in 1956, when they arranged a workshop to bring together luminaries in cybernetics, mathematics, formal (mechanical) reasoning and related fields of academia. “Artificial Intelligence” thus described this nascent field of research into board game-playing algorithms that could learn strategies, logic theorem provers, logical and inductive reasoning algorithms and early chatbots. Today, Artificial Intelligence is the all-encompassing term for the scientific field that tries to create agents behaving in ways we would describe as intelligent, typically focusing on narrow solutions.

“Artificial General Intelligence”, or AGI, is a derived term referring to an intelligence that is on par with, or superior to, human intelligence and has broad applicability.

Naturally, a whole host of approaches and techniques has been developed over the centuries to try to mimic human thought processes, starting with ancient mathematicians trying to formally deduce and codify the human reasoning process in symbolic form (sometimes described as Good Old-Fashioned Artificial Intelligence) and extending to recent computational techniques such as Deep Learning. Between these extremes lie expert systems (or knowledge-based systems), cybernetics (brain simulation) and statistical and sub-symbolic techniques.

Machine Learning generally refers to sub-symbolic (or specialised statistical) approaches in which an agent iteratively updates its parameters (a process called training) so that it gradually produces the desired output from the input it receives. It essentially learns to map an input, or perception, to a desired output without the rules of the mapping being directly codified. Research into Machine Learning picked up in the early 2000s at the expense of symbolic approaches to AI.
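The idea of training as iterative parameter updates can be sketched in a few lines of plain Python. This is an illustrative toy, not any particular library’s API: a model with two parameters learns the mapping y = 2x + 1 from examples alone, never seeing the rule itself. The learning rate and epoch count are arbitrary assumptions.

```python
# Toy sketch of Machine Learning as iterative parameter updates.
# The rule y = 2x + 1 is never given to the model; it only sees
# (input, desired output) pairs and adjusts its parameters.
data = [(x, 2 * x + 1) for x in range(10)]  # input -> desired output

w, b = 0.0, 0.0          # parameters, initially arbitrary
learning_rate = 0.01     # how big each update step is (assumed value)

for epoch in range(2000):            # "training": repeat many times
    for x, target in data:
        prediction = w * x + b       # map input to output
        error = prediction - target
        # nudge each parameter in the direction that reduces the error
        w -= learning_rate * error * x
        b -= learning_rate * error

print(round(w, 2), round(b, 2))      # parameters drift towards 2 and 1
```

After training, the parameters have converged to the underlying rule without it ever being codified, which is exactly the mapping-without-rules idea described above.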

Deep Learning is a sub-field of Machine Learning that has risen to prominence since 2012, enabled by recent advances in parallel computing power.

Graphics processing units (GPUs), tensor processing units (TPUs) and elastically scalable cloud computing allow for the training of models, typically Artificial Neural Networks, with billions of parameters, something that used to be intractable on older computing hardware. Deep Learning models have proven very good at some perception tasks, such as seeing and hearing, often exceeding human performance. The success of Deep Learning has been the main reason for the resurgence of research into AI and for the practical application of AI in everyday life, on mobile devices and in corporate business processes alike. Applications of Deep Learning range from computer vision to natural language understanding.
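To make “a model with parameters” concrete, here is a toy forward pass through a two-layer artificial neural network in plain Python. The layer sizes, weight values and ReLU activation are illustrative assumptions; real Deep Learning models stack many such layers and hold millions to billions of these weight and bias parameters, which is what GPUs and TPUs make trainable.

```python
# Toy two-layer neural network: each layer is just weighted sums
# plus biases, followed by a nonlinearity. The weights and biases
# ARE the model's parameters.
def relu(v):
    # nonlinearity: negative values become zero
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    # each output = weighted sum of all inputs, plus a bias
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# a 3-input -> 4-hidden -> 2-output network (values are arbitrary)
W1 = [[0.2, -0.5, 0.1], [0.7, 0.3, -0.2], [-0.4, 0.9, 0.5], [0.1, 0.1, 0.1]]
b1 = [0.0, 0.1, -0.1, 0.2]
W2 = [[0.6, -0.1, 0.3, 0.2], [-0.3, 0.8, 0.1, -0.5]]
b2 = [0.05, -0.05]

x = [1.0, 2.0, 3.0]
hidden = relu(layer(x, W1, b1))
output = layer(hidden, W2, b2)

# count every weight and bias: 12 + 4 + 8 + 2 = 26 parameters
n_params = (sum(len(r) for r in W1) + len(b1)
            + sum(len(r) for r in W2) + len(b2))
print(n_params)
```

This tiny network has 26 parameters; scale the same structure up to billions and training it is exactly the workload that modern parallel hardware makes feasible.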

However, as with most research fields in the artificial intelligence space, the cycle of rising and falling research interest has caught up with Deep Learning as well. The latest darling of the research community appears to be Reinforcement Learning, the type of Machine Learning used by the AlphaGo and AlphaZero algorithms to achieve super-human playing ability in board games such as chess and Go. Indeed, the achievements of AlphaZero have seemingly prompted the research community to pivot in the direction of Reinforcement Learning. The power of Reinforcement Learning lies in the fact that an agent can learn the rules and constraints of a given operating environment and figure out how to achieve success without any potentially sub-optimal human knowledge or expertise baked in.
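The trial-and-error flavour of Reinforcement Learning can be sketched with tabular Q-learning, one of its simplest forms (far simpler than the methods behind AlphaGo and AlphaZero). In this made-up five-cell corridor the agent is never told to move right towards the goal; it discovers that policy purely from exploration and reward. The environment and all hyperparameters are illustrative assumptions.

```python
# Tabular Q-learning on a tiny corridor: states 0..4, reward only
# on reaching state 4. No human strategy is provided; the agent
# learns the value of each (state, action) pair by trial and error.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # move left, move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # learned value table
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # assumed hyperparameters

random.seed(0)
for episode in range(500):
    state = 0
    while state != GOAL:
        # explore sometimes, otherwise exploit current knowledge
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-learning update: blend in reward plus discounted future value
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# greedy action per non-goal cell; the agent has learned to go right
policy = [0 if q[0] >= q[1] else 1 for q in Q[:GOAL]]
print(policy)
```

The learned policy chooses action 1 (“right”) in every non-goal cell, even though nothing in the code states that rule: it emerges from interaction with the environment, which is the property described above.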

It is possible that by the time this piece reaches your social media feed, other terms will have popped into our lexicon (Transformer architecture or attention, anyone?). As the world of technology and its possibilities continues to be invented at pace, new terms are needed for discourse. If we are to keep up, we will need to stay constantly aware of the changes and of the language required to harness this massive potential.