Artificial Intelligence Isn’t So Artificial…

While the phrase ‘Artificial Intelligence’ has become a 21st-century technological buzz-term, it was first coined in the proposal for a 1956 workshop at Dartmouth College, which brought together luminaries in cybernetics, mathematics, formal reasoning and related fields of academia to explore problem solving.

Marais Neethling, Head of Artificial Intelligence at leading cloud computing, digital and regtech firm Synthesis, says that AI has become the all-encompassing term for the field of science that tries to create human-like agents behaving in ways we would describe as intelligent, typically focusing on narrow solutions. “For years, computers excelled at number crunching, but certain tasks such as voice recognition, recognising objects in photos and predicting the behaviour of agents in unconstrained environments remained a challenge for programmers to overcome. These days, computer algorithms can perform as well as or better than humans on a few narrow tasks that typically take a human between 0 and 3 seconds to perform,” he says. “These tasks are typically perception tasks that don’t require deep, abstract thought by the human brain. Of course, there are apparent contradictions to this rule of thumb: consider generative models that write semi-coherent paragraphs of English text, compose musical scores or even create art. However, these models arguably don’t apply ‘creative thinking’ in the way humans do – they are merely an extension of the algorithms used in perception-based problem solving.”

Marais Neethling (Pic: Supplied)

AI, AI, Everywhere

Gartner research suggests that AI and Machine Learning will eventually infiltrate just about every existing technology, and IDC analysts predict that global investment in new intelligence technologies will exceed $265 billion by 2023. IDC also predicts that AI will be a key component of 90% of business software applications, and that more than 50% of user interface interactions will incorporate some form of computer vision, speech recognition, natural language processing or augmented reality.

The element of AI which has driven the field into the public consciousness – whether people know the phrase or not – is ‘Deep Learning’. Deep Learning networks can isolate and analyse patterns in data that is unstructured or unlabelled – effectively ‘learning’ – but the proviso is that they typically first need to be ‘trained’ on massive amounts of labelled data, which gives them the mathematical basis for finding those patterns in subsequent huge data sets. The input can be anything digital – from an image to a credit card purchase. The output is a response to a query – asking the machine to recognise a face in the image, or to verify the credit card purchase. This is the sub-set of AI which is helping develop technology like self-driving cars, and helping big companies sift through mountains of data to gain insights about their customers or industries, to develop new products or deliver improved efficiencies.
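The ‘train on labelled data, then predict’ loop described above can be sketched in a few lines of Python. This is a deliberately toy illustration – a single artificial neuron learning the logical OR function, where real deep networks stack millions of such units – but the principle (adjust weights until predictions match labels) is the same:

```python
import math

# Toy sketch of the supervised training loop behind Deep Learning:
# a single sigmoid 'neuron' learns the OR function from labelled examples.

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # labelled data
w, b = [0.0, 0.0], 0.0   # weights and bias, initialised to zero
lr = 1.0                 # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))    # sigmoid activation: output in (0, 1)

for epoch in range(1000):            # 'training': nudge weights to fit labels
    for x, label in data:
        error = predict(x) - label   # gradient of the loss at this example
        w[0] -= lr * error * x[0]
        w[1] -= lr * error * x[1]
        b -= lr * error

print([round(predict(x)) for x, _ in data])  # → [0, 1, 1, 1]
```

After training, the neuron reproduces the labels it was shown – and, like the larger networks in production systems, it would also generalise the learned pattern to inputs between the training points.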

Deep Learning vs Reinforcement Learning

That said, Neethling says that Deep Learning is taking a back seat to Reinforcement Learning in the research space at the moment – though there is also a place for a combination of the two, where unsupervised machine learning can be achieved, but that is much less common for now. “Reinforcement Learning is a technique similar to the way humans and animals learn – by being rewarded for good behaviour in the problem domain and being punished for ‘bad’ behaviour or behaviour that doesn’t lead to a payoff,” he says. “Reinforcement Learning teaches the mechanism how to optimise its behaviour to get rewarded – and, in so doing, it learns how to complete the task correctly, itself.”

The challenge lies in the mechanism actually gaining an understanding – there’s no supervision to tell it what the outcome should be. It just needs to explore the problem domain, act inside it and make mistakes until it figures out what it’s doing wrong. “A simple example is a board game – there are rules, and based on conforming to those, there are winners and losers. If you make a move that leads to a win, the feedback is that you’ve done well,” he says.

This works in constrained environments – it wouldn’t be ideal for one with a physical embodiment, like a self-driving car. “You couldn’t just let a car drive and figure out what’s right and wrong – it’d make mistakes and crash while it was exploring,” he says. “It’s better for the mechanism to learn in a simulated environment that leaving the road or straying from a lane is wrong, first – and then shift it into the physical realm once it actually understands and can alter its behaviour accordingly.”
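The trial-and-error learning Neethling describes can be sketched with tabular Q-learning, a classic Reinforcement Learning algorithm. In this hypothetical example (not Synthesis’s code), an agent on a five-cell corridor ‘game’ receives a payoff only when it reaches the goal cell, and learns purely from that reward signal that moving right is the winning behaviour:

```python
import random

# Toy tabular Q-learning: an agent learns, by exploration and reward alone,
# to walk right along a 5-cell corridor to the goal in cell 4.
random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                          # move left, move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]: value estimates
alpha, gamma, eps = 0.5, 0.9, 0.2           # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise act greedily on current estimates
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0     # 'payoff' only at the goal
        # Q-learning update: move estimate toward reward + discounted future value
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy in every non-goal state is 'move right' (action 1)
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(GOAL)])
```

No one ever tells the agent which move is correct – it crashes around the corridor until the reward propagates back through its value estimates, which is exactly why, as Neethling notes, this style of learning belongs in simulation before it ever touches a physical system.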

What a Customer Wants…

Synthesis specialise in harnessing the power of AI – on the back of the Amazon Web Services (AWS) platform – to help brands gain an exponentially greater understanding of the needs, wants and behaviours of their customers, giving them unprecedented insight into just what makes the users of their products or services tick. In this application of AI, the aim is greater personalisation, which boosts the customer’s experience and, ideally, delivers improved conversion rates. Think Amazon.com, where each site visit, product view, purchase, rating, review, shortlisting and referral delivers insights into customer behaviour. When AI is applied to the analysis of this data, the result is an accurate set of suggestions for future purchases.
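The simplest version of that ‘suggestions from purchase history’ idea can be shown with plain item co-occurrence counting – customers who bought X also bought Y. The data and function names below are hypothetical, and production recommenders use far richer models, but the underlying signal is the same:

```python
from collections import Counter

# Hypothetical sketch of 'customers who bought X also bought Y':
# count which items appear in the same baskets as the target item.
histories = [                       # each list is one customer's purchases
    ["laptop", "mouse", "keyboard"],
    ["laptop", "mouse", "monitor"],
    ["laptop", "keyboard"],
    ["phone", "charger"],
]

def recommend(item, k=2):
    co = Counter()
    for basket in histories:
        if item in basket:
            co.update(i for i in basket if i != item)
    return [i for i, _ in co.most_common(k)]  # top-k co-purchased items

print(recommend("laptop"))  # → ['mouse', 'keyboard']
```

Every extra behavioural signal – views, ratings, referrals – becomes another input to this kind of counting (or, at scale, to a learned model), which is what sharpens the suggestions over time.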

Platforms like AWS deliver ‘shrinkwrapped’ AI services for defined problem domains, which allow customers to plug the service into their existing offering, with the help of a developer for the sake of integration. “As a developer, I don’t have to understand Machine Learning to be able to consume them – it’s easier to integrate them into existing software,” says Neethling. “As we move to more bleeding-edge research and development with things like Natural Language Processing (NLP), things get a bit more difficult”.

Google’s BERT (Bidirectional Encoder Representations from Transformers) search algorithm is an example of NLP. “If you want to understand a piece of text, or gauge its intent, you could use BERT for sentiment classification. You could look at social media posts around your brand and have the AI try to determine sentiment, engagement, where people are having problems and where they’re paying you compliments – without the interpretation of the messaging being influenced by human bias,” says Neethling.
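The contract of a sentiment classifier – text in, sentiment label out – can be illustrated with a deliberately crude stand-in. The word lists below are invented for illustration; a real pipeline would feed each post through a fine-tuned BERT-style model rather than match keywords, but the input and output look the same:

```python
# Toy stand-in for a sentiment classifier (hand-built word lists, not BERT).
POSITIVE = {"great", "love", "excellent", "helpful"}
NEGATIVE = {"broken", "slow", "terrible", "problem"}

def sentiment(post):
    words = {w.strip(".,!?").lower() for w in post.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Love the new app, great support!"))  # → positive
print(sentiment("Checkout is broken and slow."))      # → negative
```

The gap between this sketch and BERT is precisely what Neethling means by “things get a bit more difficult”: word lists miss context, negation and sarcasm, which is why transformer models that read the whole sentence in both directions dominate this task.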

AI vs Your Brain

Hearteningly for humans, Gartner’s research indicates that AI and machine learning will not completely replace people because, in their words, “AI-driven autonomous capabilities cannot match the human brain’s breadth of intelligence and dynamic general-purpose learning. Instead, they focus on well-scoped purposes, particularly for automating routine human activities”. Similarly, Forrester analysts predict the future of work relies on “a symbiotic relationship between man and machine. This is not a man-led, machine-do structure; instead it will match leadership, decisioning, and executive tasks across robots and machines that best deliver the desired outcome”.

*A version of this article appeared in the April/May 2020 issue of Destiny Man.
