Homo Sapiens 2.0

When IBM’s Deep Blue won its match against chess world champion Garry Kasparov in 1997, it became the first computer system to defeat a reigning world champion in a match under standard chess tournament time controls. Kasparov accused IBM of cheating (he’d won the original match, a year earlier, 4-2) and demanded a rematch, but IBM declined and retired Deep Blue. Publicity stunt aside, the writing was on the wall – computers had moved out of the realm of being number crunchers and started to develop the ability to strategise and react, adapting to outside stimuli to solve problems. Then, in 2011, IBM’s Watson defeated Jeopardy! champions Ken Jennings and Brad Rutter on US national television to further advance the machine cause.

Garry Kasparov playing against Deep Blue, the chess-playing computer built by IBM (Pic: Adam Nadel/AP Images)

Fast forward to 2020 and machines are doing more than out-thinking humans at games – they’re primed to replace many of us. The World Economic Forum (WEF) predicts that 41% of all work activities in South Africa are ‘vulnerable’ to automation – but then, many human labourers faced the same reality when machine tools arrived on the scene in 1760. The Industrial Revolution demonstrated that progress is inevitable, and that the key to survival in the workplace – at least for a limited number of people – was the ability to adapt to the change mechanisation brought: humans were still needed to design, build, operate and maintain the machines that replaced them. Having seen that scenario play out – and while staring the Fourth Industrial Revolution in the USB ports – humanity really needs to be considering what technology means for the future of the human race, rather than merely its chances of employment.

Singularity University is a global community and think tank that is attempting to harness the power of ‘exponential technologies’ (think AI, augmented and virtual reality, data science, nanotech, robotics and more) to tackle humanity’s challenges. The community aims to create a more abundant future for all – which hopefully means finding you a new place in society before your toaster replaces you at the office.

Fear & Loathing of Tech

Dr Tiffany Vora is, amongst many other things, Faculty Director and Vice Chair of Medicine and Digital Biology at Singularity University. Asked whether we should embrace or fear the role tech is set to play in the future of humanity, her advice is: ‘Both!’. ‘Fear is useful if it motivates us to ask and answer hard questions now. I believe that humans are capable of wonderful things, including solutions to the problems that we created ourselves. But I don’t believe in using fear as a justification for holding back research or innovation. Thanks to technology, we are empowered to design our futures. It’s up to us to design positive ones,’ she says.

Dr. Tiffany Vora

Mic Mann, futurist and exponential technology strategist, is hopeful that the rise of tech will actually make it easier to preserve our humanity in a digital world. ‘I think we will have more time to focus on our humanity and the really important issues when technology frees us from doing mundane repetitive tasks,’ he says. ‘We will have more time to focus on deep meaningful relationships and evolve our current jobs when the tasks change’.

Mic Mann

Embrace Improvements

Biomedical Gerontologist Dr. Aubrey de Grey, on the other hand, has a more singular approach: ‘We should unequivocally relish it, and we should dispel any fears simply by taking an objective look at how vast are the improvements in our quality of life that technology has already brought about in the past’. As the Chief Science Officer of the charity SENS Foundation, he’s leading the charge in finding ways to combat the ageing process – which he’d hardly be doing if he didn’t see a future for humanity in a tech-led world. Jason Dunn, who co-founded Made In Space, the first organisation known to have manufactured a product outside our own planet, offers tempered optimism: ‘Today we have the opportunity to solve the world’s biggest challenges with advancing tech, which gives me a lot of hope for the future. But this doesn’t mean we shouldn’t be careful. Technologies like AI and Biotech are advancing faster than our policies can sometimes keep pace with, creating opportunities for bad things to be created as well’.

Dr. Aubrey de Grey

Dr. Vora isn’t sure whether technology is inherently good or bad, preferring to focus on what humans choose to do with it. ‘I’m excited by technologies for which the potential for good far, far outweighs the likelihood of bad. More than a million people are killed by cars around the world every year, but who’s talking about getting rid of cars?’ she asks. ‘In the same way, I worry about talk about getting rid of technologies for gene editing that could radically transform human health and could help feed the planet in the face of climate change, but that some people fear could be used by a biohacker to make a weapon. I wouldn’t want to see us abandon a rapidly democratizing, empowering technology for fear of a future that may never come’.

Who Makes the Rules?

Governance keeps coming up – who controls the path technological development should take, or even defines what machines are ‘taught’? ‘As long as humans are inventing the tech, then I think that humans have both the privilege and the responsibility to set the needs and requirements for the tech,’ says Dr. Vora. She says that she doesn’t have a ready answer on the governance question, but that it’s one she hopes to debate at the Summit. ‘Future generations of humans could be immune to HIV and the flu, could resist the harmful effects of radiation in space, could live much longer lives, could be smarter and healthier and feel less pain. Should parents be able to make these decisions for their own offspring? Does society as a whole have the right to regulate the biology of future citizens? Do governments? Do companies?’.

Mann is all for governance and cites current Institute of Electrical and Electronics Engineers (IEEE) ethics codes for people in the robotics industry as a positive step. ‘The bad things that can be created by AI and machine learning are serious and real. There is a major need to make sure we govern the types of AI or robotics tools that can be developed and created,’ he says. Dr. de Grey has a simple response: ‘Tech makes it easier for bad people to do bad things, but also for good people to stop bad people from doing bad things’.

Wall-E is smarter than you (Pic: Lenin Estrada/Pexels)

He has a similarly direct answer to the question of technology’s role in helping humans live longer, better or smarter. ‘The “longer” part, which is my area, will consist of a divide-and-conquer approach of comprehensive, periodic preventative maintenance: of repairing the many types of molecular and cellular damage that the body does to itself in the course of its normal operation,’ he says. As one of the originators of the much-quoted theory that the first person who will live to 200 has already been born, he says he’s immensely saddened by how people respond to that prospect by agonising over whether they want it. ‘Any child can see that longevity is merely a side-effect of health, and thus that having an opinion about how long you want to live is crazy, since it ignores that causality. It’s about as sensible to have an opinion about how long you want to live as it is to have an opinion about what time you want to go to the toilet next Tuesday,’ he says.

AI-lien

Dr. Vora is fascinated by the discussions around the “alienness” of AI – how current algorithms make decisions that people can’t understand, leading to the theory that algorithms can’t be trusted. ‘This mindset confuses me, because we’ve got a good 10 000 years of evidence of what happens when people are allowed to make decisions!’ she says. ‘I’m an experimentalist by training, so I’m curious to know what would happen if we abandoned the “humans know best” bias. We know that current AI captures the biases of the people who program it. What would happen if we took the human factor out of the equation entirely?’

*A version of this article appeared in the October 2018 issue of khuluma.
