While the debate about the merits of AI is interesting, it feels irrelevant, like arguing over whether to accept the internet in 1996.
In the light of economic trends that have led towards geopolitical nationalism — and angst over the employment future of many people across the globe — discussions around Artificial Intelligence, machine learning and cognitive technology have quickly become the most important conversation in tech.
Probably the biggest global tech story this week was Intel's decision to purchase Israel's Mobileye, a company that builds computer vision technology for cars (presumably, in the future, driverless ones).
The price tag for this deal, US$15.3 billion, was roughly three-and-a-half times the price recently paid for the much more famous Yahoo. This is not to say Intel overpaid (OK, it may have overpaid, but sometimes it is crucial to "back up the truck" to get the deal). Rather, the purchase underscores the current reality: AI is at the very top of every tech company's priority list, and justifiably so.
To get an idea of AI as an important driver in today’s society, here are three recent stories that highlight its value moving forward:
- Japan's giant conglomerate SoftBank announced today that it has invested in ObEn, a South Korean company that uses AI to build 3D avatars.
- Google’s DeepMind has learned to ‘not forget’ how to solve tasks, allowing it to build upon previous knowledge.
- A Bloomberg story highlighted a programme that significantly speeds up the loan process at JP Morgan.
It seems we are standing at the same point we were at 20 years ago with the internet. The technology has not yet permeated the way we live our lives, but to think that it won’t would be hopelessly naive.
The internet was the most important invention since the television, the radio before that, and the lightbulb before that. In ten to fifteen years, AI will be held in the same category.
The AI debate
AI as a ‘cultural lightning rod’ has become increasingly relevant after the US election of Donald Trump. Both the Left and Right were confronted with a real-life consequence of progress. Almost overnight, it became clear that Silicon Valley’s unquenchable thirst for progress had left a lot of people behind.
Thanks to the internet, the rest of the world could read about it.
While the debate about AI has been happening for years, after the November election it took on an earnestness that feels genuinely new.
Some argue that cognitive learning is the key to 'working better' (the JP Morgan loan programme would be an example).
They would point to the opportunity to be truly creative because the dullness of every job’s rote responsibilities would be done by robots. With this newfound creative freedom, human beings would be able to fully realise their intellectual potential.
Others (to be transparent, I fall into this camp) are concerned about the impact it will have on the workforce, and not just on low-skill, low-wage work. For example, one white-collar industry already feeling pressure is law: AI-backed e-discovery platforms are allowing legal offices to search for case precedents and even predict how a court will likely rule.
Does that open up time to ‘maximise creativity’? Yes. It also puts a hell of a lot of legal clerks out of work.
AI is inevitable
Maybe the debate outlined above is irrelevant. Are the worries of people like me that important if the industry continues rapid innovation? Probably not.
After speaking with about 10 people from across the AI/Big Data spectrum — from investors and startups to corporates and techies with opinions — there seems to be a sense of inevitability about the fact that AI will play a significant role in the day-to-day lives of the average person.
Which means the next 5-10 years will be defined by our ability to adapt to AI technology and its disruptive impact. Unlike with the internet in 1990, a lot of 'old school' players (governments, traditional SMEs and large corporations) seem to recognise AI as a potentially disruptive technology.
There is an awareness that — pardon the generalisation — did not exist 20 years ago. Underestimating the internet eventually led to serious challenges in industries like newspapers, cable television and the postal service. It does not appear as if anyone is underestimating AI.
In a panel discussion at IBM Connect – Startup Xchange 2017, David Gowdey of Jungle Ventures put it this way:
“If we were having this summit 15-20 years earlier, we would be talking about digital. Traditional businesses back then were trying to get their head around digital. Digital companies disrupted traditional business because they were able to offer seamless solutions.
“We see AI as similar to that. For a lot of internet companies out there, they are able to leverage cognitive technology to leverage efficiency gains. Think about it in a more horizontal way. The great thing about AI is a lot of it is open source, so it’s about the data and how you can leverage the structure.”
As Mark Cuban pointed out, the first trillionaire will be an entrepreneur who does exactly what Gowdey is talking about: leveraging efficiencies to build the next 'most important' tech company.
AI is going to significantly increase the efficiency with which human beings can accomplish tasks, and we need to recognise that in doing so, it is likely to jeopardise or eliminate jobs.
By recognising this fact, and learning from those that were left behind in the internet economy, we can begin to adapt to the AI revolution.
Which is why now is the time to start thinking of how this transformative technology can benefit those who may not have the technical skillset to participate in building this world.
The post AI is the most important development in tech, and the deadline for adapting was yesterday appeared first on e27.