We’re still a long way away from AI that can think for itself
This article was first featured in Yahoo Finance Tech, a weekly newsletter highlighting our original content on the industry.
Wednesday, Feb. 15, 2023
Don't expect human-like AI anytime soon
ChatGPT, Microsoft’s (MSFT) Bing, and Google’s (GOOG, GOOGL) Bard have the world talking about AI. Whether it’s how generative AI, artificial intelligence that creates content, will change art or help people more efficiently browse the web, the new crop of AI offerings is generating tons of buzz.
What makes ChatGPT and its cohort so intriguing is that they provide the illusion of an artificial intelligence that can think like a human. After all, if it’s able to write like one of us, it’s surely doing some kind of thinking, right? The reality, however, is that AI that can truly think like a person, referred to as artificial general intelligence, is still potentially decades away, if it ever arrives at all.
“In the last few years, we've made pretty dramatic leaps,” Rayid Ghani, professor of AI at Carnegie Mellon University’s Heinz College, told Yahoo Finance. “But we’re still several leaps away from something that's general purpose that can be used for critical applications.”
Still, some experts say that we’re already on the way to reaching the kind of sci-fi-inspired AI seen in films like “Her” and “Star Trek.”
“If you asked me…five years ago, I think I would have said no,” explained Yoon Kim, MIT assistant professor of electrical engineering and computer science. “Now I'm less sure. It might be one of the most pivotal moments in human history…and it will obviously raise a lot of philosophical questions and have a massive impact on society.”
ChatGPT, Bing, and Bard are smart, but not human-like
Ask ChatGPT to write a story about a girl who becomes a powerful mage who can summon storms and control dragons, and it will do just that. But that doesn’t mean that the platform is thinking like a person.
Instead, it’s been trained, on data pulled from the web and feedback from human trainers, to recognize which words tend to follow which, and it strings them together in an order that makes it seem like a person is writing. It’s more complicated than that behind the scenes, but that’s the gist of it. This process is called generative artificial intelligence, because, well, it’s generating something new from content it already recognizes.
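That “predict the next word” gist can be sketched with a deliberately tiny toy model — a simplification for illustration only, not ChatGPT’s actual architecture (real systems use large neural networks trained on vastly more data, plus human feedback), and the corpus below is made up for the example:

```python
from collections import Counter, defaultdict
import random

# Toy "training" corpus, invented for this illustration.
corpus = (
    "the girl summoned a storm and the dragon obeyed the girl "
    "and the storm grew and the dragon flew"
).split()

# Count how often each word follows each other word (a bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length):
        counts = next_words.get(word)
        if not counts:
            break  # no known continuation; stop generating
        words, weights = zip(*counts.items())
        word = rng.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

The output reads vaguely story-like because every word pair it emits was seen in the training text — no understanding required, which is precisely the point the researchers quoted here are making.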
But that doesn’t mean these platforms are thinking like you and me.
“I don't think these models can think in a human sense,” explained Qian Yang, an assistant professor in information science at Cornell University. “There is some level of reasoning abilities there, but the mechanisms by which these models produce these logical judgments are apparently very different from how humans think.”
There’s also a broader discussion of what artificial general intelligence really means. If the software is supposed to replicate how a person thinks, who decides which human it’s based on? A 3-year-old and a 36-year-old are both people, but they think differently. People also have different thought processes based on their own experiences. How do we determine which person’s experiences are considered sufficient to replicate?
There are also broader questions about what artificial intelligence actually is, according to Yang.
“When machines do things that we typically associate with humans…yet the machines can do it, we think of it as artificial intelligence. But of course, that's a moving target,” she said.
One example Yang points to is searching and memorizing large amounts of data. In her classes, Yang says students don’t typically think of searching huge piles of data as a human task, but rather associate it with search engines like Google.
If that’s the case, does that mean that kind of searching is no longer considered artificial intelligence?
Should we even bother making artificial general intelligence?
Thorny questions about what constitutes artificial intelligence aside, there’s also a broader debate as to whether tech companies should even be pursuing artificial general intelligence in the first place.
According to Ghani, firms should be working to create technologies that assist people, rather than totally replicate their thinking.
“Let's say we replicated humans perfectly today. We would still have a world that has inequities, we will still have a world that's racist, we still have a world that's sexist. That's not a world I want to recreate,” he said.
“I want to try to create a better world, which means augmenting people with these tools such that it corrects where they're doing things wrong, reinforces whether they're doing things right,” Ghani added.
While questions about what constitutes artificial intelligence and whether experts should pursue it as a means of replicating human thinking remain unanswered, there appears to still be plenty of time to consider them before artificial general intelligence becomes a reality.
“I think we’re still as far away from AGI as we were two, three years ago,” Kim said. “I don't think the release of [ChatGPT and Bing] to the general public has changed my opinion.”
By Daniel Howley, tech editor at Yahoo Finance