The biggest tech companies are busy developing their AI tools, including Microsoft, which has a strong lead thanks to its partnership with OpenAI. But the company's CEO, Satya Nadella, feels that the thinking behind AI needs to change, as it cannot be compared to human intelligence. Nadella even has a new term for AI, which he calls "different intelligence." The Microsoft CEO said this in a recent interview, where he tried to paint a more realistic picture of the technology, as many feel that AI can threaten humans in multiple ways.
AI Is a Tool, Not a Replacement
He went on to claim that AI will always function as a tool and will not come close to basic human intelligence, even as it has created big demand for an AI-trained workforce. "I sort of believe it's a tool and it has got intelligence, if you want to give it that moniker, but it's not the same intelligence that I have," Nadella was quoted as saying by Bloomberg.
The Microsoft chief seems to be doing his best to calm the hype around AI, which is slowly but steadily making an impact on the industry. The likes of Google and Meta have laid off staff to integrate AI into their work systems, but what Nadella seems to be emphasising is that AI and humans can co-exist rather than compete in the near future.
And he might have a point, mostly because these AI models, or LLMs, are trained on human data and intelligence, so expecting these machines to surpass our level of smartness does feel rather ambitious.
Should We Be Worried?
However, when you hear the likes of Sam Altman of OpenAI and Elon Musk of Tesla, you tend to fear for the future, as both these tech leaders suggest that AI is close to reaching human intelligence. In fact, Musk has even stated that by 2029 AI will surpass the intelligence of all humans, which is a scary thought.
That's not all: Microsoft has showcased new AI features like Recall and its Copilot Plus tech, which are designed to learn from our behaviour and even mimic emotions and thoughts over time. All these scenarios point to one big need, and that is to regulate these AI systems before they get out of hand.