Bill Gates explains why we should not fear artificial intelligence

Bill Gates, the co-founder of Microsoft, firmly believes in the potential of artificial intelligence, often stating that models like the one at the core of ChatGPT are the most significant technological advancement since the personal computer.
He acknowledges that the emerging technology could contribute to problems such as deepfakes, biased algorithms, and cheating in schools, but he predicts that these issues are solvable.
In a blog post this week, Gates wrote, “One thing that is clear from everything written so far about the risks of AI—and a lot has been written—is that no one has all the answers. Another thing that is clear to me is that the future of AI is neither as bleak nor as bright as some people believe.”
At a time when governments worldwide are grappling with how to regulate the technology and its potential drawbacks, Gates’ moderate stance on AI risks could shift the conversation away from dystopian scenarios and toward more limited regulation addressing current risks. On Tuesday, for instance, senators received a classified briefing on artificial intelligence and the military.
Gates is one of the most prominent voices on artificial intelligence and its regulation. He also maintains a close relationship with Microsoft, which has invested in OpenAI and integrated ChatGPT into its core products, including Office.
In the blog post, Gates argues that humans have adapted to significant changes in the past and will do so again with AI, citing society’s responses to previous technological advances.
“For instance, it will have a significant impact on education, but so did the introduction of handheld calculators a few decades ago and computers in the classroom more recently,” Gates wrote.
According to Gates, the technology needs the regulatory equivalent of “speed limits and seatbelts.”
“The first car accident occurred soon after the first automobiles hit the road. But in response, we implemented speed limits, safety standards, licensing requirements, laws against drunk driving, and other road rules,” Gates wrote.
Gates is concerned about several issues arising from the widespread adoption of the technology, including how it could affect people’s employment and “hallucination,” or the tendency for models such as ChatGPT to fabricate facts, documents, and people.
For instance, he cites the issue of deepfakes, in which AI models make it easy to create fake videos that impersonate real people and could be used to defraud viewers or sway elections.
But he also believes that people will become better at identifying fakes, citing the development of deepfake detectors by Intel and the U.S. government research agency DARPA. He proposes regulations that would clearly delineate which uses of the technology are and are not permissible.
He is also concerned that AI-generated code could be used to find the software vulnerabilities needed to attack computers, and he proposes a global regulatory body modeled after the International Atomic Energy Agency.