AI’s Inevitable Takeover? Why Experts See Jobs at Risk
WASHINGTON, D.C., 3 April 2025 - The conversation about artificial intelligence (AI) is growing well beyond technical centres and conference rooms. With the rise of generative AI tools like ChatGPT, Microsoft Copilot and Google Gemini, American workers are increasingly wondering: will these tools help us do our jobs better, or take our jobs entirely? A recent comprehensive study by Pew Research Center, drawing on surveys of more than 1,000 AI experts and more than 5,400 adults in the United States, digs into this pressing question, exposing a growing gap between those developing AI and those living with its consequences.
According to Pew’s latest report, the professions most likely to disappear over the next two decades include cashiers, journalists, factory workers, truck drivers and even software engineers. This is not just speculative thinking. Major forecasters such as Goldman Sachs and McKinsey have already predicted that AI could displace or automate hundreds of millions of jobs, with McKinsey estimating up to 375 million workers affected by 2030. The emerging reality suggests that we are not merely facing a reorganization of work; we are staring down the barrel of a workforce transformation.
Which jobs are on the AI chopping block?
Q: What specific jobs do AI experts consider most at risk?
A: Based on Pew’s findings, 73% of AI experts believe that cashier positions are likely to disappear. Journalists and factory workers follow closely, with nearly 60% consensus. Truck drivers, given advances in self-driving technology, are considered vulnerable by 62% of experts, though only 33% of the general public agree. Software engineers are also on the list, with about half of the experts anticipating reductions.
This divergence between public perception and expert assessment highlights a deeper issue: trust and understanding. As Jeff Gottfried, Associate Research Director at Pew, said, “it is really important that both sets of views are in the room.” The study focuses not only on which jobs are at risk, but also on how differently AI experts and the general population interpret those risks.
The public is worried, experts are optimistic
Q: How do the attitudes of the general public and experts toward AI differ?
A: The gap is widening. While 73% of AI experts say AI will have a positive impact on their work over the next 20 years, only 23% of American adults share that optimism. What’s more, 64% of adults believe AI will result in fewer jobs, while only 39% of experts think so. Experts view AI as an instrument for improvement; the public sees it as a threat to livelihoods.
The divergence goes beyond work. Only 17% of U.S. adults believe AI will have a positive impact on the country over the next two decades, compared to 56% of AI experts. And in terms of personal benefit, 76% of experts expect AI to improve their own lives, compared to only 24% of the general public.
The deeper question: who is in control of AI?
Q: Why is there so much skepticism about AI regulation?
A: Because the public does not believe that government - or technology companies - will get it right. According to Pew, more than half of both groups (public and expert) want more control over how AI is used in their lives. But neither trusts anyone to understand the stakes. An expert from an American university said:
“It seems like when you watch these congressional hearings, they don’t understand it at all. I don’t know if I have faith that they could bring in enough experts to understand it well enough to regulate it, but I think it’s very important.”
The concern is bipartisan: 64% of Democrats and 55% of Republicans believe that regulation will not go far enough. And public confidence that government or private companies will use AI responsibly? Almost non-existent.
The Gen Z reality: growing up with AI
Q: What does the younger generation think of AI?
A: They’re cautious. According to Gallup and the Walton Family Foundation, 79% of Gen Z already use artificial intelligence tools such as ChatGPT, with nearly half using them weekly. But that doesn’t mean they trust them. In fact, more Gen Zers say AI makes them anxious (41%) than excited (36%). Only 27% feel hopeful. They fear that AI will harm their critical thinking, and only about a third trust AI-generated work as much as human work.
“They haven’t reached a point where they think the benefits outweigh the risks,” said Gallup researcher Zach Hrynowski. That sentiment encapsulates the generational dilemma: familiarity does not always translate into confidence.
Gaps in representation and bias
Q: Who gets to shape AI, and who is left behind?
A: There is a worrying lack of diversity in AI development. Pew’s report shows that both the public and experts believe AI reflects the views of white men more than any other group. About 75% of experts say that men’s perspectives are taken into account, but less than half say the same for women. The representation of Black, Hispanic and Asian perspectives is seen as even weaker.
As one Black AI expert pointed out:
“Disabled people are under-represented… it’s mostly straight white men, or men of color, who are really invested in and excited about these technologies, but… [when] people start being replaced by technology, it will always affect under-represented groups first.”
This is not just a design problem; it is systemic. When bias is baked into algorithms, it can produce discriminatory outcomes, whether in hiring, lending or law enforcement.
Regulating the future, or playing catch-up?
Q: Can legislators and businesses be trusted to manage AI responsibly?
A: At the moment, not really. Pew data show that 62% of the public and 53% of experts have little or no confidence in the government’s ability to regulate AI effectively. On the business side, 59% of the public and 55% of experts do not trust companies to act responsibly.
A private sector expert expressed frustration:
“I think [businesses] have a ton of responsibility. Unfortunately, I don’t necessarily think that… responsibility plays such an important role in their decision-making about what to pursue and how quickly to release something.”
The lack of confidence is compounded by a lack of clarity. More than half of Gen Z students and workers say their schools or employers do not have clear AI policies. Yet research shows that when institutions provide transparent guidelines, trust increases and users feel better prepared for the future.
There is an urgent need for a regulatory framework that is proactive rather than reactive. But that requires legislators who understand the nuances of the technology, and at the moment that bar remains painfully low.
The AI revolution is not slowing down. OpenAI’s Sam Altman predicts that the first AI agents will be meaningfully integrated into the workforce by the end of 2025. Whether those agents will supplement or replace human workers remains to be seen. For now, one thing is clear: AI is no longer the future. It is the present, and we all have a stake in how it develops.