OpenAI unveils policy plan for the era of super-AI
On April 6, the American company OpenAI, developer of ChatGPT, published its recommendations on how economic and social policy should change in the age of artificial intelligence. In the 13-page document, OpenAI calls on the world to prepare for the emergence of superintelligence: artificial intelligence that will ultimately surpass humans.
Superintelligence, the company argues, will accelerate scientific progress, increase labor productivity, reduce the cost of production, and “open the way to entirely new forms of work and creativity.” But OpenAI expects risks to accompany the benefits: job losses, growing concentration of wealth and power, misuse of the technology, and AI spiraling out of human control.
OpenAI makes no predictions about when superintelligence will be created, but it considers its emergence inevitable and imminent, and its impact global. The company calls for “sensible rules” to be introduced now to help soften the economic shocks caused by the development of artificial intelligence. OpenAI describes its thoughts on the matter not as a final set of recommendations but as a starting point for discussion — not just in the United States, but in other countries around the world as well.
What exactly does OpenAI propose?
Below are the main ideas of the company.
The right to artificial intelligence. OpenAI suggests that AI should be viewed as critical infrastructure for participation in the modern economy (along with electricity and the Internet) and calls for widespread access to basic AI models, including in schools and libraries.
Tax on the use of artificial intelligence. Currently, many social and health care programs are funded by taxes on human labor. As artificial intelligence replaces humans in the workplace, corporate profits will rise while payroll tax revenues fall. OpenAI therefore proposes shifting the tax burden to income from capital and automated labor (a proposal American media have dubbed a “robot tax”).
National Welfare Fund. OpenAI calls for the creation of a fund that would give every citizen of the country a stake in the economic growth associated with AI. “The state and AI companies are supposed to jointly determine how to fill the fund. They can invest in AI companies themselves and in a wide range of companies implementing AI. The income of the fund can be distributed directly to citizens, regardless of their initial wealth level,” the company wrote.
Shorten the work week. Employers should turn efficiency gains from AI into benefits for employees — for example, moving them to a four-day workweek, increasing pension contributions, or shouldering a larger share of health insurance costs.
Retraining people for “people-oriented professions.” Care must be taken to ensure that those who lose their jobs to AI have opportunities to retrain and move into new professions — especially ones where human contact is important: child and elderly care, education, health care, and social services. OpenAI believes the authorities should support the transition into such fields and encourage employers to raise wages and improve working conditions there.
Safety systems. This means developing tools to evaluate AI outputs, monitoring misuse by governments and companies, and containing AI models if they pose a risk to humans.
Development of energy networks. New models of public-private partnership are proposed to rapidly build the energy infrastructure needed to power AI. Under the plan, this should not raise consumers’ energy costs.
What do they say about this plan?
After the plan was published, OpenAI CEO Sam Altman gave an interview to Axios in which he said: “We want to raise these issues [of AI regulation and economic restructuring] for discussion. We feel the seriousness of the situation. We want this matter to be discussed seriously.”
Axios believes the OpenAI plan can be viewed in different ways — either as a fair warning of a coming storm, or as an attempt to amplify the hype around AI and outflank rival Anthropic (which has been building a reputation as the responsible participant in the AI race and released similar recommendations six months earlier). Either way, Axios concludes, OpenAI’s conclusions deserve serious thought.
Experts agree with this. According to them, government regulation has not kept pace with advances in artificial intelligence. However, OpenAI is an interested party, so its thoughts on the subject should be treated with caution. “Architecture proposed [by AI companies] tends, as a rule, to favor the architects. <…> This is not a reason to reject the document. It is a useful contribution to the debate, but the political debate needs to be broader and include many more viewpoints than the already dominant Silicon Valley voices,” writes Adrian Brown, a British public-governance expert who analyzes the impact of artificial intelligence on society.
At the same time, many of the ideas proposed by OpenAI have been criticized as vague or derivative.
“I served in the US Senate in 2023–2024, and all of this was already being said [there]. I wrote it down by hand in my notebook! It has all been said already, all of it. <…> These ideas are not wrong. The problem lies in the gap between naming solutions and creating real mechanisms to achieve them,” independent AI policy consultant Sorribel Velez told Fortune.
AI entrepreneur Will Manides noted that OpenAI’s most discussed idea — creating a fund to distribute profits from artificial intelligence — contains no details: “The fund needs a source of funding. <…> But OpenAI is reluctant to say directly that it will contribute the money. [Norway’s fund] works because Norway taxes oil at 78%. <…> This document suggests nothing of the sort; it only suggests discussion.”
The release of OpenAI’s recommendations coincided with the publication of a profile of the company’s CEO
On April 6, the day OpenAI published its list of policy ideas, The New Yorker released a long article about OpenAI CEO Sam Altman, written by journalists Ronan Farrow and Andrew Marantz. Farrow is best known as the author of the investigations into Hollywood producer Harvey Weinstein. Those investigations helped spark the #MeToo anti-harassment movement in the United States, and Weinstein himself eventually ended up behind bars.
Farrow said he and Marantz worked on the Altman piece for a year and a half, speaking with hundreds of people from the businessman’s circle and with Altman himself (more than a dozen times). In the article, they describe how Altman helped create OpenAI, survived an attempt to remove him in 2023, and turned OpenAI from a nonprofit into a company valued at $850 billion that is preparing to go public.
The article contains no major revelations about Altman (his sister’s accusations of sexual abuse were not substantiated), but neither can it be called flattering. The authors describe OpenAI’s CEO as power-hungry and prone to embellishing reality, to the point of being accused of lying. For a man who heads one of the largest companies in one of the most important and unpredictable industries, these are dangerous traits, some of Altman’s colleagues say. He himself rejects accusations of dishonesty and says he simply avoids conflict.
Here’s what the New Yorker writes:
Most of the people we spoke to shared this assessment: Altman has an insatiable thirst for power that sets him apart even among tech moguls who launch spaceships bearing their own names.
“He’s not tied to the truth,” one [OpenAI] board member told us. “He has two qualities that are almost never found in the same person. The first is a strong desire to be loved. The second is a lack of concern for the consequences that deception can entail.”
