OpenAI Announces $100,000 Grants for Ideas on AI Governance to Address Bias
The $100,000 grants will go to recipients who present compelling frameworks for answering such questions as whether AI ought to criticize public figures.
OpenAI, the startup behind the popular ChatGPT artificial intelligence chatbot, said Thursday it will award 10 equal grants from a fund of $1 million (roughly Rs. 8.3 crore) for experiments in democratic processes to determine how AI software should be governed to address bias and other factors.
The $100,000 (roughly Rs. 82 lakh) grants will go to recipients who present compelling frameworks for answering such questions as whether AI ought to criticize public figures and what it should consider the “median individual” in the world, according to a blog post announcing the fund.
Critics say AI systems like ChatGPT have an inherent bias due to the inputs used to shape their views. Users have found examples of racist or sexist outputs from AI software. Concerns are growing that AI working alongside search engines like Alphabet's Google and Microsoft's Bing may produce incorrect information in a convincing fashion.
OpenAI, backed by $10 billion (nearly Rs. 81,950 crore) from Microsoft, has been leading the call for regulation of AI. Yet it recently threatened to pull out of the European Union over proposed rules.
"The current draft of the EU AI Act would be over-regulating, but we have heard it's going to get pulled back," OpenAI's chief executive Sam Altman told Reuters. "They are still talking about it."
The startup's grants, however, would not go far in funding AI research itself. Salaries for AI engineers and others in the red-hot sector easily top $100,000 (roughly Rs. 82 lakh) and can exceed $300,000 (roughly Rs. 2.4 crore).
AI systems “should benefit all of humanity and be shaped to be as inclusive as possible,” OpenAI said in the blog post. “We are launching this grant program to take a first step in this direction.”
The San Francisco startup said results of the funding could shape its own views on AI governance, though it said no recommendations would be "binding."
Altman has been a leading figure calling for regulation of AI, while simultaneously rolling out new updates to ChatGPT and image-generator DALL-E. This month he appeared before a U.S. Senate subcommittee, saying “if this technology goes wrong, it can go quite wrong.”
Microsoft too has recently endorsed comprehensive regulation of AI even as it has vowed to insert the technology into its products, racing with OpenAI, Google and startups to offer AI to consumers and businesses.
Nearly every sector has an interest in AI's potential to improve efficiency and cut labor costs, along with concerns that AI could spread misinformation or produce factual inaccuracies, which industry insiders call “hallucinations.”
AI is already behind several widely believed fakes. One recent phony viral image of an explosion near the Pentagon briefly moved the stock market.
Despite calls for greater regulation, Congress has failed to pass new legislation to meaningfully curtail Big Tech.