How to Make Artificial Intelligence More Human in 2024

In 2024, Big Tech can't be the only one building AI. Regulators need to give 
more people access.

By Maxwell Zeff, December 20, 2023
https://gizmodo.com/how-should-we-regulate-ai-big-tech-startups-2024-1851101733


In the last year, computers have started acting strangely human.

As OpenAI’s Ilya Sutskever put it, you can think of AI as a “digital brain,” 
one that’s directly modeled after a human’s. Just like a young child, AI has 
incentives and learns from those around it.

If ChatGPT were a young child, it would be growing up inside a $90 billion 
company, OpenAI, and learning to maximize profit above all else.

If a profit-seeking AI were tasked with treating cancer patients, would it 
solve for a cure or for an ongoing treatment?

The answer is whichever generates the most revenue, but an AI built by a 
broader public could prioritize core human values such as health and fairness.

We can make AI more human, and less profit-driven, if in 2024 we create rules 
that decide who builds it and who benefits from it.

“AI technology has the potential to generate enormous wealth, but we need to 
ensure workers get their share of it and not just those at the top,” 
Congressman Ro Khanna, whose district includes Silicon Valley, told Gizmodo.

Representative Khanna says we must give workers a seat at the table in 
conversations about how AI will change our economy. However, OpenAI’s CEO, Sam 
Altman, believes his artificial intelligence will one day render workers 
useless.

Think he’s bluffing? He co-founded Worldcoin, a cryptocurrency company that 
claims it intends to redistribute wealth to the people when AI ultimately takes 
our jobs. Worldcoin, however, is itself a private company looking to gobble up 
biometric data and become the go-to source for identification.

Many AI developers, researchers, and regulators, like Representative Khanna, 
believe it’s essential to get AI into the hands of as many people as possible. 
Technology has always reflected the communities that built it, and no 
technology does so more than artificial intelligence.

In other words, AI is too important to be built by Big Tech alone. We don’t 
know what AI will ultimately look like, but experts say regulation needs to 
broaden society’s access to artificial intelligence.

The internet is a model for success

Mark Surman, President of the Mozilla Foundation, believes non-profit, 
open-source institutions should play a big role in AI, just like Wikipedia, 
Mozilla, and Linux did for the early internet. Ideally, everyone would have 
access to a toolkit of AI building blocks that are trustworthy, easy to use, 
and don’t steal your data. In his words, “open source Legos for the AI era.”

“OpenAI sounded like that’s what they wanted to build at the beginning, but it 
feels like they just ditched that mission along the way and just chose to be a 
startup,” said Surman.

OpenAI is still technically controlled by a non-profit board, but its 
profit-seeking arm seems to have overtaken the rest of the organization.

That became clear in a dramatic flare-up leading up to Thanksgiving this year, 
in which the board tried and failed to fire Sam Altman and ended up leaving 
instead. Just like AI, humans are also susceptible to profit-seeking in 
lucrative environments.

The internet is a good model to look to because it became one of the most 
useful tools our society has ever built, and it did so largely because it was 
open to all. Surman says we need to look at who controls AI, and who controls 
the data underneath it.

Big Tech’s corruption has limited innovation before

“If it’s the social media model, or even the OpenAI model, where it’s a big, 
rich, centralized cloud provider—that doesn’t give me hope we’re creating 
agents we can truly trust to work for us,” said Surman. “So we need a different 
model. We need a model that puts control and ownership of decision-making in 
the hands of everybody.”

The social media model Surman references was a privacy disaster for the tech 
industry. Shanen Boettcher, the Chief AI Policy Officer at AI21, also says 
social media had a “move fast and break things” approach, with little focus on 
its implications for society. “We are seeing similar behavior in the 
marketplace today with GenAI,” said Boettcher.

App stores are another example of what goes wrong when open-source 
alternatives are lacking.

Apple and Google quickly locked down the distribution of content with the App 
Store and Google Play Store; suddenly there were only two places to get your 
apps, and both took a 30% cut. Now, 15 years later, antitrust regulators are 
trying to tear down these billion-dollar structures. With AI, 15 years may be 
too late.

Industry concerns outside of Big Tech

“Our big concern is that the disparity between the AI haves and the AI 
have-nots will be worse than anything you have ever seen,” said Aible CEO and 
founder Arijit Sengupta, whose company is working to get AI into the hands of 
more people. Sengupta hopes the government can set societal goals for AI 
without stifling innovation at smaller companies with onerous processes.

The Aible CEO sees each country approaching AI through its own unique lens. 
The EU looks at AI from a standpoint of privacy, whereas China views it 
through a lens of “social cohesion,” worried that it could enable people to be 
more free.

“From the US perspective, we are looking at it from a standpoint of ‘the AI 
will take over the world and harm us.’ A lot of the discussion about AI, it 
starts from the position of harm,” said Sengupta, rather than from how it can 
empower people.

“We’re not optimistic about the current legislation and discussion going on 
around it,” said Alex Reibman, co-founder of the AI startup Agent.Ops. One of 
Reibman’s products, an AI-enabled PDF reader, was kneecapped when OpenAI 
launched its own version. Reibman, Boettcher, and Sengupta are all concerned 
about regulatory capture: if Big Tech receives preferential treatment from 
regulators, the result could be devastating.

New Jersey Congressman Josh Gottheimer recently came under fire, called out by 
the X account Quiver Quant, for disclosing up to $50 million in Microsoft 
stock options. Gottheimer sits on subcommittees that oversee capital markets 
and illicit finance, among others. Microsoft declined to participate in 
Gizmodo’s story on AI regulation.

What next?

Dr. Ramayya Krishnan is a Carnegie Mellon professor who advises the U.S. 
Department of Commerce on artificial intelligence. He says the U.S. needs to 
invest heavily in research, small organizations, and startups, which have all 
been effectively locked out of major innovation in a space dominated by a few 
large players.

“That’s Google, OpenAI, Anthropic, Cohere, and Microsoft,” said Krishnan, 
naming a few of those players. “If you look historically at what’s benefited 
us as a society, as a country, it’s this broad participation and innovation 
where the universities, smaller companies, and smaller organizations were 
involved.”

Thankfully, there are some efforts on this front. Senators created the 
National Artificial Intelligence Research Resource (NAIRR), which provides 
cloud computing resources to American universities. Krishnan calls it a huge 
boost to AI innovation, but at only $140 million in funding, he says it needs 
much more investment to compete with Big Tech.

Last week, the unofficial “Godmother of AI,” Fei-Fei Li, wrote a Wall Street 
Journal feature saying we need a “moonshot mentality” around AI in 2024.

The ’60s were the last time the U.S. government stood behind its tech sector 
to initiate progress, and the resulting boost in innovation was felt for 
decades.

Today, there is great promise regarding artificial intelligence, but the public 
sector needs to invest broadly across the AI ecosystem. We need to bring AI 
innovation down to the human scale.
