The AI divide: How regulation favors big players and hurts startups

Big AI companies like OpenAI, Google, and Microsoft are leading the conversation on how to regulate AI.

These companies have formed industry coalitions and struck agreements with the White House to promote responsible AI practices and watermarking features.

Smaller AI companies that build apps and tools on top of foundation models feel excluded from the conversation and worry about how potential regulations will affect their businesses.

Foundation models are large-scale AI systems that can generate language or images, and they are built and operated by the big AI companies.

Regulations that hold smaller AI companies accountable for how the underlying models source information or answer queries could hurt them disproportionately, since they have little control over those models.

These smaller players want a say in how they will be scrutinized, and they suggest calibrating requirements and fines to the size and scale of each AI company.

Regulatory capture is a real risk: big AI companies could shape the rules in their own favor and shield themselves from competition.

Experts and civil society groups warn about the risks of regulatory capture and call for more public involvement in shaping AI policies that prioritize the public interest over corporate gains.

AI regulation matters, but so do how that regulation comes about and what consequences it carries.