Fri. Nov 29th, 2024

There is a growing movement surrounding “open AI” that aims to create transparent AI models that can be easily accessed and used by the public, similar to the open-source software movement of the early web era. However, this movement runs the risk of being co-opted by Big Tech companies.

One example of the tension between open AI and Big Tech is Llama 2, an AI system created by Meta (the parent company of Facebook, Instagram, WhatsApp, and Threads). While Llama 2 is touted as an “open” model, it carries restrictions on usage and its development pipeline is kept secret, raising questions about whether it truly qualifies as open.

Creating truly open AI models requires more than just releasing the final code. It also requires releasing the training data, the data-processing code, and documentation of the steps taken to fine-tune the model. However, the complexity and resource requirements of generative AI make it difficult for smaller entities to create and audit open models.

Efforts are being made to shift AI infrastructure away from dominant tech companies and towards the public. The federal government is working on a National AI Research Resource, and universities are partnering to create high-performance computing centers for advanced AI research. Additionally, smaller, open models are being designed that are powerful enough for commercial use but cheaper to train and run.

However, even with these efforts, smaller entities still struggle to create their own AI models without substantial grant money or access to models provided by larger companies. The tech giants also benefit from seemingly open initiatives, as they can draw users into their product ecosystems and shape the direction of AI research.

The tech industry’s focus on scale and performance has shaped the public’s expectations of AI and limited the ways in which AI is built and used. Expanding the concept of open AI will require redefining open-source for AI and reimagining what AI itself can and should look like.
