What are the outcomes of openness in AI?

Jennifer Ding
3 min read · Apr 8, 2024


Like many of the technology cycles before it, the latest hype cycle for generative AI has re-introduced familiar narratives: the inevitability of advancement, the arms race for corporate or national power, and the promise that AI will change how we do everything.

Illustration: people joining the open AI garden, bringing different languages, skills, and backgrounds. Created by Scriberia with The Turing Way community; used under a CC-BY 4.0 licence. DOI: 10.5281/zenodo.3332807

While these narratives and the major players shaping them may not be new, amidst the latest advances in generative AI, "open" and "open source" have taken on a new role. This shift responds in part to the ongoing issues and tensions that generative AI has exacerbated, and in part to growing awareness of, and anxiety about, the power imbalance between the producers of AI and the rest of us. Our hopes and fears for AI map onto our hopes and fears for life in the 2020s, as we push for AI that is more democratic, safe, trustworthy, inclusive, and open.

To date, much of the conversation around "open source AI" has focused on defining the term and finding a stable descriptor that doesn't dilute the power of Open Source. This is in part a reaction to the liberal use of the term in countless press releases and model launches by AI power players, without accompanying documentation or resources to back up the claim. "Open Source AI" has become a branded buzzword, and perhaps a way to position competitively against other AI companies in the market.

The problem with orienting our conversation around definitions is that boundary concepts like "open" serve as an umbrella for many different groups, connecting communities across open data, open source software, and open science, and now open AI, for which all of these and more are essential ingredients. Agreeing on a singular definition is probably impossible and, in my opinion, misses the point.

To help us move beyond the "open" vs "closed" binary, Irene Solaiman has introduced a framework describing the gradient of release practices for generative models, which lets us talk about the specific practices that align with the values and priorities of different organisations. And to capture the limits of openness alone in reaching our target outcomes of more transparent, democratic, and responsible AI, David Gray Widder and his colleagues have shown how "open source AI" is today used successfully by AI power players to entrench corporate and regulatory power. This work demonstrates that a term and definition for open source AI is not enough: reaching the outcomes we hope for requires digging into the specifics of governance decisions, business models, and the practice of building AI from data collection to model release.

Open may help start our conversation, but if what we are aiming for are outcomes like democratic, safe, trustworthy, and inclusive AI, we will need different tools to achieve those aims. Responsible AI licenses and data sovereignty charters like the Te Mana Raraunga Charter may improve our legal capabilities for safeguarding and enforcing safe and ethical AI, while limiting free and unrestricted use. Collaborative data collection platforms may improve our capacity for participatory and localised AI, but may raise questions of fairness, depending on the working conditions of data contributors and annotators and their ability to access and shape the use of their data. Accessing models through an API may not always be free or transparent, but does it lower the cost compared to hosting a model on our own servers? All of our different dreams for AI cannot be achieved through the single dimension of openness, and certainly not through an open source AI definition or license alone. What tools can we add to our toolkit for the diverse and wide-ranging outcomes we seek to achieve?

Personally, I find an outcomes-focused approach to AI a much better framing for the essential conversations about the futures we hope to have with AI in our public and private lives. If more people are able to discuss, answer, and act on this question, we will be on a better pathway to ensuring that the impacts of AI are positive not just for some of us, but for all of us.


Jennifer Ding

Researcher at the Alan Turing Institute. Formerly: @numina, @ParkIT_Team Founder, 2x @ideocolab Design Fellow, EE & CS at @RiceUniversity & @Cornell_Tech