There’s an important conversation happening within tech about the need to reduce the role of single gatekeepers in consumer products. Some might call this decentralization, but that word tends to turn off people both inside and outside of crypto. So instead let’s just say it’s about bringing the consumer into the product development cycle. Companies large and small already approximate this with large product management teams, but today’s conversation has focused on the need to go beyond those teams and incorporate users directly in the process. Mark Cuban exemplified this recently with this post, in which he calls on Elon Musk to decentralize the content feed algorithm for X. It’s a great example of how consumers are waking up to the influence of single gatekeepers and asking for more involvement and accountability.
The fact is, the Internet Computer already provides a solution to this problem: it is possible for online products to turn some or all of their product management over to their users or another trusted body of people. If Elon wanted to take Mark Cuban up on his challenge and fully decentralize the X feed algorithm, it would be as simple as loading the algorithm’s code into a canister on the Internet Computer and turning its control over to a DAO. As the tech stands today, this is not only possible, but almost easy.
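To make that concrete, here is a minimal sketch in the style of the Rust ic-cdk of a canister that holds a feed algorithm’s tunable parameters and rejects any change that does not come from a designated governance principal (the DAO). The parameter struct, method names, and governance wiring are all hypothetical, not X’s real algorithm; a production deployment would more likely hand control to a full DAO framework.

```rust
// A minimal sketch of a canister whose ranking parameters can only be
// changed by a designated governance principal (the DAO). The struct
// fields and method names are hypothetical, not X's real algorithm.
use candid::{CandidType, Principal};
use ic_cdk::{init, query, update};
use serde::Deserialize;
use std::cell::RefCell;

#[derive(CandidType, Deserialize, Clone, Default)]
struct RankingParams {
    recency_weight: f64,    // hypothetical knobs the DAO can vote on
    engagement_weight: f64,
}

thread_local! {
    static PARAMS: RefCell<RankingParams> = RefCell::new(RankingParams::default());
    static GOVERNANCE: RefCell<Option<Principal>> = RefCell::new(None);
}

#[init]
fn init(governance: Principal) {
    // Record which principal (e.g. a DAO's governance canister) may
    // change the algorithm from now on.
    GOVERNANCE.with(|g| *g.borrow_mut() = Some(governance));
}

#[update]
fn set_ranking_params(new_params: RankingParams) {
    let caller = ic_cdk::caller();
    let authorized = GOVERNANCE.with(|g| *g.borrow() == Some(caller));
    if !authorized {
        ic_cdk::trap("only the governance principal may change the algorithm");
    }
    PARAMS.with(|p| *p.borrow_mut() = new_params);
}

#[query]
fn get_ranking_params() -> RankingParams {
    // Publicly readable, so anyone can audit the live parameters.
    PARAMS.with(|p| p.borrow().clone())
}
```

The key point is structural: once the canister’s only controller is the DAO, no individual retains the ability to push an unreviewed change to the algorithm.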
One key aspect of this decentralization of X’s feed algorithm is that X would not need to move all of its infrastructure to the Internet Computer for it to work. X could move over the feed algorithm but keep the remaining applications on centralized servers, and this arrangement would likely satisfy both X and its users, so long as users can satisfactorily verify that X is using the IC-hosted algorithm without alteration, for example by comparing the canister’s published module hash against the open-source code and querying the canister directly.
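The public half of that verification is straightforward: anyone can query the canister from off-chain and compare what it reports against what the DAO approved. A rough sketch using the ic-agent crate, with a placeholder canister ID and the hypothetical get_ranking_params method from the canister sketch above:

```rust
// A rough off-chain verification sketch using the ic-agent crate. The
// canister ID is a placeholder and get_ranking_params is the hypothetical
// query method from the canister sketch above.
use candid::{CandidType, Decode, Encode, Principal};
use ic_agent::Agent;
use serde::Deserialize;

#[derive(CandidType, Deserialize, Debug)]
struct RankingParams {
    recency_weight: f64,
    engagement_weight: f64,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let agent = Agent::builder().with_url("https://icp0.io").build()?;
    let canister_id = Principal::from_text("aaaaa-aa")?; // placeholder ID
    let raw = agent
        .query(&canister_id, "get_ranking_params")
        .with_arg(Encode!()?)
        .call()
        .await?;
    let params = Decode!(&raw, RankingParams)?;
    // Compare against the parameters the DAO approved in the open repo.
    println!("live algorithm parameters: {:?}", params);
    Ok(())
}
```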
What does this have to do with AI? Well, AI has a similar trust issue: consumers are naturally wary of the “black box” aspect of AI. Solving this wariness is the same as solving the trust issue Mark Cuban raises about the X feed algorithm. Moving some or all of the components of an AI model’s development and/or deployment to the Internet Computer and allowing democratic governance over those components would go a long way toward resolving consumer wariness.
This has two direct implications: (1) solving trust issues between AI deployments and their intended consumers (discussed more in a later article), and (2) solving the “alignment problem” (discussed in the remainder of this article).
The alignment problem centers on aligning a super-intelligent AI with human values to ensure that the super-intelligence always operates in a manner that is beneficial to humans. Basically, how do we stop the machines from taking over the world? However, human values are extremely diverse, which raises the question: whose human values are we aligning super-intelligence to? It’s not hard to see that aligning super-intelligence to one set of human values would benefit a small set of humans at the expense of the majority. Consider how different value sets rooted in business interests, geography, religion, and politics have shaped human history.
The solution to “whose values?” is to use a democratic process in the alignment of super-intelligence to human values. As discussed above, this is possible with today’s technology. If the AI model’s inputs, controls, and outputs are placed in a canister on the Internet Computer, then it is possible to introduce a democratic process for how they are governed. This ensures (assuming the democratic body is diverse) that the model is aligned to the values of a diverse set of humans. It also removes the need for a “benevolent” gatekeeper… a person who has true control of the code and can therefore overturn democratic decisions.
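What might that democratic process look like in code? Below is a toy sketch, again in the Rust ic-cdk style, in which a fixed membership votes directly on one model control (here, a hypothetical system prompt) and a strict majority executes the change automatically. All names are illustrative; a real design would add proposal deadlines, vote weighting, and open membership management.

```rust
// A toy sketch of direct majority voting over one model control, here a
// hypothetical system prompt. All names are illustrative; a real design
// would add proposal deadlines, vote weighting, and open membership.
use candid::{CandidType, Principal};
use ic_cdk::{init, query, update};
use serde::Deserialize;
use std::cell::RefCell;

#[derive(CandidType, Deserialize, Clone)]
struct Proposal {
    new_system_prompt: String, // the model control being voted on
    yes: u64,
    voters: Vec<Principal>,    // prevents double voting
    executed: bool,
}

thread_local! {
    static MEMBERS: RefCell<Vec<Principal>> = RefCell::new(Vec::new());
    static SYSTEM_PROMPT: RefCell<String> = RefCell::new(String::new());
    static PROPOSALS: RefCell<Vec<Proposal>> = RefCell::new(Vec::new());
}

#[init]
fn init(members: Vec<Principal>) {
    // Seed the democratic body; a real system would manage membership openly.
    MEMBERS.with(|m| *m.borrow_mut() = members);
}

#[update]
fn propose(new_system_prompt: String) -> u64 {
    PROPOSALS.with(|ps| {
        let mut ps = ps.borrow_mut();
        ps.push(Proposal { new_system_prompt, yes: 0, voters: Vec::new(), executed: false });
        (ps.len() - 1) as u64 // proposal id
    })
}

#[update]
fn vote_yes(id: u64) {
    let caller = ic_cdk::caller();
    if !MEMBERS.with(|m| m.borrow().contains(&caller)) {
        ic_cdk::trap("only members of the democratic body may vote");
    }
    let member_count = MEMBERS.with(|m| m.borrow().len() as u64);
    PROPOSALS.with(|ps| {
        let mut ps = ps.borrow_mut();
        let p = ps
            .get_mut(id as usize)
            .unwrap_or_else(|| ic_cdk::trap("no such proposal"));
        if p.executed || p.voters.contains(&caller) {
            ic_cdk::trap("proposal already executed or vote already cast");
        }
        p.voters.push(caller);
        p.yes += 1;
        // A strict majority executes the change automatically; no
        // administrator can veto or override the result.
        if p.yes * 2 > member_count {
            SYSTEM_PROMPT.with(|s| *s.borrow_mut() = p.new_system_prompt.clone());
            p.executed = true;
        }
    });
}

#[query]
fn current_system_prompt() -> String {
    // Publicly readable, so anyone can audit what values constrain the model.
    SYSTEM_PROMPT.with(|s| s.borrow().clone())
}
```

Because the canister executes the winning proposal itself, there is no privileged key-holder who can overturn the vote, which is exactly the gatekeeper removal described above.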
These are exciting times. Democratic control over AI inputs, outputs, and processes is an area worthy of academic research. However, companies have little incentive to worry about the alignment problem; in fact, they are incentivized to align AI toward their own unique values. Still, I believe there’s a robust use case in which businesses will want to allow democratic control over their AI products. That will be the topic of a future article.