In episode 2, I laid out the case that most AI components could be placed on the Internet Computer, with their control decentralized to a democratic governing body. That article focused on the alignment issue: not just how AI can align to human values, but how we determine whose human values it aligns to. This article will focus on the business use case for such a system… specifically, why democratic governance of AI components is good for business.
First, let’s clear up what I mean by “AI components”. Essentially, an AI component is any of the building blocks that make up an AI model, whether in the development stage or the deployment stage. AI components vary drastically based on the process used to develop the AI model, so let’s look at one type of AI model: an expert system. An expert system consists of four main components: (1) a user interface for the consumer to access the system, (2) an inference engine that acts as the “brain” of the model, (3) a knowledge base comprising structured and unstructured data and (4) a rules base of simple or complex human knowledge rules that serves as a baseline of understanding for the model. AI model developers combine these four components to create an expert system: an AI intended to do one thing very, very well (for example, evaluating x-rays for cancer).
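To make the four components concrete, here is a minimal sketch in Python. Everything here is illustrative: the class names, the one-rule “inference engine” and the example rule are all hypothetical, and a real expert system would be far more sophisticated.

```python
from dataclasses import dataclass, field

@dataclass
class ExpertSystem:
    """Toy container for the four expert-system components described above."""
    user_interface: str                                 # (1) e.g. a frontend URL
    knowledge_base: dict = field(default_factory=dict)  # (3) structured/unstructured facts
    rules_base: list = field(default_factory=list)      # (4) human-authored (condition, conclusion) rules

    def infer(self, query: str) -> str:
        # (2) Toy inference engine: scan the rules base for a matching condition.
        for condition, conclusion in self.rules_base:
            if condition in query:
                return conclusion
        return "no matching rule"

# Usage: a one-rule diagnostic toy in the x-ray-evaluation spirit.
system = ExpertSystem(
    user_interface="https://example-ui.invalid",
    knowledge_base={"melanoma": "a type of skin cancer"},
    rules_base=[("irregular border", "flag image for oncologist review")],
)
print(system.infer("lesion with irregular border"))  # flag image for oncologist review
```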
So which of the four components can be placed on the Internet Computer? Quite frankly, three of them, though one with limitations. The user interface and rules base can easily be hosted on the Internet Computer, given their modest storage and computational requirements; many apps on the Internet Computer already store their user interfaces and large code bases on the IC. The knowledge base of a complex expert system may require terabytes of data, which is most likely beyond the current capabilities of the Internet Computer and would be better stored in web2 tech. However, simpler data sets (on the order of gigabytes) can be stored on the Internet Computer. Inference engines typically require large amounts of data held in RAM, which falls outside the current design of the Internet Computer. So for an expert system: the user interface and rules base can easily be deployed to the Internet Computer, the knowledge base can be deployed for simpler models, and the inference engine will most likely not be feasible in the current design (though much work is underway to enable inference engine deployment on the IC).
Alright, back to why companies should be interested in democratic governance of these AI components. I can think of two clear use cases:
Industry-wide AI: It seems reasonable for every industry to have multiple industry-specific AIs to help consumers acquire information, to evaluate safety across the industry, to track supply issues, and for a million other reasons. These industry-wide AI systems would acquire data from each industry participant and would produce an AI product that, in theory, should benefit every participant in the industry. However, whoever controls the code controls the AI, so trust could be an impediment to launching an industry-wide AI… for example, why would one company want to participate if a competitor could sway the AI towards an unfair advantage? The only way to solve this is to put some or most of the AI components into a system in which each participant has some form of control over the code. This would be one implementation of democratic governance over AI components; in this case, the democracy consists solely of the participants in an industry.
Crowdsourced AI: Crowdsourced AI offers a significant opportunity to unite a diverse group of people and resources to build and deploy an AI model. This group could combine diverse data sets to ensure a fair, balanced and unbiased AI model, could funnel financial or technical resources to a specific problem and/or could evenly distribute the gains created by the AI model. Given the complexity of AI models and the need for access to massive amounts of (sometimes proprietary) data, crowdsourced AI may be the only alternative to Big Tech AI. A democratic governance system over the AI components is one way to protect the project against malicious behavior, give participants control over the project and create incentive structures that fairly compensate participation in the project.
Both of these use cases resolve the trust issue inherent in “whoever controls the code, controls the AI” and, in addition, open up opportunities for global collaboration. Here are a few hypothetical examples of how this could play out:
The pharma industry wants to create an AI to advise doctors on the best course of action when a patient shows signs of an adverse event to a medical treatment. Each participating pharma company supplies data related to their drugs, symptoms of adverse events and the corrective actions taken by the doctor (training data). This data can live in the pharma companies’ legacy web2 tech stacks; however, the master file listing which data sets to use for training, testing or validation is stored on the Internet Computer. Data can only be added to (or removed from) this list by a majority vote of all participants. This prevents malicious behavior by any one participant, such as a pharma company supplying data that might skew the AI’s outputs in favor of its own drugs over a competitor’s.
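The majority-vote control over the master file can be sketched as follows. This is a minimal Python illustration of the voting rule only, not actual Internet Computer canister code; the participant names, file names and simple-majority threshold are all assumptions for the example.

```python
# Sketch of majority-vote control over a shared training-data manifest.
# A real deployment would run on-chain with authenticated, signed votes.

class DataManifest:
    def __init__(self, participants):
        self.participants = set(participants)
        self.datasets = set()

    def propose(self, action, dataset, votes_for):
        """Apply 'add' or 'remove' only on a strict majority of ALL participants."""
        if len(votes_for & self.participants) <= len(self.participants) / 2:
            return False  # proposal fails: no majority
        if action == "add":
            self.datasets.add(dataset)
        elif action == "remove":
            self.datasets.discard(dataset)
        return True

manifest = DataManifest(["PharmaA", "PharmaB", "PharmaC"])
# Two of three participants approve: the data set is admitted.
manifest.propose("add", "adverse_events_2023.csv", {"PharmaA", "PharmaB"})
# A single participant cannot force a removal.
print(manifest.propose("remove", "adverse_events_2023.csv", {"PharmaA"}))  # False
```

Counting votes against the full participant set (rather than against votes cast) is a deliberate choice here: abstaining is effectively a “no”, so no subgroup can quietly reshape the training data.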
A developer is interested in creating an open source app that helps gardeners improve their plants’ health via a diagnostic tool that leverages photos of the plants. The developer creates an AI project in which anyone can upload a photo of their own plants along with a diagnosis of what was wrong and the corrective action they took (training data). These uploads can be stored on the IC, and a governing body can determine whether each upload meets the data quality expectations and whether it should be included in the model. The governing body is spread across geographically dispersed locations to ensure the data set being built isn’t biased. In addition, the governing body can control a text file that contains the rules base. This allows global participation in the AI project.
Let’s say the above developer wants to turn this AI into a sellable product, but needs to raise capital. She could raise funds through a crowdfunding site (like Funded.app) and give voting rights over the AI model, the uploads and the rules base directly (and cryptographically) to the funders of the project. This would align the funders’ interests with the developer’s. In addition, since the training data is on the Internet Computer, the developer could set up an incentive model that rewards the people who provide the training data uploads with a share of the revenue generated by the model.
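One simple form such an incentive model could take is a pro-rata split: a fixed fraction of revenue is shared among contributors in proportion to their accepted uploads. The sketch below is purely illustrative; the contributor names, the 25% pool and the per-upload weighting are assumptions, not a prescribed design.

```python
# Sketch of a pro-rata revenue share for training-data contributors.

def revenue_shares(upload_counts, revenue, contributor_pool=0.25):
    """Split a fixed fraction of revenue across contributors by accepted-upload count."""
    pool = revenue * contributor_pool          # portion of revenue set aside for contributors
    total = sum(upload_counts.values())        # total accepted uploads
    return {who: pool * n / total for who, n in upload_counts.items()}

# Usage: $1000 of model revenue, with a 25% contributor pool.
shares = revenue_shares({"alice": 60, "bob": 40}, revenue=1000.0)
print(shares)  # {'alice': 150.0, 'bob': 100.0}
```

Because the upload records live on the Internet Computer, a tally like `upload_counts` could be derived directly from on-chain state rather than trusted off-chain bookkeeping.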
These are just a few examples off the top of my head; the surface area for innovation is quite large. History has always arced towards decentralization of control, and putting AI components on a blockchain is part of that evolution. The main question is: who will come build out that innovation on the Internet Computer?
One final note: I’ve intentionally called these “democratic governance bodies” rather than DAOs in order to appeal to non-web3 readers. However, everything above boils down to DAO-controlled AI components. I have a few more thoughts on DAO-controlled AI, but I’ll save those for a later episode. Later episodes of this series will also focus on digital ownership and the monetization of AI components like data sets. Until then, happy reading.
Kyle