Hi everyone. Here are the summaries of all the models I’ve developed during my Open Web Fellowship on Simulations for Science Token Communities.
This model is based on the current status quo of scientific funding and value flow. It consists of researchers competing for funding from an exhaustible funding agency via research proposals. The research grant is then spent on (1) research costs (e.g. equipment, data, etc.) and (2) fees to get the research published in a journal.
In this model, the knowledge curators (journals) lock most of the value from the research and have full control over who gets access to the published knowledge assets. This in turn means that researchers who have received a grant in the past have a much higher chance of receiving grants in the future. While this advantage depends on multiple factors (e.g. expertise in the field, the ability to produce high-quality proposals, reputation), it is modelled by a single variable in the simulation.
A schema of the baseline model (current scientific research pipeline)
Apart from knowledge curators locking value, there is a significant loss in value and time due to a lack of incentive to share research data and collaborate (note that research papers don't usually include all the data that has been collected). As a result, if the same dataset is useful for two independent research projects, it has to be collected twice, since researchers have no incentive to share their work with each other and only limited access to other people's research via knowledge curators.
The plots above show how this model leads to a winner-takes-all system in which the first researcher to win a grant gains a significant competitive advantage over everyone else. Also, since the university (or any other grant-funding agency) is separate from the knowledge curators, no value flows back to it, so it can only fund a limited number of research projects depending on the available funding.
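The winner-takes-all dynamic can be sketched in a few lines, assuming the model's single variable is a reputation-like score (the name `reputations` is mine, not the simulation's) and that grants are awarded with probability proportional to it:

```python
import random

# Hypothetical sketch of the rich-get-richer loop: the variable name and the
# "+1 per grant" update are assumptions, not the simulation's actual rule.
rng = random.Random(42)
reputations = {"A": 1.0, "B": 1.0, "C": 1.0}

for _ in range(20):  # 20 funding rounds
    names = list(reputations)
    # grant is awarded with probability proportional to reputation
    winner = rng.choices(names, weights=[reputations[n] for n in names], k=1)[0]
    reputations[winner] += 1.0  # a funded project boosts future win probability
```

After a few rounds, early winners accumulate a disproportionate share of grants, which is the feedback loop the plots illustrate.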
Additional plots from other simulations:
Baseline Grant Funding Model: Value Flow
Baseline Grant Funding Model: Number of Research Proposals Funded
This model is the simplest representation of how a web3 scientific ecosystem could function. Essentially, it is a variation of the Web3 Sustainability Loop: researchers still compete for funding from an exhaustible funding agency, but instead of publishing their results to centralized knowledge curators, they publish to the web3 knowledge market, which lets them retain ownership of their data, articles, algorithms, etc. whilst still sharing their work with the scientific community.
Schema of the web3 profit sharing model
The Web3 Sustainability Loop
The knowledge market allows researchers to publish their results at any stage of their research, so naturally, it is also the perfect place to get all the necessary resources for research.
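As a rough illustration, the publish-and-buy flow of the knowledge market might look like the following sketch (class, method, and asset names are mine, not the simulation's); the key point is that the purchase price goes to the asset's owner rather than to a centralized curator:

```python
class KnowledgeMarket:
    """Minimal sketch of a web3 knowledge market (all names illustrative)."""

    def __init__(self):
        self.assets = {}  # asset_id -> {"owner": ..., "price": ...}

    def publish(self, asset_id, owner, price):
        """A researcher lists an asset (data, article, algorithm, ...) at any
        stage of their research and retains ownership of it."""
        self.assets[asset_id] = {"owner": owner, "price": price}

    def buy(self, asset_id, buyer, balances):
        """The purchase price flows straight to the owning researcher,
        not to a centralized curator."""
        asset = self.assets[asset_id]
        balances[buyer] -= asset["price"]
        balances[asset["owner"]] += asset["price"]

balances = {"alice": 100.0, "bob": 100.0}
market = KnowledgeMarket()
market.publish("dataset-1", owner="alice", price=10.0)
market.buy("dataset-1", buyer="bob", balances=balances)
```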
From the simulation with 100 researchers, it is clear that funding one proposal at a time is not realistic in a scientific community larger than a couple of researchers. This model enables an arbitrary number of proposals to be funded at a time.
As expected, this model yields similar results to the simple profit-sharing model, only this time the treasury is depleted much sooner. In the plots above, five researchers compete for three proposal slots, and all researchers are funded at least once. It is no surprise that the profit-sharing aspect of the model doesn't make much of a difference, since all the funds are disbursed too quickly and the fees are too small.
With the introduction of multiple simultaneous proposals, the next reasonable step is to remove the fixed funding periods. This was achieved by adding a new time parameter to the proposal that researchers submit, indicating how long their research project is going to take. This time parameter is also used when the Treasury evaluates proposals (shorter projects are favored over longer ones). Now, at the start of the simulation, a number of projects are funded, and as they finish at different times, new projects are funded immediately.
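The duration-aware funding loop can be sketched as follows (the field names and the 1/duration score are illustrative; the source only says that shorter projects are favored and that finished slots are refilled immediately):

```python
from dataclasses import dataclass
import heapq

@dataclass
class Proposal:
    researcher: str
    amount: float
    duration: int  # the new time parameter: expected project length

def score(p):
    # illustrative evaluation: shorter projects score higher
    return 1.0 / p.duration

def run(proposals, slots):
    """Fund up to `slots` projects at once; whenever one finishes,
    immediately fund the best remaining proposal."""
    pending = sorted(proposals, key=score, reverse=True)
    active = []  # min-heap of (finish_time, researcher)
    t = 0
    funded_order = []
    while pending or active:
        while pending and len(active) < slots:
            p = pending.pop(0)
            heapq.heappush(active, (t + p.duration, p.researcher))
            funded_order.append(p.researcher)
        t, _ = heapq.heappop(active)  # advance time to the next finish
    return funded_order

order = run([Proposal("A", 10.0, 2), Proposal("B", 10.0, 5), Proposal("C", 10.0, 1)],
            slots=2)
```

With these inputs the shortest project ("C") is funded first, and the longest proposal ("B") only enters once a slot frees up.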
Note that since multiple proposals are funded at a time, the Treasury is still depleted quite quickly; however, we can see that all researchers have been funded and have received appropriate rewards for their research. In this model, the knowledge access index is no longer in sync across researchers. The reason is that when a researcher is not funded, there are suddenly multiple research projects they can buy into, giving them a higher knowledge access index than if they had been funded. This means that once a project finishes, the researcher who wasn't funded has a higher chance of getting funded than the researcher who was funded previously, and each time a researcher misses out on funding, their chances increase significantly. In other words, in the simulation shown above with 5 researchers and 3 proposals at a time, the probability that any researcher never gets funded is very close to 0.
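One plausible reading of this mechanism, written as a hypothetical update rule (the index name and the exact increments are assumptions; the source only describes the effect):

```python
# Hypothetical update: a funded researcher gains access to their own project,
# while an unfunded researcher can buy into every concurrently funded project.
def update_access(index, funded, projects):
    for researcher in index:
        if researcher in funded:
            index[researcher] += 1              # access to their own project
        else:
            index[researcher] += len(projects)  # buys into each funded project
    return index

index = {"A": 0, "B": 0, "C": 0}
update_access(index, funded={"A", "B"}, projects=["proj_A", "proj_B"])
# the unfunded researcher "C" ends up with a higher index than "A" or "B"
```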
This model inevitably has some limitations. Firstly, it is hard to say whether a guarantee of eventually getting funded is desirable. While it ensures fair competition, it also assumes that all researchers are doing research of comparable quality and importance and that none have malicious incentives.
Moving on from the simple web3 profit sharing model, I now introduce the public funding model. This model takes what works from the profit sharing model but applies it to an open science ecosystem in which funding is contributed only towards public research, i.e. research projects that don't belong to any individual but are available for the entire community to use. Note that this does not mean the knowledge assets produced by these projects are free (they are quite cheap, though); it only means the assets are owned by the community. Therefore, whenever somebody buys access to public data, all of the tokens spent go to the DAO Treasury (in future variations of this model, the tokens might be distributed across multiple stakeholders within the ecosystem).
Schema of the public/private open science model
In this model, knowledge assets are split into three categories: data, algorithms, and compute services. Each researcher is assigned one of these asset categories to produce; these researchers are referred to as Data Providers, Algorithm Providers, and Compute Providers, respectively. In addition to setting a researcher's output knowledge asset, the type also determines the assets the researcher might need to buy. For instance, an Algorithm Provider could be a theorist trying to find a new pattern in other people's data, so they would use the knowledge market to buy the data they need. A Data Provider, on the other hand, might either collect new data themselves or transform their existing data with an algorithm from the marketplace. Lastly, a Compute Provider can be thought of as a private research organization that has collected a very large dataset (so large it would not be efficient to store on IPFS), so it lets other people run computations on its data as a cloud service.
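The three provider types and their produce/consume relationships can be captured in a small mapping (this paraphrases the prose above; the exact names and rules are illustrative):

```python
# Illustrative mapping of the three researcher types; the production and
# consumption rules are a sketch of the behavior described in the text.
PROVIDER_TYPES = {
    "DataProvider":      {"produces": "data",       "may_buy": ["algorithms"]},
    "AlgorithmProvider": {"produces": "algorithms", "may_buy": ["data"]},
    "ComputeProvider":   {"produces": "compute",    "may_buy": []},
}

def output_asset(provider_type):
    """Return the knowledge asset a researcher of this type produces."""
    return PROVIDER_TYPES[provider_type]["produces"]
```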
This model has a number of fixed parameters, such as the number of researchers, the prices of specific knowledge assets in the marketplaces (the public marketplace is cheaper than the private one, and compute services are more expensive than access to data or algorithms), and the costs of publishing (publishing to the private market is more expensive than publishing to the public one).
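A sketch of such a parameter set, where the concrete numbers are assumptions and only the inequalities come from the model description:

```python
# Concrete values are made up; only the price orderings are from the text.
ASSET_PRICES = {
    "public":  {"data": 1.0, "algorithm": 1.0, "compute": 3.0},
    "private": {"data": 2.0, "algorithm": 2.0, "compute": 6.0},
}
PUBLISHING_COST = {"public": 0.5, "private": 1.5}

# Encoded constraints: public cheaper than private, compute most expensive,
# and private publishing more expensive than public publishing.
assert all(ASSET_PRICES["public"][a] < ASSET_PRICES["private"][a]
           for a in ASSET_PRICES["public"])
assert ASSET_PRICES["public"]["compute"] > ASSET_PRICES["public"]["data"]
assert PUBLISHING_COST["public"] < PUBLISHING_COST["private"]
```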
This model has so far been the most effective in terms of longevity: after 30 years, the treasury is still not depleted. However, it has some limitations that will be addressed in newer versions:
- private agents publish/buy assets until they run out of funds, which is not realistic
- there is a fixed number of researchers, but in reality we should expect a growth of the community
- we are not tracking most of the high-resolution metrics this model includes (e.g. the performance of different types of researchers)