OWF - Simulations for Science Token Communities: Model Summaries

Hi everyone. Here are the summaries of all the models I’ve developed during my Open Web Fellowship on Simulations for Science Token Communities.

1 Baseline Model

This model is based on the current status quo of scientific funding and value flow. It consists of researchers competing for funding from an exhaustible funding agency via research proposals. The research grant is then spent on (1) research costs (e.g. equipment and data) and (2) getting the research published in a journal.

In this model, the knowledge curators (journals) lock most of the value from the research, and they have full control over who gets access to the knowledge assets that have been published. This in turn means that researchers who have been given a grant in the past have a much higher chance of receiving grants in the future. While this advantage depends on multiple factors (e.g. expertise in the field, the ability to produce high-quality proposals, reputation), it is modelled in the simulation by a single variable called knowledge_access.
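
To make this concrete, here is a minimal sketch (not the actual simulation code; the names and numbers are mine) of how knowledge_access can bias grant selection so that past winners keep winning:

```python
import random

# Hypothetical sketch: grant selection weighted by knowledge_access, so that
# winning once makes a researcher more likely to win again.

def pick_winner(researchers):
    """Select one proposal to fund, weighted by knowledge_access."""
    weights = [r["knowledge_access"] for r in researchers]
    winner = random.choices(researchers, weights=weights, k=1)[0]
    # Winning a grant (and publishing) raises future access to knowledge,
    # which is what produces the winner-takes-all dynamic described below.
    winner["knowledge_access"] += 1
    return winner

researchers = [{"name": f"R{i}", "knowledge_access": 1} for i in range(5)]
for _ in range(20):
    pick_winner(researchers)
print({r["name"]: r["knowledge_access"] for r in researchers})
```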

A schema of the baseline model (current scientific research pipeline)

Apart from knowledge curators locking value, there is a significant loss of value and time due to the lack of incentive to share research data and collaborate (note that research papers don't usually include all the data that has been collected). As a result, if the same dataset is useful for two independent research projects, it has to be collected twice: researchers have no incentive to share their work with each other, and their access to other people's research through the knowledge curators is limited.

[Plots: Knowledge access index; Number of proposals funded; Number of proposals; University OCEAN balance]

The plots above show how this model leads to a winner-takes-all system in which the first researcher to win a grant gains a significant competitive advantage over everyone else. Also, since the university (or any other grant-funding agency) is separate from the knowledge curators, no value flows back to it, so it can only fund a limited number of research projects before its available funding runs out.

Additional plots from other simulations:

[Plot: Baseline Grant Funding Model: Value Flow (Grant Funding Treasury USD vs. Knowledge Curators USD)]
[Plot: Baseline Grant Funding Model: Number of Research Proposals Funded (per researcher)]

2 Profit-Sharing Models (simple, mult, mult-time)

2.1 Simple Profit Sharing Model

This model is the simplest representation of how a web3 scientific ecosystem could function. Essentially, it is a variation of the Web3 Sustainability Loop: researchers still compete for funding from an exhaustible funding agency, but instead of publishing their results to centralized knowledge curators, they publish to the web3 knowledge market, which allows them to retain ownership of their data, articles, algorithms, etc. whilst still sharing their work with the scientific community.

Schema of the web3 profit sharing model

Web3 sustainability loop

The knowledge market allows researchers to publish their results at any stage of their research, so naturally, it is also the perfect place to get all the necessary resources for research.
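
To make the loop concrete, here is a minimal sketch of a knowledge-market sale, assuming a fixed fee fraction flowing back to the funding treasury (the token in the model is OCEAN; the fee value and names below are made up):

```python
# Sketch of the profit-sharing loop: the seller keeps most of each sale,
# and a fee is routed back to the DAO treasury, which funds future grants.

FEE_TO_TREASURY = 0.1  # assumed fee fraction, not taken from the model

def buy_asset(price, seller, treasury):
    """Buyer pays `price`; the seller keeps most of it, the treasury takes a fee."""
    fee = price * FEE_TO_TREASURY
    seller["OCEAN"] += price - fee
    treasury["OCEAN"] += fee

treasury = {"OCEAN": 1000.0}
researcher = {"OCEAN": 0.0}
for _ in range(10):
    buy_asset(5.0, researcher, treasury)
print(researcher["OCEAN"], treasury["OCEAN"])
```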

[Plots: DAO Treasury OCEAN balance; Researcher OCEAN balance]

2.2 Profit-sharing with multiple proposals funded at a time

From the simulation with 100 researchers, it seems clear that funding one proposal at a time is not realistic in a scientific community larger than a couple of researchers. This model therefore allows an arbitrary number of proposals to be funded at a time.
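
Roughly, the funding step changes from picking a single winner to picking the top few proposals the treasury can still afford, as in this sketch (parameter names are mine, not the simulation's):

```python
# Sketch of funding several proposals per round: the top `n_funded`
# proposals by evaluation score are funded while the treasury can cover
# their grants.

def fund_round(proposals, n_funded, treasury):
    ranked = sorted(proposals, key=lambda p: p["score"], reverse=True)
    funded = []
    for p in ranked[:n_funded]:
        if treasury["OCEAN"] >= p["grant"]:
            treasury["OCEAN"] -= p["grant"]
            funded.append(p)
    return funded

treasury = {"OCEAN": 100.0}
proposals = [{"score": s, "grant": 30.0} for s in (0.9, 0.7, 0.8, 0.4, 0.6)]
print(len(fund_round(proposals, n_funded=3, treasury=treasury)))  # 3
```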

[Plots: Number of proposals funded; Number of proposals; DAO Treasury OCEAN balance; Staker X knowledge market OCEAN balance (log scale)]

As expected, this model yields similar results to the simple profit-sharing model, only this time the treasury is depleted much sooner. In the plots above, 5 researchers are competing for 3 proposals and all researchers are funded at least once. It is no surprise that the profit-sharing aspect of the model doesn’t make much of a difference since all the funds are disbursed too quickly and the fees are too small.

2.3 Profit-sharing with rolling-basis funding

With the introduction of multiple proposals at a time, the next reasonable step is to remove the fixed funding periods. This was achieved by adding a new time parameter to each proposal indicating how long the research project is going to take. This parameter is also used by the Treasury when evaluating proposals (shorter projects are favored over longer ones). Now, at the start of the simulation, a number of projects are funded, and as they finish at different times, new projects are funded immediately.
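
The mechanics can be sketched roughly as follows (a simplified illustration, not the simulation code; the "quality" term is a stand-in for the rest of the evaluation):

```python
import heapq

# Rolling-basis funding sketch: each proposal carries a duration, shorter
# projects score higher, and a new project is funded the moment a running
# one finishes.

def score(proposal):
    # Shorter projects are favoured; "quality" stands in for expertise,
    # proposal quality, reputation, etc.
    return proposal["quality"] / proposal["duration"]

def run_rolling(proposals, slots):
    now = 0
    running = []  # min-heap of (finish_time, proposal_name)
    backlog = sorted(proposals, key=score, reverse=True)
    while backlog or running:
        while backlog and len(running) < slots:
            p = backlog.pop(0)
            heapq.heappush(running, (now + p["duration"], p["name"]))
        now, name = heapq.heappop(running)
        print(f"t={now}: {name} finished; a funding slot opens up")

proposals = [{"name": f"P{i}", "duration": d, "quality": 1.0}
             for i, d in enumerate([3, 5, 2, 4, 6])]
run_rolling(proposals, slots=3)
```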

[Plots: Number of proposals funded; Assets in knowledge market; Knowledge access index; Researcher OCEAN balance; Staker X knowledge market OCEAN balance (log scale); DAO Treasury OCEAN balance]

Note that since multiple proposals are funded at a time, the Treasury is still depleted quite quickly; however, we can see that all researchers have been funded and have been receiving appropriate rewards for their research. In this model, the knowledge access index is no longer in sync across researchers. The reason is that when a researcher is not funded, there are suddenly multiple research projects they can buy into, giving them a higher knowledge access index than if they had been funded. This means that once a project finishes, the researcher that wasn't funded has a higher chance of getting funded than the researcher who was funded previously, and each time a researcher is not funded, their chances increase significantly. In other words, in the simulation shown above with 5 researchers and 3 proposals at a time, the probability that any researcher would never get funded is very close to 0.
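
A quick back-of-the-envelope check illustrates the last point, under the big simplification that the funded proposals are drawn uniformly at random each round (in the actual model a skipped researcher's knowledge_access grows, so the real chance of never being funded is smaller still):

```python
# With 5 researchers and 3 proposals funded per round, a fixed researcher
# is skipped with probability 2/5 per round under uniform random selection.

p_skip = 2 / 5
for rounds in (5, 10, 20):
    print(rounds, p_skip ** rounds, 5 * p_skip ** rounds)
    # columns: rounds, P(one fixed researcher never funded), union bound over all 5
```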

Limitations

This model inevitably has some limitations. Firstly, it is hard to say whether having a guarantee of being funded at some point is desirable. While it ensures fair competition, we are also assuming that all researchers are doing research of comparable quality and importance and that no researchers have malicious incentives.

3 Public Funding Model

Moving on from the simple web3 profit-sharing model, I now introduce the public funding model. This model takes what works from the profit-sharing model but applies it to an open science ecosystem in which funding is only contributed towards public research, i.e. research projects that don't belong to any individual but are available for the entire community to use. Note that this does not mean the knowledge assets produced by these research projects are free (they are quite cheap though); it only means the assets are owned by the community. Therefore, whenever somebody buys access to public data, all of the tokens spent go to the DAO Treasury (in future variations of this model, the tokens might be distributed across multiple stakeholders within the ecosystem).
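
The payment flow for community-owned assets can be sketched as follows (the names and numbers are illustrative only, not the simulation code):

```python
# Public assets send the full sale price to the DAO Treasury; a privately
# owned asset pays its individual owner instead.

def buy_asset(asset, price, buyer, treasury):
    buyer["OCEAN"] -= price
    if asset["public"]:
        treasury["OCEAN"] += price            # all tokens flow to the DAO
    else:
        asset["owner"]["OCEAN"] += price      # private asset: owner is paid

treasury = {"OCEAN": 0.0}
alice = {"OCEAN": 100.0}
public_dataset = {"public": True, "owner": None}
buy_asset(public_dataset, 2.0, alice, treasury)
print(alice["OCEAN"], treasury["OCEAN"])  # 98.0 2.0
```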

Schema of the public/private open science model

In this model, knowledge assets are split into three categories (data, algorithms, and compute services), and each researcher is assigned one of these asset categories to produce. These new researchers are referred to as Data Providers, Algorithm Providers, and Compute Providers, respectively. In addition to setting the output knowledge asset of a specific researcher, these types also determine the assets that a researcher might need to buy. For instance, an Algorithm Provider could be a theorist trying to find a new pattern in other people's data, so they would make use of the knowledge market to buy the data they need. On the other hand, a Data Provider might either collect new data themselves or transform their existing data with an algorithm from the marketplace. Lastly, the Compute Provider can be thought of as a private research organization that has collected a very large dataset (so large it would not be efficient to store on IPFS), so it allows other people to run computations on their data as a cloud service.
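
A rough way to picture this split (not the actual simulation code) is a mapping from researcher type to the asset it produces and the assets it may buy:

```python
# Hypothetical mapping of researcher types, following the description above.

RESEARCHER_TYPES = {
    "DataProvider": {
        "produces": "data",
        "may_buy": ["algorithm"],   # e.g. to transform an existing dataset
    },
    "AlgorithmProvider": {
        "produces": "algorithm",
        "may_buy": ["data"],        # e.g. a theorist mining others' data
    },
    "ComputeProvider": {
        "produces": "compute",
        "may_buy": [],              # sells computations on its own large dataset
    },
}

def shopping_list(researcher_type):
    """Assets a researcher of this type might look for in the knowledge market."""
    return RESEARCHER_TYPES[researcher_type]["may_buy"]
```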

This model has a number of fixed parameters, such as the number of researchers, the prices of specific knowledge assets in the marketplaces (the public marketplace is cheaper than the private one, and compute services are more expensive than access to data or algorithms), and the costs of publishing (publishing to the private market is more expensive than to the public one).
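
For illustration, the parameter set could look something like this (the numbers are made up; only the orderings described above are meant to hold):

```python
# Illustrative fixed parameters for the public funding model.
PARAMS = {
    "n_researchers": 15,
    # asset prices: public market cheaper than private, compute most expensive
    "price": {
        "public":  {"data": 1.0, "algorithm": 1.0, "compute": 5.0},
        "private": {"data": 4.0, "algorithm": 4.0, "compute": 20.0},
    },
    # publishing fees: publishing privately costs more than publicly
    "publish_fee": {"public": 0.5, "private": 2.0},
}

# sanity checks on the intended orderings
assert all(PARAMS["price"]["public"][a] < PARAMS["price"]["private"][a]
           for a in ("data", "algorithm", "compute"))
assert PARAMS["publish_fee"]["public"] < PARAMS["publish_fee"]["private"]
```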

[Plots: DAO Treasury OCEAN balance; Number of proposals funded; Assets in knowledge market; Private vs. public market OCEAN (log scale); Total fees collected through private vs. public market]

This model has so far been the most effective in terms of longevity: after 30 years, the treasury is still not depleted. However, it has some limitations that will be addressed in newer versions:

  • private agents publish/buy assets until they run out of funds, which is not realistic
  • there is a fixed number of researchers, but in reality we should expect a growth of the community
  • we are not tracking most of the high-resolution metrics that this model includes (like the performance of different types of researchers)

Very interesting modelling. Have you thought about how you might go about modelling the quality of research outputs by different groups? Perhaps a weighting assigned to transaction fees?


That’s an interesting idea. I’m not sure how we could assign weights to transaction fees based on the quality of research outputs, since quality is something that is determined in retrospect. Perhaps the transaction fees could be altered when people access the knowledge assets, although I am not quite sure what the purpose of that would be.
Would love to hear your thoughts on modelling the quality of research outputs by different groups. So far that has been reflected in the proposals (and it’s mostly random), meaning projects that don’t meet a high enough standard will not get funded in the first place.


A dynamically varying fee based on the demand for the research outputs sounds cool - I suppose the idea would be to use popular research to generate more revenue, helping to fund further research?

I’ve lost the citation, but I recently read that the allocation of research funds is essentially random currently anyway! Any improvement on this would be good, of course. As a DAO, we could do it based on votes from different stakeholders. Votes could be weighted based on background so that peers in the field have more say than someone more removed.

Although the above was described for the proposals, a similar thing could also be implemented alongside traditional measures of impact, like citations and altmetric scores, to judge the quality of outputs.
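
Something like this toy weighting could capture that (all weights are made up):

```python
# Toy weighted-vote sketch: a vote counts for more when the voter's
# background matches the proposal's field.

def weighted_score(votes, proposal_field):
    """votes: list of (voter_field, vote_value) with vote_value in [0, 1]."""
    total = weight_sum = 0.0
    for voter_field, value in votes:
        weight = 2.0 if voter_field == proposal_field else 1.0  # peers count double
        total += weight * value
        weight_sum += weight
    return total / weight_sum if weight_sum else 0.0

votes = [("genomics", 0.9), ("genomics", 0.8), ("economics", 0.3)]
print(weighted_score(votes, "genomics"))  # peers dominate the outcome
```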

These are all just random thoughts; just writing as I’m thinking ATM!
