Science Token Engineering Part 1: The Problem with Science

This is the first article in a series of blog posts discussing science token engineering, the focus of my Open Web Fellowship at OpSci.

First, what is science token engineering? I highly recommend this excellent blog post by Trent McConaghy as an overview of this exciting, rapidly growing field in the Web3 space.

Essentially, science token engineering is the practice of applying token engineering principles to scientific systems. In particular, we look at science from the perspective of value flows: where does the value originate, what happens to it in the system, and where does it end up? This lens helps us identify inefficiencies in the system that we can directly address with alternative value flows, which are then verified in simulation. The software used for simulating scientific value flows is called DARC-SPICE, and if you’re interested, I highly encourage you to check out this technical excursion of the different simulations.

This will all make more sense down the line, so let’s begin with the main question: What does the current science value flow look like?

The Science Value Flow

To answer this question, imagine you’re a research scientist applying for funding. You submit a proposal for a research project in the hopes of receiving a grant either from the institution you work at or from an external agency. Once you receive that funding, you’ll use it to cover the costs of getting all the necessary resources for your research (data, equipment, personnel, etc.), but part of that funding will inevitably go towards publishing your results in a scientific journal to disseminate your findings to the larger scientific community.

Going back to the original question of value flow, we can see:

  1. all value originated in some grant funding agency (fully monetary value),
  2. which was then transformed by the researchers into new knowledge (intellectual value),
  3. which was finally captured within a knowledge curator, in this case a scientific journal (both intellectual and monetary value).

Omitting the inevitable leakage of value caused by uncontrollable variables, it’s clear that the science value flow is strictly linear: it starts in one centralized place (a grant funding agency) and ends in another (a scientific journal).
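The three-step flow above can be sketched as a toy simulation. This is only a minimal illustration of the linear model, not DARC-SPICE or any OpSci tooling; all names and the cost/fee splits are assumptions chosen for the example.

```python
# Toy model of the linear science value flow: funder -> researcher -> journal.
# Agent names and the share parameters are illustrative assumptions only.

def simulate_linear_flow(grant: float, research_cost_share: float, publishing_share: float):
    """Trace monetary value through the three stages of the linear flow."""
    funder = {"name": "funding_agency", "value": grant}
    researcher = {"name": "researcher", "value": 0.0}
    journal = {"name": "journal", "value": 0.0}

    # Step 1: the grant moves from the funding agency to the researcher.
    researcher["value"] += funder["value"]
    funder["value"] = 0.0

    # Step 2: part of the grant is spent on research inputs (data, equipment,
    # personnel) and leaves the system; part goes to the journal as fees.
    spent = researcher["value"] * research_cost_share
    fees = researcher["value"] * publishing_share
    researcher["value"] -= spent + fees
    journal["value"] += fees

    return funder, researcher, journal

funder, researcher, journal = simulate_linear_flow(
    grant=100_000, research_cost_share=0.8, publishing_share=0.05
)
print(funder["value"], researcher["value"], journal["value"])  # 0.0 15000.0 5000.0
```

Note that value only ever moves forward: nothing flows back from the journal or the researcher to the funder, which is exactly the linearity the next section takes issue with.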

Figure 1. A schema of the science value flow model

Figure 1 shows this linear value flow. Now, you might wonder whether there is anything wrong with this model. After all, this is how scientific research has been conducted for almost three centuries. I outline the problems with this model below.

Problems with the Baseline Science Model

The centralization of value, both at the level of funding agencies and at the level of knowledge curators, can (and does) introduce a number of inefficiencies. For instance, if I am a researcher who has spent years collecting valuable data, that data probably doesn’t belong to me, so I have little to no control over what happens to it [1,2]. Furthermore, with the little control I do have, I am disincentivized from sharing that data: it represents my competitive advantage in winning future grants and in earning recognition within the scientific community when I use it to support or falsify evidence-based claims.

And what if somebody wants to do research on data that has already been collected, but isn’t available to use? The data must be collected again, consuming resources that could have gone toward processing the existing data. Essentially, the current flow of value does not incentivize collaboration and data sharing, which is inefficient.

In summary, we have so far identified the following problems with the current science value flow:

  • linear flow of value,
  • value is centralized, and
  • research is dependent on centralized agencies.


These three points outline the motivation behind science token engineering, which seeks to solve these issues by designing a new community where the incentives of all participants are aligned to maximize the efficiency of scientific research and the fairness of value distribution. Thanks to the incredible world of Web3, scientists can be free of their dependence on centralized agencies, retain ownership of the work they do, and receive fair rewards based on their contributions. Together, we’ll explore how we can reach this goal. Stay tuned for Part 2.

Join Open Science

Building a new open science ecosystem that solves the problems outlined above is not going to happen overnight. If you want to join this exciting space, check out OpSci, say hello on Discord, and consider applying for an Open Web Fellowship.

  1. Petsko GA. Who owns the data? Genome Biol. 2005;6:107.

  2. Responsible Conduct of Research: Data Acquisition and Management. [cited 20 Jan 2022].
