Incentivized data provisioning
Our token distribution plan (see The token distribution and launch plan of the Fractal Protocol and Deep Dive: The Fractal Token Distribution and Release Schedule) promises the minting and distribution of an additional 400M FCL as block rewards. These will unlock over more than 50 years, following a decreasing inflationary curve that begins with the mainnet launch on Polkadot.
These new FCL will serve as incentives to help grow our data commons. 2,302,047 new FCL will be minted in month 1, an amount that decreases monthly (e.g. 2,004,048 in month 24):
  • at the average price of $0.85 at the time of writing, that yields the equivalent of ca. $2M every month
  • an optimistic rise to $5, perhaps due to increased traction and a successful launch, would yield the equivalent of $11.5M every month
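As a rough sketch, the schedule above can be modeled by fitting a curve to the two published data points. The geometric decay below is an assumption — the plan only specifies month 1, month 24, and a decreasing shape — and the function names are illustrative:

```python
# Sketch of the emission schedule, fitted to the two published points
# (2,302,047 FCL in month 1; 2,004,048 FCL in month 24). The geometric
# decay is an assumption; only these two points and a decreasing curve
# are given.

M1, M24 = 2_302_047, 2_004_048

# Monthly decay factor under the assumed geometric schedule.
r = (M24 / M1) ** (1 / 23)

def monthly_emission(month: int) -> float:
    """FCL minted in a given month (1-indexed), assuming geometric decay."""
    return M1 * r ** (month - 1)

def usd_value(month: int, fcl_price: float) -> float:
    """USD equivalent of one month's emission at a given FCL price."""
    return monthly_emission(month) * fcl_price

# At $0.85, month 1 is worth ~$1.96M; at $5.00, ~$11.5M.
```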
There are several ways these incentives could be distributed: for example, adding them to a treasury that disburses a fixed amount of FCL per data point provided until it runs out, or dividing every new batch of minted FCL among all data providers, either equally or pro-rata based on the amount of data contributed. Albeit simple, these mechanics are exploitable in that they encourage the provisioning of any data, regardless of value or accuracy.
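The pro-rata variant can be sketched in a few lines; the `distribute_pro_rata` name and the contribution counts are illustrative assumptions:

```python
# Minimal sketch of the pro-rata option: divide one month's freshly
# minted FCL among providers in proportion to data points contributed.
# The function name and `contributions` shape are illustrative.

def distribute_pro_rata(minted_fcl: float, contributions: dict) -> dict:
    """Split minted_fcl among providers proportionally to their data counts."""
    total = sum(contributions.values())
    if total == 0:
        return {provider: 0.0 for provider in contributions}
    return {p: minted_fcl * n / total for p, n in contributions.items()}

payouts = distribute_pro_rata(2_302_047, {"alice": 600, "bob": 300, "carol": 100})
# alice receives 60% of the batch, bob 30%, carol 10% -- note that
# nothing here checks whether the data is valuable or accurate.
```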

Incentivizing only good data

How could we avoid wasting incentives on spam and instead encourage only the provisioning of valuable, accurate data?
Transaction fees for writing to the blockchain will work as a spam prevention mechanism, but only if they're not overshadowed by the value of a guaranteed incentive. If they are, a malicious agent would still find it profitable to pay a small fee now in exchange for a reward later. If they aren't, then these fees act as a disincentive for data provisioning.
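This tension reduces to a one-line condition; the function name and the figures are illustrative assumptions:

```python
# Sketch of the fee/reward tension described above: paying a write fee
# is still profitable for a spammer whenever the guaranteed reward
# exceeds it, while a fee above the reward deters honest providers too.

def spam_is_profitable(write_fee_fcl: float, guaranteed_reward_fcl: float) -> bool:
    """A rational spammer submits junk data iff the reward beats the fee."""
    return guaranteed_reward_fcl > write_fee_fcl
```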
Our research might yield good insights here: Aurel expects the anti-rivalry of data to incentivize truth-telling. However, more work is needed to validate this assumption and to understand how the protocol could be designed to enable this equilibrium.
One way would be to distribute incentives only when data is purchased, possibly proportionally to the purchase value, thereby piggybacking on the price discovery mechanism provided by a buy-side market (see Potential business models for examples of what that market could be). However, for this to be possible, a market must not only exist but have sufficient depth. It would be good to sidestep this dependency on business development, as it's unlikely we'll generate enough traction in the data-buying market for this mechanism to function in time for launch.
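A minimal sketch of this purchase-triggered payout, assuming a simple mapping from each provider to the purchase value their data generated (names and shapes are illustrative):

```python
# Sketch of the purchase-triggered option: incentives are released only
# when data is actually bought, split among providers pro-rata by the
# purchase value attributable to their data. Names are assumptions.

def payout_on_purchase(reward_pool_fcl: float, purchase_values: dict) -> dict:
    """Split a reward tranche weighted by purchase value per provider;
    nothing is paid out until sales occur."""
    total = sum(purchase_values.values())
    if total == 0:
        return {}  # no market activity, no incentives released
    return {p: reward_pool_fcl * v / total for p, v in purchase_values.items()}
```

Note the dependency this makes explicit: with an empty or shallow market, `total` stays near zero and the mechanism simply never pays out.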
For similar reasons, it's probably unwise to rely only on a curation market where data providers must stake FCL to be eligible for incentives, and stand to have those FCL slashed if an incentivized curator successfully challenges their data accuracy in a vote among token holders. It's not clear how curators could ascertain data accuracy, and unlikely that we'd drum up enough curation activity for datasets as large as we're planning to gather.
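For concreteness, the stake/challenge/slash flow might look like the following sketch; the class and method names, and where slashed funds are routed, are assumptions:

```python
# Rough sketch of the curation-market flow: providers stake FCL to be
# reward-eligible, and a successful challenge (upheld by a token-holder
# vote) slashes the stake. All names are illustrative assumptions.

class CurationMarket:
    def __init__(self, min_stake: float):
        self.min_stake = min_stake
        self.stakes = {}  # provider -> staked FCL

    def stake(self, provider: str, amount: float) -> None:
        self.stakes[provider] = self.stakes.get(provider, 0.0) + amount

    def is_eligible(self, provider: str) -> bool:
        return self.stakes.get(provider, 0.0) >= self.min_stake

    def challenge(self, provider: str, vote_passed: bool) -> float:
        """Slash and return the provider's stake if the vote upholds the
        challenge; e.g. routed to the challenger or a treasury."""
        if vote_passed:
            return self.stakes.pop(provider, 0.0)
        return 0.0
```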
Another option, also highly dependent on coordination and as such likely to hamper growth, is to incentivize only data that is attested by multiple parties. For example, my claim that I visited a website would have to be backed by the website's claim that I did so. This is also exploitable, but it puts some coordination burden on malicious actors. This mechanism could be enhanced by relying only on a set of authorized second attesters determined through protocol governance.
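A minimal sketch of this attestation rule, including the optional governance-approved attester set (all names are assumptions):

```python
# Sketch of the multi-party attestation rule: a data point is
# reward-eligible only if someone other than the claimant backs the
# claim, optionally restricted to a governance-approved attester set.

def is_attested(claimant: str, attesters: set, authorized: set = None) -> bool:
    """True if at least one party other than the claimant (and, if
    given, drawn from the authorized set) attests the claim."""
    others = attesters - {claimant}
    if authorized is not None:
        others &= authorized
    return len(others) > 0
```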
Initially, our best option might be to require data providers to undergo identity verification to be eligible for incentives, the assumption being that non-anonymous actors are less likely to behave maliciously and that we can blacklist malicious individuals. This is also probably the simplest mechanism to implement. Real and ideological privacy issues would likely act as a disincentive, so this mechanism might be better seen as a temporary crutch to use before the buy-side market fully forms. Fractal could act as the only authorized identity verifier initially, and additional identity verifiers could be onboarded through governance.
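This identity-gated eligibility check could be sketched as follows, with Fractal as the initial sole verifier and further verifiers onboarded later; the class shape and names are assumptions:

```python
# Sketch of the identity-gated option: only verified, non-blacklisted
# providers are reward-eligible. Fractal starts as the sole verifier;
# more verifiers would be added via governance. Names are illustrative.

class IdentityRegistry:
    def __init__(self):
        self.verifiers = {"fractal"}  # initial sole authorized verifier
        self.verified = {}            # provider -> verifier who vouched
        self.blacklist = set()        # providers caught acting maliciously

    def add_verifier(self, verifier: str) -> None:
        """Onboard a new verifier (via governance in the real protocol)."""
        self.verifiers.add(verifier)

    def verify(self, provider: str, verifier: str) -> None:
        if verifier in self.verifiers:  # ignore unauthorized verifiers
            self.verified[provider] = verifier

    def is_eligible(self, provider: str) -> bool:
        return provider in self.verified and provider not in self.blacklist
```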