Optimizing COMP Rewards
As Compound pioneered liquidity mining, it is only natural to ask whether the distribution of rewards can be optimized. Why might the community decide that this makes sense? First, as often happens with market-maker and liquidity incentive programs at centralized venues, it is likely that an issuer 'overshoots' and over-incentivizes either the supply or the demand side of a two-sided market. This problem isn't unique to financial asset incentives: even ride-sharing marketplaces like Uber and Lyft are notorious for losing money through excess payments to both the demand side (riders) and the supply side (drivers). In Compound's case, COMP emissions are used for a multitude of purposes, of which liquidity incentives are only one.
Given the fixed, finite supply of COMP, overspending on any one of these purposes reduces the protocol's ability to spend on the others. As such, it is prudent for the protocol to optimize how much it spends on liquidity so that other uses of COMP are not crowded out. Moreover, liquidity providers already earn yield from borrowing activity, and it is not clear how much of that liquidity would leave without active experiments. At the same time, liquidity incentives are one of the best ways to decentralize a protocol and maximize the number of active token holders, which should be taken into account when adjusting rewards.
Let's do some back-of-the-envelope math to compute how much the protocol is currently spending on liquidity:
- 7-day moving average of Ethereum blocks produced per day: ~6,514.29
- Price of COMP: ~$150 (USD)
- Total COMP emitted per block (supply and borrow sides combined): 0.352 COMP
This means that the protocol is spending roughly $344,000 per day, or about $125.5M per year, on liquidity incentives. Spending ~$125M per year to attract users is quite expensive! If we approximate the number of Compound users with an optimistic estimate of active addresses (~300K), the protocol is spending roughly $418 per user per year on customer acquisition cost (CAC). These are numbers that would scare any investor outside of cryptocurrency!
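As a sanity check, these figures can be reproduced in a few lines of Python; the inputs are simply the estimates quoted above:

```python
# Rough estimate of Compound's annual spend on liquidity incentives,
# using the approximate figures quoted above.
blocks_per_day = 6514.29    # 7-day moving average of Ethereum blocks/day
comp_per_block = 0.352      # total COMP emitted per block (supply + borrow)
comp_price_usd = 150        # approximate COMP price in USD

daily_spend = blocks_per_day * comp_per_block * comp_price_usd
annual_spend = daily_spend * 365

active_addresses = 300_000  # optimistic estimate of active users
cac = annual_spend / active_addresses

print(f"Daily spend:  ${daily_spend:,.0f}")   # ~$344,000
print(f"Annual spend: ${annual_spend:,.0f}")  # ~$125.5M
print(f"CAC per user: ${cac:,.2f}")           # ~$418
```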
How can we optimize incentives?
Numerous other industries that manage two-sided marketplaces, such as ride-sharing, centralized trading exchanges, and gaming, deal with incentive optimization using data science. Given historical data and models for how users behave, market designers can effectively construct A/B tests. These tests could operate in the following manner (stylized and simplified for readability):
- At time t, on-chain data and centralized market data are used to predict how much liquidity will change as a function of the incentive (= USD price of COMP × COMP emitted)
- This leads to a hypothesis test:
- Null hypothesis: Reducing the COMP emitted to the Dai market by 10% will reduce liquidity by 15% or more
- Alternative hypothesis: Reducing the COMP emitted to the Dai market by 10% will reduce liquidity by much less than 15%
- A governance proposal is submitted (akin to Proposal 021) to reduce emissions to the Dai market by 10%
- If it passes, data aggregated from monitoring emissions over some time interval T (say, 30 days) lets us estimate whether the null hypothesis can be rejected
- New data from the test will be reincorporated into the model (first step) and used to construct another hypothesis test
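To make the evaluation step concrete, here is a minimal sketch of how the test could be run. All numbers are hypothetical (synthetic liquidity readings stand in for real on-chain data); the idea is a bootstrap over daily observations to ask whether the observed liquidity drop is compatible with the 15% predicted under the null:

```python
import random
import statistics

random.seed(0)

# Hypothetical daily liquidity readings (USD millions) for one market,
# 30 days before and 30 days after a 10% emissions cut.
before = [1000 + random.gauss(0, 20) for _ in range(30)]
after = [955 + random.gauss(0, 20) for _ in range(30)]  # ~4.5% drop

observed_drop = 1 - statistics.mean(after) / statistics.mean(before)

def boot_drop():
    # Resample both periods with replacement and recompute the drop.
    b = [random.choice(before) for _ in before]
    a = [random.choice(after) for _ in after]
    return 1 - statistics.mean(a) / statistics.mean(b)

draws = [boot_drop() for _ in range(10_000)]
# Fraction of bootstrap draws at least as severe as the 15% null.
p_null = sum(d >= 0.15 for d in draws) / len(draws)

print(f"observed drop: {observed_drop:.1%}")
print(f"fraction of draws with >=15% drop: {p_null:.4f}")
# With these synthetic numbers, essentially no draw approaches a 15%
# drop, so the null would be rejected in favor of a smaller effect.
```

In practice the resampling would need to respect the time-series structure of liquidity (e.g. block bootstrap), but the decision logic is the same: reject the null only when the data make a 15% drop implausible.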
The literature on designing such trials for online two-sided marketplaces is vast, but none of it has been applied to markets with full data transparency, like a DeFi protocol.
How to answer @rleshner’s questions
Part of what we are working on at Gauntlet is something called Automated Governance. This is a system that continuously runs simulations (improved versions of those used in the Compound market risk report and in our DeFi Pulse score for Compound) and estimates 'safe' and 'unsafe' parameter values. These can be per-collateral protocol parameters (e.g. collateral factors, reserve factors) as well as COMP incentive parameters.
We propose that all of @rleshner's questions can be analyzed in a scientific, experimental manner using a system like this to propose hypotheses. In my opinion, it is impossible to answer these questions for a novel, unique market like Compound without performing some experiments via adjusting COMP speeds. These experiments will naturally tell the community whether spending $125M per year is expensive or whether it is necessary in the competitive DeFi environment. Moreover, experiments can easily tell us how often these speeds need to be adjusted. For instance, if an experiment to reduce emissions by 10% rejects the null hypothesis (i.e. liquidity falls by much less than 15%), we have accrued evidence that user behavior is relatively insensitive to incentive changes and that speeds do not need frequent adjustment.
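One way to summarize such an experiment's outcome is the elasticity of liquidity with respect to incentive spend. The numbers below are purely illustrative, but they show the arithmetic: if a 10% emissions cut produces only a 2% liquidity drop, liquidity is quite inelastic to the incentive:

```python
import math

# Hypothetical experiment outcome: a 10% cut in COMP emissions to one
# market is followed by a 2% drop in that market's liquidity.
emission_change = -0.10
liquidity_change = -0.02

# Arc (log) elasticity of liquidity with respect to incentive spend.
elasticity = math.log1p(liquidity_change) / math.log1p(emission_change)
print(f"elasticity ~ {elasticity:.2f}")  # prints: elasticity ~ 0.19

# An elasticity well below 1 means each unit of emissions cut retains
# most of its liquidity, so further careful cuts are worth testing.
```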
The beauty of cryptocurrencies is that their data is completely transparent and public. This means that anyone with a clever model can test it without paying for data (as in traditional finance) or harvesting user data and keeping it private (as tech companies do). However, someone has to actually produce this data and analyze it. The only way to do that is via carefully designed experiments whose results can be clearly interpreted, much like clinical trials or A/B tests on centralized online platforms. Compound led the way in showing that yield farming is a viable methodology for protocol decentralization, and it can lead the way in providing scientific rationale for parameter choices in the years to come.