Community Risk Level Consensus Check

Simple Summary

Three distinct levels from which the community can select a preferred risk tolerance for the Compound ecosystem. The results of this survey will inform Gauntlet’s analysis, which delivers Dynamic Risk Parameters that optimize yield and capital efficiency while mitigating depositor losses.


From a market risk perspective, the goal for Gauntlet’s simulations is to standardize Value-at-Risk (VaR) across all assets. Matching risk tolerance to a normalized expected yield throughout Compound ensures no subset of assets adds disproportionate risk to the protocol.

Following asset onboarding, empirical data on user behavior (e.g., average health factors) and changes in market conditions (e.g., expected slippage) improve our simulation precision. Improved precision allows for higher confidence in model outputs, particularly for aggressive recommendations.

Gauntlet will gauge the community’s risk appetite quarterly to ensure our risk parameter recommendations track the preferences of the Compound community.


As expected and observed, liquidity risk, volatility risk, and market capitalization change frequently for all assets on Compound. Updating risk parameters (including Collateral Factor, Close Factor, Borrow Cap, Reserve Factor, and Liquidation Incentive) in lockstep with the market is key to improving Gauntlet’s target metrics.


We would note that the figures below are subject to change by the time a vote goes up, but they should provide a good illustration of the delineation between risk categories.

Symbol (Current CF)   Conservative CF   Moderate CF   Aggressive CF
AAVE (50%)            55%               60%           70%
BAT (65%)             60%               65%           65%
COMP (60%)            55%               60%           60%
DAI (75%)             75%               75%           80%
ETH (75%)             75%               75%           80%
LINK (50%)            50%               60%           65%
MKR (35%)             40%               50%           65%
SUSHI (40%)           40%               50%           65%
UNI (60%)             60%               60%           65%
USDC (75%)            70%               80%           80%
WBTC (65%)            60%               65%           70%
YFI (35%)             45%               50%           60%
ZRX (65%)             60%               65%           65%

Risk Level Detail

Conservative metrics: the models target a 5-10% decrease in both capital efficiency (opportunity cost of capital) and the risk metrics (a combination of liquidations, insolvencies, and market impact) for each asset.

Moderate metrics: the models target a risk profile similar to the current risk parameters, with reweightings to achieve improved capital efficiency.

Aggressive metrics: the models target a 5-10% increase in capital efficiency (opportunity cost of capital) while tracking the same risk metrics (liquidations, insolvencies, and market impact) for each asset.
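One way to read the three levels above is as multiplier bands applied to current metric values. The sketch below is purely illustrative, assuming a simple multiplicative targeting rule; the band values come from the level descriptions, but the `target_band` helper and the exact band shapes are our assumption, not Gauntlet’s actual model.

```python
# Illustrative only: risk levels expressed as multiplier bands on a
# current metric value (capital efficiency or a risk metric). The 5-10%
# ranges come from the level descriptions; everything else is assumed.

RISK_LEVEL_BANDS = {
    "conservative": (0.90, 0.95),  # 5-10% decrease vs. current
    "moderate":     (1.00, 1.00),  # hold the current risk profile
    "aggressive":   (1.05, 1.10),  # 5-10% increase vs. current
}

def target_band(current_value: float, level: str) -> tuple:
    """Return the (low, high) target band for a metric under a risk level."""
    lo, hi = RISK_LEVEL_BANDS[level]
    return (current_value * lo, current_value * hi)
```

For example, `target_band(90.0, "conservative")` gives roughly (81.0, 85.5), i.e., a 90MM baseline metric would be targeted down by 5-10%.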

Key Model Inputs & Notes

We chose to include Collateral Factor to illustrate risk categories in this poll, as Collateral Factor is the parameter with the most obvious and significant impact on Compound’s risk levels and capital efficiency. As a general note, we make efforts to update parameters one at a time (as opposed to updating several parameters for any given asset simultaneously), which the Compound community has voiced as its preference. Gauntlet’s scope also covers Close Factor, Borrow Cap, Reserve Factor, and Liquidation Incentive, parameters with more complex impacts. Global parameters such as Liquidation Incentive and Close Factor can have broader effects on the Compound ecosystem. As such, we are continuing to validate such parameters through our simulations with updated liquidation economics.

Symbol (Current CF)  Volatility  Collateral Supply (USD)  Average Daily Volume (USD)  Liquidation Volume (USD, 90 day)  Liquidation Count (90 day)
AAVE (50%)           1.17        6,000,000                144,000,000                 0                                 0
BAT (65%)            0.95        80,000,000               43,800,000                  26,640                            4
COMP (60%)           1.2         171,000,000              60,000,000                  478,516                           34
DAI (75%)            0.087       2,900,000,000            350,000,000                 136,437                           18
ETH (75%)            0.916       5,500,000,000            2,360,000,000               12,700,000                        144
LINK (50%)           1.21        158,000,000              600,000,000                 11,855                            2
MKR (35%)            1.17        750,000                  71,300,000                  0                                 0
SUSHI (40%)          1.58        630,000                  219,000,000                 19,502                            1
UNI (60%)            1.35        172,000,000              310,000,000                 2,792,288                         15
USDC (75%)           0.073       2,950,000,000            2,030,000,000               4,351,064                         31
WBTC (65%)           0.7         2,400,000,000            566,000,000                 1,293,364                         26
YFI (35%)            1.04        700,000                  146,000,000                 0                                 0
ZRX (65%)            1.24        130,000,000              41,200,000                  131,729                           11

As on-chain options and perpetual futures become more popular, it is important to acknowledge the various sources of liquidity available. The liquidators in our simulations mimic the behavior of liquidations observed on the Ethereum blockchain. Namely, many liquidators sell liquidated collateral in an atomic transaction on a decentralized exchange. Of liquidations larger than 10,000 USD in size over the last 90 days, over 60% went through a Sushiswap, Uniswap V2, or Uniswap V3 pool within the liquidation transaction. The asset slippage calculations focus on spot market conditions. As market liquidity and behavior change, we will adapt our simulations accordingly.

Consensus Check

What risk level should Gauntlet target with our parameter updates?
  • Conservative
  • Moderate
  • Aggressive



Nice work Gauntlet.

Regarding conservative vs. moderate vs. aggressive: what are the differences between these? Is it just a matter of capital efficiency and insolvencies? I am wondering if we can use your Value-at-Risk stat to remove some of the guesswork. For example, conservative might be $0 VaR, moderate $10M, and aggressive $50M. I am just making those numbers up for the example.

In your table of CFs, some assets like COMP, for example, have the same CF in more than one column. Given the labeling, I would think each column would have a different number?

I voted for moderate, but I could be amenable to aggressive once I learn more about the differences between the two.


Thank you, @getty . Please see below for answers to your questions:

  • With our current simulation and model specifications, conservative metrics target a decrease in capital efficiency and risk (relative to current), whereas aggressive metrics target an increase. Specifically, using a baseline of 90MM VaR for the current parameters, we see a 6MM decrease in VaR in the conservative case, a <1MM decrease in VaR in the moderate case, and roughly a 5.5MM increase in VaR in the aggressive case. We also expect a 1.9% decrease in borrow usage with the conservative parameters, a 1.2% increase with the moderate parameters, and a 4.1% increase with the aggressive parameters. These values assume changes in borrower behavior and reflect current on-chain data.

  • Some assets have the same CF in more than one column. This is because our analysis shows that an additional increase/decrease in CF does not result (statistically) in a higher/lower risk threshold, given the variance resulting from agent interactions in the ecosystem, volatility, etc. We would note that the overall risk of the protocol is what is optimized for, not a specific asset.
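As a quick sanity check (not part of Gauntlet’s models), the VaR deltas quoted above can be expressed as percentage moves off the 90MM baseline:

```python
# Relative VaR changes implied by the figures above (all values in $MM).
BASELINE_VAR_MM = 90.0
VAR_DELTAS_MM = {"conservative": -6.0, "moderate": -1.0, "aggressive": 5.5}

for level, delta in VAR_DELTAS_MM.items():
    new_var = BASELINE_VAR_MM + delta
    pct = 100.0 * delta / BASELINE_VAR_MM
    print(f"{level}: {new_var:.1f}MM VaR ({pct:+.1f}% vs baseline)")

# Output:
# conservative: 84.0MM VaR (-6.7% vs baseline)
# moderate: 89.0MM VaR (-1.1% vs baseline)
# aggressive: 95.5MM VaR (+6.1% vs baseline)
```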


If I understand this correctly, the protocol in its current state has 90MM VaR using your models? Can you shed some more light on this, please?


Correct. At Black Thursday volatility levels, we expect around $90MM of assets to be liquidated or become insolvent (not an explicit sum, but a calculation that captures both of those risks and aligns with historically observed results).


Interesting, is there a particular asset/market representing the majority of that?

Your base is $90MM:

  • Conservative: -$6MM change in VaR

  • Moderate: -$1MM change in VaR (essentially no change)

  • Aggressive: +$5.5MM change in VaR

Against a $90MM base, a ~$5MM change doesn’t seem meaningful to me. Maybe $15MM or $30MM?


Thanks, @getty . DAI, USDC, ETH, and WBTC represent the majority of VaR, driven by the fact that they represent a substantial amount of the collateral base.

As for increasing the delta between risk categories, our analysis recommends that changes be incremental, since users respond to these parameter changes and we want to avoid shocking the ecosystem. It is also important to note the friction involved in lowering CF from aggressive levels and the governance challenges in doing so. It is safer and more efficient to incrementally increase CF levels than to overshoot and have to decrease CF afterwards. However, by monitoring user behavior as an input to the models, we can safely recommend more aggressive changes over time.


Makes sense.

I wonder if we should create target CFs for assets and then incrementally move to the targets. It makes more sense to me to separate the targets from the process of getting to the targets. Does that make sense?


Thanks, @getty . We agree that it makes sense to separate the targets from the process of getting to the targets. We are already incorporating this into our analysis: our models determine the optimal settings, but the implementation itself is constrained so as not to shock the ecosystem. We would note, however, that optimal settings do change over time as a result of changes in market conditions.