[Chainrisk] - Economic Risk Simulation Engine for Compound v3

First and foremost, I’d like to thank @allthecolors and the CGP team for their valuable contributions and data inputs to this proposal.

Note that this proposal has been accepted and can be viewed at Questbook.

Application Criteria under Project Ideas in RFP

  • Open-source risk frameworks and associated analysis
  • Governance research, analysis, and integrations in the context of Compound’s deployment of Comet contracts on mainnet and other chains

Summary

Chainrisk proposes to onboard the Compound community onto its risk and simulation platform to test the Compound V3 (Comet) modules (Interest Rates, Collateral and Borrowing, and Liquidations) and new protocol upgrades under various market scenarios. The platform will support the community in stress testing the protocol’s mechanism design and in bespoke protocol research, with publicly available analysis and results.

Company Background

Chainrisk specializes in economic security, offering a unified simulation platform designed for teams to efficiently test protocols, particularly in challenging market conditions. Our technology is anchored by a cloud-based simulation engine driven by agents and scenarios, enabling users to create tailored market situations for comprehensive risk assessment.

Our team comprises experts with diverse backgrounds in Crypto, Security, Data Science, Economics, and Statistics, bringing valuable experience from institutions such as Ethereum Foundation, NASA, JP Morgan, Deutsche Bank, Polygon, Nethermind, and Eigen Layer.

The Chainrisk Cloud simulation platform mirrors the mainnet environment as closely as possible. Each simulation runs forks from a specified block height, ensuring up-to-date account balances and the latest smart contracts and code deployed across DeFi. This holistic approach is crucial for understanding how external factors such as cascading liquidations, oracle failure, gas fees, and liquidity crises can impact a protocol in various scenarios.

With grant funding, our focus is onboarding Compound onto the Chainrisk Cloud simulation engine. Additionally, we aim to develop and deploy multiple statistical features in subsequent milestones. Our goal is to analyze and verify that Compound modules can withstand high market stress.

The Proposal

To enhance risk coverage of the Compound protocol and transparency to the community, we propose tooling to cover a few major areas:

  • Risk parameters for all Compound V3 markets (Borrow Collateral Factor, Liquidation Collateral Factor, Liquidation Factor, Collateral Safety Grade, Supply Cap, Target Reserves, Store Front Price Factor, Liquidator Points, Interest Rate Curves) across Ethereum, Polygon, Base, and Arbitrum.

  • Coverage of base asset (USDC) and all collateral assets (WBTC, WETH, LINK, UNI, COMP)

  • Parameter Optimization using Scaled Monte Carlo Simulations (all within a SaaS)

  • An end-to-end Economic Audit Report

Economic risk management today is boutique and black-boxed. We have built a scalable risk-management SaaS solution based on the ethos of verifiable simulations and proven statistical models.

The On-Chain Simulation Engine & Economic Security Index

The sole aim of Economic Security is to ensure that the Cost of Corruption exceeds the Profit from Corruption.

From a security and infrastructure perspective, we recognize the need for additional tools to be built and maintained to bolster Compound’s security posture, on top of the protections offered by teams like Gauntlet, OpenZeppelin, Trail of Bits, and Certora.

Chainrisk delivers a groundbreaking, cloud-based simulation platform built on the philosophy that the most valuable testing environment closely resembles a real-world production environment.

On-chain Simulations

On-chain simulations create a fork of the blockchain from a designated block height and deploy a catalog of agents, scenarios, and observations within the Chainrisk Cloud environment. During on-chain simulations, Chainrisk Cloud executes a massive number of Monte Carlo simulations to evaluate the protocol’s Value at Risk (VaR) per market (chain) and across markets.
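The VaR estimation the paragraph describes can be illustrated with a minimal sketch. The function below (names and numbers are hypothetical, not Chainrisk's actual implementation) takes the per-run losses produced by a batch of Monte Carlo simulations and reads off the loss quantile at a given confidence level:

```typescript
// Minimal sketch: estimating Value at Risk (VaR) from Monte Carlo outcomes.
// Each simulation run yields the protocol's net loss (positive = loss).
// VaR at confidence level c is the loss quantile exceeded in only (1 - c)
// of the simulated scenarios.
function valueAtRisk(simulatedLosses: number[], confidence: number): number {
  const sorted = [...simulatedLosses].sort((a, b) => a - b);
  // Index of the confidence-level quantile in the sorted loss array.
  const idx = Math.min(
    sorted.length - 1,
    Math.floor(confidence * sorted.length),
  );
  return sorted[idx];
}

// Example: 10,000 hypothetical per-run losses (illustrative only).
const losses = Array.from({ length: 10_000 }, (_, i) => i / 100); // 0 .. 99.99
const var95 = valueAtRisk(losses, 0.95); // 95th-percentile loss
```

Running this per market (chain) and across markets gives the per-market and cross-market VaR figures mentioned above.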

Agents represent user behavior, allowing for the emulation of diverse protocol user actions. The Chainrisk Scenario Catalog empowers control over macro variables and conditions such as gas prices, DEX and protocol liquidity, oracle return values, significant market events like Black Thursday, and more. Observers facilitate in-depth protocol analysis and yield more insightful simulations.

Through this robust software, users can control and test a host of different factors that can impact protocol security and user funds, including:

  • Oracle data (e.g., asset prices, interest rates): This allows users to simulate how external data feeds can influence the protocol’s behavior.
  • Transaction fees (gas prices): Users can test how varying gas costs impact user behavior and protocol functionality.
  • Liquidation thresholds (price at which collateral is sold): Simulations can assess the protocol’s vulnerability to cascading liquidations triggered by price drops.
  • Flash loan availability: The platform allows testing the impact of flash loans, a type of uncollateralized loan that can be used for arbitrage or manipulation.
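To make the agent/scenario/observer model above concrete, here is an illustrative sketch of what such a configuration could look like. The `Scenario` and `Agent` shapes below are hypothetical stand-ins (the Chainrisk script API is not shown in this post); they only demonstrate the kind of controls the catalog implies:

```typescript
// Hypothetical configuration shapes, illustrative only.
interface Scenario {
  forkBlock: number; // block height to fork mainnet from
  oracleOverride?: { asset: string; priceDropPct: number; overHours: number };
  gasPriceGwei?: number; // forced gas price during the run
}

interface Agent {
  kind: "borrower" | "supplier" | "liquidator";
  act: (blockNumber: number) => void; // called once per simulated block
}

// Example scenario: a Black Thursday-style crash (values are placeholders).
const blackThursday: Scenario = {
  forkBlock: 19_700_000,
  oracleOverride: { asset: "WETH", priceDropPct: 43, overHours: 14 },
  gasPriceGwei: 500, // congested network, as on Black Thursday
};
```

Observers would then sample protocol state (reserves, bad debt, liquidation volume) each block for analysis.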

Economic security testing and simulations via the Chainrisk Cloud platform allow you to test your protocol in different scenarios and custom environments to understand where your risks lie before a malicious actor can exploit them. Some examples:

  • Simulating Market Swings: Test how your protocol’s reserves handle periods of high volatility.
  • Uncovering Asset Dependencies: Analyze how correlated assets impact liquidations.
  • Stress Testing Price Crashes: Gauge the system’s response to dramatic price drops in terms of liquidations and overall liquidity.
  • Evaluating Gas Fee Impact: Assess how high gas fees affect the efficiency of the liquidation process.
  • Projecting New Asset Integration: Simulate the impact of introducing new borrowable assets on demand, revenue, and liquidations.

Economic Security Index

  • Chainrisk’s Economic Security Index provides insights into the platform’s ability to withstand and mitigate economic exploits, such as market manipulation, liquidity shortages, and other risks that could impact the value and stability of the protocol and its users. Economic Security Index is a function of Loss Likelihood and the Expected Loss.
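The post states that the index is a function of loss likelihood and expected loss but does not give the formula, so the sketch below is only one natural choice, not Chainrisk's actual definition: score risk as likelihood times expected loss, and map a lower expected shortfall (relative to reserves) to a higher index:

```typescript
// Hypothetical index construction, assuming index = f(likelihood, expected loss).
function economicSecurityIndex(
  lossLikelihood: number, // probability of an exploit event, 0..1
  expectedLoss: number,   // loss given the event, in USD
  reserves: number,       // protocol reserves available to absorb losses, USD
): number {
  const expectedShortfall = lossLikelihood * expectedLoss;
  // Index in (0, 1]: approaches 1 when the expected shortfall is
  // negligible relative to reserves, and 0 as it dwarfs them.
  return reserves / (reserves + expectedShortfall);
}
```

Under this construction, a protocol with $1M of reserves facing a certain $1M loss would score 0.5, while one facing no expected loss scores 1.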

Previous work at Chainrisk

We have built an economic risk simulation engine for protocols on all EVM-compatible chains, where a protocol engineer can create granular simulations to test their mechanism design. We are working with a couple of protocols in the Ethereum ecosystem, namely Angle Protocol and Gyroscope.

Our sandboxed environment runs on top of the Rust EVM. We aim to enable Simulation-driven Development and run Agent-based Simulations for the Compound Ecosystem.

Research Work at Chainrisk -

Work with Angle Protocol

Context & Background -

Angle’s liquidation system is conceived as an improvement over more traditional liquidation mechanisms. It allows for variable liquidation amounts, meaning that during a liquidation the fraction of the debt that can be repaid is not fixed.

On top of that, discounts given to liquidators are based on a Dutch auction mechanism, which minimises the discount liquidators receive while ensuring they can still be profitable. This protects borrowers being liquidated and lets them keep the maximum possible amount of collateral in their vaults.

Auctions and liquidations are a primary functionality responsible for the stability of Angle and agEUR and hence must be thoroughly tested and verified.
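The shape of the Dutch auction mechanism described above can be sketched as follows. This is illustrative only (the exact Angle formula lives in their docs, and the parameter names here are made up): the discount starts at zero and ramps up over time, so the first liquidator for whom the trade becomes profitable claims it at the smallest discount that covers their costs:

```typescript
// Illustrative Dutch-auction liquidation discount (not Angle's exact formula).
// The discount grows linearly from 0 to maxDiscount over rampSeconds,
// minimising what liquidators extract from the borrower's collateral.
function dutchAuctionDiscount(
  secondsSinceStart: number,
  maxDiscount: number, // e.g. 0.10 = 10% cap
  rampSeconds: number, // time to reach the cap
): number {
  const t = Math.min(secondsSinceStart / rampSeconds, 1);
  return maxDiscount * t;
}
```

A liquidator whose break-even discount is 5% would act halfway through a 10%-cap, 10-minute ramp, leaving the remaining collateral with the borrower.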

Angle <> Chainrisk Deck - https://bit.ly/chainriskxangle

Integration Specifics with Angle - https://twitter.com/chain_risk/status/1753358118573183032?t=u9MBIT2FIImHRPV3dDwLEA

Angle Collaboration Video - https://www.youtube.com/watch?v=zvHV5qRUZx0

Funding disbursement schedule

Sprint Goals for First 2-3 weeks / Milestone #1

Goal: Build a robust, dynamic, agent- and scenario-based simulation engine for Compound where protocol engineers can create simulations in a sandboxed environment to stress test their mechanism design, and integrate a built-in analytics dashboard and a block explorer.

Objectives and Specific Commits:

  1. Integrate the simulation engine with Compound:

    • Developers shall be able to create a mainnet/testnet fork at a user-defined block height and make RPC calls.
    • Write, store, and edit JS/TS scripts of Agents, Scenarios, Observers, Assertions, and Contracts.
    • Create and configure agent-based simulations to test specific features, strategies, and mechanisms (liquidations, oracle manipulations, etc.).
    • Run these simulations on-chain across a user-defined block length.
    • Analyse the simulation results through the Observers on the built-in analytics dashboard and decide on the right set of risk parameters (Borrow Collateral Factor, Liquidation Collateral Factor, Liquidation Factor, Collateral Safety Grade, Supply Cap, Target Reserves, Store Front Price Factor, Liquidator Points, Interest Rate Curves).
  2. Block Explorer and Transaction tracing:

    • Integrate an in-built block explorer with the simulation engine to show the blocks mined, transactions executed, contracts triggered, and events logged during the simulations.
    • Add a detailed transaction tracing feature that shows the flow of funds and breaks transactions down to the opcode level.
  3. Dynamic backend (protocol agnostic):

    The goal here is to build a singleton backend that can stress-test any DeFi protocol through JS/TS scripts of Agents, Scenarios, Observers, Assertions, and Contracts, reducing dependencies.

    • Develop modules and auxiliary tools for Agents, Scenarios, Observers, Assertions, and Contracts to communicate seamlessly without manual intervention.
    • Make the block explorer dynamic, capable of decoding transactions within each simulation block for comprehensive analysis.
  4. Release beta-testing version on a single server :

    • Release a beta-testing version where Chainrisk Team works on specific simulations closely with Compound community and inculcates the feedback received.
    • This is a single-node server that will be able to run granular simulations.
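The Observer/Assertion scripts mentioned in the commits above can be sketched in JS/TS as follows. The interfaces are hypothetical (the real engine's API is not shown in this post); the point is the division of labour: an observer samples protocol state each block, and an assertion fails the run if an invariant is violated:

```typescript
// Hypothetical Observer/Assertion pair for a Comet-style market.
interface Snapshot {
  block: number;
  reserves: number; // base-asset reserves, USD
  badDebt: number;  // accumulated under-collateralized debt, USD
}

const snapshots: Snapshot[] = [];

// Observer: called once per simulated block with the sampled state.
function observe(block: number, reserves: number, badDebt: number): void {
  snapshots.push({ block, reserves, badDebt });
}

// Assertion: reserves must always cover accumulated bad debt.
function assertSolvent(): boolean {
  return snapshots.every((s) => s.reserves >= s.badDebt);
}
```

A failed assertion flags the parameter set under test as unsafe for that scenario.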

Reward after 1st Milestone - $5,000

Sprint Goals for next 3-4 weeks / Milestone #2

Goal: Finalise all features of the simulation engine, implement security measures, create granular simulations to test the Borrowing, Liquidation, and Interest Rate modules of Compound, engage with the community, and prepare comprehensive documentation.

Objectives and Specific Commits:

  1. Implement two-factor user authentication:

    • We will use AWS Amplify and an AWS Cognito user pool to implement multi-factor auth.
    • The multi-factor auth will consist of a username (email or wallet), a password, and any TOTP-based authenticator app.
  2. Implement CI/CD integrations and deploy the full app on the cloud:

    • We shall use the AWS Rust SDK and Terraform to deploy the whole infrastructure on AWS ECS and Fargate, along with AWS Kinesis, AWS Glue, and Redshift for data analysis. This system will be able to run thousands of simulations in parallel for long hours.
    • When a simulation is configured, every Agent, Scenario, Observer, and Assertion will run on the same set of containers, packed together. The container life cycle matches the simulation life span and is managed by ECS.
    • We will also use DynamoDB as the main database, S3 as the data lake, and Redshift as the data warehouse for storing all the data; this also gives a fast web-app experience. AWS Streams and Lambda will handle event-driven workloads and database triggers.
  3. Create agent-based simulations to test different modules of Compound:

    • Once cloud deployment is done, we will test the platform with thousands of simulations running in parallel, ensuring its scalability and robustness.
    • Assign a group of researchers/data scientists from the Chainrisk team to generate risk scenarios akin to Black Thursday.
    • Prepare multiple scripts of Agents, Scenarios, Observers, Assertions, and smart contracts for the protocol to test its mechanism design.
    • Take developer feedback on the platform, fix outstanding bugs, and implement features suggested by the Compound community.
  4. Prepare comprehensive documentation:

    • Along the way, we will prepare comprehensive risk-management docs for the simulations we run for Compound.
    • We will also deliver architectural diagrams for more clarity.
  5. Governance Risk Calls:

    To deepen our commitment to community engagement and strengthen protocol security, Chainrisk will host monthly risk calls for the Compound community. These calls will focus on discussing:

    • New risk tooling and analysis
    • New Asset Listing Proposals
    • Protocol launches and technical upgrades
    • The broader market environment
    • Anything else the community deems important and relevant for discussion

To facilitate ongoing risk assessment, we propose establishing monthly recurring calls. These calls will be held for a dedicated hour and will be supplemented by additional ad-hoc community calls when critical risk issues arise. Recordings and summaries of all calls will be made available for reference.

Reward after 2nd Milestone - $10,000

Sprint Goals for final 4-6 weeks / Milestone #3

Goal: Redesign the backend and cloud infrastructure to run Monte Carlo simulations at scale, provide static parameter recommendations, and perform an end-to-end economic audit.

Objectives and Specific Commits:

  1. Enable creation of stochastic price paths:

    • Develop an algorithm to generate random price paths based on GARCH parameters such as volatility clustering and time-varying volatility.
    • Validate the accuracy and statistical properties of the generated price paths against historical data.
    • Develop and integrate the GARCH price oracle contract to the backend.
  2. Enable running thousands of simulations simultaneously:

    • Dockerize the service and use Kubernetes to orchestrate containers.
    • Distribute containers across multiple machines in the cloud.
  3. Integrate bespoke visualisation library:

    • Create a backend endpoint for the visualisation library to query observer data for plotting.
    • Create a Python FastAPI server to receive HTTP requests from the frontend for visualisations.
    • Dockerize the service for easy deployment.
  4. Enable longer simulations:

    • Increase transaction throughput by optimising our customised Anvil infrastructure and revm.
    • Minimise networking with the remote node to avoid HTTP timeouts.
  5. End-to-End Economic Audit

    • We test whether the protocol’s parameters and financial modelling can withstand past economic exploits and market manipulations via thousands of Monte Carlo simulations.
    • Analyse VaR, LaR, and borrowing power under market duress (e.g., Black Thursday).
    • In conclusion, we recommend parameters based on our statistical inferences; these recommendations aim to make the attacker’s incentives/profit minimal or zero.
    • We will finally release a detailed economic audit report.
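The stochastic price paths planned in the first commit can be sketched with a standard GARCH(1,1) generator. This is a minimal illustration (parameter names follow the textbook model; calibration against historical data, as described above, is not shown): the conditional variance is sigma²ₜ = ω + α·r²ₜ₋₁ + β·sigma²ₜ₋₁, and each step draws a normal shock scaled by the current volatility, producing the volatility clustering the post mentions:

```typescript
// Sketch of a GARCH(1,1) price-path generator (illustrative, uncalibrated).
function garchPricePath(
  startPrice: number,
  steps: number,
  omega: number, // baseline variance term
  alpha: number, // weight on last squared return (shock persistence)
  beta: number,  // weight on last variance (volatility clustering)
  rng: () => number = Math.random,
): number[] {
  const prices = [startPrice];
  let sigma2 = omega / (1 - alpha - beta); // unconditional variance
  let prevReturn = 0;
  for (let t = 1; t <= steps; t++) {
    sigma2 = omega + alpha * prevReturn ** 2 + beta * sigma2;
    // Box-Muller transform: draw a standard normal shock from two uniforms.
    const u1 = Math.max(rng(), 1e-12);
    const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * rng());
    prevReturn = Math.sqrt(sigma2) * z;
    prices.push(prices[t - 1] * Math.exp(prevReturn)); // log-return step
  }
  return prices;
}
```

Feeding such paths into an oracle contract, as the milestone describes, lets each Monte Carlo run see a different but statistically plausible price history.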

Reward after 3rd Milestone - $10,000

Measures of Success/KPIs

We will measure things such as:

  • Product deliverables ( As per milestones )
  • Improvement in the values of VaR, LaR and Borrowing Power (without affecting protocol solvency) post our recommendations
  • Community NPS of our relationship
  • Communication and transparency to the community on work done and product access

Budget Breakdown:

Team Costs - $21,000 for 525 man-hours ($40/man-hour) over 3 months

For -

  • One Data Scientist
  • One Protocol Economist
  • One Cloud Engineer
  • One Core Backend Developer
  • One Security Researcher

Tech Costs - $4000

Where are we heading with this grant?

Open Source -

Yes, we shall open-source certain sections of the work. We will create simulations for the protocol during our engagement (mentioned in the 2nd milestone). The Agents, Scenarios, Observers, Assertions, and contracts we create during those simulations can be open-sourced for the greater good of the community.
However, the backend of the platform shall remain closed-source for security and proprietary reasons.

Follow on Grants for Maintenance and Updates -

After the completion of our first grant engagement, we would like to pursue a larger follow-on grant to cover the maintenance, cloud compute, and HR (dev + research) costs of managing and improving the platform over time.

We are of the opinion that a lot more enhancements can be done to secure Compound from impending economic exploits and market risks.

Project Roadmap in next 12 months -

In the next 12 months, we aim to implement the following features to deliver a full-stack risk management platform to the Compound ecosystem:

  1. LLM Models for Data Query and Visualisations on Simulations

  2. Bayesian Learning Models for Parameter Recommendations

  3. User Risk Monitoring Dashboard

  4. Economic Alerts

About the Team

Sudipan Sinha ( Co-Founder and Chief Executive Officer ):

  • MTech in Mathematics and Computing at IIT BHU

  • Top security researcher at HackerOne and Bugcrowd

  • Ex-Chief White Hat at DetectBox

  • Tech and Economics Mentor at EthGlobal 2023

  • 2x Ethereum Foundation Grantee & 1x StarkNet Foundation Grantee

  • Part of the Antler India Fellowship 2023 ( Top 5 / 3k+ founders )

Arka Datta ( Co-Founder and Chief Product Officer ):

  • Grad @ IIT Guwahati

  • Fellow @ IISc Bangalore

  • Part of the prestigious Polygon Fellowship 2022 (Top 50/ 10K + builders)

  • Prior to getting into Web3, Arka worked as a Software Engineer at Walmart on various security products

  • Previously co-founded DetectBox - world’s first decentralised smart contract audit marketplace with Sudipan, backed by Antler India Fellowship, Ethereum Foundation and Starknet Foundation

  • 2x Ethereum Foundation Grantee & 1x StarkNet Foundation Grantee

  • Part of the Antler India Fellowship 2023 ( Top 5 / 3k+ founders )

Abhimanyu Nag ( Head of Economics Research ):

  • Previously worked at Nethermind, the Ethereum giant, as a Data Scientist, and at HyperspaceAI (then a supercomputing company, now decentralised LLM specialists) as a Protocol Economist and AI researcher.

  • An International Centennial Scholarship holder at the University of Alberta, he co-wrote Ethereum Improvement Proposal 5133, which set the date of Ethereum’s Merge (15 September 2022); as a result, his name is cited in the Ethereum Yellow Paper.

  • He also developed the data modelling and analytics of TwinStake - an institutional staking service developed in partnership between Nethermind and Brevan Howard Investment Fund.

  • He has given talks about his work at maths research conferences and also the Ethereum Merge Watch party by Nethermind.

  • He played a key role in HyperspaceAI reaching a $500 million valuation within 6 months of his joining and helped the startup land a pitch at the Ethereum Community Conference in France in 2023.

  • He also was part of the EigenLayer Research Fellowship hackathon and developed an innovative currency exchange system on EigenLayer.

  • His main research has always been in Applied Mathematics and Statistics. He is co-writing three papers on Hidden Markov Modelling of tumour progression and health insurance fraud with professors at the University of Alberta, and he also recently developed a new proof of the Saint-Venant Theorem, to be published shortly.

  • He was also a reviewer at The 4th International Conference on Mathematical Research for Blockchain Economy.

Siddharth Patel ( Core Blockchain Developer ):

  • Siddharth previously worked at Antier Solutions, Asia’s largest blockchain development firm, and has extensive experience with smart contracts and backend development for multiple complex DeFi protocols, including staking, yield farming, DEXs, and various token types, collectively managing assets exceeding $8 million.

  • He possesses hands-on experience in building various NFT standards and marketplaces, as well as contributing to projects involving account abstraction and asset tokenization.

  • He has been an active participant in the blockchain industry for over 4 years and has won multiple hackathons, including a bounty at EthGlobal’s SuperHack '23 for developing CoinFort, a zk-based account-abstracted wallet (CoinFort | ETHGlobal).

Rajdeep Sengupta ( Frontend & Cloud Engineer )

  • Previously worked at several startups - Immplify, DryFi, and DetectBox.

  • Worked as a core team member at Immplify on app architecture, using multiple GCP services such as Firebase, RTDB, Cloud Functions, Cloud Run, IAM, API Gateway, KMS, and more.

  • During his time at DryFi, he worked with AWS, DynamoDB, Next.js, Lambda functions, API Gateway, Tailwind, and more.

  • He has been a contributor at Layer5 and has worked with AWS, GCP, and Azure.

  • Won the national hackathon Ethos and has been a top-10 finalist in more than six international hackathons.

Saurabh Sharma ( Head Of Risk Engineering ) :

  • Grad School @ IIT Bombay

  • Ex-Data Scientist at Safe Securities; worked on quantifying Web 2.0 cyber risks, building and implementing various Bayesian and Monte Carlo models

  • Conducted research aimed at building quantum computers using phonons (quantised particles of sound) and built a prototype

  • Data Scientist at Nexprt, where he analysed home decor data to target potential countries for trade

  • Fellow @ IISc Bangalore

  • Currently working on research with IIT Bombay peers related to quantum error correction codes and quantum cryptography

Samrat Gupta ( Senior Security Researcher ):

  • Proven experience in smart contract auditing, web application security and web pentesting.

  • Participated in competitive audit platforms such as Code4rena, Sherlock, and CodeHawks; found 100+ high- and medium-severity vulnerabilities and helped secure $100M+ of user funds.

  • Worked on 3 independent audits on DetectBox platform for Kunji Finance, Etherverse and Payant escrow.

  • Worked at QuillAudits, a leading web3 audit firm, as a smart contract auditor and security researcher, where he led QuillCTF.

X ( Twitter ) - https://twitter.com/chain_risk
LinkedIn - Chainrisk (prev. UNSNARL)

Stay Tuned For Regular Updates :eyes:


Backend + Node Service Architecture Update

Here’s the first update for the Economic Risk Simulation Engine we are building for Compound v3.

1. The Cloud Architecture

Different Modules and Parts:

  • Virtual Private Cloud ( VPC )
  • Application Load Balancer
  • Cloud Watch
  • ECR
  • ECS
  • IAM
  • AWS Document DB
  • EC2

VPC Config

CIDR: 10.0.0.0/16

Subnets: There are 3 public subnets in availability zones [“ap-south-1a”, “ap-south-1b”, “ap-south-1c”], and 3 private subnets. The public dashboard data is publicly available.

Security Groups: There is currently one security group, which allows traffic to the REST API and the simulation engine.

Internet Gateway: There is one internet gateway.

Route Table: There is one route table, associated with all the public subnets and connecting them to the internet gateway.

Application Load Balancer:

One ALB is connected to the REST API for elastic IP and load management.

ECR (Elastic Container Registry)

There are two ECR repositories:

  1. One for the simulation engine
  2. One for the REST API, which communicates with the frontend

ECS (Elastic Container Service)

There is one ECS cluster, with two ECS services and two task definitions:

  1. One for the simulation engine, responsible for running all the simulations
  2. One for the REST API, which communicates with the frontend. This is a Node Express server communicating with AWS DocumentDB and the frontend.

CloudWatch

CloudWatch is linked to the ECS cluster. It provides logs for all running services and sends alerts if anything is down or not working properly.

AWS DocumentDB:

This is the main data store where we keep all the data. It is a document-based database, and we can connect to it with Mongoose, just like MongoDB.

EC2:

We use EC2 for local development. Since DocumentDB does not allow connections from outside its VPC, we tunnel localhost:27017 through an EC2 instance to access the database and enable local development.

2. Backend Architecture

  • Initialized the simulation backend with a fork feature using revm.
  • Implemented RPC batching to reduce RPC query time during simulations.
  • Created an API endpoint that forks Ethereum mainnet at a specific block, performs batched balance queries for multiple accounts, and stores the results in the database.
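The RPC batching idea above can be illustrated with standard JSON-RPC 2.0, which allows an array of requests in a single POST instead of one HTTP round-trip per account. The builder below is a sketch (the endpoint, addresses, and block height are placeholders, not our actual infrastructure):

```typescript
// Sketch: building a batched eth_getBalance payload (JSON-RPC 2.0).
interface RpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: unknown[];
}

function buildBalanceBatch(accounts: string[], blockTag: string): RpcRequest[] {
  return accounts.map((addr, i) => ({
    jsonrpc: "2.0",
    id: i, // ids let responses be matched back to requests
    method: "eth_getBalance",
    params: [addr, blockTag],
  }));
}

// One fetch() with JSON.stringify(batch) replaces N separate calls.
const batch = buildBalanceBatch(
  ["0x0000000000000000000000000000000000000001"],
  "0x112a880", // example block height in hex
);
```

Responses come back as an array keyed by `id`, so a single round-trip populates balances for every account in the fork.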

cc @allthecolors1

See you guys soon with another exciting update :saluting_face:

Risk Simulation Overview - Test Scenarios

Update#2 for the Economic Risk Simulation Engine we are building for Compound v3.

Here’s a list of scenarios we would be working on during the first 2 milestones to showcase the power of our risk simulation engine.

Disclaimer: These simulations shouldn’t be used for statistical inferences. We would be working on statistically rigorous models in the third milestone to recommend risk parameters.

1. Black Thursday Simulation:

Description:

A sudden and dramatic price drop for ETH (Black Thursday event) hits the USDC market. The simulation utilizes the Chainrisk Cloud platform, a dedicated environment for controlled testing of DeFi protocols. Within this platform, parameters are manipulated to replicate a Black Thursday scenario.

Scenario:

The simulation will focus on a 43% price decline for ETH (over a period of 14 hours) within the USDC market on the Ethereum blockchain. The USDC market on Ethereum has a total value locked (TVL) of approximately $1.3 billion, and wrapped Ether (WETH) holds a significant share, with a supply of roughly $500 million.

The Chainrisk Cloud platform specifically targets the Compound V3 Price Oracle, which relies on Chainlink’s Price Feed. By altering the return values of this oracle, the simulation mimics the dramatic price drop associated with a Black Thursday event for the ETH asset.

Inferences:

This simulation will reveal the protocol’s resilience under extreme market conditions and the effectiveness of Compound V3 in mitigating risks associated with volatile asset prices.


2. Black Thursday with Varying Parameters (Extended Black Thursday)

Description:

As in Scenario 1, a sudden and dramatic price drop for ETH (a Black Thursday event) hits the USDC market, replicated within the Chainrisk Cloud platform.

We’ll systematically vary key parameters (borrow collateral factor, liquidation collateral factor, etc.) to assess their impact on the protocol’s ability to manage risk and identify the optimal configuration for enhanced resilience during extreme market volatility.

Scenario:

The setup is identical to Scenario 1: a 43% ETH price decline over 14 hours within the USDC market on Ethereum, produced by altering the return values of the Compound V3 Price Oracle (which relies on Chainlink’s price feed).

We will run the Black Thursday simulation by varying the parameters below.

  • Borrow Collateral Factor
  • Liquidation Collateral Factor
  • Liquidation Factor
  • Supply Cap
  • Target Reserves
  • Store Front Price Factor
  • Interest Rate Curves
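Varying parameters systematically amounts to running the same scenario once per point in a parameter grid. The sketch below shows a generic Cartesian-product sweep; the candidate values are illustrative placeholders, not recommendations:

```typescript
// Sketch: enumerating scenario configurations as a Cartesian product
// of candidate parameter values (values are illustrative only).
function cartesian<T>(grids: T[][]): T[][] {
  return grids.reduce<T[][]>(
    (acc, grid) => acc.flatMap((combo) => grid.map((v) => [...combo, v])),
    [[]],
  );
}

const borrowCF = [0.78, 0.8, 0.82];      // Borrow Collateral Factor candidates
const liquidationCF = [0.83, 0.85];      // Liquidation Collateral Factor candidates
const supplyCapWETH = [400_000, 500_000]; // Supply Cap candidates (WETH)

// 3 x 2 x 2 = 12 Black Thursday runs, one per configuration.
const configs = cartesian<number>([borrowCF, liquidationCF, supplyCapWETH]);
```

Each configuration is then scored by the observers (e.g., bad debt and reserve drawdown) to identify the most resilient parameter set.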

Inferences:

By systematically varying key parameters such as the borrow collateral factor and liquidation factor, we can assess the protocol’s resilience under extreme market conditions and identify the optimal configuration for mitigating risks.


3. LST Depeg Scenario (wstETH / ETH Depeg):

Description:

This scenario simulates a severe depeg event for wstETH on the ETH Market in Compound v3 protocol.

While LST/base-asset depeg scenarios are rare and uncertain, the risk still needs to be considered. The wstETH/ETH rate drop of more than 10% on May 11, 2022 was the biggest historical LST/ETH depeg event. In these situations, exchange liquidity decreases, worsening insolvencies in lending and borrowing protocols. With the Chainrisk Cloud Simulation Platform, we will simulate this situation and analyze how a severe stETH depeg event impacts Compound V3 markets.

Scenario:

This simulation scenario focuses on a severe depeg of wstETH (by 20%) over a period of 14 hours in the WETH market within the Compound v3 protocol on the Ethereum network.

Compound V3’s WETH market on Ethereum has a total value locked (TVL) of approximately $122.9 million, and Lido Wrapped Staked ETH (wstETH) holds a significant share, with a supply of roughly $122.1 million.

Inferences:

This simulation will analyze how a severe stETH depeg event impacts the ETH market in Compound v3 and will stress test the market’s resilience to such occurrences.


Note: All these scenarios shall only be used to showcase the range of features on the Chainrisk Cloud Simulation Platform and should not be considered financial or economic risk mitigation advice to Compound Governance until they have been made statistically rigorous by the Chainrisk team.


Milestone 1 - Risk Simulation Engine for Compound v3

:zap: Chainrisk <> Compound Finance - 3rd Major Update !

In this update, we simulate ‘Black Thursday’ on Compound V3 which will be used to showcase the power of the Chainrisk Risk Simulation Engine.

Disclaimer: These simulations shouldn’t be used for statistical inferences. We would be working on statistically rigorous models in the third milestone to recommend risk parameters.

Chainrisk’s SoW for 1st milestone :

  1. Onboard the Compound community onto the Chainrisk Risk Simulation Engine.
  2. Integrate an in-built block explorer, transaction tracer, and visualisation library with the simulation engine.
  3. Build a robust, protocol-agnostic simulation engine that can stress-test any DeFi protocol through JS/TS scripts of Agents, Scenarios, Observers, Assertions, and contracts, reducing dependencies.
  4. Release a beta-testing version where the Chainrisk team works closely with the Compound community on specific simulations and incorporates the feedback received.

Simulation Description:

The simulation will focus on a 53% price decline for WETH (over a period of 1,000 blocks) within the USDC market on the Ethereum chain. The USDC market on Ethereum has a total value locked (TVL) of approximately $1 billion, and wrapped Ether (WETH) holds a significant share, with a supply of roughly $440 million (as of May 9, 2024).

The Chainrisk Cloud platform specifically targets the Compound V3 Price Oracle, which relies on Chainlink’s Price Feed. By altering the return values of this oracle, the simulation mimics the price drop associated with a Black Thursday event for the WETH asset.
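The arithmetic behind applying the scenario's total decline block by block is worth spelling out: a 53% drop over 1,000 blocks, applied as a constant per-block multiplier, requires f = (1 - 0.53)^(1/1000) per block (how the engine actually schedules the decline is not specified in the post; constant geometric decay is one simple choice):

```typescript
// Per-block multiplier for a 53% total decline over 1000 blocks,
// assuming a constant geometric decay (one possible schedule).
const totalDrop = 0.53;
const blocks = 1000;
const perBlockFactor = Math.pow(1 - totalDrop, 1 / blocks); // ~0.99925

// Applying it for 1000 blocks recovers the intended drawdown.
const finalRatio = Math.pow(perBlockFactor, blocks); // ~0.47
```

Overriding the oracle's return value with this decayed price each block produces the gradual crash the simulation targets.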

Procedure:

We run on-chain, agent- and scenario-based simulations: they directly interact with the blockchain to create a realistic testing environment that mimics actual operations.

These simulations fork the blockchain at a specified block height, incorporating real data up to that point, including account balances & smart contract states.

Steps for running an Agent-based Simulation -

  1. Environment Creation: On-chain simulations create a fork of the blockchain to use real, up-to-date data.

This includes all the conditions of the blockchain up to a specific block height, making the simulation as close to real-world conditions as possible.

  2. Agent-Based Interactions: Agents are deployed within these simulations to mimic the behaviour of different network participants (like lenders, borrowers, & liquidators).

This allows the simulation to evaluate how these actors would behave under various conditions without risking real assets.

  3. Scenario Testing: Various scenarios can be tested, including extreme market conditions, oracle failures, and gas price spikes.

This helps in understanding the potential impacts of such events on the protocol’s stability & security.

  4. Analysis: The simulation tracks how changes to the blockchain, initiated by user actions or smart contract executions, impact the system.

Our in-built block explorer and visualisation library help in identifying potential vulnerabilities & areas for optimization.
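The steps above can be sketched as a minimal agent-based simulation loop. The `Agent` and `Observer` interfaces and the `runSimulation` function are hypothetical, simplified stand-ins for the platform's actual scripting API.

```typescript
// Minimal sketch of an agent-based simulation loop. Interfaces and names
// are illustrative, not the actual Chainrisk API.

interface Agent {
  // Called once per simulated block; may submit transactions to the fork.
  act(block: number): void;
}

interface Observer {
  name: string;
  // Samples a metric (price, reserves, utilization, ...) each block.
  sample(block: number): number;
}

type Assertion = (block: number) => boolean;

function runSimulation(
  agents: Agent[],
  observers: Observer[],
  assertions: Assertion[],
  durationBlocks: number,
): { series: Record<string, number[]>; failures: number } {
  const series: Record<string, number[]> = {};
  for (const o of observers) series[o.name] = [];
  let failures = 0;

  for (let block = 0; block < durationBlocks; block++) {
    for (const agent of agents) agent.act(block);                     // agent interactions
    for (const o of observers) series[o.name].push(o.sample(block)); // metric collection
    for (const check of assertions) if (!check(block)) failures++;   // per-block assertions
  }
  return { series, failures };
}
```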

Agent-based Simulation Setup for Black Thursday:

Agents -

  • Liquidator Bot: Uses Uniswap flash loans to liquidate under-collateralized loans by borrowing the underlying asset (ETH or wBTC), repaying the original loan, and profiting from the price discrepancy.
  • Borrower/Supplier: Supplies collateral and takes out a loan from the protocol against a specific collateral asset.

Scenario -

Base Token: USDC | Collateral Token: WETH

  • Market Conditions: The ETH price drops by 53% in the USDC market.
  • Market State:
    • USDC Market Cap: $1B
    • ETH Supply: $440M

Observers -

  • Price of WETH: Monitors the real-time price fluctuations of WETH during the simulation.
  • Reserve Balance: Tracks the total amount of protocol reserves of the base asset before and after each liquidation event.
  • Total Borrow: Tracks the total amount of debt (total borrow) of USDC market during the simulation.
  • Utilization Rate: Tracks the current protocol utilization during the simulation.
  • User Borrow Balance: Tracks a user’s borrowed amount before and after liquidation.
  • Collateral Reserves of WETH: The total balance of protocol collateral reserves for WETH.

Assertions -

  • After each block is mined, check whether the price of WETH has dropped.

Contracts -

  • Oracle: Provides modified price feeds for assets involved (WETH, wBTC, USDC).

Inferences -

  • This simulation will reveal the protocol’s resilience under extreme market conditions and the effectiveness of Compound V3 in mitigating risks associated with volatile asset prices.
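Put together, the Black Thursday setup above could be expressed as a single configuration object. Field names are illustrative, not Chainrisk's actual schema; the fork height is left for the user to choose.

```typescript
// The Black Thursday setup above, written as a plain configuration object.
// Field names are illustrative, not Chainrisk's actual schema.
const blackThursdayConfig = {
  market: { chain: "ethereum", baseToken: "USDC", collateralToken: "WETH" },
  scenario: { wethPriceDropPct: 53, durationBlocks: 1000 },
  agents: ["LiquidatorBot", "BorrowerSupplier"],
  observers: [
    "price",
    "reserves",
    "totalBorrow",
    "utilization",
    "borrowBalance",
    "collateralReservesWETH",
  ],
  assertions: ["wethPriceDropsAfterEachBlock"],
  contracts: ["ModifiedPriceOracle"], // oracle with altered return values
};
```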

Platform Overview -

Here’s a full demo of the simulations :-

Platform Screenshots -

This shows how to create an Agent-based simulation:

  1. Filling in the basic simulation metadata ( like Name and Description )
  2. The duration of the simulation ( let’s say, 1000 blocks )
  3. The block height at which to fork
  4. Select whether the transaction tracer is turned on/off ( enabling funds flow, balance changes and internal transactions for the simulation )
  5. Select the right set of simulation Agents, Assertions, Scenarios, Observers and Contracts for the simulation configuration

All Simulations page:

  1. View the simulations created on the platform
  2. Timestamp of simulation
  3. Agents used in the simulation
  4. Simulation Length ( in blocks )
  5. Run, Edit & Delete the simulation

All Simulation Results page:

  1. View the simulations which have been run on the platform
  2. Timestamp of simulation
  3. Duration of simulation
  4. Status of simulation ( Completed | Running | Failed )
  5. Number of assertion cases: Passed & Failed
  6. Delete the simulation

Simulation → Simulation Details

  1. Has the basic simulation metadata - Name and Description
  2. Simulation Configuration: Agents, Assertions & Run History

Simulation Results → Basic Simulation Metadata of a simulation -

  1. State
  2. Timestamp
  3. Simulation Duration ( in blocks )
  4. Number of assertion cases: Passed & Failed
  5. Terminal - View Logs
  6. Jump into Block Explorer

Simulation Results Configurations →

Show the Agents, Assertions and Scenarios for the simulation

Terminal - Client Logs

Shows the main steps performed in the simulation along with the timestamps

Simulation Results Configurations - Agent Zoom in
You can read through the JS/TS scripts by simply clicking on the respective agent, scenario and assertion.

Simulation Results - Observer ( utilization )

This tracks the protocol utilisation; the sudden drop in utilisation marks the liquidation event.

Simulation Results - Observer ( borrowBalance )

This shows the borrow balance of the 10 wallets we selected for the simulation. Based on their collateral ratios, they are liquidated at different times as the price keeps decreasing. When the sum of all collateral deposits, each multiplied by its liquidation factor, is less than the total borrow value, the account becomes eligible for liquidation. After the liquidation event, the borrow balance drops to zero as the position is liquidated.
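The eligibility rule can be written as a small check. This is a minimal sketch, not Comet's actual implementation ( which works in scaled integer units on-chain ):

```typescript
// Liquidation-eligibility check: an account becomes absorbable when the sum
// of each collateral balance, valued at its price and scaled by its
// liquidation factor, falls below the borrowed amount. Minimal sketch only.

interface CollateralPosition {
  balance: number;           // units of the collateral asset
  price: number;             // price in base-token terms (USDC)
  liquidationFactor: number; // e.g. 0.93 — illustrative value
}

function isLiquidatable(
  collateral: CollateralPosition[],
  borrowValue: number, // USDC owed by the account
): boolean {
  const liquidationCapacity = collateral.reduce(
    (sum, c) => sum + c.balance * c.price * c.liquidationFactor,
    0,
  );
  return liquidationCapacity < borrowValue;
}

// 10 WETH at $3000 with a 0.93 liquidation factor covers $27,900 of debt:
isLiquidatable([{ balance: 10, price: 3000, liquidationFactor: 0.93 }], 25_000); // false
// After a 53% crash ($1410), capacity is $13,113 — the position is absorbable:
isLiquidatable([{ balance: 10, price: 1410, liquidationFactor: 0.93 }], 25_000); // true
```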

Simulation Results - Observer ( price )

This shows the gradual drop in the price of WETH: a total drop of 53% across the simulation duration set by the user.

Simulation Results - Observer ( reserves )

Reserves = usdc_balance + totalBorrow - totalSupply

  • usdc_balance = the balance of USDC held by Compound, i.e. ERC20(baseToken).balanceOf(address(this))
  • totalBorrow = the net amount of USDC owed to Compound by borrowers
  • totalSupply = the amount of USDC “owed” by Compound to suppliers

Liquidators call the absorb() function, and the absorbed loan is paid from the reserves. Suppose there is a loan of 100 USDC: when absorb() is called, 100 USDC is deducted from the reserves, so we see a sudden downward spike. Then buyCollateral() is called, and assuming a 5% liquidation discount ( 100 - 5 = 95 ), 95 USDC is added back to the reserves. Overall, the protocol is left with fewer reserves.
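The absorb()/buyCollateral() arithmetic can be traced in a few lines, assuming for simplicity that the seized collateral's value roughly equals the absorbed debt:

```typescript
// Worked example of the reserve accounting: absorb() pays the bad debt from
// reserves, then buyCollateral() sells the seized collateral back at a
// liquidation discount, so reserves end slightly lower. Simplifying
// assumption: collateral value ≈ absorbed debt.

function reservesAfterLiquidation(
  reserves: number,            // reserves before the liquidation, in USDC
  debt: number,                // USDC owed by the absorbed account
  liquidationDiscount: number, // e.g. 0.05 for a 5% discount
): number {
  const afterAbsorb = reserves - debt;               // absorb(): debt paid from reserves
  const proceeds = debt * (1 - liquidationDiscount); // buyCollateral(): discounted sale
  return afterAbsorb + proceeds;
}

reservesAfterLiquidation(1000, 100, 0.05); // ≈ 995 — a net loss of 5 USDC
```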

Simulation Results - Observer ( totalBorrow )

This tracks the total borrow; the sudden drop in the borrow value marks the liquidation event.

Simulation Results - Observer ( collateralReservesWETH )

When the absorb() function is called, the borrower’s collateral is set to zero and absorbed by the protocol, so the collateral reserves of WETH shoot up. Then, after deducting the liquidation penalty, the rest is used to cover the debt.

Block Explorer ( Latest Transactions ) shows the latest transaction details ( Transaction Hash, Status, the ‘FROM’ address, the ‘TO’ address, function called, gas used, gas limit, block number and timestamp )

Block Explorer - Transactions Overview

Block Explorer ( Latest Blocks ) shows the latest blocks details ( Block Number, Status, timestamp, number of transactions in the block, gas used, gas limit and block size )

Block Explorer - Block Details

Block Explorer - Block Transactions shows the transactions within a particular block

For economic exploit simulations, we have integrated the transaction tracer. For demonstration purposes, we simulated the Harvest Finance ( Compound V2 fork ) hack; the screenshots attached below are of that simulation.

Shows the fund flow between the addresses along with the order of transactions.

Shows the Internal Transactions

Shows the balance changes for the internal transactions between the addresses

Note : The first tranche of the grant has been released by Compound and can be verified at Safe{Wallet} – Transaction details

Milestone 2 to be completed soon! Stay tuned as we prove our commitment to the economic security of Compound :eyes:


Milestone 2 - Risk Simulation Engine for Compound v3

:zap:Chainrisk <> Compound Finance - 4th Update!

In this update, we refine the features of the simulation engine ( live logging, visualization library, and transaction tracer ), implement security measures, update the backend and cloud architecture to run high throughput simulations, prepare extensive documentation of our work, and engage with the community.

Chainrisk’s SoW for 2nd milestone :

  1. Implement TOTP-based Two-Factor User Authentication
  2. Implement User Management for Organisations
  3. Finalize the main features ( independent ABS scripting, live logging, visualization library, and transaction tracer ) of the simulation engine
  4. Implement a new cloud architecture so that it can run thousands of simulations in parallel
  5. Open-source the ABS scripts for the greater good of the community
  6. Prepare comprehensive risk management documentation for the simulations

Implement TOTP-based Two-Factor User Authentication -

TOTP-based authentication ensures a more secure login system than plain OTP-based login.

Implementation of User Management for Organisations -

Implemented Role-based Access Control ( RBAC ) for the simulation engine -

  • Admin - Can manage all aspects of the organization
  • Developer - Can view and manage simulations
  • Viewer - Can view simulations
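The three roles above map onto a simple permission table; a minimal sketch with illustrative permission names:

```typescript
// Minimal RBAC sketch for the three roles described above.
// Permission names are illustrative, not Chainrisk's actual schema.
type Role = "admin" | "developer" | "viewer";
type Permission = "manage_org" | "manage_simulations" | "view_simulations";

const rolePermissions: Record<Role, Permission[]> = {
  admin: ["manage_org", "manage_simulations", "view_simulations"],
  developer: ["manage_simulations", "view_simulations"],
  viewer: ["view_simulations"],
};

function can(role: Role, permission: Permission): boolean {
  return rolePermissions[role].includes(permission);
}

can("developer", "manage_simulations"); // true
can("viewer", "manage_simulations");    // false
```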

Finalise the main features ( independent ABS scripting, live logging, visualization library, and transaction tracer ) of the simulation engine

→ Catalog scripts for Black Thursday Simulation for Compound V3

→ Live logging

→ Visualisation Library

→ Transaction Tracer

Implement a new cloud architecture so that it can run thousands of simulations in parallel

New architecture for Cloud to run the simulations -

  1. A new cloud architecture has been implemented that can scale automatically to run 2000 simulations in parallel, each with 4 vCPU cores and 8 GB of memory. That means up to 8000 vCPU cores and 16,000 GB of memory can be fired up at a time to run different simulations.
  2. We have completely moved to AWS Graviton processors ( Linux/ARM64 machines ) for all our workloads, from simulation and database to CI/CD workloads. This saves up to 40% on cloud costs without any impact on performance. These machines also consume 60% less energy, making them more environmentally friendly.
  3. The CI/CD pipeline is now approximately 9 times faster. We now use our own EC2 instances as CI/CD runners. This speed-up is possible because code building runs directly on a Linux/ARM64 machine, which helps us push critical iterations and bug fixes faster.
  4. Simulations are now 10% faster thanks to their isolated nature and unshared CPU cores and memory.
  5. All simulations run in separate containers in completely separate environments, so one CPU-hungry simulation does not slow down another.
  6. AWS cloud costs were reduced by approximately 78.2%, making the model more sustainable.
  7. Added an extra security layer using AWS WAF to prevent malicious activity on our API routes.

Backend Update -

  1. Simulation failure handling was improved, making the engine more robust, and a per-simulation time limit was added to prevent system failure from any loop or code fault.
  2. Simulations can now be stopped midway, giving users more control over running workloads.
  3. A live progress bar and client-side logs are now shown while a simulation is running.
  4. A new logs library is available to users, who can use it to debug their agent, scenario, observer, or assertion code.
  5. Rate limiting was added on the server side for all endpoints to prevent spam and other malicious activity.
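One common way to implement server-side rate limiting like that in point 5 is a token bucket per client or endpoint; a minimal sketch, not necessarily how Chainrisk implemented it:

```typescript
// Token-bucket rate limiter sketch: each request consumes one token, and
// tokens refill continuously up to a fixed burst capacity.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,    // max burst size
    private readonly refillPerMs: number, // tokens added per millisecond
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, false if it should be rejected.
  allow(now: number = Date.now()): boolean {
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerMs);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// e.g. allow bursts of 10 requests, refilled over a minute:
const limiter = new TokenBucket(10, 10 / 60_000);
```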

Open-sourcing the ABS scripts for the greater good of the community

Prepare comprehensive risk management documentation for the simulations

Milestone 3 to be completed soon! Stay tuned as we prove our commitment to the economic security of Compound :eyes:

Compound 3rd Milestone Announcement

Chainrisk <> Compound Finance - 5th Update! :zap:

In this update, we focus on harnessing the power of the Chainrisk simulation engine to optimize the risk parameters of the Compound V3 (Comet) protocol. This report delineates a comprehensive methodology aimed at calculating key risk metrics of the protocol. This optimization framework is pivotal for mitigating systemic risks and enhancing the overall stability of the protocol.

Chainrisk’s SoW for 3rd milestone:

The scope of the economic audit is the native USDC market on Arbitrum One Chain. We will conduct a comprehensive examination of the collateral assets in the USDC market, specifically Wrapped Ether (WETH), Wrapped Bitcoin (WBTC), GMX, and Arbitrum (ARB). We will focus on stress testing the protocol and recommending risk parameters on the Arbitrum USDC market.

Goals of the Analysis:

In this milestone, we present the market risk calculation framework that Chainrisk has used to determine risk parameters for the Compound V3 protocol. We specify our methodology for finding optimal risk parameters that balance capital efficiency and protocol risk, and we use Chainrisk’s agent-based simulation platform to estimate protocol losses.

The primary objective of this analysis is to stress test the Compound V3 market under adverse conditions to evaluate its resilience and performance metrics. Key metrics such as Value at Risk (VaR) and Liquidations at Risk (LaR) within the Arbitrum USDC market will be calculated to assess the protocol’s risk exposure. Based on the findings, we will provide tailored parameter recommendations to optimize the Arbitrum USDC market on the Arbitrum chain, thereby enhancing its stability and mitigating potential systemic risks.

Chainrisk Simulation Environment:

The Chainrisk Simulation Engine is a modular testing environment configured to align with Compound’s existing state, including lenders, borrowers, liquidators, contracts, and oracles. This engine has been used to run all the simulations and produce the results in this report. The Chainrisk Simulation Engine is unique for its two-part high-fidelity simulation system.

  1. RiskEVM - This Rust-based simulation engine is optimized for Heavy Computations.
  2. On-Chain Simulation - This Engine is responsible for backtesting on forked networks and is optimized for factual precision.

Why Agent-Based Monte Carlo Simulations?

An important class of applications for agent-based Monte Carlo simulations is financial applications, like estimating the distribution of the future value of a static portfolio. Here, we want to simulate protocol losses depending on price dynamics and understand the interaction of borrowers and liquidators. In each simulation, we update what the borrowers and liquidators are doing at every time step. Because one simulation may not be able to give the complete picture, we will do multiple simulations to capture a broad range of possible outcomes. Indeed, so long as the simulations are statistically unbiased and independent, we may use such results to estimate the expected value of losses our protocols will sustain.

Computing Price Trajectories -

We decided to use a GARCH time series model. Applying a battery of statistical tests to the historical price data, we selected the order of the GARCH model and of the corresponding ARMA model. The historical data was first tested against all the assumptions of the GARCH model, followed by further statistical tests, to develop the desired price prediction model.

A correlated GARCH model, also known as a multivariate GARCH model, extends the univariate GARCH model to capture the dynamic correlations between multiple time series. This model is particularly useful in finance for analyzing the joint volatility of asset returns as it allows for the modeling of time-varying correlations that can reflect changing market conditions. By incorporating the correlation structure, the model provides a more comprehensive understanding of the co-movements and risk dynamics of the assets, making it a valuable model to be used in analyzing financial data.
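As an illustration of the univariate building block, here is a minimal GARCH(1,1) price-path generator. It is heavily simplified relative to the model described above: no ARMA mean equation, Gaussian innovations supplied by the caller, no cross-asset correlation, and illustrative parameters throughout.

```typescript
// Minimal GARCH(1,1) price-path sketch: conditional variance follows
//   sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2,
// and prices evolve by log-returns drawn with that variance.
// Requires alpha + beta < 1 for a finite unconditional variance.

function garchPath(
  startPrice: number,
  steps: number,
  omega: number,       // long-run variance weight
  alpha: number,       // weight on the last squared shock
  beta: number,        // weight on the last conditional variance
  randn: () => number, // standard-normal sampler, injected for testability
): number[] {
  const prices = [startPrice];
  let variance = omega / (1 - alpha - beta); // start at the unconditional variance
  let lastShock = 0;
  for (let t = 0; t < steps; t++) {
    variance = omega + alpha * lastShock ** 2 + beta * variance;
    lastShock = Math.sqrt(variance) * randn();
    prices.push(prices[prices.length - 1] * Math.exp(lastShock)); // log-return step
  }
  return prices;
}
```

Injecting the normal sampler keeps the generator deterministic under test; in practice one would plug in a proper Gaussian RNG and fitted parameters.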

Simulation -

The historical data used for the simulation is sourced from different assets, but it is uniformly sampled at intervals of 200 blocks over the past month. This consistent sampling interval ensures the data is comparable across different assets and allows for a standardized analysis period. Initially, we employed a single-asset model using Ethereum’s historical data. This model demonstrated a lower Mean Squared Error (MSE) compared to other models when tested across various test-split ratios. Additionally, we evaluated other efficiency statistics such as the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) to assist in selecting the most effective model.

In the subsequent phase, we expanded our approach to a multi-asset scenario by accounting for the interdependencies among the price movements of different assets. The historical data revealed a significant correlation between the assets under consideration, indicating that their price movements are not entirely independent. Leveraging this correlation structure, we modeled the multi-asset price trajectory.

Borrower Agents -

The Borrower Agents are initialized from on-chain historical data on the Arbitrum One chain. We consider only significant borrowers with a minimum borrowed amount, in our case $1000. Additionally, there is an upper bound on the Health Factor of the borrower’s address: the Health Factor must be equal to or lower than 2.
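These initialization criteria amount to a simple filter over historical accounts; the record shape below is illustrative:

```typescript
// Selecting borrower agents per the stated criteria: at least $1000
// borrowed and a health factor of at most 2. Account shape is illustrative.

interface Account {
  address: string;
  borrowedUsd: number;
  healthFactor: number;
}

function selectBorrowerAgents(
  accounts: Account[],
  minBorrowUsd = 1000,
  maxHealthFactor = 2,
): Account[] {
  return accounts.filter(
    (a) => a.borrowedUsd >= minBorrowUsd && a.healthFactor <= maxHealthFactor,
  );
}
```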

Slippage -

Building on the understanding of price trajectories, we now delve into a comprehensive analysis of slippage data to explore the relationship between slippage and trading activities.

Liquidator Agents -

Liquidator agents play a crucial role in the liquidation process within decentralized finance protocols. They operate based on a profit function that is influenced by several key parameters, which dictate their behavior during liquidations.

Key Parameters Influencing Liquidator Behavior -

  • Slippage of Collateral Asset
  • Trading Fee on DEXes
  • Liquidation Penalty
  • StoreFront Price Factor

Condition for Liquidator Bot to Buy Collateral -

To determine whether the liquidator bot should proceed with purchasing collateral, the following condition must be satisfied:

Max Liquidator Slippage (%) ≤ LP × SFP

where LP is the Liquidation Penalty and SFP is the StoreFront Price Factor, both listed among the key parameters above.

Profit Function for Liquidator -

The profit function for the liquidator agent can be expressed as follows:

P = S_C − P_C − (f × S_C) − S,

where P represents profit, S_C the sale price of the collateral, P_C the purchase price of the collateral, f the trading fee rate, and S the slippage costs.
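Both the buy condition and the profit function can be written out directly; the values in the usage examples are illustrative:

```typescript
// The liquidator's decision rule and profit function, per the text:
//   buy only if Max Liquidator Slippage (%) <= LP * SFP
//   P = S_C - P_C - f * S_C - S
// where LP = liquidation penalty, SFP = storefront price factor,
// S_C / P_C = sale / purchase price of the collateral, f = trading fee
// rate, S = slippage cost. Example values are illustrative.

function shouldBuyCollateral(maxSlippagePct: number, lp: number, sfp: number): boolean {
  return maxSlippagePct <= lp * sfp;
}

function liquidatorProfit(
  salePrice: number,     // S_C: proceeds from selling the collateral
  purchasePrice: number, // P_C: price paid via buyCollateral()
  tradingFee: number,    // f: DEX fee rate
  slippageCost: number,  // S: slippage cost in base-token terms
): number {
  return salePrice - purchasePrice - tradingFee * salePrice - slippageCost;
}

// Buying $10,000 of collateral for $9,500 at a 0.3% fee with $50 slippage:
liquidatorProfit(10_000, 9_500, 0.003, 50); // ≈ 420
```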

Estimating Value at Risk -

To estimate the Value at Risk (VaR) using a simulation-based approach, the following steps are taken:

  • Initial Simulation: 5000 simulations are run using an initial set of estimated parameters, s. Once these simulations are complete, calculate the 95th percentile of the losses generated from these 5000 iterations. This percentile represents the potential loss level that will only be exceeded 5% of the time.

  • Convergence Check: To ensure the reliability of the 95th percentile estimate, run an additional 5000 simulations using the same parameters, s. Combine these new simulations with the initial set to create a total of 10,000 simulations, and calculate the 95th percentile loss again based on this larger dataset. Compare this new 95th percentile with the one obtained from the initial 5000 simulations. If the absolute difference between these two percentiles is within a pre-specified tolerance level ϵ, the process can move forward; if not, this step is repeated until the difference is within the acceptable bound.

  • Final Calculation: Once the convergence condition is met, conduct a final round of 5000 simulations under the same parameters, bringing the total to 15,000 simulations. If the difference in 95th percentiles from the previous steps is still within ϵ, the final VaR is estimated as the 95th percentile loss from the entire set of 15,000 simulations. This ensures that the VaR calculation is both stable and accurate.
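The procedure can be sketched as an iterative estimator. This is a simplified version ( it stops at the first converged batch rather than always running the extra confirmation round ), with `simulateLossBatch` standing in for a full simulation run:

```typescript
// Simplified sketch of the batch-wise VaR estimation procedure: grow the
// loss sample in batches and stop once the 95th-percentile estimate
// changes by no more than eps between batches.

function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor(p * sorted.length));
  return sorted[idx];
}

function estimateVaR(
  simulateLossBatch: (n: number) => number[], // stand-in for a simulation run
  batchSize: number,                          // e.g. 5000 in the text
  eps: number,                                // convergence tolerance
  maxBatches = 10,                            // safety cap on iterations
): number {
  let losses = simulateLossBatch(batchSize);
  let varEstimate = percentile(losses, 0.95);
  for (let i = 1; i < maxBatches; i++) {
    losses = losses.concat(simulateLossBatch(batchSize));
    const next = percentile(losses, 0.95);
    if (Math.abs(next - varEstimate) <= eps) return next; // converged
    varEstimate = next;
  }
  return varEstimate; // best estimate if the cap is hit first
}
```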

Results -

It can be observed that each of the three simulation sets (total and asset-wise) employs the same distribution model. This denotes equal risk characteristics (nearly the same LaR distribution curve) across three sets, encompassing all extreme market conditions. In each simulation set, scenarios are generated randomly and independently. Such randomness helps prevent one scenario from having an undue effect on LaR estimates as a whole.

Every simulation set aims to cover all possible scenarios that can occur for the purposes of liquidity analysis. By including extreme events, a plausible liquidity hazard evaluation can more easily be made. The distribution function curves for the estimated LaR values are broadly similar across the sets of price trajectories, so the estimator employed for calculating LaR is statistically consistent. Notably, even if more such sets of price trajectories are created, the estimator consistently produces accurate estimates.

The LaR_ARB (blue plots) in all three simulations shows multiple peaks, meaning there are multiple scenarios of varying risk affecting the system. This could be due to changes in the market, types of collateral, or user actions that create different levels of liquidation risk. Each peak represents a group of similar risk levels. The LaR_total (red plots) also has multiple peaks, but they differ from LaR_ARB, showing that the overall risk is influenced by many combined factors. This difference suggests that while Arbitrum has its own risk patterns, the total risk comes from a mix of several sources. LaR_GMX, LaR_WBTC, and LaR_WETH exhibit the same behaviour across all three simulations: they are unimodal, concentrated around a single central value, which suggests a single dominant risk factor.

Do check out the full economic audit report here -
