Compound DAO vendor management capability & Gauntlet

Background

At this time, Compound DAO lacks the tools, structures, and dedicated resources necessary to evaluate vendor performance rigorously and hold vendors accountable. Community members, by and large, put on their best cheering hats and rarely scrutinize vendors’ deliverables closely. The concern is that this has created a permissive environment in which vendors are not sufficiently challenged to be accountable to the community, or to be diligent in vetting their own work product and deliverables.

My gut sense is that this is an industry-wide problem across many vendors and, perhaps, many DAOs. As I see it, only a few DAOs are taking a systematic and thoughtful approach to creating a scalable and sustainable DAO ecosystem (see, for example, MakerDAO’s SES initiative).

In this post, however, I will focus on examples pertaining to Gauntlet as a vendor of Compound DAO, because I have followed their work more closely than that of other vendors.

Gauntlet

This post has been hard to write for several reasons. First, I truly believe that Gauntlet has some very smart people and holds some unique intellectual property. Second, while I believe they can do better in terms of accountability, their current performance is partly a result of the permissive DAO environment and culture noted above. So my purpose in writing this post is not to engage in a blame game, but to use specific examples as case studies to initiate a discussion within the community and elicit constructive change going forward.

Some examples to draw lessons from

  1. Last November, Gauntlet’s initial Risk Dashboard showed a counterintuitive decrease in VaR while the Collateral Factor increased. They fixed a bug after it was pointed out, but basic due diligence should have caught the error upfront. The dashboard also contained several typos and stray references to Aave, which suggested there was not much quality control on their side and that they were being lackadaisical about it. See this discussion here.

  2. As another example, they were repeatedly urged to improve the quality of their documentation and to explain certain aspects of VaR better, so as to make it more relatable to an average user. Specifically, certain ideas deserve greater attention, one being that VaR represents the minimum loss the protocol can incur during tail events, not the maximum. They have yet to update their dashboard or Medium posts to include these basic caveats.

  3. Gauntlet’s responses have often been too slow or have lacked accountability. (Note: at times I was the only one interacting with their posts, so this is a personal observation.) See this example. In the traditional risk management world, if a large or notable event occurs (such as the $12.8M liquidation sustained by one account cited in the post, a significant outlier compared to others), it is very common for the report producers to make an effort to explain it. It would be considered an utter lack of accountability to tell the audience to “go look it up yourself”.

    There are two issues here:

  • Unlike, say, Dune Analytics reports, the logic behind Gauntlet’s “Market Risk Monthly” reports is not publicly available. The community cannot tweak or play around with the reports, and if there is an error or inadvertent mistake, the community cannot catch it. We simply have to take them at their word, or verify the accuracy from scratch using block explorers.

  • Second, if Gauntlet truly believes that their job is done once they publish the reports at a high level, and that they are not obligated to drill down and explain them to community members, then they need to spell out their commitments and obligations in great detail (something similar to vendor contracts). The community can then make an educated decision about whether the compensation is commensurate with the value being delivered. Compound DAO currently pays a very significant fee (about $1.9 million per quarter at the current rate). I would submit that Compound DAO deserves a lot more value and accountability from Gauntlet than what we see today.

Recommendations to Gauntlet

I urge them to consider enhancing their service on two fronts:

Transparency: In business, there is an adage: “Trust, but verify”. The crypto industry largely follows this motto: code is open source and verifiable. Gauntlet is an exception. Their models are a black box, the logic behind their reports is not accessible, and their internal quality control / assurance mechanisms are opaque. On almost all fronts, they are simply saying “trust us”, and there is no avenue for the community to verify anything. Even if their math is accurate, there could be unresolved bugs (see Example 1 above). Enterprise software companies such as Oracle hold a great deal of intellectual property, yet their products’ functionality and performance are verifiable. That is not the case with Gauntlet; their operating model is essentially “trust, but don’t verify”. I urge them to move decisively towards “trust, but verify”, even while safeguarding their intellectual property.

Without such transparency, it’s hard to judge if the community is deriving value that’s commensurate with the compensation being paid. How does anyone know if their models are correct and error-free?

Accountability: I have provided several examples above. They can take a closer look and improve the way they interact with the community.

Recommendations to the Community

Despite being an industry pioneer, Compound DAO hasn’t done much to strengthen the DAO ecosystem and make it scalable and sustainable. I urge the community to take a closer look at MakerDAO’s SES initiative / Core Unit structures, and to start by enhancing Compound DAO’s vendor management capability.

3 Likes

Apparently Gauntlet is making a couple of lazy suggestions to avoid errors, rather than actually trying to improve asset utilization within Compound. I would suggest not working with them for now, just as Balancer DAO did.

3 Likes

@ClairvoyantLabs, thanks for the link to the Balancer DAO proposal. The first bullet point in the Balancer proposal says: “Gauntlet uses closed-source code - this is against the open-source spirit of Balancer”. That is exactly the point. It is hard to judge the true value of what Gauntlet is offering because everything is closed. In the crypto community, it’s like trying to fit a square peg into a round hole.

There are a great many examples of Silicon Valley companies going open source and becoming behemoths. Gauntlet would do themselves and the crypto community a lot of good if they opened up everything (except for some key algorithms) and offered an API. Their value proposition would only increase, and a range of apps could be built on top of such an API.

Hi @RogerS, we hear your constructive feedback and always welcome thoughts from the community.

To clarify, at a high level Gauntlet’s Risk Management platform quantifies risk, optimizes risk parameters, runs economic stress tests, and raises the alarm when needed. These are the decisions that Wall Street failed to make in the 2008 financial crisis.

In late January 2022, for example, there was a market downturn. Over the preceding months, we had largely been raising the capital efficiency of the protocols we work with. Even so, none of our clients experienced any meaningful insolvencies, even though some assets fell by more than 50%. This is how our platform makes robust tradeoffs between risk and capital efficiency.

While working to accomplish this primary objective, we actively participate in the ways that are most impactful to the community, including providing analysis on lowering MKR’s borrow cap, working with OpenZeppelin and the FRAX team on listing new assets, and providing a market risk analysis on TUSD. We prioritize whatever is most valuable to the community from a market risk perspective, and we gather such feedback via user studies. We note the examples you mentioned above and apologize that we were not able to respond to you as soon as you would have liked. Moving forward, we will certainly try to be more expeditious while also balancing your requests against other needs of the protocol.

Roger, you mention transparency. Your point around verifying data for market risk reports is well taken. We have been thinking through how to make the metrics publicly available to the community. As for explainability of models, this question speaks to a common issue in AI/ML systems (or complex systems in general). For complex systems, like Gauntlet’s agent-based simulations, there are many ingested features that drive our simulations in non-linear ways. Although we will not open-source our intellectual property, to increase transparency we published our Parameter Recommendation Methodology, Model Methodology, and Deep Dive on Value-at-Risk to explain our product in detail to the community. In every set of parameter recommendations, we also include rationale and relevant datasets that feed into our simulation platform.

3 Likes

With regard to @ClairvoyantLabs’ comments, please see the response below:

2 Likes

Hey @pauljlei, it’s completely chill. No apology necessary, and it was nothing personal. This is all about value discovery, complicated by Gauntlet’s closed-source model.

Yes, it’s a brave new world, but financial discipline still matters, and I believe that protocols / DAOs need to start showing profit and optimize costs.

Regarding the specific points:

  1. Great that you’re thinking about making the market metrics publicly available. The community will surely appreciate that.

  2. I understand your concerns about open-sourcing your software. However, there’s a lot you can do to give the community confidence and a better sense of the value. Some ideas to consider:

  • Provide a simulation platform with historical or synthetic data (similar to paper trading for stocks)

  • A lot more can be done to explain your product, with process-flow diagrams showing inputs and outputs at every stage. It’s not credible to say that your models cannot be explained in more detail than what has been presented in your Medium post.

  3. Regarding the dashboard and making it more useful for the average user:
  • Explain VaR better. Other community members have asked for this too. Is VaR the maximum loss or the minimum loss? It’s both: given a confidence level of 95%, it is the maximum loss 95% of the time, and the minimum loss the remaining 5% of the time (over the defined VaR period). So the protocol can experience losses beyond the VaR amount in extreme tail events. The VaR number confuses many people, and contextual explanation would help a lot (see the sketch after this list). Also, what is the defined VaR period in your models?

  • If your VaR is a daily VaR, how often do you update the dashboard? (As of this writing, I see that the dashboard was updated two days ago.) One of your Medium posts seems to indicate that the Volatility Scalar is a proxy for forward volatility and is (or should be) recalculated daily using the previous day’s price movement. If the VaR on the dashboard is not up to date (i.e., not based on the most recent Volatility Scalar), then it is less useful precisely during extreme price events. I understand that there are computational limitations, but those limitations need to be made clear to users.

  • The dashboard is not user friendly, and the heatmaps are unintuitive. It’s understandable that, as a quant company, your focus is not on UI, but I would suggest engaging a third-party UI design firm. Also, all terminology should link to full definitions and the relevant math formulae where applicable (except, say, for sim outputs). Right now, a few terms have endnote popups, but they lack depth. (This was discussed during the dashboard user study, and it’s a bit disappointing that not much has been done so far.)
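To make the VaR and Volatility Scalar points above concrete, here is a toy sketch of my own (made-up numbers, not Gauntlet’s actual methodology or data). It computes a 95% historical one-day VaR as a loss quantile, shows that realized losses exceed it on roughly 5% of days (so in the tail, VaR is a floor, not a ceiling), and then applies a simple RiskMetrics-style EWMA update as one plausible reading of a “volatility scalar” recalculated from the previous day’s return.

```python
# Illustrative only: toy historical VaR and a simple EWMA volatility update.
# Nothing here reflects Gauntlet's internal models; all numbers are invented.
import numpy as np

rng = np.random.default_rng(42)

# Simulate 1,000 daily protocol P&L outcomes (negative = loss), fat-tailed.
daily_pnl = rng.standard_t(df=3, size=1_000) * 1e6  # dollars

# 95% one-day VaR: the loss threshold exceeded on only ~5% of days.
losses = -daily_pnl
var_95 = np.quantile(losses, 0.95)

exceedances = losses > var_95
print(f"95% one-day VaR:              ${var_95:,.0f}")
print(f"Days with losses beyond VaR:  {exceedances.mean():.1%} (expected ~5%)")
print(f"Average loss on those days:   ${losses[exceedances].mean():,.0f}")
# Takeaway: on the ~5% of tail days, VaR is the *minimum* loss, not the maximum.

def update_vol(prev_vol: float, last_return: float, lam: float = 0.94) -> float:
    """EWMA update: blend yesterday's variance with the latest squared return."""
    return float(np.sqrt(lam * prev_vol**2 + (1 - lam) * last_return**2))

vol = 0.04                    # yesterday's estimated daily volatility (4%)
vol = update_vol(vol, -0.12)  # a 12% single-day crash sharply raises the estimate
print(f"Updated daily vol estimate:   {vol:.2%}")
# If the dashboard's VaR is not refreshed with the latest volatility input,
# it will understate risk precisely during events like this.
```

The actual update rule Gauntlet uses may well differ; the point is only that a stale volatility input makes the displayed VaR least reliable exactly when it matters most, and that a short worked example like this would go a long way in the dashboard documentation.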

I will post any additional suggestions here, and I likewise request that the community post their ideas for improving the product here. Sending feedback directly to Gauntlet via the feedback link or email is not ideal, because the community would lose the advantage of a central repository for tracking progress and discussion.

2 Likes