Thanks to @blck for pointing me toward web3's Contract.events, I no longer need to loop over all transactions and can obtain all the metadata on early users we should need with a very modest number of RPC calls. @blck also pointed out that my first attempt ignored V1 users (!). I think I've rectified that in the new version.
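For anyone curious how the event-based approach works: with web3.py you fetch the protocol's own event logs (roughly `ctoken.events.Mint.create_filter(fromBlock=..., toBlock="latest").get_all_entries()`, one call per event type per block range) instead of walking every transaction. The helper below is an illustrative sketch, not the actual tool's code, showing how those event dicts flatten into the per-transaction rows described further down:

```python
def rows_from_events(events, action, user_key, amount_key, token):
    """Flatten web3-style event dicts into
    (block, address, action, amount, token) rows."""
    return [
        (ev["blockNumber"], ev["args"][user_key], action,
         ev["args"][amount_key], token)
        for ev in events
    ]

# Hand-made dict shaped like a web3.py event entry (all values fake):
fake_mint = {
    "blockNumber": 7710733,
    "args": {"minter": "0xabc", "mintAmount": 1000},
}
rows = rows_from_events([fake_mint], "MINT", "minter", "mintAmount", "$DAI")
```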
Here is the updated tool:
I've also posted the resulting data in CSV format, separately for V1 and for V2:

Compound V1 Early User List (36,942 txs by 28,021 unique addresses)
Compound V2 Early User List (254,436 txs by 218,725 unique addresses)
I think/hope this is moving us in the direction of what @getty had in mind by:
I didn't include headers, but you'll see in the lists that each row contains the block height, user address, type of interaction with the Compound protocol, token amount, and underlying token ID (cashtag) for one transaction. I also didn't deduplicate addresses: if you interacted with the protocol several times, there is a separate line for each transaction.
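Since duplicates are left in, anyone who wants one row per address can collapse them with a few lines of Python. A quick sketch (the column order matches the description above; the sample rows are made up):

```python
import csv
import io
from collections import Counter

# Sample rows in the file's column order:
# block height, user address, interaction type, token amount, underlying token
sample = io.StringIO(
    "7710733,0xaaa,MINT,1000,$DAI\n"
    "7710740,0xbbb,BORROW,50,$ETH\n"
    "7710755,0xaaa,REDEEM,400,$DAI\n"
)

# Count transactions per address, then list each address once.
tx_counts = Counter(row[1] for row in csv.reader(sample))
unique_addresses = sorted(tx_counts)
```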
As a first sanity check, it would be great if the folks who didn't see their addresses in @grasponcrypto's Dune Analytics results could check these outputs and see whether they're listed here.
As a next step, I would suggest we slice the data in a few of the different ways that have already been suggested on this forum. Here are a few that I noticed:
1. (UNI-like or socialized distribution style) equal distribution to all early user addresses
- with or without a minimum value threshold (what value?)
- with or without a bonus multiplier for V1 users (what multiplier?)
2. (COMP-like or capital-weighted distribution style) pro-rata distribution based on (value supplied+borrowed)*(time)
- how should early liquidators be included in a pro-rata distribution, if at all? They lack the "time" axis that suppliers and borrowers have.
3. Pro-rata distribution based only on total value supplied, borrowed, and liquidated, not based on time.
- for either 2 or 3, should the pro-rata distribution be rescaled to elevate small early users relative to early whales? I have seen quadratic scaling suggested; or we could take even more aggressive logarithmic scaling, where the amount received grows linearly with each order of magnitude of value supplied to, and/or borrowed from, the protocol.
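To make the scaling options concrete, here is a rough sketch of how the weighting curves compare. This is pure illustration: the function names are mine, option 2's time-weighting would multiply value by blocks held before applying the curve, and none of this fetches real prices:

```python
import math

def weight(value_usd, style):
    """Relative distribution weight for one address under each proposal."""
    if style == "equal":        # option 1: every early user counts the same
        return 1.0
    if style == "pro_rata":     # options 2/3: linear in value (x time for 2)
        return float(value_usd)
    if style == "quadratic":    # sqrt rescaling, favors small users
        return math.sqrt(value_usd)
    if style == "log":          # linear per order of magnitude, flattest curve
        return math.log10(value_usd) if value_usd > 1 else 0.0
    raise ValueError(f"unknown style: {style}")

# Compare a $100 user to a $1,000,000 whale under each curve:
ratios = {
    style: weight(1_000_000, style) / weight(100, style)
    for style in ("equal", "pro_rata", "quadratic", "log")
}
# equal -> 1x, pro_rata -> 10000x, quadratic -> 100x, log -> 3x
```

The whale-to-small-user ratio drops from 10,000x under straight pro-rata to 100x under quadratic scaling and just 3x under logarithmic scaling, which is why the log curve is the most aggressive flattener of the three.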
The script does not currently fetch the price data that would be needed for distribution styles 2 and 3. If someone else wants to add that piece of the puzzle, feel free to submit a pull request! Otherwise I'll take a stab at it, but again this isn't really my wheelhouse, so it might take me a while.
What do folks think? Are there other ways of slicing the data you'd like to see? Can we narrow it down to one or two of these options before producing the actual list of addresses and distribution amounts for a proposal?
Last but not least, will we be able to pull together enough COMP to submit a CAP once we've converged on the details? I can get us a few percent of the way there; maybe if there are a few dozen of us with a few COMP each…