Proposal: Implement Almanak’s daily trading fee and trading liquidity recommendations

In the report you describe a methodology for computing volatility (displayed here):

[image: the volatility formula from the report]

But in the finance-related math courses I have taken (and the textbooks I checked for reference), volatility is typically expressed as the standard deviation of the set of price points, which would be

[image: the textbook standard-deviation formula]

In the alternative formula used here the number of items in the data set (N) has been replaced with the average price. Why?

It makes some reasonable sense that, because the average of the price data here will always be positive, we can replace the number of items in the data set with the average price, but I wasn’t able to come up with a stronger, intuitive reason for such a change.

The other change made, which is much more confusing to me personally, is that you entirely replaced the “sum of the squared differences between the items and the population mean” with the much simpler “difference between the highest and lowest price.”

I have a degree in mathematics and have some experience in accounting. I am certainly not a high level financial data analyst.

My guess is that the alternative formulas you have used are an industry standard for something, and during my formal education in math I never ran into that kind of thing.

Since the volatility formula is used as the basis for the observed volatility and the adaptation factor, I was hoping you would provide insight into why you chose these specific formulas instead of the more widely known ones.

Thank you so much for this report by the way. I really tried to approach it as a total skeptic and by the end you guys made me a total fanboy. I was really trying to come up with some useful criticism for the paper - for the sake of academics - and this was the only thing I could come up with.

So if it seems like a bit of a contrived nitpick… it kinda is.


Thank you for your explanation. My last concern, which I hope you can clear up, relates to volatility.

From your explanation, the optimizer checks the volatility over the past 24 hours and adjusts accordingly. What would happen in the event of a sudden spike in volatility?

Example: trading liquidity is lowered (and the optimizer will then re-check the volatility 24 hours later), but soon after the liquidity is lowered, an individual uses some sort of flash loan to hit the pool repeatedly. In this scenario, volatility would not gradually increase; rather, it would be a sudden spike.

I do not know if increased or decreased trading liquidity would matter in such a scenario, so my example may be flawed. But if decreased trading liquidity does matter in such a scenario, then what would happen in this event versus if the trading liquidity remained as is?

Thanks @55-Stephan

Hey @ZenoBNT
Thank you so much for your question, which is on point and actually a very exciting one.
I now realize our wording may cause confusion, so our apologies for that.

To provide you with a clear answer, let me take a step back and provide a bit of background.

You’re absolutely right about the definition of the volatility estimator. This indicator is widely used in finance, generally applied to close-price returns taken at a certain time interval T, so that the volatility refers to that interval. When looking at Gaussian-type processes (as is very often assumed in general quantitative finance), one can scale the volatility to a larger time interval by using the so-called “square root of time” law. Say you have the volatility for a unit time interval; you can scale it to, say, 5 time intervals by simply multiplying by the square root of 5.
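In symbols, with $\sigma_{\Delta t}$ the volatility over one unit interval, the square-root-of-time law reads

$$\sigma_{n\,\Delta t} = \sigma_{\Delta t}\,\sqrt{n}, \qquad \text{e.g. } \sigma_{5\,\Delta t} = \sigma_{\Delta t}\,\sqrt{5}.$$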

A very convenient formula, but highly misleading in many practical cases. To illustrate this, consider what happens as the time interval tends towards zero: you are assuming there is less and less risk as the interval shrinks. In other words, according to this formula, price accuracy can be indefinitely improved by reducing the measurement interval. Even more, at T=0, risk is null. That is of course not true in real markets, since price accuracy cannot be finer than the spread between bid and ask. And this is even more so in markets where liquidity is limited, with potential discontinuities due to lack of depth and the market impact of large trades.

Think of it this way: every transaction on a financial market (either CEX or DEX) is an elementary act of price measurement. Said otherwise, markets have a quantum nature.

In our case, this is a very important feature to keep in mind since we are interested in modeling the relationship between volume, price and volatility at both an aggregated time interval level and an aggregated transaction level. Our capacity to reflect price uncertainty and its sources within the mechanism of the AMM is therefore paramount. Among those sources, slippage is by far the major one we want to reflect at the transaction level since it is irreducible, even when bid-ask spread is negligible.

This has two major consequences for us:

  1. First, we need to model things based on a spread, not a unique price such as a mid price or a close price. Since we have access to OHLC data, we can build bars (time, volume or block) and look at High(H) and Low(L) over the bar. The height of the OHLC bars (P_high – P_low) represents price uncertainty over its reference time interval. Since the underlying data behind the OHLC bars is characterized by spread and volatility as the only relevant parameters beside price, the height of the bars must be entirely determined by them.
  2. We can then use a volatility estimator based on HL. The most common one is known as the Parkinson HL volatility. It relies on the sum of the squared log ratios of H/L, averaged over the number of bars. The latter is overall a far better estimator than the standard deviation of log returns from the close price.

In the report, the formula provided corresponds to a slightly shifted version of the Parkinson HL volatility, since we are using the absolute HL change normalized by the average price over the time bar instead of the log of HL. Moreover, the formula provides the volatility estimate for a one-hour time bar t, which we average over several time bars when we need to scale, say, to one day (24 time bars).
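To make the comparison concrete, here is a minimal sketch of both estimators (assuming hourly OHLC bars in a pandas DataFrame with `high` and `low` columns; the per-bar average-price proxy below is illustrative and may differ from the exact definition used in the report):

```python
import numpy as np
import pandas as pd

def parkinson_vol(bars: pd.DataFrame) -> float:
    """Standard Parkinson high-low volatility estimator over N bars."""
    log_hl = np.log(bars["high"] / bars["low"])
    return float(np.sqrt((log_hl ** 2).sum() / (4 * np.log(2) * len(bars))))

def hl_range_vol(bars: pd.DataFrame) -> float:
    """Shifted variant: absolute high-low range of each bar, normalized by
    an average price for that bar, then averaged over the bars."""
    avg_price = (bars["high"] + bars["low"]) / 2  # illustrative average-price proxy
    rel_range = (bars["high"] - bars["low"]) / avg_price
    return float(rel_range.mean())

# Example: 24 hourly bars give a 24h estimate by averaging over the bars,
# rather than by square-root-of-time scaling.
```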

I hope I was able to bring clarity to that technical matter. Do not hesitate if you still have questions.
All the best.


I think this is what I didn’t understand during my reading. I was looking for a more ‘intuitive’ way to understand why you made these decisions and this is a great answer.

Thank you.

And with that I think every possible aversion I might have to implementation is gone. I am looking forward to voting yes for this on snapshot.


Hi @Jindo
I like the scenario you came up with :slight_smile:

As a general comment, note that (almost) any kind of economic scenario can be modeled on our platform by building the agent behaviour we are interested in. This kind of stress testing approach is also one of the use cases of our tech and could be included in the protocol risk dashboard. We will be happy to discuss this further with the DAO when the time is right.

To your point, if my understanding is correct, there might be some confusion about the volatility in the scenario you depict. Our estimate of the volatility is based on the open-market price (using Binance as the main source), so the in-pool price is not taken as “ground truth” for the trading liquidity update. So your scenario is not likely to happen as described, because if a sudden influx of capital hits the pool, it would not impact our estimate of volatility.

> Example: trading liquidity is lowered (and the optimizer will then re-check the volatility 24 hours later), but soon after the liquidity is lowered, an individual uses some sort of flash loan to hit the pool repeatedly. In this scenario, volatility would not gradually increase; rather, it would be a sudden spike.

However, to pick up on the scenario of a peak in the asset volatility, on-curve liquidity will be shrunk according to the trained adaptation factor (multiplier) if the 24h volatility is above the volatility threshold.

Not executing this liquidity shrinkage would impede the protocol’s ability to stabilize and reduce the impact of price changes on user deposits, therefore increasing the exposure to impermanent loss.
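A minimal sketch of that update rule, with placeholder names and values (the real threshold and the trained adaptation factor come out of the optimization, not from this snippet):

```python
# Placeholder values for illustration only.
VOL_THRESHOLD = 0.05       # hypothetical 24h volatility threshold
ADAPTATION_FACTOR = 0.8    # hypothetical trained multiplier (< 1 shrinks on-curve liquidity)

def next_trading_liquidity(current_tl: float, vol_24h: float) -> float:
    """Shrink on-curve trading liquidity when 24h volatility exceeds the threshold."""
    if vol_24h > VOL_THRESHOLD:
        return current_tl * ADAPTATION_FACTOR
    return current_tl
```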

Hope this answers your question. Do not hesitate if you have further questions.
All the best.

@PaperStreetCapital @Jindo @umstah @dirtyfrenchman @ZenoBNT @Jon the proposal is up, please make sure to vote! :slight_smile: Thank you for your amazing feedback!


Sorry, but could you rephrase this a little for me? I’m trying to understand it and want to make sure that my understanding is correct.

As per the vote, it will grant Almanak the right to move liquidity/pool on a daily basis without the need for a DAO vote (makes sense given the daily changes to vol, etc.). So given this, I really want to make sure that any concerns I have are crystal clear. Thank you.

Hi @Jindo

Sure, let me try to clarify below.

Explanation
The depth of a pool defines the on-curve liquidity and thus how much capital of all deposits can be traded. As of the time of writing, Bancor has allocated all deposits into trading liquidity. As the price of BNT diverges, more imbalance is created and more IL is accrued by the protocol.

Dynamically changing the pool size by moving the trading liquidity to an idle, staked-liquidity state, thus changing the factor of the CPMM pool equation (equivalent to the TL), is beneficial for reducing IL.
Especially with falling prices, taking components out leaves less capital at risk, but would hurt protocol returns over time.
Our simulations show that adapting to price changes (volatility) swiftly reduces IL risk, simply because it reduces the capital exposed to price divergences and therefore the IL accrued. So, relatively speaking, not adapting TL frequently increases IL risk.
See chart below.
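As a rough constant-product illustration (generic CPMM arithmetic with made-up reserve numbers, not Bancor v3’s exact pool maths): shrinking both on-curve balances by the same factor leaves the spot price untouched while reducing the capital exposed to price divergence.

```python
# Generic constant-product sketch with illustrative numbers.
bnt_reserve, tkn_reserve = 1_000_000.0, 500_000.0

def shrink_on_curve(bnt: float, tkn: float, keep_fraction: float):
    """Move a share (1 - keep_fraction) of trading liquidity to the idle, staked state."""
    return bnt * keep_fraction, tkn * keep_fraction

price_before = bnt_reserve / tkn_reserve
bnt_after, tkn_after = shrink_on_curve(bnt_reserve, tkn_reserve, 0.5)
price_after = bnt_after / tkn_after

assert price_before == price_after  # spot price preserved
# On-curve capital (and thus IL exposure) is halved.
```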

Moreover, the frequency of the dynamic TL change is key and should be aligned with volatility. As shown in the next chart, adjusting daily or weekly ends up at the same level of TL at the end of the time interval, but the accrued IL, being based on the on-curve liquidity at any point in time, would be smaller in the former case, since the cumulative amount of TL (capital at risk) over the period is smaller in that case (14 daily adjustments vs. 1 weekly adjustment).
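To see the effect of adjustment frequency on cumulative capital at risk, here is a toy comparison with made-up numbers (not taken from the chart or the simulations): both schedules end at the same TL, but the gradual daily path accumulates less on-curve liquidity over the window.

```python
# Toy example with illustrative numbers only.
DAYS = 14
TL_START, TL_END = 100.0, 50.0

# Daily schedule: step TL down a little each day.
step = (TL_START - TL_END) / DAYS
daily_path = [TL_START - step * d for d in range(1, DAYS + 1)]

# Single-adjustment schedule: hold TL flat, then drop once at the end of the window.
single_path = [TL_START] * (DAYS - 1) + [TL_END]

print(sum(daily_path))   # cumulative TL (capital at risk) with daily adjustments
print(sum(single_path))  # cumulative TL with one adjustment at the end
```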

Hope that clarifies it all.
All the best.


I’m just a smooth-brained, tiny investor here… But I voted NO on this proposal for a few reasons I will point out. I have total faith that the larger community will figure out what is best in the end, and I have no ill intention towards the Almanak team, who sound like pretty cool folks… This is probably the first FUD I have ever shared, and I could be completely wrong, but it’s worth putting out there.

  1. Introduces a high level of trust/centralization into the tokenomics

Things went from bad to worse when the shutdown happened, right as the bear market ramped up. By the time all the paperhands left, the deficit was fully realized. I totally understand introducing financial advice, but giving up DAO rights to a multisig is the definition of centralization and a slippery slope. We already got enough flak from the multisig shutting down IL protection without the DAO, and having to (successfully) get approval retroactively. Now we are telling the larger DeFi community ‘we got some third party to take care of our protocol for a while, trust us bros’. Regardless of whether Almanak can perform (I’m sure they will!), is this going against the decentralized ethos? Introducing high levels of trust in a multisig and a third party? To me, yes. To the larger DeFi community, who by and large has lower faith in BNT, most likely hell yes.

  2. Opens an opportunity for another DAO whale

Going off some notes from Stephan in the Discord, the safety mechanism sounds pretty solid though: 48-month vesting period, performance-based, the DAO can stop vesting… but point “(7) tokens allow Almanak to post proposal, vote on proposal…” does not seem wise. If the allocation is too high, Almanak can make proposals and vote them through, creating a whale situation for the vote. Not to mention, if we give up one DAO vote now to the multisig, what if it comes up again and gets pushed through with this power? Again, this is considering a worst-case scenario which may very well never be close to true… but a precedent gets set for a third party to become highly vested in the tokenomics, which again leads to centralization.

  3. This financial advice should be paid for with fiat from the Bancor Foundation, not the DAO

Getting financial advice is super smart, super effective, and I’m sure it will turn out phenomenal… This sounds like exactly the thing the Bancor Foundation should invest in, with its own fiat currency as payment to Almanak, not an allocation of tokens to them, introducing a large ‘trusted third party’ into the tokenomics. The exploit by large whales before, which led to the shutdown, was not due to the DAO. That is a larger problem that perhaps the foundation themselves also could not have prevented… Why should the DAO get its voting power taken away, then have to ultimately live with another whale? Why not pay Almanak in fiat from the beginning… That doesn’t lead to a stripped DAO with a trusted third party able to propose votes and swing them with a big stake. The financial advice is something the foundation needs to improve its product and should, in my opinion, come out of its pocket, not the protocol’s… respectfully.

That’s my two cents. It introduces a trusted third party, removes DAO rights for the sake of the multisig again, and should be a cost paid for by the Bancor Foundation, not the DAO.


Hey @Helios

My name is Michael and I am the CEO of Almanak :)

Thank you for your message and for sharing your concerns. Asking challenging questions is the most important work every vibrant community has to do, and it allows us to provide clarity and improve our services :)

1.a) We are not getting direct access to the multisig; we are just sending recommendations that will be implemented by the current Bancor multisig owners. The recommendations will be sent via a Telegram bot and will be transparent to everyone. Because we are reacting to changes in volatility, the faster a recommendation can be implemented, the more effective it is for protocol revenue and the deficit.

Our competitor Gauntlet did indeed get access to the multisig to optimise Balancer fees; however, we are confident that if the recommendations are introduced fast enough, we do not need this access.

You can take a look at how Gauntlet performed for Balancer here:

1.b) We would say it is a path towards decentralisation rather than centralisation. Managing a complex protocol such as Bancor is impossible without an organised research and development initiative supported by technological and scientific resources. Currently, all the work is done by the Bancor team and vetted by the DAO. We are entering as an independent second team that brings a new point of view to the table. So now the DAO, rather than vetting the work of one group, is vetting the work of two independent groups.

This practice is slowly becoming a market standard. AAVE, as a trend setter and market leader, started to use Gauntlet two years ago and is currently in negotiations with another party, Chaos Labs, to double down on sophisticated risk and capital efficiency management. AAVE and the other clients of Gauntlet and Chaos Labs (Maker, Compound, SNX, dydx) understand very well that using sophisticated simulation software allows them to stay ahead of the competition from a risk and revenue perspective.

2. Here we are totally open to any type of collaboration conditions that the DAO is comfortable with. However, having a stake in the game would align incentives. We haven’t started this conversation yet, but I am sure we will find a golden mean with the DAO :)

3. We are also open to any conversation; however, as mentioned above, aligned-incentive game theory usually works very well here. All the above-mentioned projects (Maker, Compound, SNX, dydx) have a treasury that is used exactly to provide remuneration for such services. The game theory here is simple:
a) The DAO pays the provider from the treasury, with the intention that in the long term this will make the treasury bigger.
b) If the provider fails to make the treasury bigger, i.e. the costs of the provider’s services are higher than the revenues the provider is bringing in, the DAO votes to stop using the provider’s services.

We are also open to doing it on a Foundation Grants basis; however, this discussion has not been concluded yet.

Let me know if that clarifies some of your concerns :) I am more than happy to go deeper if you like :)


Michael, I appreciate the reply. It’s good to see other protocols are bringing similar companies on board, and that all sounds good and fine. I was not aware of that before… But my points still remain:

  1. I’m not saying Almanak takes over the multisig; I’m saying the DAO will only be an image and the multisig will be running the protocol (it will for the 3-month period, and it sounds like it likely will afterwards too, if decisions have to be made day by day and a vote takes 3-4)…
    There are protocols that do not even have DAOs, only multisigs. But if we give up the DAO power once, we give it up twice; how many more times before everything is the multisig? This is centralization, which I am not purely against, as it is a handoff for performance. And that can be fine, but it will be broadcast to the community, likely in negative ways. Almanak will not be independent if they have a financial incentive on the TKN side.
    That makes them dependent on the protocol (to pay for their services), and they could (again, likely not going to happen) use the power they gain for their own purposes (changing a token fee that they have an undisclosed position in, for example, options plays, etc.). Multiple community members contributing to a project is decentralization, but creating a gamified way to pay them by removing DAO rights heads towards centralization, in my opinion.

  2. Giving something away for free for 3 months (that uses advanced computers and algorithms), then negotiating terms, sounds to me like there will be an expectation, and a whale opportunity. I know businesses do not give pro-bono work out of the goodness of their heart (although it’d be great if true). Once Almanak is a whale in the protocol, will the DAO really be able to make its own decisions? Perhaps… but perhaps not, depending on the % allocation.

  3. If Almanak is paid by the Bancor Foundation, there is no potential for conflict of interest. Almanak can also be awarded BNT tokens that the foundation itself purchases from the market as a performance incentive. Then Almanak will have the same rights as anyone else in the DAO. This is the decentralized route.
    Treating Almanak on the token side as a regular user like everyone else, and on the foundation side as a business partner paid in fiat, is the decentralized route. This is a business-side need from the foundation to improve the protocol’s deficit, not a need of the DAO’s individual investors, who did not write the protocol, so why involve us in the risk?

This is what it really boils down to for me, other risks aside (because they are probably not going to happen). Again, I’m a nobody in the grand scheme of things, but if we are going a centralized route, which a multisig leading the ship is, then we just need to own it and let the larger community do what they will. I don’t think financial advice should be paid for by the DAO or the protocol, but by the foundation.
