This sounds like an indicator to me. I don't like the idea of effectively smoothing the data beneath the 'user space' in JBT. It sounds like we're placing multi-user consistency above everything else -- potentially even above profitability -- and I'm not convinced that's in the best interest of JBT.
With Double depth balance, couldn't we obtain a similar result in user space by using DepthBalanceEMA(period) or similar wherever we require DepthBalance()? I know it isn't exactly the same thing, but I wonder if there would be any real practical difference with this approach. It would allow:

a) existing strategies to continue working as they are, or with very tiny changes;
b) existing data to remain relevant; and
c) strategies implementing the smoothed DB to perhaps already see better, if not excellent, multi-user consistency.

Even if I haven't completely convinced everyone, can I at least ask that we take this one step at a time: implement the Double depth balance first and see if we can get the consistency we require, before making more fundamental changes that would render existing strategies and collected data irrelevant?

I do like and welcome the idea of going to Double for the depth balance in general, though, and I would also like to see us retain the total size of the book as well as the balance (or just retain the bid size and the ask size, so that both the total size and the depth balance can be recomputed as required).

On Sat, Aug 22, 2009 at 6:32 AM, nonlinear5 <[email protected]> wrote:

> Ok, I thought about this, and here is my plan for the upcoming release:
>
> 1. Instead of using the midpoint between the minimum and maximum
> balances, the 1-second depth balance will be represented by an
> exponential moving average of market depth. I have not yet decided
> what multiplier to use for that EMA, though. During every second,
> market depth changes somewhere between 10 and 70 times, with an
> average of about 30, so my guess is that the best multiplier would be
> half the average number of updates, i.e., ema multiplier = 2 / (15 + 1).
> An alternative approach is to make the multiplier dynamic:
> multiplier = 2 / (N / 2 + 1), where N is the number of depth changes
> in the last second.
> Intuitively, the EMA of the depth balance is a better representation
> of the current depth balance than the midpoint between min and max. I
> am also guessing that this new representation will be much less
> sensitive to sampling differences.
>
> 2. In the current implementation, the resulting 1-second balance is
> rounded to the nearest integer:
>
> int balance = (int) Math.round((lowBalance + highBalance) / 2d);
>
> In the next release, I'll make it a double and round it to two
> decimal places. This should also improve consistency.
>
> One big downside to all this is that it is unknown how it would
> affect strategies optimized on data in the old format. That is, if
> you have a strategy with a parameter that uses a depth balance
> threshold as a criterion for entry/exit, the value of that parameter
> would probably need to change with the new depth balance
> representation. In other words, the previously recorded data may need
> to be discarded.

You received this message because you are subscribed to the Google Groups "JBookTrader" group. To post to this group, send email to [email protected]. To unsubscribe from this group, send email to [email protected]. For more options, visit this group at http://groups.google.com/group/jbooktrader?hl=en
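For what it's worth, here is a minimal sketch of how I read the smoothing plan in the quoted message. This is illustrative only, not JBookTrader's actual implementation: the class and method names (DepthBalanceEma, onDepthChange, oneSecondBalance) are hypothetical, and I've assumed the EMA runs over per-update balances, with the dynamic multiplier 2 / (N / 2 + 1) taken from the previous second's update count.

```java
// Hypothetical sketch of the proposed EMA-smoothed 1-second depth balance.
// All names here are illustrative, not JBookTrader's real API.
public final class DepthBalanceEma {
    private double ema;
    private boolean seeded;
    private int updatesThisSecond;
    private int updatesLastSecond = 30; // seed with the quoted average of ~30 updates/sec

    /** Called on every market depth change within the current second. */
    public void onDepthChange(double bidSize, double askSize) {
        // Raw depth balance as a percentage of total book size (one common
        // convention; the real definition may differ).
        double balance = 100.0 * (bidSize - askSize) / (bidSize + askSize);
        updatesThisSecond++;

        // Dynamic multiplier from the email: 2 / (N / 2 + 1), where N is
        // the number of depth changes in the last second. With the seed of
        // 30, this starts at 2 / (15 + 1), the fixed multiplier also
        // mentioned in the email.
        double k = 2.0 / (updatesLastSecond / 2.0 + 1.0);
        // Note: for N < 2 the formula yields k > 1, which would overshoot,
        // so a real implementation would probably clamp it.
        k = Math.min(k, 1.0);

        ema = seeded ? ema + k * (balance - ema) : balance;
        seeded = true;
    }

    /** Called once per second: the smoothed balance as a double rounded
        to two decimal places (rather than the old int), per item 2. */
    public double oneSecondBalance() {
        updatesLastSecond = updatesThisSecond;
        updatesThisSecond = 0;
        return Math.round(ema * 100.0) / 100.0;
    }
}
```

A user-space DepthBalanceEMA(period) indicator could apply the same update rule on top of the existing DepthBalance() values, which is exactly the alternative I'm suggesting above: same smoothing, but without changing the recorded data underneath.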
