I know this topic was given a lot of consideration during development
of the early versions of JBT, but I'd like to revisit it because I've
been developing some indicators that require more data from the book
than is currently saved in the market data files.

As I understand it, the major reason for not saving the entire book
was that the data files would get way too big way too quickly.  Large
data files would be difficult to share over the net, and would add a
lot of disk overhead during optimization and backtesting.

Over the last couple of years, it seems to me that the amount of data
file sharing has dropped off dramatically.  Other aspects of
optimizer performance have also improved, and some of those gains
could now be traded for disk usage.  In my backtesting and
optimization system,
I've got a 7-disk 3TB ZFS Raid, a dual-processor multi-core CPU
configuration, and loads more RAM than I had years ago.  I'm also on
fiber now, instead of cable.  Data file size is really not the same
obstacle it used to be for me.

I'm actually not proposing saving the entire book.  The indicators
I've been testing use a size-weighted average price, computed from the
size of each line in the book, rather than the midpoint price.  This
is easy to do in real time, but impossible to compute from the current
data format for backtesting and optimization.  I'm simply interested in
reopening the data format discussion to see if anyone else has found
that the current format does not provide enough information, and if
file size is less of an obstacle for other people now that a couple
years have passed.
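For concreteness, here is a minimal sketch of the kind of calculation I
mean, in plain Java.  The `Level` record and its field names are my own
illustration, not JBookTrader's actual book classes; the point is just
that the result depends on the size at every line of the book, which the
current data format doesn't preserve:

```java
// Hypothetical sketch of a size-weighted book price (vs. the midpoint).
// The Level record is illustrative only, not JBookTrader's actual API.
public class SizeWeightedPrice {
    /** One line of the book: a price and the size resting at it. */
    public record Level(double price, int size) {}

    /**
     * Size-weighted average over the given book levels:
     * sum(price * size) / sum(size).  Returns NaN on an empty book.
     */
    public static double compute(Level[] levels) {
        double weightedSum = 0;
        long totalSize = 0;
        for (Level level : levels) {
            weightedSum += level.price() * level.size();
            totalSize += level.size();
        }
        return totalSize == 0 ? Double.NaN : weightedSum / totalSize;
    }

    public static void main(String[] args) {
        Level[] book = {
            new Level(100.0, 300),  // best bid, large size
            new Level(100.1, 100),  // best ask, small size
        };
        // The midpoint would be 100.05 regardless of size; weighting
        // by size pulls the average toward the heavier bid.
        System.out.println(compute(book));
    }
}
```

Computing this live off the incoming depth updates is trivial; the
problem is purely that the per-line sizes aren't in the saved files.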

-- 
You received this message because you are subscribed to the Google Groups 
"JBookTrader" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/jbooktrader?hl=en.
