> As I understand it, the major reason for not saving the entire book
> was that the data files would get way too big way too quickly.  Large
> data files would be difficult to share over the net, and would add a
> lot of disk overhead during optimization and backtesting.
>
> Over the last couple of years, it seems to me that the amount of data
> file sharing has dropped off dramatically.  Other aspects of the
> optimizer performance have been improved, which might now be able to
> be traded for disk usage.  In my backtesting and optimization system,
> I've got a 7-disk 3TB ZFS Raid, a dual-processor multi-core CPU
> configuration, and loads more RAM than I had years ago.  I'm also on
> fiber now, instead of cable.  Data file size is really not the same
> obstacle it used to be for me.
>

Disk space is cheap, indeed. It was never the issue, however. The
problem is with backtesting and optimizing these enormous quantities
of data. If you want to capture the entire book, it would be 40 items:
10 bid prices, 10 bid sizes, 10 ask prices, and 10 ask sizes. Each of
these 40 items changes approximately 4 times per second. This totals
to about 40 * 4 = 160 data pieces per second. This means that JBT would
take about 100 times longer to backtest and optimize. I have a pretty
fast machine, but some optimization jobs take several hours. With the
"full book capture", these jobs would take weeks to complete.
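To make the arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Java. The class name, method names, and the 10-level depth are illustrative only (nothing here is JBookTrader's actual API); it just reproduces the 40-item, 160-updates-per-second estimate from the numbers quoted above:

```java
// Rough estimate of the data rate for capturing the full order book.
// Assumed figures (from the discussion, not measured): 10 depth levels
// per side, each level carrying a price and a size, and each item
// changing about 4 times per second.
public class BookCaptureEstimate {

    // Total items in the book: levels * (price + size) * (bid + ask)
    static int items(int levels) {
        return levels * 2 * 2;
    }

    // Data pieces arriving per second across the whole book
    static int updatesPerSec(int levels, int updatesPerItem) {
        return items(levels) * updatesPerItem;
    }

    public static void main(String[] args) {
        int levels = 10;          // assumed depth per side
        int updatesPerItem = 4;   // assumed update rate per item

        System.out.println("Items in full book:  " + items(levels));
        System.out.println("Data pieces per sec: "
                + updatesPerSec(levels, updatesPerItem));
    }
}
```

With these assumptions the capture rate works out to 40 items and 160 data pieces per second, which is the roughly 100-fold increase over the current scheme that makes week-long optimization runs plausible.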

-- 
You received this message because you are subscribed to the Google Groups 
"JBookTrader" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/jbooktrader?hl=en.