Hi Simon --

From your description, the system was developed on a set of data, but not
tested on any data that was not used during development.  The data used
during development is called the in-sample data.  Data used for testing that
was not used during development is called the out-of-sample data.

The in-sample results always look good -- we do not stop playing with the
system until they look good.  The in-sample results have no value in
estimating the future out-of-sample results.  In order to estimate what the
likely profitability will be when traded with real money, out-of-sample
testing is necessary.
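The effect is easy to demonstrate with a toy sketch (hypothetical, not any real system): optimize a simple rule on pure noise, and the in-sample result looks good by construction, while a held-out out-of-sample segment reveals there is no edge.  The synthetic returns and the "lookback" rule below are made up for illustration only.

```python
import random

random.seed(0)

# Synthetic daily returns with no real edge: any "pattern" found
# in-sample is pure noise.
returns = [random.gauss(0, 0.01) for _ in range(2000)]

# Hold out the last 25% as out-of-sample data, untouched during development.
split = int(len(returns) * 0.75)
in_sample, out_of_sample = returns[:split], returns[split:]

def profit(rets, lookback):
    # Toy rule: take tomorrow's return if the trailing `lookback`-day
    # sum of returns is positive.
    total = 0.0
    for i in range(lookback, len(rets)):
        if sum(rets[i - lookback:i]) > 0:
            total += rets[i]
    return total

# "Develop" the system: pick the lookback that looks best in-sample.
# This is the step that guarantees the in-sample results look good.
best_lb = max(range(1, 21), key=lambda lb: profit(in_sample, lb))

print("best lookback:", best_lb)
print("in-sample profit: %.4f" % profit(in_sample, best_lb))
print("out-of-sample profit: %.4f" % profit(out_of_sample, best_lb))
```

Because the parameter was chosen to maximize the in-sample figure, that number is biased upward and says nothing about the out-of-sample figure -- which is exactly why the held-out segment has to be evaluated separately.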

I have documented systems that have over 1,300,000 closed trades and
reasonable looking results for the in-sample period, but were not profitable
out-of-sample.

There is no substitute for out-of-sample testing.

Thanks for listening,
Howard
www.quantitativetradingsystems.com


On Thu, Apr 17, 2008 at 2:29 AM, si00si00 <[EMAIL PROTECTED]> wrote:

>   Hi all,
>
> I have a friend who has developed a trading system. It is an intraday
> system that makes on average around 5 futures trades per day. We were
> discussing it the other day and a point of disagreement arose between
> us. He claims that there is no necessity for him to test the strategy
> on out of sample data because he has back tested it using over 8 years
> of historical intraday data, and the patterns the strategy predicts
> occur 70% of the time or more.
>
> My question is, does anyone know if the data-mining bias can be
> considered irrelevant when the sample size is so large? (in this case,
> the sample size is roughly 8400 trades). Put another way, with so many
> observations, how many different rules would have to be back tested in
> order for data-mining bias to creep in?
>
> Thanks in advance for any thoughts you might have!
>
> Simon
>
>  
>