If you are going to use the IS performance metrics as you suggest, then not only don't you believe they are worthless, you believe they are more worthwhile than I do ...
WF testing is simply multiple OOS tests ... If you believe in the latter, there's no reason not to believe in and utilize the former, unless of course you believe that once you have a system that has been successfully tested OOS you'll never move at least the end date of your IS forward and optimize again, whether you need to or not.

--- In [email protected], "brian_z111" <[EMAIL PROTECTED]> wrote:
>
> Yes, that is true.
>
> What I am suggesting is that if the OOS holds up, then I can use the
> combined metrics of the IS and the OOS in, say, Money Management or
> other downstream analysis (assuming that the system rules remain
> unchanged from the IS top model to the OOS validation).
>
> When you say they 'fail the steps that follow', do you mean WF testing
> (are you an advocate of WF testing?) or do you add something extra
> (perhaps I should read your docs?)
>
> brian_z
>
> --- In [email protected], "Fred" <ftonetti@> wrote:
> >
> > Personally I don't see in-sample results as totally worthless nor as
> > particularly worthwhile ... I only see them as an initial yardstick
> > which results in a decision of whether or not additional work is
> > warranted, i.e. if you can't make it work in sample then there's
> > probably no point in looking beyond ... If you can, it doesn't mean
> > it's tradable. Frankly, I toss more systems because they fail in the
> > steps that follow than because I can't make the in-sample results
> > look good.
> >
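The point that walk-forward testing is "simply multiple OOS's" can be sketched in code: optimize on a rolling in-sample window, validate on the out-of-sample window that immediately follows it, then roll both windows forward. This is only an illustrative sketch; the function name and the window/step lengths are assumptions, not anything specified in the thread.

```python
def walk_forward_windows(n_bars, is_len, oos_len):
    """Yield (is_start, is_end, oos_start, oos_end) index ranges.

    Each step: optimize on the in-sample (IS) range, then validate on
    the out-of-sample (OOS) range immediately following it. The window
    then rolls forward by oos_len, so each OOS segment is used exactly
    once and the next IS period absorbs the previous OOS period.
    """
    start = 0
    while start + is_len + oos_len <= n_bars:
        is_start, is_end = start, start + is_len
        oos_start, oos_end = is_end, is_end + oos_len
        yield is_start, is_end, oos_start, oos_end
        start += oos_len  # roll forward by one OOS segment

# Example: 1000 bars, 500-bar IS, 100-bar OOS -> five OOS segments,
# i.e. one ordinary IS/OOS split repeated five times through the data.
for w in walk_forward_windows(1000, 500, 100):
    print(w)
```

A single IS/OOS split is just the degenerate case where the loop runs once; believing in one OOS segment but not in several of them chained together is the inconsistency the post above is pointing at.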
