Awesome Bruce. Look forward to it! You certainly have my curiosity piqued about the solution.
--- In [email protected], "bruce1r" <bru...@...> wrote: > > Ozzy, Howard - > > I thought about posting this when I saw the original note. Not > wanting to get into the question of methodology that you are about to > discuss, I mention that it is possible to distinguish IS from OOS. It > is a bit of a hack, though, and I was reluctant to post it because of > that. It works for me and I think it will work in every case that I > can imagine, but you never know ! Anyway, I'm about to leave until > late afternoon, but will post it when I return and can write up some > brief usage notes. > > Tomasz looks like he provided a IS / OOS flag in the optimizer DLL > interface, but for what I was interested in, I needed it in AFL and > didn't want to mod each optimizer DLL source > > BTW, for me, the reason that this was so important was that it is > critical to the use of walk-forward test results of market timing > signals. AB splices together an OOS equity curve for you, but in > timing signal applications, one really wants the spliced buy/sell > signal segments also. This is easily done in the CBT - if you know > when it is OOS. > > Later ... > > -- Bruce R > > > > --- In [email protected], "Howard B" <howardbandy@> wrote: > > > > Hi OAM -- > > > > If I understand correctly, you do not need the walk forward process > at all. > > > > > > You want to set a time period and run many tests, set a new time > period, run > > that same group of many tests, and so forth, collecting the results from > > each run from each period. Yes? > > > > If so, you might try BatMan. It used to be in the Yahoo Group > > "AmiBroker-ts/files", but I do not see it there today. Fred Tonetti > is the > > developer of BatMan. Perhaps someone can point us to the latest > version of > > BatMan? > > > > Thanks, > > Howard > > > > > > > > > > On Wed, Jan 14, 2009 at 4:36 PM, ozzyapeman <zoopfree@> wrote: > > > > > Howard, thanks for the input. I do have both of your books, btw. 
> > > And I am aware of what you state. In 99.9% of cases, what you
> > > outline is exactly what should be done.
> > >
> > > In my particular system, however, I am not searching for the
> > > "best" value, but rather for a range of values that are then
> > > looped through in OOS backtesting to Buy on the best of those
> > > conditions that happen to be "true" at the given bar. These
> > > conditions are discontinuous, and are either completely true or
> > > completely false - e.g. GapUp() vs GapDown(). Assume that I have,
> > > say, three hundred such different "states" that I am testing
> > > buy/sell conditions against. Each state can be assigned a
> > > variable, and I can optimize to check the historical performance
> > > of each state:
> > >
> > > a = Optimize( "a", 1, 1, 300, 1 );
> > >
> > > This type of optimization must be exhaustive, as opposed to using
> > > a non-exhaustive optimizer such as CMA-ES, because each state has
> > > nothing to do with its "neighbor" state. In actuality they are
> > > more complex states than simply GapUp or GapDown. Each state was
> > > arrived at by calculations from another program, and the order in
> > > which they are tested in the optimization is somewhat random. Each
> > > one is truly unique.
> > >
> > > My AFL does the above optimization, and as it does so, I fputs()
> > > the top 50 or so values of "a", according to some custom metric,
> > > to an external file. When it comes time to backtest on OOS data, I
> > > then want to pull those top 50 values and test to see which
> > > "states" are true (more than one can be true), and then Buy on the
> > > state that has the best corresponding metric.
> > >
> > > Naturally I can do all of the above manually - e.g. run an
> > > optimization on an IS period using the 'PUSH' AFL, then run a
> > > backtest on an OOS period using the 'PULL' AFL, and repeat... But
> > > I would prefer to use the Walk-Forward engine to do all this
> > > automatically and, preferably, with a single PUSH/PULL AFL.
> > > Hope that paints a clearer picture.
> > >
> > > --- In [email protected] <amibroker%40yahoogroups.com>, "Howard B" <howardbandy@> wrote:
> > > >
> > > > Hi OAM --
> > > >
> > > > The purpose of walk-forward testing is to see what happens in
> > > > the out-of-sample period after a trading system has been chosen
> > > > based on performance over an in-sample period. In walk-forward
> > > > processing, you (the system designer) do not have an opportunity
> > > > to evaluate any of the alternative systems that are associated
> > > > with parameter sets other than the one that is at the top of the
> > > > ranking according to your objective function. The purpose of the
> > > > objective function is to incorporate everything that is
> > > > important and arrive at a single-valued score.
> > > >
> > > > So, the question is -- what happens during selection of the best
> > > > of the alternatives based on the in-sample period that tells you
> > > > to change something (use a different system) in the
> > > > out-of-sample period? If you have a quantifiable answer to that
> > > > question, incorporate whatever decision making you would do into
> > > > the original objective function and run the tests in the normal
> > > > manner.
> > > >
> > > > Or am I missing something?
> > > >
> > > > Thanks,
> > > > Howard
> > > > www.blueowlpress.com
> > > >
> > > > On Wed, Jan 14, 2009 at 1:57 PM, ozzyapeman <zoopfree@> wrote:
> > > >
> > > > > Hello, I am trying to branch my code whenever the Walk-Forward
> > > > > engine is processing OOS data. Hoping someone can give me a
> > > > > tip on this.
> > > > >
> > > > > Normally, when one does a Walk-Forward test, the trading
> > > > > system being tested should be identical for both in-sample
> > > > > (IS) and out-of-sample (OOS) data.
> > > > > But there are some rare instances when one might want to use
> > > > > a slightly modified version of the main trading system for the
> > > > > OOS data. Without going into details on my trading system,
> > > > > assume for the moment that the reason for doing such a thing
> > > > > is valid. On that basis, what I essentially want to achieve in
> > > > > my AFL is something like:
> > > > >
> > > > > if ( Walk-Forward engine is processing IS data )
> > > > > { Use Trading System A }
> > > > >
> > > > > if ( Walk-Forward engine is processing OOS data )
> > > > > { Use Trading System B }
> > > > >
> > > > > However, tracing the ActionEx status indicates that the engine
> > > > > state in both of the above cases is always
> > > > > "actionExOptimizeBacktest". As such, there is no way (that I
> > > > > can think of) to effect the above branching code.
> > > > >
> > > > > Is there a way around this? Some alternate way of detecting
> > > > > when OOS data is being processed vs IS data? Or is the only
> > > > > practical solution to build a custom version of the
> > > > > Walk-Forward process using OLE with the AA objects?
> > > > >
> > > > > Thank you for any input.
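Bruce's actual hack is not shown in this thread, but one crude way to get the IS/OOS branch ozzyapeman describes is to compare the length of the date range currently being tested against the window lengths configured in the Walk-Forward settings, since the IS window is usually several times longer than the OOS window. A rough AFL sketch, assuming a 2-year IS window and a 6-month OOS window; the threshold and the placeholder systems are illustrative only:

```afl
// Crude IS/OOS heuristic - an illustrative sketch, NOT Bruce's solution.
// Assumes the walk-forward settings use an IS window much longer than
// the OOS window (e.g. IS = 2 years, OOS = 6 months).

// Status("rangefromdate") / Status("rangetodate") give the tested range
// in DateNum format: 10000 * (year - 1900) + 100 * month + day.
rangeSpan = Status( "rangetodate" ) - Status( "rangefromdate" );

// A 2-year range spans about 20000 in DateNum terms, a 6-month range
// well under 10000, so a threshold in between separates the phases.
inOOS = rangeSpan < 15000;

if( inOOS )
{
    // "Trading System B" - placeholder OOS variant
    Buy  = Cross( Close, MA( Close, 50 ) );
    Sell = Cross( MA( Close, 50 ), Close );
}
else
{
    // "Trading System A" - placeholder IS variant
    Buy  = Cross( Close, MA( Close, 20 ) );
    Sell = Cross( MA( Close, 20 ), Close );
}
```

This heuristic breaks down if the two windows have similar lengths; the robust alternatives remain the ones raised in the thread, i.e. an IS/OOS flag exposed to AFL or driving the walk-forward loop yourself via OLE.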
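The PUSH/PULL scheme ozzyapeman outlines can be sketched in AFL roughly as follows. The file path, the "CAR" metric, and the best-state selection are all assumptions for illustration, not his actual code (his version keeps the top 50 states and tests which are true on the current bar):

```afl
// Hypothetical PUSH/PULL sketch of the scheme described above.

state = Optimize( "state", 1, 1, 300, 1 );  // one discrete "state" per pass

// ... Buy/Sell rules for this state would be defined here ...

// PUSH: after each IS optimization pass, append (state, score) to an
// external file via the custom backtester, for the OOS pass to read.
SetCustomBacktestProc( "" );
if( Status( "action" ) == actionPortfolio )
{
    bo = GetBacktesterObject();
    bo.Backtest();
    score = bo.GetPerformanceStats( 0 ).GetValue( "CAR" ); // placeholder metric

    fh = fopen( "C:\\wf\\states.csv", "a" );               // assumed path
    if( fh )
    {
        fputs( NumToStr( state, 1.0 ) + "," +
               NumToStr( score, 1.2 ) + "\n", fh );
        fclose( fh );
    }
}

// PULL: on the OOS side, read the pairs back and keep the best-scoring
// state (StrExtract splits each comma-separated line).
fh = fopen( "C:\\wf\\states.csv", "r" );
bestState = 0;
bestScore = -1e9;
if( fh )
{
    while( ! feof( fh ) )
    {
        line = fgets( fh );
        s = StrToNum( StrExtract( line, 0 ) );
        m = StrToNum( StrExtract( line, 1 ) );
        if( m > bestScore )
        {
            bestScore = m;
            bestState = s;
        }
    }
    fclose( fh );
}
```

In practice both halves would live in one formula, branched on an IS/OOS test, which is exactly why detecting the walk-forward phase matters in the first place.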
