Paul, I would love to talk over your proposition in more depth but, as you can guess, I am busy at the moment.
Keep in mind that I am assuming I understand what you are talking about, which is not always the case (without some concrete examples). Also, I am not certain what you mean re the geometricAve of Ln IS & OOS?

A quick observation: I like that you are pushing yourself in this area - IMO, while the commentators' books (Pardo, Aronson, Bandy etc.) and Fred's work give us an excellent base to work from, there is still more to learn in theory and in application, and we should keep picking away at it.

Yes, I am specialising in this area and I am fine with your idea, e.g. say we 'optimise' 100 combinations and 5 pass a minimum value for our 'goodness' test. There is no reason we can't OOS-test those five and then select one of them based on, say, Walk Forward Efficiency (refer to Pardo for WFE definitions). Also, there is no reason the range of WFE for the five OOS tests can't be treated as another kind of sensitivity analysis (note I haven't had time to study Fred's work yet, so if I am merely repeating something he has already said then apologies).

WFE is only one example - if we put our thinking caps on we could find more (can WFE be applied to other metrics besides net profit, as used by Pardo?).

brian_z

--- In [email protected], "Paul Ho" <[EMAIL PROTECTED]> wrote:
>
> The purpose of OOS is to make sure there is no overfitting of data, so all
> optimization is done on in-sample data. It is always possible to have more
> than one set of optimized parameters, because of different markets, different
> selections of stocks, different parameters being optimized, or even
> different fitness criteria. I currently run my optimization on nearly 3000
> tickers, generating thousands of trades. There needs to be a systematic way
> of choosing the "right" system, and I strongly argue that OOS has a major
> role in that. I think it is not invalidating the OOS, because the amount of
> data mining is very small compared to in-sample.
> I think that, examining it casually, OOS is under-utilised, at least in my
> case. From what I hear, a lot of people only optimize on individual or just
> a few tickers, and the degree of freedom is comparatively low. What I do
> here is to run optimization over the whole ASX exchange, past and present.
> Finally, I think you should consider automating the process in IO and
> allowing the user a choice.
> Cheers
> Paul
>
> _____
>
> From: [email protected] [mailto:[EMAIL PROTECTED]] On Behalf Of Fred Tonetti
> Sent: Thursday, 8 May 2008 12:12 AM
> To: [email protected]
> Subject: RE: [amibroker] Re: Fitness Criteria that incorporates Walk Forward Result
>
> Paul,
>
> One other word of caution ...
>
> If you are using OOS testing to drive the selection process of parameters
> to be used in sample, then you run the risk of invalidating the OOS.
>
> I could have automated this process in IO, but I didn't for exactly this
> reason.
>
> _____
>
> From: [email protected] [mailto:[EMAIL PROTECTED]] On Behalf Of Paul Ho
> Sent: Wednesday, May 07, 2008 8:37 AM
> To: [email protected]
> Subject: [amibroker] Re: Fitness Criteria that incorporates Walk Forward Result
>
> Hi Fred
> Yes, I want to use the composite fitness to compare different systems
> and/or use it as feedback in deciding on different parameter sets of the
> same system. This is not too dissimilar to how sensitivity analysis is
> incorporated into the fitness criteria. The only difference is that
> sensitivity analysis is done during optimization, while walk forward is
> done after a new fitness high is found. Instead of using the in-sample
> fitness as the selection criterion for the best-fit system, the composite
> criterion is used to choose among the various peak values in one system or
> in different systems.
>
> What you said - "the capability to automatically reoptimize when some
> condition related to the performance metrics occurs during the out of
> sample period i.e.
> MDD goes beyond some static threshold or when it goes beyond some
> relationship to the same" - is particularly interesting, because you are
> addressing a similar problem with a different method: in your case, you
> change the time frame and reoptimize; in my case, I am looking at refining
> my fitness criteria, so I might end up choosing a different optimized
> parameter set in the same time frame.
>
> Paul.
>
> --- In [EMAIL PROTECTED], Fred Tonetti <ftonetti@> wrote:
> >
> > Paul,
> >
> > I understand what you are saying, but I'm not sure what you do with the
> > combined fitness when you get it ... Do you use it to compare different
> > systems to each other?
> >
> > Personally, from the perspective of multiple automated WFs, I am more
> > interested in ... when to reoptimize ...
> >
> > IO already has the capability to reoptimize based on:
> >
> > - some static amount of time occurring during the OOS, i.e.
> >
> > //IO: WFAuto: Rolling: 2: Weeks
> >
> > - or some undefined amount of time based on some number of long/short
> > entries/exits etc., i.e.
> >
> > //IO: WFAuto: Rolling: 2: LongEntrys
> >
> > What I've been playing with recently is something a little different that
> > is also based on a variable amount of time in the OOS, i.e. the capability
> > to automatically reoptimize when some condition related to the performance
> > metrics occurs during the out-of-sample period, i.e. MDD goes beyond some
> > static threshold, or when it goes beyond some relationship to the same or
> > different performance metrics of in sample.
> >
> > For example ...
> >
> > Assume the In Sample Performance Metrics are prefaced by IS and the Out of
> > Sample Performance Metrics are prefaced by OS; then one should be able to
> > write (in terms of IO Directives):
> >
> > //IO: WFAuto: Rolling: Condition: OSMDD > 10 or OSMDD > 0.75 * ISMDD
> >
> > In reality, I suspect this is what most people actually do, i.e. find some
> > yardstick(s) that tell them their system is broken or about to be broken,
> > and then reoptimize at that time.
> >
> > _____
> >
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Paul Ho
> > Sent: Tuesday, May 06, 2008 10:41 AM
> > To: [EMAIL PROTECTED]
> > Subject: [amibroker] Fitness Criteria that incorporates Walk Forward Result
> >
> > Howard calls it the objective function. Fred calls it Fitness. What I
> > mean by Fitness Criteria is a mathematical function by which the fitness
> > or goodness of the system is judged, and which is used as an objective
> > criterion to compare different systems, as a score in optimization.
> >
> > My current question is: so why not incorporate the fitness in walk
> > forward analysis into our fitness criteria? What I am talking about is
> > formalising the visual inspection process. I am not proposing to use
> > out-of-sample data for optimization purposes. Rather, the parameter set
> > that has been previously optimized is forward tested, and a fitness is
> > obtained and incorporated into the original criterion to form a composite
> > fitness.
> >
> > For example: my current composite fitness is the geometric average of
> > in-sample fitness and out-of-sample fitness divided by the standard
> > deviation (?) of in-sample and out-of-sample fitness.
> >
> > Is anybody doing something in this area? What are your thoughts?
> >
> > If you are wondering why not use visual inspection: my plan is to use the
> > computer to do most of the work, and that's why I need a fitness
> > criterion.
> >
> > Cheers
> > Paul.
> >
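To make the WFE-selection idea at the top of this post concrete, here is a rough Python sketch (not AFL or IO code): OOS-test the handful of combinations that passed the in-sample 'goodness' screen, rank them by Walk Forward Efficiency (roughly, annualized OOS net profit divided by annualized IS net profit, per Pardo), and treat the spread of WFE values as a crude sensitivity check. All the candidate names, profits, and window lengths below are made up for illustration.

```python
# Hypothetical sketch: select among in-sample survivors by
# Walk Forward Efficiency (WFE).  WFE here is taken as
# annualized OOS net profit / annualized IS net profit (per Pardo).

def wfe(is_profit, is_years, oos_profit, oos_years):
    """Walk Forward Efficiency: annualized OOS over annualized IS."""
    return (oos_profit / oos_years) / (is_profit / is_years)

# Made-up results for the 5 of 100 combinations that passed the
# in-sample 'goodness' test: (name, IS net profit, OOS net profit).
candidates = [
    ("combo_17", 50_000, 12_000),
    ("combo_23", 42_000, 15_000),
    ("combo_41", 61_000,  9_000),
    ("combo_58", 38_000, 11_000),
    ("combo_90", 45_000, 13_500),
]

IS_YEARS, OOS_YEARS = 4.0, 1.0  # assumed window lengths

scored = [(name, wfe(is_p, IS_YEARS, oos_p, OOS_YEARS))
          for name, is_p, oos_p in candidates]
scored.sort(key=lambda t: t[1], reverse=True)

best = scored[0]
spread = scored[0][1] - scored[-1][1]  # WFE range = sensitivity hint
print("best by WFE:", best[0], round(best[1], 3))
print("WFE spread across survivors:", round(spread, 3))
```

A wide WFE spread across the survivors would suggest the in-sample 'goodness' screen alone is not pinning down robust parameter sets.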
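Fred's conditional directive (//IO: WFAuto: Rolling: Condition: OSMDD > 10 or OSMDD > 0.75 * ISMDD) can be paraphrased outside IO as a simple trigger check. This is only a sketch of the logic, not IO's actual implementation; the threshold numbers are just the ones from his example.

```python
def should_reoptimize(os_mdd, is_mdd,
                      static_limit=10.0, relative_factor=0.75):
    """Mimics the logic of the example IO directive
       //IO: WFAuto: Rolling: Condition: OSMDD > 10 or OSMDD > 0.75 * ISMDD
    MDD values are expressed as positive percentages."""
    return os_mdd > static_limit or os_mdd > relative_factor * is_mdd

# OOS drawdown still tame relative to both yardsticks -> keep trading
print(should_reoptimize(os_mdd=6.0, is_mdd=12.0))   # False
# OOS drawdown exceeds 75% of the in-sample MDD -> reoptimize
print(should_reoptimize(os_mdd=9.5, is_mdd=12.0))   # True
```

This matches Fred's closing observation: the yardstick that says "the system is broken, or about to be" is exactly the condition that fires the reoptimization.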
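Paul's composite fitness (geometric average of IS and OOS fitness divided by the standard deviation of the two) can also be written out. Treating it as a two-value geometric mean over a two-value sample standard deviation is just one reading of his description - note the "(?)" he put after "standard deviation" - and the geometric mean requires both fitness values to be positive. The numbers below are hypothetical.

```python
import math
import statistics

def composite_fitness(is_fit, oos_fit):
    """One reading of Paul's formula: geometric mean of the IS and
    OOS fitness values divided by their sample standard deviation.
    Penalises any large gap between IS and OOS performance."""
    if is_fit <= 0 or oos_fit <= 0:
        raise ValueError("geometric mean needs positive fitness values")
    geo = math.sqrt(is_fit * oos_fit)
    spread = statistics.stdev([is_fit, oos_fit])
    return geo / spread if spread > 0 else float("inf")

# Two hypothetical systems with similar average fitness; the second
# degrades badly out of sample, so its composite score is much lower.
print(round(composite_fitness(2.0, 1.8), 3))
print(round(composite_fitness(2.6, 1.2), 3))
```

One quirk of this form: as IS and OOS fitness converge, the denominator goes to zero and the score blows up, so in practice a floor on the standard deviation (or an additive constant) would probably be needed.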
