On Jan 19, 2014, at 12:48 PM, Saminda Wijeratne <[email protected]> wrote:
> > Also, when retrying API functions after a failure in a previous
> > attempt, there should be a way to avoid repeating already-performed steps,
> > or to gracefully roll back and redo the required steps as necessary. While
> > such actions could be transparent to the user, sometimes it might make
> > sense to notify the user of the success or failure of a retry. However,
> > this might mean keeping additional records at the registry level.
> >
> > In addition, we should also have a way of cleaning up unsubmitted
> > experiment ids. (But I am not sure whether you want to address this right
> > now.) The way I see this is to have a periodic thread which goes through
> > the table and clears up experiments which have not been submitted for a
> > defined time.
> >
> > +1. Something else we may have to think of later is data archiving
> > capabilities. We keep running into performance issues when the database
> > grows with experiment results. Unless we become experts in distributed
> > database management, we should have a better way to manage our db
> > performance issues.
>
> -1 on this. I may want to go back a year later and submit a previously
> created experiment. I think it is wrong to put a temporal bound on these;
> moreover, they serve as a good source of analytics to improve usability. As
> for database performance, now in 2014 there should be many solutions to
> handle zillions of experiments (at least that is what the social networking
> world claims).
>
> I didn't mean that the experiments should be removed from the user's grasp
> by archiving them. It is more like the idea of a memory hierarchy: the data
> which is most likely to be used should be available for quick querying. Of
> course, such data distributions should be transparent to the users.

Sure, that makes sense. I also agree that such garbage collection is a
system-level implementation detail, as are the ways of managing a high-speed
access cache.

Suresh
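For the periodic-cleanup idea discussed above, a minimal sketch of the retention policy might look like the following. All names here (`Experiment`, `ExperimentJanitor`, the field names) are hypothetical illustrations, not the actual Airavata registry schema or API; the policy check is kept as a pure function so it can be tested separately from the scheduler and the database.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical experiment record; the real registry schema will differ.
class Experiment {
    final String id;
    final boolean submitted;
    final Instant created;

    Experiment(String id, boolean submitted, Instant created) {
        this.id = id;
        this.submitted = submitted;
        this.created = created;
    }
}

public class ExperimentJanitor {

    // Select experiments eligible for cleanup: never submitted and older
    // than the retention window. Submitted experiments are never touched,
    // so previously created work can still be submitted years later.
    static List<Experiment> expired(List<Experiment> all, Duration retention, Instant now) {
        List<Experiment> result = new ArrayList<>();
        for (Experiment e : all) {
            if (!e.submitted && e.created.plus(retention).isBefore(now)) {
                result.add(e);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2014-01-19T12:00:00Z");
        List<Experiment> table = Arrays.asList(
                new Experiment("exp-old", false, now.minus(Duration.ofDays(30))),
                new Experiment("exp-submitted", true, now.minus(Duration.ofDays(30))),
                new Experiment("exp-recent", false, now.minus(Duration.ofHours(2))));
        for (Experiment e : expired(table, Duration.ofDays(7), now)) {
            System.out.println("would clean up: " + e.id);
        }
    }
}
```

In a real deployment the check would run on a background schedule, e.g. via `ScheduledExecutorService.scheduleAtFixedRate`, and the retention window would be a configuration value rather than a constant.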
