On Wed, Nov 12, 2008 at 09:25:20AM -0700, Tim Schaub wrote:
> The BBOX strategy is intentionally lazy. And I agree that we can add
> properties to make it more aggressive (optionally). I think it is
> short-sighted to tie this property to a maxfeatures or limit parameter
> in a request.
I don't understand this. It seems to me that even with the 'smarter
merge' you describe below, you'd still want a way to limit the amount of
data returned from the server at once -- if nothing else, then to limit
server processing at some point.

> Part of the intention with strategies was to improve the behavior of
> vector layers with regard to merging newly requested features with
> existing ones.

I think Ivan is wandering into this territory below. For the record, I
think Ivan's responses were interesting, but not directly related to the
issue that motivated me to bring this up.

> The merge methods implemented so far are simplistic. A smarter merge
> would consider the fids of existing features before ditching all &
> adding new.
>
> In addition, the bbox strategy could be enhanced to only remove features
> that were outside the new bbox before requesting a new batch.

I don't see how either of these two methods helps in the use case I
mentioned -- specifically, that I only want to request from the server a
limited number of features which match my query parameters, and if the
total is more than that, I want to ask again when the filter changes in
a way that will affect the result. Am I misunderstanding something here?

Perhaps you're saying that the smarter merge is a prerequisite for this,
but I'm not sure I see the value of a smarter merge here -- perhaps
slightly less processing time creating new features. Almost none of the
time the application spends is going to be processing responses from the
server compared to the CPU time spent dragging the vectors around, so I
don't understand the importance.

Is this the reason? Creating fewer features when you already have the
data? If so, I suppose that's not too hard, though it depends on fids
which don't change, which I don't have for the data I'm using (there is
no unique identifier for the results I'm displaying).
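For what it's worth, the fid-aware merge Tim describes could look
roughly like the sketch below. This is plain JavaScript, not actual
OpenLayers code; the `mergeByFid` name, the feature objects, and the
`fid` property shape are all assumptions for illustration. As noted
above, the whole approach relies on fids being stable across requests.

```javascript
// Sketch of a fid-aware merge: keep features we already have (so they
// need not be re-created), collect the genuinely new ones, and report
// features that are absent from the new response so the caller can
// remove them. Hypothetical helper, not an OpenLayers API.
function mergeByFid(existing, incoming) {
  var byFid = {};
  existing.forEach(function (f) { byFid[f.fid] = f; });

  var kept = [];   // features already on the layer
  var added = [];  // features new in this response
  incoming.forEach(function (f) {
    if (byFid.hasOwnProperty(f.fid)) {
      kept.push(byFid[f.fid]);
      delete byFid[f.fid];
    } else {
      added.push(f);
    }
  });

  // Anything left over was not in the new response: candidate for removal.
  var removed = Object.keys(byFid).map(function (fid) {
    return byFid[fid];
  });
  return { kept: kept, added: added, removed: removed };
}
```

Of course, as mentioned, this buys nothing when the server provides no
stable unique identifier for the results being displayed.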
> All these were improvements that motivated the strategy code in the
> first place. And it makes sense to implement them correctly.

My goal is to start by helping to replicate the behavior of the existing
Layer.WFS so that I can stop using it. :)

Regards,
-- 
Christopher Schmidt
MetaCarta

_______________________________________________
Dev mailing list
Dev@openlayers.org
http://openlayers.org/mailman/listinfo/dev