Hey-

Christopher Schmidt wrote:
> On Wed, Nov 12, 2008 at 09:25:20AM -0700, Tim Schaub wrote:
>> The BBOX strategy is intentionally lazy. And I agree that we can add
>> properties to make it more aggressive (optionally). I think it is
>> short-sighted to tie this property to a maxFeatures or limit parameter
>> in a request.
>
> I don't understand this. It seems to me that even with the 'smarter
> merge' you describe below, you'd still want a way to limit the amount
> of data returned from the server at once -- if nothing else, then to
> limit server processing at some point.
Right. Communicating this limit is protocol specific. Correct?

>> Part of the intention with strategies was to improve the behavior of
>> vector layers with regard to merging newly requested features with
>> existing ones. I think Ivan is wandering into this territory below.
>
> For the record, I think Ivan's responses were interesting, but not
> directly related to the issue that motivated me to bring this up.

Of course. Didn't mean to imply anything was uninteresting. I just have
trouble reading too much discussion that doesn't have to do with the
topic. My own shortcoming. I look forward to reading it again later.

>> The merge methods implemented so far are simplistic. A smarter merge
>> would consider the fids of existing features before ditching all &
>> adding new.
>>
>> In addition, the bbox strategy could be enhanced to only remove
>> features that were outside the new bbox before requesting a new batch.
>
> I don't see how either of these two methods mentioned above helps in
> the use case I mentioned -- specifically, that I only want to request
> from the server a limited number of features which match my query
> parameters, and if the total is more than that, I want to ask again
> when the filter changes in a way that will affect the question.

I was talking about how the bbox strategy could be made smarter. It
doesn't seem to me like limiting the number of features returned (with
a maxFeatures or limit parameter in the request) has anything to do
with the bbox strategy. The bbox strategy is about determining when the
previous data bounds become invalid, determining which existing
features no longer belong on the layer, and deciding how to handle the
results of a new request (based on a previous response).

In general, I *think* you are talking about making the strategy more
aggressive with regard to determining when the current feature set is
invalid. In your specific case, this has to do with some limit that you
specified in a previous request.
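To make the fid-aware merge idea concrete, here is a rough sketch. Plain
{fid} objects stand in for OpenLayers.Feature.Vector instances, and
smartMerge is a hypothetical helper, not part of the OpenLayers API:

```javascript
// Hypothetical fid-aware merge: instead of destroying all existing
// features and re-creating everything from the new response, keep
// features whose fid also appears in the new batch and only create
// the genuinely new ones.
function smartMerge(existing, incoming) {
    var existingByFid = {};
    existing.forEach(function (f) { existingByFid[f.fid] = f; });

    var kept = [];   // already on the layer, reuse as-is
    var toAdd = [];  // new to the layer, must be created
    incoming.forEach(function (f) {
        if (existingByFid.hasOwnProperty(f.fid)) {
            kept.push(existingByFid[f.fid]);
        } else {
            toAdd.push(f);
        }
    });
    return {kept: kept, toAdd: toAdd};
}

var result = smartMerge(
    [{fid: "a"}, {fid: "b"}],  // currently on the layer
    [{fid: "b"}, {fid: "c"}]   // just returned by the server
);
// result.kept reuses the existing "b"; result.toAdd holds only "c"
```

As Christopher notes below, this only pays off when fids are stable and
unique across responses.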
How this limit is communicated, only the protocol knows. Perhaps the
limit wasn't even communicated by the protocol, but was instead
configured on the server. Someone else might have a specific need to
make the strategy more aggressive because the server returns more
detailed data based on the extent of the request (I don't know, just
making things up here). Anyway, both of these specific needs could be
satisfied by providing an option to make the strategy more aggressive.
I'm only suggesting that that option is not *just* about maxFeatures
(or limit, or whatever your protocol may use to communicate that only a
subset of the total features that meet some criteria are returned).

And limit (and sort) should be treated separately from filters (in the
general sense). I can filter a set and ask for the first 10. Or maybe
11-20. Or I can filter a set, then sort the results, and then ask for
10 (whatever, just mentioning that they are separate operations).

> Am I misunderstanding something here? Perhaps you're saying that the
> smarter merge is a prerequisite for this, but I'm not sure I see the
> value of a smarter merge here -- perhaps slightly less processing time
> creating a new feature, but almost none of the time the application
> spends is going to be processing the response from the server compared
> to the CPU time spent dragging the vectors around, so I don't
> understand the importance.
>
> Is this the reason? Creating fewer features when you already have the
> data? If so, I suppose that's not too hard, though it depends on fids
> which don't change, which I don't have for the data I'm using (there
> is no unique identifier for the results I'm displaying).

>> All these were improvements that motivated the strategy code in the
>> first place. And it makes sense to implement them correctly.
>
> My goal is to start by helping to replicate the behavior of the
> existing Layer.WFS so that I can stop using it.
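To illustrate the point that filter, sort, and limit are separable
operations over the same set, here is a toy sketch (the query helper
and its options are hypothetical, not an OpenLayers or WFS API):

```javascript
// Hypothetical query helper: filter, optionally sort, then take a
// page. Features are plain {fid, pop} stand-ins.
function query(features, predicate, opts) {
    opts = opts || {};
    // 1. filter (in the general sense)
    var result = features.filter(predicate);
    // 2. optionally sort the filtered set
    if (opts.sortBy) {
        var key = opts.sortBy;
        result = result.slice().sort(function (a, b) {
            return a[key] < b[key] ? -1 : a[key] > b[key] ? 1 : 0;
        });
    }
    // 3. take a page of the (possibly sorted) results
    var start = opts.start || 0;
    var limit = opts.limit === undefined ? result.length : opts.limit;
    return result.slice(start, start + limit);
}

var features = [
    {fid: "a", pop: 5}, {fid: "b", pop: 9}, {fid: "c", pop: 1}
];
// same filter, different follow-on operations:
var firstTwo = query(features,
    function (f) { return f.pop > 0; }, {limit: 2});
var page2 = query(features,
    function (f) { return f.pop > 0; }, {start: 1, limit: 2});
var smallestTwo = query(features,
    function (f) { return f.pop > 0; }, {sortBy: "pop", limit: 2});
```

The same filtered set can be paged or ordered independently, which is
why limit/sort belong outside the filter itself.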
:) Yeah, and I'm just trying to resist the tendency to lump distinct
behaviors together in one big blob of code. (Not meaning to indicate
anything really specific here, just saying this tends to happen when a
bunch of different needs are addressed hastily.)

Regards,

--
Tim Schaub
OpenGeo - http://opengeo.org
Expert service straight from the developers.

_______________________________________________
Dev mailing list
Dev@openlayers.org
http://openlayers.org/mailman/listinfo/dev