Hi,

Don't worry, I'm following this discussion. Regarding Wouter's optimisation, I believe it is specific to his particular needs and style configuration. Sorry to say, but I don't think his approach can be generalized efficiently: in most cases the calculation cost will be far higher than the rendering cost (even if we paint all features).
For the moment I'm using the rendering approach made by Martin in the renderer he wrote a few years ago.

++ Johann

Milton Jonathan a écrit :
> Hello Wouter and Andrea,
>
> I've just read the whole exchange of e-mails from you guys, quite interesting stuff.
>
> We've also been thinking about using different renderers depending on the kind of layer (e.g., some may be perfectly rendered with OpenGL, some may not due to unsupported styling, for instance). And we have also been thinking about different kinds of caching strategies. By the way, I am also cc'ing Johann Sorel from Geomatys, since he is the cache guy over there (ahm, I'm not sure whether we should shift this discussion somewhere else to make things easier for everyone to follow..)
>
> Well, regarding the caching itself, our current approach (so far only implemented in a different project, in C++) is to cache the original geometries (not the decimated ones) in blocks of data, according to a spatially indexed tree constructed only once, on first access, for each layer/FeatureSource. These blocks are managed so that when a maximum memory limit is reached, the least used blocks are discarded to make way for new data. This way, if the user stays within a certain region, the data for that region will eventually all be in the cache and user interaction will be fast, allowing quick zooming, panning, editing and style changes. Also, blocks of data are loaded in a separate background thread, so the user can keep zooming and panning while the data is loaded and progressively rendered on the screen.
>
> On the other hand, we've also been considering caching decimated geometries, since zooming out and fitting the entire dataset may go well over the cache limit, making our current cache kind of useless in that case.
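[Editor's note: the block management Milton describes, discarding the least-used blocks once a memory cap is hit so a frequently visited region ends up fully cached, is essentially an LRU cache keyed by spatial blocks. A minimal sketch in plain Java, assuming a hypothetical raw-byte block representation and using byte length as the memory cost; this is not the C++ implementation he mentions, nor GeoTools API.]

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of LRU eviction for cached geometry blocks: blocks are kept in
// access order and the least recently used ones are dropped once the
// memory budget is exceeded. Block type and cost model are hypothetical.
class GeometryBlockCache<K> {
    private final long maxBytes;
    private long usedBytes = 0;
    // accessOrder = true makes iteration start at the least recently used entry
    private final LinkedHashMap<K, byte[]> blocks =
            new LinkedHashMap<>(16, 0.75f, true);

    GeometryBlockCache(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    synchronized void put(K key, byte[] block) {
        byte[] old = blocks.put(key, block);
        if (old != null) usedBytes -= old.length;
        usedBytes += block.length;
        // evict least recently used blocks until we are back under budget
        Iterator<Map.Entry<K, byte[]>> it = blocks.entrySet().iterator();
        while (usedBytes > maxBytes && it.hasNext()) {
            usedBytes -= it.next().getValue().length;
            it.remove();
        }
    }

    synchronized byte[] get(K key) {
        return blocks.get(key); // also refreshes the entry's access order
    }

    synchronized int size() {
        return blocks.size();
    }
}
```

A real version would store parsed geometries behind the spatial index and fill blocks from the background thread described above; the LRU bookkeeping stays the same.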
> In that respect, one of the wild ideas we started to think about was to keep in memory at least the last level of detail requested (which is not really much of a memory load, even when fitting the entire world in the user's display). This would allow some kind of rapid response whatever the user did, regardless of the situation, while the full level of detail was being loaded in a separate thread. In any case, I guess we should take a look at alternative strategies for decimating the geometries: maybe the simple one currently used by StreamingRenderer, or one based on the Douglas-Peucker algorithm (or something else).
>
> Anyway, when we stop to think about it a little, it becomes clear to us that there are different sorts of cache for different purposes, and some of them may even be combined (full-blown object caches that allow all kinds of object manipulation, visualization-specific caches, etc.)
>
> Aside from all that, a completely different subject: I was really interested when you mentioned HibernateSpatial - I hadn't heard of it, and it sounds really interesting. Is anybody thinking about somehow integrating it with GeoTools? Don't know, maybe a HibernateSpatialDataStore? I'm just speaking in the heat of the moment, and it surely wouldn't be the most interesting way to render things fast, but I think it may give applications a powerful and flexible way to define layers that have complex feature objects along with their relationships to other objects.. Any opinions?
>
> Cheers
> Milton
>
> 2008/12/9 Andrea Aime <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>>
>
> > Wouter Schaubroeck ha scritto:
> > ...
> > > > Very interesting. I'm wondering how easy it is to generalize. Certainly we can set up a 1 pixel width grid and light up pixels as points hit them, but what about non-trivial styling, labels and the like?
> > >
> > > Indeed, this is the next step.
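[Editor's note: the "1 pixel width grid" Andrea proposes is cheap to bookkeep: mark each pixel the first time a point lands on it, and skip every later point that maps to the same pixel. A hypothetical screen-space sketch in plain Java, not GeoTools code; the world-to-screen transform is assumed to happen elsewhere.]

```java
import java.util.BitSet;

// Sketch of 1-pixel-grid point thinning: a point is worth drawing only
// the first time its screen pixel is hit, so thousands of coincident or
// overlapping points cost a single draw. Screen size is hypothetical.
class PixelGridThinner {
    private final BitSet lit;
    private final int width, height;

    PixelGridThinner(int width, int height) {
        this.width = width;
        this.height = height;
        this.lit = new BitSet(width * height);
    }

    /** Returns true only the first time a point maps to this pixel. */
    boolean shouldDraw(int px, int py) {
        if (px < 0 || py < 0 || px >= width || py >= height) {
            return false; // off-screen points are never drawn
        }
        int i = py * width + px;
        if (lit.get(i)) {
            return false; // pixel already covered by an earlier point
        }
        lit.set(i);
        return true;
    }
}
```

As the quoted exchange notes, this thins plain point marks well but says nothing about non-trivial styling or labels; those need the conflict-resolution machinery discussed below.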
> > > Personally, I think the labels should be the last thing to look at here, for the following reasons:
> > > 1. Only draw labels for those points that we are certain are on the map.
> >
> > Yeah, there are a few label options that may make this less trivial. For example, we have a vendor option to prioritize labels based on a constant value you specify in the SLD (different symbolizers, different priorities) or on an attribute value.
> >
> > > 2. There are several label optimization techniques (and algorithms), and implementing these together with the rest will be hard: the positions of the labels can only be calculated after all the visible points are known.
> >
> > That's what DefaultLabelCache does, it's already in there. I have to port an improved version of it we developed in GeoServer, but it's basically applying a conflict resolution algorithm, and the improved version can also displace labels along lines (though still not for points or polygons).
> >
> > > For the non-trivial styling I was thinking of something like this: let's say we display the points with a simple car, so each point is displayed as a car. We calculate the spatial extent (on the map) of that car and keep it in memory, together with the geometry this car represents (in this case a polygon). We iterate over each point in the collection and compare its position on the map to the spatial extent of the car's geometry; if it is not contained by the spatial extent or the geometry, it is added to a MemoryFeatureCollection, and the original geometry of the car is extended to contain the new point (so this geometry may become a multipolygon!). This continues for each point in the collection. The final step is to draw the MemoryFeatureCollection.
> >
> > Hmm...
> > The car is an irregular shape, so you'd have to perform a lot of topological comparisons (a car can be only partially visible), and those are very expensive; in the end you might spend less time drawing all the cars directly (after all, it's a bit blit, something accelerated in hardware).
> >
> > > What do you think about this? I know it has some issues, like needing a beefy server to calculate all this, and I don't know if it's worth the juice... Perhaps there's a bigger performance gain if we used a different way of rendering the image (BufferedImage <-> VolatileImage), or even OpenGL? (Of course there are always the hardware requirements for those last ones.)
> >
> > Don't know about OpenGL; recent Java runtimes turn BufferedImages into volatile ones behind your back as an optimization, when possible.
> >
> > > And this is only valid for big collections of points. The technique I used was only faster if there were more than 5000 points.
> >
> > Ok, so that would be a good criterion for choosing a custom renderer: the style must be simple enough, and the points-per-pixel ratio on the image must be over a certain threshold (that would be a heuristic, of course).
> >
> > > > The current streaming renderer does exactly that, generalization on the fly before rendering. To get better performance you should keep an in-memory cache of the generalized geometries, I guess. Anyway, I may be missing something. What's your approach, in detail?
> > >
> > > I haven't implemented a memory cache yet, I'm still working on the implementation of the Douglas-Peucker algorithm (because it's only a testcase). To be honest, I didn't know the streaming renderer had generalization. I guess I may stop now, and focus perhaps on the memory cache?
> >
> > That sounds like a good idea. Wouter, meet Jonathan Milton, cc'ed. He's interested in creating such a cache as well.
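[Editor's note: since Douglas-Peucker keeps coming up as the generalization candidate, here is a compact reference sketch in plain Java, using double[] {x, y} points. This is the textbook recursive form, not Wouter's implementation and not the StreamingRenderer's built-in decimation.]

```java
import java.util.ArrayList;
import java.util.List;

// Textbook recursive Douglas-Peucker line simplification: drop every
// vertex closer than `tol` to the chord between the endpoints, splitting
// at the farthest vertex when one exceeds the tolerance.
class DouglasPeucker {
    static List<double[]> simplify(List<double[]> pts, double tol) {
        if (pts.size() < 3) return new ArrayList<>(pts);
        double[] a = pts.get(0), b = pts.get(pts.size() - 1);
        int idx = -1;
        double maxDist = 0;
        for (int i = 1; i < pts.size() - 1; i++) {
            double d = perpDist(pts.get(i), a, b);
            if (d > maxDist) { maxDist = d; idx = i; }
        }
        if (maxDist <= tol) {
            // every intermediate vertex is within tolerance: keep endpoints only
            List<double[]> out = new ArrayList<>();
            out.add(a);
            out.add(b);
            return out;
        }
        // split at the farthest vertex and recurse on both halves
        List<double[]> left = simplify(pts.subList(0, idx + 1), tol);
        List<double[]> right = simplify(pts.subList(idx, pts.size()), tol);
        List<double[]> out = new ArrayList<>(left.subList(0, left.size() - 1));
        out.addAll(right); // drop the duplicated split vertex
        return out;
    }

    // perpendicular distance from p to the line through a and b
    static double perpDist(double[] p, double[] a, double[] b) {
        double dx = b[0] - a[0], dy = b[1] - a[1];
        double len = Math.hypot(dx, dy);
        if (len == 0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
        return Math.abs(dx * (a[1] - p[1]) - (a[0] - p[0]) * dy) / len;
    }
}
```

The memory cache Andrea suggests could then simply memoize simplify() results per geometry and per tolerance level (i.e., per zoom scale).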
> > Btw, quite some time ago I wrote this on the topic:
> > http://docs.codehaus.org/display/GEOTOOLS/Datastore+caching
> > Unfortunately I never got the time/sponsoring to actually implement any of that, but it may be of use in your efforts.
> >
> > > > Anyways, in the meantime we can talk about your work and a possible merge with the StreamingRenderer (which I help to maintain); with time we'll see if it's possible to merge everything with the Geomatys work, or if we'll have to roll a new multilayer renderer for GeoTools.
> > >
> > > Is there any date set for this framework of Geomatys? I like the idea of a specific renderer for each layer!
> >
> > Last time I heard about it they were talking about the end of December, but that might have changed in the meantime (months have passed).
> >
> > > For the integration of my work into the streaming renderer, I'm setting up a development environment for GeoTools on my PC; next I'm going to study the code, implement my stuff and run some tests. I'll keep you updated!
> > >
> > > Some other stuff: I guess you've all heard of the CUDA library from NVIDIA (use the GPU of your machine for complex floating point calculations). I know there's a Java port of this library; has anyone used it together with GeoTools or other geospatial stuff?
> >
> > Nope, never tried. Personally I would try hard to get whatever speedup is possible without going down to native code, especially code that assumes a specific graphics card is available. But that's just me; if CUDA is something that makes you happy to work with, by all means do ;)
> >
> > Cheers
> > Andrea
> >
> > --
> > Andrea Aime
> > OpenGeo - http://opengeo.org <http://opengeo.org/>
> > Expert service straight from the developers.

--
Johann Sorel
Company - Geomatys
GIS Developer
Mail - [EMAIL PROTECTED]
_______________________________________________
Geotools-gt2-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/geotools-gt2-users
