Gregor Mosheh wrote:
It'd also be helpful, Pascal, to know whether and how you've indexed the PostGIS data, whether you've ANALYZEd, VACUUMed, or CLUSTERed it, and how much memory you've thrown at it. The PostGIS end of your setup is an entire subsystem in itself, worthy of examination and optimization.
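For example, on the PostGIS side something along these lines is a reasonable starting point (a minimal sketch -- the table and column names "mytable" and "the_geom" are placeholders for whatever your data actually uses; server memory settings such as shared_buffers and work_mem live in postgresql.conf):

  -- Spatial index, so MapServer's bounding-box queries don't scan the whole table
  CREATE INDEX mytable_the_geom_idx ON mytable USING GIST (the_geom);

  -- Refresh planner statistics and reclaim dead rows
  VACUUM ANALYZE mytable;

  -- Optionally rewrite the table in spatial-index order, so features that are
  -- close together geographically are also close together on disk
  -- (on PostgreSQL before 8.3 the syntax is: CLUSTER mytable_the_geom_idx ON mytable;)
  CLUSTER mytable USING mytable_the_geom_idx;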

Then there's the setup of your classes and what sort of expressions you're using to classify, whether you have labels enabled, and so on. The number of layers isn't necessarily as important as the total number of classes, combined with how many class expressions have to be evaluated for each vector feature before a match is found and MapServer can move on to the next feature (e.g. put the class that matches the most records at the top; see the sketch below).
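As a sketch of that class-ordering idea (the layer, connection, and attribute names here are made up): MapServer tests CLASS expressions top to bottom for every feature, so the class that matches most of your records should come first:

  LAYER
    NAME "roads"
    TYPE LINE
    CONNECTIONTYPE POSTGIS
    CONNECTION "host=localhost dbname=gis user=postgres"
    DATA "the_geom from roads"
    CLASSITEM "road_type"
    # Most common value first: the bulk of the features match on the first test
    CLASS
      EXPRESSION "residential"
      STYLE COLOR 200 200 200 END
    END
    # Rarer values further down
    CLASS
      EXPRESSION "motorway"
      STYLE COLOR 255 0 0 END
    END
  END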


Personally, I think that benchmark-style stats such as "3.9 requests per second" are rarely meaningful on their own, especially since you didn't say how you got the figure. If it takes 0.25 seconds to render a map, and your end users see maps reloading in half a second, that's pretty nice. It doesn't *necessarily* mean that you can only serve 4 requests in one second, either, unless your tests really were doing parallel requests and averaging the time. And if the stats are accurate, I'd say that 4 requests per second, or roughly 345,000 requests per day, is pretty nice for what is now considered an entry-level PC!

First of all, thank you for your answers.


OK, I ran the benchmark from a remote PC, and I know 3.9 req/s isn't a very meaningful figure ;) I just wanted to give a quick number. I benchmarked once with several requests at a time and once with only one request at a time.
Anyway..
The first thing that helped me a lot was PROCESSING "CLOSE_CONNECTION=DEFER". That got me up to 10 req/s (again, only meaningful in relation to the earlier figure ;)).
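For the archives: that directive goes inside each PostGIS LAYER block of the mapfile. A minimal sketch, with the layer name and connection string as placeholders for mine:

  LAYER
    NAME "districts"
    TYPE POLYGON
    CONNECTIONTYPE POSTGIS
    CONNECTION "host=localhost dbname=gis user=postgres"
    DATA "the_geom from districts"
    # Keep the database connection open for reuse by the other PostGIS layers
    # in the same map draw, instead of reconnecting for every layer
    PROCESSING "CLOSE_CONNECTION=DEFER"
    CLASS
      STYLE COLOR 220 220 220 OUTLINECOLOR 0 0 0 END
    END
  END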

I haven't optimized the Postgres DB so far.
Ah, the map looks like this: http://85.199.1.166/tmp/Mozambique118209637430644.png

Just give me some general ideas about what I could optimize further.




Thank you

Pascal
