---------- Forwarded message ----------
From: Smaran Harihar <[email protected]>
Date: Sat, Nov 17, 2012 at 10:44 AM
Subject: Re: [Geoserver-users] limit to cURL
To: Andrea Aime <[email protected]>


Thanks for the reply, Andrea.

Can you tell me how I can map my .prj files to an official EPSG code? What do
I need to do for that? Presently the shapefiles are in a folder on the server;
I generated the shapefiles and .prj files from the CSV data using the following
code: <http://code.google.com/p/pyshp/issues/detail?id=3#c1>.
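
For reference, here is a minimal sketch of the kind of writer I am using
(assuming pyshp's 1.x Writer API and the WGS84 / EPSG:4326 WKT as a stand-in;
the real fields and CRS come from my CSV):

import shapefile  # pyshp, 1.x style API (assumption about the installed version)

# ESRI-style WKT for WGS84 / EPSG:4326 -- an assumption here; replace with the
# WKT of the CRS the data is actually in.
WGS84_WKT = (
    'GEOGCS["GCS_WGS_1984",DATUM["D_WGS_1984",'
    'SPHEROID["WGS_1984",6378137.0,298.257223563]],'
    'PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]]'
)

def write_point_shapefile(name, rows):
    """Write a point shapefile plus its .prj from (lon, lat, label) rows."""
    w = shapefile.Writer(shapefile.POINT)
    w.field("LABEL", "C", 40)
    for lon, lat, label in rows:
        w.point(lon, lat)
        w.record(label)
    w.save(name)                      # writes name.shp / .shx / .dbf
    with open(name + ".prj", "w") as prj:
        prj.write(WGS84_WKT)          # well-known WKT so the EPSG lookup stays fast

write_point_shapefile("example", [(-122.4, 37.8, "SF"), (-118.2, 34.0, "LA")])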

As I said, adding and publishing 1636 shapefiles was not a problem, but when
the count rises to 33k it becomes a big problem. And I will eventually need to
add many more.
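
For the publishing side, this is roughly the loop I run. It is only a sketch:
it assumes GeoServer's REST external.shp upload for shapefiles that already sit
on the server's filesystem, uses Python's requests instead of cURL for
readability, and the host, workspace, credentials and folder are placeholders:

import glob
import os
import requests

GEOSERVER = "http://localhost:8080/geoserver"   # placeholder
WORKSPACE = "myworkspace"                        # placeholder
AUTH = ("admin", "geoserver")                    # placeholder

for shp in sorted(glob.glob("/data/shapefiles/*.shp")):   # placeholder folder
    # Name the datastore after the file and point GeoServer at the existing .shp
    store = os.path.splitext(os.path.basename(shp))[0]
    url = "%s/rest/workspaces/%s/datastores/%s/external.shp" % (
        GEOSERVER, WORKSPACE, store)
    r = requests.put(url, data="file://" + shp, auth=AUTH,
                     headers={"Content-type": "text/plain"})
    r.raise_for_status()                         # stop on the first failure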

Thanks,
Smaran




> I don't think there is a hard limit, but depending on the .prj file
> attached to your shapefiles the time to import them can grow a lot.
> Basically, if the .prj content can be mapped to an official EPSG code
> using the indexes, you can import the data quickly; if not, the full
> scan takes 5 or more seconds per lookup, which would mean 46 hours of
> processing for your case.
>
> The code doing the lookup could indeed be optimized to do some caching,
> so that repeated lookups against the same code become fast after the
> first full scan.
>
> Cheers
> Andrea
>
>
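
For what it's worth, that 46-hour figure matches a quick back-of-the-envelope
check (assuming one full-scan lookup of roughly 5 seconds per shapefile):

print(33000 * 5 / 3600.0)   # 33k shapefiles x ~5 s per lookup = ~45.8 hours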



-- 
Thanks & Regards
Smaran Harihar
