Zhonghai Wang wrote:
Hi guys,
I've tried to make an efficient map layer with the commands shp2tile,
tile4ms, and shptree, but something seems to be wrong, because MapServer
cannot draw the maps. I ran a test in a separate folder, following
these steps:
***
data -- Forests.shp (for a country)
***
1. >shp2tile -r 50 -c 50 Forests.shp Forests_Test.shp
>>>results for this step: .shp, .shx, and .dbf files; there is no .prj
file for the output
>>>error info on the console: failed to create shp Forests_Test.shp -1833
This is likely because you are trying to create 2501 files (50 x 50 + 1),
and they all have to be open at the same time in this mode, so you
likely ran into a process file-handle limit. Also, how many points do
you have in this layer? What is the value of num_points / 2500? That
number should not be smaller than 8000 - 10000.
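For example, a coarser grid keeps you well under the file-handle limit
(the grid size here is just for illustration; pick one that fits your data):
>shp2tile -r 10 -c 10 Forests.shp Forests_Test.shp
That needs at most 10 x 10 + 1 = 101 files open at once.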
2. >tile4ms -- produces the tileindex.shp, tileindex.shx, and
tileindex.dbf files for Forests_Test.shp; there is still no .prj file
for tileindex.shp
>>>no error messages appear at this step
No .prj files are created; MapServer does not use them.
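If I remember the tile4ms usage correctly, it takes a plain-text file
listing the tile shapefiles plus a basename for the index, something
like this (the dir command assumes you are on Windows under MS4W, and
tiles.txt is a placeholder name):
>dir /b Forests_Test*.shp > tiles.txt
>tile4ms tiles.txt tileindex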
3. >shptree -- to generate .qix files for all shapefiles in this subfolder
>>> results for this step: Forests.qix, Forests_Test.qix, and tileindex.qix
>>>no error info on the console
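(shptree only needs the shapefile name; the tree depth is optional and
computed if omitted, e.g.:
>shptree tileindex.shp
)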
However, the map is only rendered if I set DATA "Forests" in the layer
object; other settings, such as DATA "Forests_Test" or TILEINDEX
"tileindex" with TILEITEM "location", do not work: the server simply
sends back a blank image.
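For reference, a minimal tiled layer of the kind I am testing looks
roughly like this (layer name, type, and color are placeholders, not my
exact mapfile):
LAYER
  NAME "forests"
  TYPE POLYGON
  STATUS DEFAULT
  TILEINDEX "tileindex"
  TILEITEM "location"
  CLASS
    COLOR 0 128 0
  END
END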
Did I make any mistakes, or does it really not work well? (I am using
MS4W 4.8.1)
I guess some errors occur when I run the shp2tile command.
What version of shp2tile are you using?
[EMAIL PROTECTED]:/data/mdata$ ~/dev/shptools/shp2tile -v
$Id: shp2tile.c,v 1.13 2005/12/05 22:38:08 woodbri Exp $
If it does not respond with the Id string above, you need to upgrade,
as earlier versions have a serious crashing bug for point data when
using the -q option. The row/column option should work; just try
decreasing the number of tiles.
Also, if you do not have something like R x C files in your directory,
then the process failed. For the row/column option you can also specify
--no-write to get just a statistics report of how the data is
distributed into the tiles.
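For example (a sketch only; I am assuming the flag goes before the file
arguments):
>shp2tile -r 50 -c 50 --no-write Forests.shp Forests_Test.shp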
-Steve
Thanks for any further info.
zhonghai
On 5/18/06, Zhonghai Wang <[EMAIL PROTECTED]> wrote:
Hi Bob, Steve,
Thank you very much for all these helpful clues. I think I've now got
the point of the shp2tile command; it's really a good tool for slicing
shapefiles.
zhonghai
On 5/18/06, Stephen Woodbridge <[EMAIL PROTECTED]> wrote:
Zhonghai Wang wrote:
Hi folks,
I have a large shapefile, and I am trying to use the shp2tile command
to slice it into pieces. Using -r and -c is OK, but I do not fully
understand the -q parameter. What does it actually mean, and what
number should I normally use for it? Something like this?
>shp2tile -q 10000 input_shapefile output_shapefile
Hi Zhonghai,
The -r -c option breaks the extents of your shapefile into R x C rows
and columns and then tries to fit each object into the best tile. If an
object crosses a tile boundary by 5-10%, it is put into a "supertile"
that can have the same extents as the original shapefile. So you will
typically end up with R x C + 1 tiles.
The -q N option splits the extents in half, either vertically or
horizontally, and then sorts the objects into the two halves, or puts
them into a supertile. Then, if either of the two halves has more than
N objects, it is again split in half, and this continues until all
files have fewer than N objects. This can cause some strange effects,
like tiles with only one or a few objects, and most tiles will have
fewer than N objects in them. Since this algorithm tends to spatially
cluster objects in a file, there is a good chance that if you need the
file, all or most objects in it will be used.
I recommend trying numbers like 10,000 and 20,000 as your initial
tries. I think you should probably not use numbers less than 8000, but
it is really up to you to try and measure the results to find what
works best for your data.
-Steve W.