Interesting thread...  A couple of points from the sidelines:

My company sells a store-your-images-in-a-database product, for storing
JPEG 2000 and MrSID imagery; there are indeed people who see value in
using a DB to manage their raster assets.

Our product is not open source, but when using it with JPEG 2000 images
it *is* designed around the appropriate JP2 standards for storing the
data.  Very loosely speaking, it uses JP2's internal tiling scheme and
each such tile is stored as a blob.  Each band can be stored separately,
for the sort of workflows you describe.  (Also, note that "wavelet" does
not necessarily imply "lossy" anymore, as many assume.  Story of my
life.)
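
For a concrete (and very simplified) picture of the tile-per-blob idea,
here is a minimal sketch using SQLite.  The schema and names are
illustrative only, not our product's actual layout; a real JP2-backed
store would key tiles by the codestream's own tiling grid.

```python
import sqlite3

# Illustrative schema: one row per (band, tile) pair, tile payload as a
# BLOB.  Keying on (image, band, row, col) lets a viewbox query touch
# only the tiles it covers.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE raster_tile (
        image_id INTEGER NOT NULL,
        band     INTEGER NOT NULL,
        tile_row INTEGER NOT NULL,
        tile_col INTEGER NOT NULL,
        data     BLOB    NOT NULL,
        PRIMARY KEY (image_id, band, tile_row, tile_col)
    )
""")

# Store one 256x256 single-byte-per-pixel tile of band 0 (fake payload).
payload = bytes(256) * 256
conn.execute(
    "INSERT INTO raster_tile VALUES (?, ?, ?, ?, ?)",
    (1, 0, 0, 0, payload),
)

# A small viewbox read fetches only the covered tiles, never the whole
# image.
rows = conn.execute(
    "SELECT tile_row, tile_col, length(data) FROM raster_tile "
    "WHERE image_id = 1 AND band = 0 "
    "AND tile_row BETWEEN 0 AND 1 AND tile_col BETWEEN 0 AND 1"
).fetchall()
print(rows)  # [(0, 0, 65536)]
```

Storing each band in its own rows is what makes the
band-combination workflows below cheap: you pull only the bands the
operation needs.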

The whole
how-do-I-run-an-image-processing-workflow-across-a-chain-of-servers
thing keeps me up at night, esp. the points you bring up below (eek,
tile boundaries!).  I think WPS is "moving slower" for a few reasons:
OGC specs proceed at a deliberate pace, there are (relatively) few
people involved in WPS, and -- most importantly to me -- the workflows
are still not well enough understood to have a critical mass of people
pushing for a baseline functionality set.
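
To make the tile-boundary headache concrete: a 3x3 filter at a tile
edge needs pixels from the neighboring tile.  The usual fix is to fetch
each tile with a one-pixel "halo" of ghost pixels before filtering.  A
toy sketch (pure Python, not any particular WPS implementation; real
rasters would use NumPy/GDAL):

```python
def get_halo_tile(image, tile_r, tile_c, ts, halo=1):
    """Extract tile (tile_r, tile_c) of size ts plus `halo` ghost pixels
    on each side, replicating edge pixels at the image border."""
    h, w = len(image), len(image[0])
    r0, c0 = tile_r * ts, tile_c * ts
    out = []
    for r in range(r0 - halo, r0 + ts + halo):
        rr = min(max(r, 0), h - 1)
        row = []
        for c in range(c0 - halo, c0 + ts + halo):
            cc = min(max(c, 0), w - 1)
            row.append(image[rr][cc])
        out.append(row)
    return out

def box3(tile_with_halo):
    """3x3 mean filter over the interior; output is halo-free."""
    n = len(tile_with_halo) - 2
    m = len(tile_with_halo[0]) - 2
    return [[sum(tile_with_halo[r + dr][c + dc]
                 for dr in (0, 1, 2) for dc in (0, 1, 2)) / 9.0
             for c in range(m)] for r in range(n)]

# 8x8 image split into 4x4 tiles: filtering tile (0, 0) with its halo
# agrees with filtering the whole image and cropping.
img = [[r * 8 + c for c in range(8)] for r in range(8)]
tiled = box3(get_halo_tile(img, 0, 0, 4))
whole = box3(get_halo_tile(img, 0, 0, 8))  # ts=8: "tile" = whole image
crop = [row[:4] for row in whole[:4]]
print(tiled == crop)  # True
```

Filtering the haloed tile matches filtering the whole image and
cropping, so there are no seams at tile boundaries -- the halo width
just has to grow with the filter's kernel radius.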

-mpg

 

> -----Original Message-----
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Randy George
> Sent: Wednesday, February 20, 2008 7:38 AM
> To: 'OSGeo Discussions'
> Subject: RE: [OSGeo-Discuss] OS Spatial environment 'sizing' + Image Management
> 
> Hi Ivan and Bruce,
> 
>       Interesting; other than using JAI a bit on Space Imaging data
> (this was a while back), I have been mostly using vectors.
> 
>       I am curious to know what advantage an ArcSDE/Oracle stack
> would provide for image storage. I had understood imagery was simply
> stored as large blob fields and streamed in and out of the DB, where
> it is processed/viewed, etc. The original state, I had understood, was
> unchanged (lossy, wavelet, pk, or otherwise happening outside the DB),
> just residing in the DB rather than the disk hierarchy. Other than
> possible table-corruption issues, I imagined that the overhead of
> streaming a blob into an image object was the only real concern with
> DB storage.
> 
> But I'm getting the idea that something more is going on. Does the
> image actually get retiled (celled) and then stored in multiple
> fields? Is a multispectral image broken into bands before being
> stored in separate fields? CHIP sounds more like an additional
> database function that optimizes chipping inside a DB table, so that
> an entire image doesn't have to be read just to grab a small viewbox.
> Does ArcSDE add similar functions to the base DB, or does it just
> grab out an image and chip, threshold, convolve, histogram, etc.
> after the fact?
> 
> I'm just curious since I've been fascinated with the prospects of
> hyperspectral imagery.
> 
> From an AWS perspective, very large imagery would need some type of
> tiling, since there is a 5 GB limit on S3 objects. Larger objects are
> typically tar-gzipped and split before storage. It is hard to imagine
> a tiling scheme that large anyway. For example, Google's DigitalGlobe
> tiling pyramid uses minuscule 256x256 tiles compressed to approx.
> 18 KB/tile:
> http://kh.google.com/kh?v=3&t=trtsqtqsqqqt
> http://www.cadmaps.com/gisblog/?p=7
> 
> From a web perspective, analysis could proceed along a highly tiled
> approach. So the original 70 GB image becomes a tiled pyramid, with
> the browser view changing position inside the image pyramid. Small
> patches flow in and out of the view with each zoom and pan. Analysis
> (WPS) adds some complexity, since things like convolution algorithms
> need to be rewritten to take tile boundaries into account. Or,
> alternatively, the viewbox is re-mosaicked before running a
> server-side convolution that is subsequently streamed back to the
> browser view, which is not extremely fast. Hyperspectral bands would
> reside in separate tile pyramids so that Boolean layer operations
> could proceed server-side for viewing in the browser. Analysis really
> can't take advantage of predigested read-only schemes like Google's,
> since the whole point is to create new images from combinations of
> image bands. Consequently, WPS seems to be moving more slowly than
> WMS, WCS, and WFS.
> 
> Thanks
> Randy
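
As a back-of-envelope check on the pyramid arithmetic above: with
256x256 tiles and Randy's ~18 KB/tile figure, the sketch below counts
levels and tiles for a full power-of-two pyramid.  The scene size is a
hypothetical of mine, chosen to be in the 70 GB class at one byte per
pixel; everything else is plain geometry.

```python
import math

TILE = 256          # tile edge in pixels
KB_PER_TILE = 18    # Randy's ~18 KB compressed-tile figure

def pyramid(width_px, height_px):
    """Zoom levels and total tile count for a power-of-two pyramid:
    halve the image per level until one tile covers it."""
    levels = max(math.ceil(math.log2(max(width_px, height_px) / TILE)), 0) + 1
    total = 0
    w, h = width_px, height_px
    for _ in range(levels):
        total += math.ceil(w / TILE) * math.ceil(h / TILE)
        w, h = max(w // 2, 1), max(h // 2, 1)
    return levels, total

# Hypothetical ~150k x 150k pixel scene (~ a 70 GB-class image at 1 B/px)
levels, tiles = pyramid(150_000, 150_000)
print(levels, "levels,", tiles, "tiles,",
      tiles * KB_PER_TILE // 1_000_000, "GB-ish compressed")
```

The point of the exercise: the pyramid overhead is only about a third
more tiles than the base level, so per-band pyramids for hyperspectral
data scale roughly linearly with band count.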


[snip]
_______________________________________________
Discuss mailing list
Discuss@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/discuss
