Re: [OSGeo-Discuss] OS Spatial environment 'sizing' + Image Management

2008-02-19 Thread Lucena, Ivan

Hi Bruce,

Here I am again...

Randy's suggestions are very valuable and well founded, but I have a 
special interest in storing raster data in databases, so that is why I 
asked about it.


Yes, raster data is chunky and not very fluid, but I love to hear about 
successful experiences like Bruce's. And as Bruce also mentioned, 
analytical processes often need to query on cell space rather than on bands.


Remember that decades ago some of us were discussing the disadvantages 
of storing *vector* data in databases; now that is the norm for 
client/server applications.


Bruce mentioned SDE and Oracle, but what are the *open source* options 
for doing *image management* in open source databases, and who is using them?


I can only think of two, the PostGIS CHIP datatype and the TerraLib 
schemas (for MySQL, PostgreSQL, and commercial RDBMSs), but I don't know 
of any *sizable* project that is using them.
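As an aside for readers unfamiliar with how such schemas cope with "chunky" raster data: they generally cut the image into fixed-size tiles and store one row per tile. A minimal sketch of that tiling follows; the function and its layout are illustrative only, not the actual CHIP or TerraLib implementation.

```python
# Sketch: split a raster into fixed-size tiles, the unit a
# raster-in-database schema would typically store one row per.
# Illustrative only -- not the CHIP type's or TerraLib's real layout.

def tile_raster(width, height, tile_size):
    """Return (col, row, x0, y0, w, h) for each tile covering the raster."""
    tiles = []
    for y0 in range(0, height, tile_size):
        for x0 in range(0, width, tile_size):
            w = min(tile_size, width - x0)   # edge tiles may be narrower
            h = min(tile_size, height - y0)  # or shorter than tile_size
            tiles.append((x0 // tile_size, y0 // tile_size, x0, y0, w, h))
    return tiles

# A 1000x700 image with 256-pixel tiles -> 4 x 3 = 12 tiles.
tiles = tile_raster(1000, 700, 256)
print(len(tiles))  # 12
```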


Does anybody know and would like to share?

Best regards,

Ivan


RE: [OSGeo-Discuss] OS Spatial environment 'sizing' + Image Management

2008-02-19 Thread Bruce . Bannerman
IMO:


Hi Randy,

Thank you for your informative post. It has given me a lot to follow up on 
and think about.

I can see an immediate need that this type of solution could well be used 
for. I like it.

I suspect that in many larger corporate types of environments, it could 
well be used effectively for 'pilot' and 'pre-production' type tasks. 

For 'production' type environments, there would be issues of integrating 
an external service hosting spatial data with internal services hosting 
corporate aspatial data sources and applications.



with regards to storing imagery in a database:

   (and not directed at you)

I've also seen a lot of reports suggesting that image management should be 
file based.

My personal preference is to use a database if possible, so that I can 
take advantage of corporate data management facilities, backups, point in 
time restores etc.

I've managed 70 GB orthophoto mosaics in ArcSDE / Oracle before with 
minimal problems. I found performance and response times to be comparable 
with other image web server options on the market that use file based 
solutions for storing data.

Ideally, I'm looking to manage state-wide mosaics with a consistent look 
and feel that can be treated as a single 'layer' by client GIS / remote 
sensing applications (data integrity issues allowing). 

One potential use is 'best available' data mosaics that undergo regular 
updates as more imagery is flown or captured. A database makes it easier 
to manage and deliver such data.

My definition of 'imagery' goes beyond aerial photographs and includes 
multi or hyper-spectral imagery; various geophysics data sources such as 
aeromagnetics, gravity, radiometrics; radar data etc.

Typically this data is required for digital image analysis purposes using 
a remote sensing application, so the integrity of 'the numbers' that make 
up the image is very important.

Many of today's image based solutions use a (lossy) wavelet compression 
that can corrupt the integrity of 'the numbers' describing the radiometric 
data in the image.

When we consider the big picture issues facing us today, such as Climate 
Change, I think that it is important to protect our definitive image 
libraries from such corruption as they will be invaluable sources of data 
for future multi-temporal analysis.

That said, if the end use is just for a picture, then a wavelet 
compression is a good option. Just protect the source data for future use.
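Bruce's concern can be illustrated with a toy quantizer; this is a crude stand-in for a real wavelet codec, with invented sample values, but it shows how lossy compression leaves different numbers than it was given.

```python
# Toy illustration: lossy compression quantizes values, so the
# decompressed radiometric numbers differ from the originals.
# A crude uniform quantizer, NOT a real wavelet codec.

def lossy_round_trip(values, step):
    """Quantize to multiples of `step` and reconstruct."""
    return [round(v / step) * step for v in values]

original = [103, 105, 107, 110]          # e.g. 8-bit radiance samples
decoded = lossy_round_trip(original, 4)  # [104, 104, 108, 112]
print(decoded == original)               # False: the numbers are corrupted
```

For a picture the difference is invisible; for multi-temporal analysis it is not, which is why the definitive archive should stay lossless.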

 


So, does anyone know of a good open source spatial solution for storing 
and accessing (multi and hyperspectral) imagery in a database?;-)

WMS 1.3 and WCS are showing promise for serving imagery, including multi 
and hyperspectral data.



Bruce Bannerman





RE: [OSGeo-Discuss] OS Spatial environment 'sizing'

2008-02-19 Thread Randy George
Hi Ivan,

The most common advice I've seen says to leave raster out of the DB.
Of course footprints and metadata could be there, but you would want to
point the Geoserver coverage to the image / image pyramid URL somewhere in
the directory hierarchy.

Brent has a nice writeup here:
http://docs.codehaus.org/display/GEOSDOC/Load+NASA+Blue+Marble+Data

In an AWS sense, my idea is to use a Java proxy between the Geoserver
coverage data URL and S3 buckets, parking the imagery on the S3 side to take
advantage of its stability and replication. Performance, though, might not
be as good as a local directory. Maybe a one-time cache to a local directory
would work better.

Note: Amazon doesn't charge for inside AWS data transfers.

So in theory:
  PostGIS holds the footprint geometry + metadata
  EC2 Geoserver WFS handles footprint queries into an SVG/XAML client; just
stick it on top of something like JPL BMNG. Once a user picks a coverage,
switch to the Geoserver WMS/WCS service for zooming around in the selected
image pyramid
  S3 buckets contain the TIFFs, pyramids ...
  EC2 Geoserver handles the WMS/WCS service
  The EC2 proxy pulls the imagery from the S3 side as needed

Sorry, I haven't had time to try this, so it is just theoretical. Of course
you can go traditional and just keep the coverage imagery files on the local
instance, avoiding the S3 proxy idea. The reason I don't like that idea is
that the imagery has to be loaded with every instance creation, while an S3
approach would need only one copy.
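The proxy's URL-to-bucket mapping could be as thin as a path rewrite. A hypothetical sketch follows; the bucket name, the "/coverages/" prefix, and the "pyramids/" layout are all invented for illustration, not Geoserver's actual URL scheme.

```python
# Hypothetical sketch of the proxy idea: rewrite a Geoserver coverage
# data URL into the S3 object key where the imagery is parked.
# Bucket name and path layout are invented for illustration.
from urllib.parse import urlparse

S3_BUCKET = "my-imagery-bucket"  # assumed bucket name

def coverage_url_to_s3_key(coverage_url):
    """Map .../coverages/<name>/<file> -> pyramids/<name>/<file>."""
    path = urlparse(coverage_url).path
    _, _, rest = path.partition("/coverages/")  # keep tail after the prefix
    return "pyramids/" + rest

key = coverage_url_to_s3_key(
    "http://ec2-host/geoserver/coverages/bmng/tile_3_2.tif")
print("s3://%s/%s" % (S3_BUCKET, key))
# s3://my-imagery-bucket/pyramids/bmng/tile_3_2.tif
```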


randy


Re: [OSGeo-Discuss] OS Spatial environment 'sizing'

2008-02-19 Thread Lucena, Ivan

Hi Randy, Bruce,

That is a nice piece of advice, Randy. I am sorry to intrude on the 
conversation, but I would like to ask how that "heavy raster" 
manipulation would be handled by PostgreSQL/PostGIS: managed or unmanaged?


Best regards,

Ivan


Re: [OSGeo-Discuss] OS Spatial environment 'sizing'

2008-02-19 Thread Cameron Shorter

Randy, what an informative email.
It is almost a "Howto for OSGeo hardware and performance tuning". I'm 
not aware of anyone who has written something similar (although I admit 
I have not looked).


I'd love to see it incorporated into an easily referenced resource - 
maybe a chapter in

http://wiki.osgeo.org/index.php/Educational_Content_Inventory

Also, a link from http://wiki.osgeo.org/index.php/Case_Studies .

What do you think?

RE: [OSGeo-Discuss] OS Spatial environment 'sizing'

2008-02-19 Thread Randy George
Hi Bruce,

 

            On the "scale relatively quickly" front, you should look at
Amazon's EC2/S3 services. I've recently worked with them and find them an
attractive platform for scaling: http://www.cadmaps.com/gisblog

 

The stack I like is Ubuntu + Java + PostgreSQL/PostGIS + Apache2 mod_jk
Tomcat + Geoserver + custom SVG or XAML clients run out of Tomcat.

 

            If you use the larger instances the cost is higher, but it
sounds like you plan on some heavy raster services (WMS, WCS), and lots of
memory will help.

A small EC2 instance ($0.10/hr) provides:

1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute
Unit), 160 GB of instance storage, 32-bit platform

 

A large EC2 instance ($0.40/hr) provides:

7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute
Units each), 850 GB of instance storage, 64-bit platform

 

An extra-large EC2 instance ($0.80/hr) provides:

15 GB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute
Units each), 1690 GB of instance storage, 64-bit platform

 

Note that the instances do not need to be permanent. Some people (WeoGeo)
have been using a couple of failover small instances and then starting new
large instances for specific requirements. The idea is to start and stop
instances as required rather than having ongoing infrastructure costs. It
only takes a minute or so to start an EC2 instance. If you are running a
corporate service, there may be parts of the day with very little use, so
you just schedule your heavy-duty instances for peak times. If you can
connect your raster to S3 buckets rather than instance storage, you have
built-in replicated backup.
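A back-of-envelope comparison using the hourly rates quoted above shows why start/stop scheduling pays; the 8-hour peak window is an assumption for illustration.

```python
# Back-of-envelope cost of the start/stop strategy, using the 2008
# EC2 rates quoted above. The 8-hour peak window is an assumption.

SMALL, LARGE = 0.10, 0.40  # USD per hour

always_on_large = 24 * LARGE             # one large instance all day
peak_scheduled = 24 * SMALL + 8 * LARGE  # small 24/7 + large for 8 peak hours

print("always-on large:    $%.2f/day" % always_on_large)  # $9.60/day
print("small + peak large: $%.2f/day" % peak_scheduled)   # $5.60/day
```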

 

I know that Java JAI can easily eat up memory and is core to the Geoserver
WMS/WCS, so you probably want a large memory footprint for any platform with
lots of raster service. I'm partial to Geoserver because of its Java
foundation. I think I would try to keep the Apache2 mod_jk Tomcat Geoserver
on a separate server instance from PostGIS. This might avoid problems at
instance startup, since your database would need to be loaded separately.
The instance AMI resides in a 10 GB partition; the balance of the data will
probably reside on a /mnt partition separate from ec2-run-instances. You may
be able to avoid datadir problems by adding something like Elastra to the
mix. The Elastra beta is a wrapper for PostgreSQL that puts the datadir on
S3 rather than local to an instance. I suppose they still keep indices
(GiST et al.) on the local instance. 

(I still think it an interesting exercise to see what could be done
connecting PostGIS to AWS SimpleDB services.)

 

So, thinking out loud, here is a possible architecture: 

Basic permanent setup

put the raster in S3 - this may require some customization of Geoserver 

build a datadir in PostGIS and back it up to S3

create a private AMI for PostgreSQL/PostGIS

create a private AMI for the load balancer instance

create a private AMI with your service stack, for both a small and a large
instance, for flexibility 

   Startup services

start a balancer instance

point your DNS CNAME to this balancer instance

start a PostGIS instance (you could have more than one if necessary, but it
would be easier to just scale to a larger instance type if the load demands
it)

have a scripted download from an S3 backup to your PostGIS datadir (I'm
assuming a relatively static data resource)

   Variable services

start a service stack instance and connect it to PostGIS

update the balancer to see the new instance - this could be tricky

repeat the previous two steps as needed 

at night scale back - cron scaling for a known cycle, or use a controller
like weoceo to detect and respond to load fluctuations
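The "cron scaling for a known cycle" step could be as simple as a table of desired instance counts per hour; the peak window and counts below are invented for illustration, and a real controller (weoceo-style) would react to measured load instead.

```python
# Sketch of cron scaling for a known daily cycle: choose the desired
# number of service-stack instances from the hour of day.
# Peak hours and counts are invented; a load-reactive controller
# (e.g. weoceo) would replace this fixed table.

PEAK_HOURS = range(8, 18)  # assumed business-hours peak

def instances_for_hour(hour, peak=4, off_peak=1):
    """Desired instance count for a given hour (0-23)."""
    return peak if hour in PEAK_HOURS else off_peak

schedule = [instances_for_hour(h) for h in range(24)]
print(sum(schedule))  # 54 instance-hours/day vs 96 for 4 always-on
```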

 

By the way, the public AWS AMI with the best resources that I have found is
Ubuntu 7.10 Gutsy. The Debian dependency tools are much easier to use and
the resources are plentiful.

 

I've been toying with using an AWS stack adapted for serving some larger
PostGIS vector sets, such as fully connected census demographic data and
block polygons here in the US. The idea would be to populate the data
directly from the census SF* and TIGER files with a background Java bot.
There are some potentially novel 3D viewing approaches possible with XAML.
Anyway, it's lots of fun to have access to virtual systems like this. 

 

As you can see I'm excited anyway.

 

randy

 

 

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Monday, February 18, 2008 6:35 PM
To: OSGeo Discussions
Subject: [OSGeo-Discuss] OS Spatial environment 'sizing'

 


IMO: 


Hello everyone, 

I'm trying to get a feel for server 'sizing' for a **hypothetical**
Corporate environment to support OS Spatial apps. 



Assume that: 

- this is a dedicated environment to allow the use of OS Spatial
applications to serve Corporate OGC Services. 

- the applications of interest are GeoServer, Deegree, GeoNetwork,
MapServer, MapGuide and Postgres/PostGIS. 

- the environment may need to scale relatively quickly

[OSGeo-Discuss] Green light: OSGeo Hacking event in a monastery near Bolsena (Italy)

2008-02-19 Thread Jeroen Ticheler

Dear all,

I'm happy to inform you that the hacking event in Bolsena will take  
place! Enough people have confirmed their participation for me to  
decide to make the booking :-)


Those participating will need to make an advance payment of 100 Euro to 
me to guarantee your place. Please contact me privately by email so we 
can arrange that. I've no problem if you later need to change, as long 
as you make sure there's someone taking your place :-)


As far as food is concerned, there will be a cook who will serve 
breakfast, lunch and dinner for 30 Euro per person per day.


We'll discuss other technicalities as they come up, e.g. assistance if 
you need to rent a car, organizing travel together, public transport, 
etc. Please feel free to use the wiki page for that, or create a sub-page.


There's still space for more people. I will soon add a floor plan so 
that rooms can be organized. Some rooms have a single bed, some have two 
beds (so people have to share the room), and there are a couple of rooms 
with double beds.


You can sign up here:
http://wiki.osgeo.org/index.php/OSGeo_Hacking_event

Very much looking forward to this!
Cheers,
Jeroen



Dear all,
I've been thinking of setting up an OSGeo "hacking" or code sprint 
event in an Italian monastery. Friends of mine take care of the place, 
which overlooks Lago Bolsena and offers space for 25 people in small 
bedrooms :-) . It's probably one of the coolest places for such an 
event: quiet, isolated and serene. There's a good wireless and wired 
internet connection, although I wouldn't bet my hand on it if we all 
start downloading satellite images. It should be perfect for SVN, IRC 
and mail.


The cost would be about 200 Euro per person for the week (food not 
included; we can organize that separately). There's a large kitchen 
and a large dining space. And since the weather will most likely 
permit, we would eat outside overlooking the lake.


Have a look yourself, including looking at the photo gallery:

http://www.conventobolsena.org/

Where: http://tinyurl.com/2t6zby

Are people interested in such a thing? Looking at the availability of 
the place, June would be best, or possibly May (not always available!).


If there's enough interest, I can start getting people to sign up and 
get a booking done. There would be 7 days' time, so it can also be 
divided into smaller time chunks occupied by different projects.


Let me know, greetings from Rome,
Jeroen



___
Discuss mailing list
Discuss@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/discuss