On Wednesday 09 October 2002 06:03 pm, Michael Peligro wrote:
> On Wednesday 09 October 2002 11:56 am, Federico Sevilla III wrote:
> > I do not yet manage very large databases, but have noticed that
>
> Our Progress 7.2 database can balloon to about 2 gigabytes every three
> months.
>
> I'll try to do the math for approximation of the number of records the
> Progress database can hold. Maybe it can give you an idea whether
> PostgreSQL can stand the same stress.

Your transactions seem simple enough (in terms of operations performed) and I 
do think Postgres can handle the load.  It's easy enough to set up, so you can 
do the actual testing and benchmarking yourself.  A simple SQL script 
containing a million INSERT statements, fed in via "psql <db> -f inputfile", 
will easily populate your db.  Then a simple 'explain analyze <sql command>' 
at the psql prompt will give you an idea of how long your operation takes.
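The steps above can be sketched like this (the table name "sample_table" and 
its two columns are assumptions for illustration; the psql steps need a 
running PostgreSQL server, so they are shown commented out):

```shell
# Generate a one-million-row SQL script. Table and column names are hypothetical.
seq 1 1000000 | awk '{ printf "INSERT INTO sample_table (id, payload) VALUES (%d, '\''row %d'\'');\n", $1, $1 }' > inputfile

# Load it, then benchmark a query against it (substitute your database name):
# psql <db> -f inputfile
# psql <db> -c "EXPLAIN ANALYZE SELECT * FROM sample_table WHERE id = 12345;"
```

Generating the script takes only a few seconds, and you can rerun the EXPLAIN 
ANALYZE step with whatever queries match your real workload.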

A simple query on a fairly small table of mine (~30K entries) on a busy box 
shows the following results:

explain analyze select * from sample_table;
NOTICE:  QUERY PLAN:

Seq Scan on sample_table  (cost=0.00..730.75 rows=29875 width=97) (actual time=0.04..723.01 rows=29875 loops=1)
Total runtime: 772.50 msec

I'm not saying this is close to your scenario.  I'm just pointing out that you 
can see fairly easily if it will scale to your needs by spending a few 
minutes to set it up and test.  And you'll come up with the most 
authoritative answer at that -- from you. =)

-- 
Deds Castillo
Infiniteinfo Philippines
http://www.infiniteinfo.com
Hiroshima '45, Chernobyl '86, Windows 95/98/2K
_
Philippine Linux Users Group. Web site and archives at http://plug.linux.org.ph
To leave: send "unsubscribe" in the body to [EMAIL PROTECTED]

Fully Searchable Archives With Friendly Web Interface at http://marc.free.net.ph

To subscribe to the Linux Newbies' List: send "subscribe" in the body to 
[EMAIL PROTECTED]
