On 1/28/2005 2:49 PM, Christopher Browne wrote:
But there's nothing wrong with the idea of using pg_dump --data-only
against a subscriber node to get you the data without putting a load
on the origin. And then pulling the schema from the origin, which
oughtn't be terribly expensive there.
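A minimal sketch of that approach, with hypothetical host and database
names (schema from the origin, bulk data from a subscriber):

    # Pull the schema from the origin (cheap), the bulk data from a subscriber.
    pg_dump --schema-only -h origin.example.com mydb > schema.sql
    pg_dump --data-only -h subscriber.example.com mydb > data.sql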
And fsync on.
Alex Turner
NetEconomist
On Fri, 28 Jan 2005 11:19:44 -0500, Merlin Moncure
[EMAIL PROTECTED] wrote:
With the right configuration you can get very serious throughput. The
new system is processing over 2500 insert transactions per second. We
don't need more RAM with this
On Friday 21 January 2005 19:18, Marty Scholes wrote:
The indexes can be put on a RAM disk tablespace and that's the end of
index problems -- just make sure you have enough memory available. Also
make sure that the machine can restart correctly after a crash: the
tablespace is dropped
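A minimal sketch of that setup, with a hypothetical tmpfs mount and table;
anything in the RAM tablespace is lost on reboot, so only rebuildable
indexes belong there:

    -- /mnt/ramdisk is a hypothetical RAM-backed mount (e.g. tmpfs),
    -- owned by the postgres user.
    CREATE TABLESPACE ramdisk LOCATION '/mnt/ramdisk';

    -- The index lives in RAM; REINDEX it after any crash or reboot.
    CREATE INDEX orders_customer_idx ON orders (customer_id)
        TABLESPACE ramdisk;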
On 01/28/2005-05:57PM, Alex Turner wrote:
Your system A has the absolute worst-case RAID 5: 3 drives. The more
drives you add to RAID 5 the better it gets, but it will never beat RAID
10. On top of it being the worst case, pg_xlog is not on a separate
spindle.
True for writes, but
[EMAIL PROTECTED] (Andrew Sullivan) writes:
On Mon, Jan 24, 2005 at 01:28:29AM +0200, Hannu Krosing wrote:
IIRC it hates pg_dump mainly on master. If you are able to run pg_dump
from slave, it should be ok.
For the sake of the archives, that's not really a good idea. There
is some work afoot to solve it, but at the moment dumping from
On 01/28/2005-10:59AM, Alex Turner wrote:
At this point I will interject a couple of benchmark numbers based on
a new system we just configured, as food for thought.
System A (old system):
Compaq Proliant Dual Pentium III 933 with Smart Array 5300, one RAID
1, one 3 Disk RAID 5 on 10k RPM drives, 2GB PC133 RAM. Original
Price: $6500
PFC wrote:
So, here is something annoying with the current approach: updating rows
in a table bloats ALL indices, not just those whose indexed values have
actually been updated. So if you have a table with many indexed fields and
you often update some obscure timestamp field, all the
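A minimal demonstration of the effect, with a hypothetical table: the
UPDATE touches no indexed column, yet both indexes gain a dead entry,
because each update creates a whole new row version that every index must
point to:

    CREATE TABLE profile (
        id        integer PRIMARY KEY,    -- index 1
        email     text,
        last_seen timestamptz
    );
    CREATE INDEX profile_email_idx ON profile (email);  -- index 2

    INSERT INTO profile VALUES (1, 'a@example.com', now());
    UPDATE profile SET last_seen = now();  -- no indexed column touched...
    VACUUM VERBOSE profile;  -- ...yet both indexes report dead entries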
Hervé Piedvache wrote:
My point is that there is no free solution. There simply isn't.
I don't know why you insist on keeping all your data in RAM, but the
MySQL cluster requires that ALL data MUST fit in RAM all the time.
I don't insist on having the data in RAM, but when you use
William Yu [EMAIL PROTECTED] writes:
1 beefy server w/ 32GB RAM = $16K
I know what I would choose. I'd get the mega server w/ a ton of RAM and skip
all the trickiness of partitioning a DB over multiple servers. Yes, your data
will grow to a point where even the XXGB can't cache everything.
One fine day (Tuesday, 25 January 2005, 10:41 -0500), Tom Lane wrote:
Hannu Krosing [EMAIL PROTECTED] writes:
Why is removing index entries essential ?
Because once you re-use the tuple slot, any leftover index entries would
be pointing to the wrong rows.
That much I understood
Hannu Krosing [EMAIL PROTECTED] writes:
But can't clearing up the index be left for later ?
Based on what? Are you going to store the information about what has to
be cleaned up somewhere else, and if so where?
Indexscan has to check the data tuple anyway, at least for visibility.
http://borg.postgresql.org/docs/8.0/interactive/storage-page-layout.html
If you vacuum as part of the transaction it's going to be a more efficient
use of resources, because you have more of what you need right there (i.e.,
odds are that you're on the same page as the old tuple). In cases like
that it
One fine day (Monday, 24 January 2005, 11:52 +0900), Tatsuo Ishii wrote:
Tatsuo Ishii [EMAIL PROTECTED] writes:
Probably VACUUM works well for small to medium-size tables, but not
for huge ones. I'm considering implementing on-the-spot salvaging of
dead tuples.
That's
One fine day (Sunday, 23 January 2005, 15:40 -0500), Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
Changing the idea slightly might be better: if a row update would cause
a block split, then if there is more than one row version then we vacuum
the whole block first, then
One fine day (Thursday, 20 January 2005, 16:00 +0100), Hervé Piedvache wrote:
Will both do what you want. Replicator is easier to set up, but
Slony is free.
No ... as I have said ... how will I manage a database with a table of maybe
250 000 000 records? I'll need incredible
One fine day (Thursday, 20 January 2005, 11:02 -0500), Rod Taylor wrote:
Slony has some other issues with databases 200GB in size as well
(well, it hates long running transactions -- and pg_dump is a regular
long running transaction)
IIRC it hates pg_dump mainly on master. If you
Hannu Krosing [EMAIL PROTECTED] writes:
Why is removing index entries essential ?
Because once you re-use the tuple slot, any leftover index entries would
be pointing to the wrong rows.
regards, tom lane
On Sun, Jan 23, 2005 at 03:40:03PM -0500, Tom Lane wrote:
The real issue with any such scheme is that you are putting maintenance
costs into the critical paths of foreground processes that are executing
user queries. I think that one of the primary advantages of the
Postgres storage design is that we keep that work outside the critical
path and
On Sat, 2005-01-22 at 16:10 -0500, Tom Lane wrote:
Tatsuo Ishii [EMAIL PROTECTED] writes:
Probably VACUUM works well for small to medium-size tables, but not
for huge ones. I'm considering implementing on-the-spot salvaging of
dead tuples.
That's impossible on its face, except for the special case where the
same transaction inserts and
Simon Riggs [EMAIL PROTECTED] writes:
Changing the idea slightly might be better: if a row update would cause
a block split, then if there is more than one row version then we vacuum
the whole block first, then re-attempt the update.
Block split? I think you are confusing tables with indexes.
On Sun, Jan 23, 2005 at 03:40:03PM -0500, Tom Lane wrote:
There was some discussion in Toronto this week about storing bitmaps
that would tell VACUUM whether or not there was any need to visit
individual pages of each table. Getting rid of useless scans through
not-recently-changed areas of
Tatsuo,
I'm not clear on what "pgPool only needs to monitor update switching by
*connection*, not by *table*" means. In your example:
(1) 00:00 User A updates My Profile
(2) 00:01 My Profile UPDATE finishes executing.
(3) 00:02 User A sees My Profile re-displayed
(6) 00:04 My Profile:UserA
Dawid Kuroczko [EMAIL PROTECTED] writes:
Quick thought -- would it be possible to implement a 'partial VACUUM'
by analogy with partial indexes?
No.
But it gave me another idea. Perhaps equally infeasible, but I don't see why.
What if there were a map of modified pages? So every time any
On Sat, Jan 22, 2005 at 12:13:00 +0900,
Tatsuo Ishii [EMAIL PROTECTED] wrote:
Probably VACUUM works well for small to medium-size tables, but not
for huge ones. I'm considering implementing on-the-spot salvaging of
dead tuples.
You are probably vacuuming too often. You want to wait
From http://developer.postgresql.org/todo.php:
Maintain a map of recently-expired rows
This allows vacuum to reclaim free space without requiring a sequential
scan
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (Hervé
Piedvache) transmitted:
On Thursday 20 January 2005 15:24, Christopher Kings-Lynne wrote:
Is there any solution with PostgreSQL matching these needs ... ?
You want: http://www.slony.info/
Do we have to backport
In the last exciting episode, [EMAIL PROTECTED] (Hervé Piedvache) wrote:
On Thursday 20 January 2005 16:05, Joshua D. Drake wrote:
Christopher Kings-Lynne wrote:
Or you could fork over hundreds of thousands of dollars for Oracle's
RAC.
No please do not talk about this again ... I'm
Quoth Ron Mayer [EMAIL PROTECTED]:
Merlin Moncure wrote:
...You need to build a bigger, faster box with lots of storage...
Clustering ... B: will cost you more, not less
Is this still true when you get to 5-way or 17-way systems?
My (somewhat outdated) impression is that up to about 4-way
After a long battle with technology, [EMAIL PROTECTED] (Hervé Piedvache), an
earthling, wrote:
Joshua,
On Thursday 20 January 2005 15:44, Joshua D. Drake wrote:
Hervé Piedvache wrote:
My company, which I currently represent, is a fervent user of PostgreSQL.
We have been making all our
Tatsuo,
Yes. However, it would be pretty easy to modify pgpool so that it could
cope with Slony-I. I.e.:
1) pgpool does the load balancing and sends the query to Slony-I's slave and
master if the query is a SELECT.
2) pgpool sends the query only to the master if the query is other than
Presumably it can't _ever_ know without being explicitly told, because
even for a plain SELECT there might be triggers involved that update
tables, or it might be a select of a stored proc, etc. So in the
general case, you can't assume that a select doesn't cause an update,
and you can't be
Yes, I wasn't really choosing my examples particularly carefully, but I
think the conclusion stands: pgpool (or anyone/thing except for the
server) cannot in general tell from the SQL it is handed by the client
whether an update will occur, nor which tables might be affected.
That's not to say
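A minimal illustration of the point, with hypothetical names: the final
statement is syntactically a SELECT, so statement-type routing would send
it to a read-only slave, yet it writes:

    CREATE TABLE hits (page text PRIMARY KEY, n integer);
    INSERT INTO hits VALUES ('/index.html', 0);

    -- An SQL function that updates a row, then returns the new count.
    CREATE FUNCTION record_hit(text) RETURNS integer AS '
        UPDATE hits SET n = n + 1 WHERE page = $1;
        SELECT n FROM hits WHERE page = $1;
    ' LANGUAGE sql;

    SELECT record_hit('/index.html');  -- looks read-only, is not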
Tatsuo,
Suppose table A gets updated on the master at time 00:00. Until 00:03
pgpool needs to send all queries regarding A to the master only. My
question is, how can pgpool know a query is related to A?
Well, I'm a little late to head off tangential discussion about this, but
The
This is probably a lot easier than you would think. You say that your
DB will have lots of data, lots of updates and lots of reads.
Very likely the disk bottleneck is mostly index reads and writes, with
some critical WAL fsync() calls. In the grand scheme of things, the
actual data is likely
Technically, you can also set up a rule to do things on a select with
DO ALSO. However, putting update statements in there would be considered
(at least by me) very bad form. Note that this is not a trigger because it
does not operate at the row level [I know you knew that already :-)].
Peter, Tatsuo:
would happen with SELECT queries that, through a function or some
other mechanism, updates data in the database? Would those need to be
passed to pgpool in some special way?
Oh, yes, that reminds me. It would be helpful if pgPool accepted a control
string ... perhaps one in a
IMO the bottleneck is not WAL but table/index bloat. Lots of updates
on large tables will produce lots of dead tuples. The problem is, there
is no effective way to reuse these dead tuples, since VACUUM on huge
tables takes a long time. 8.0 adds new vacuum delay
parameters. Unfortunately this does not
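The 8.0 settings referred to above are the cost-based vacuum delay
parameters; a minimal sketch of trying them (values are illustrative, the
table name hypothetical):

    SET vacuum_cost_delay = 10;   -- ms to sleep when the cost limit is hit
    SET vacuum_cost_limit = 200;  -- accumulated I/O cost that triggers a sleep
    VACUUM VERBOSE bigtable;      -- now throttled, gentler on foreground queries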
On Thu, 20 Jan 2005 15:03:31 +0100, Hervé Piedvache [EMAIL PROTECTED] wrote:
We were at this moment thinking about a Cluster solution ... We saw on the
Internet many solutions talking about Cluster solutions using MySQL ... but
nothing about PostgreSQL ... the idea is to use several servers to
Is there any solution with PostgreSQL matching these needs ... ?
You want: http://www.slony.info/
Do we have to backport our development to MySQL for this kind of problem ?
Is there any other solution than a Cluster for our problem ?
Well, Slony does replication which is basically what you want :)
* Hervé Piedvache ([EMAIL PROTECTED]) wrote:
Is there any solution with PostgreSQL matching these needs ... ?
You might look into pg_pool. Another possibility would be slony, though
I'm not sure it's to the point you need it at yet, depends on if you can
handle some delay before an insert makes
On Thursday 20 January 2005 15:24, Christopher Kings-Lynne wrote:
Is there any solution with PostgreSQL matching these needs ... ?
You want: http://www.slony.info/
Do we have to backport our development to MySQL for this kind of problem
? Is there any other solution than a Cluster for our
On Thursday 20 January 2005 15:38, Christopher Kings-Lynne wrote:
Sorry, but I don't agree with this ... Slony is a replication solution ...
I don't need replication ... what will I do when my database grows to 50
GB ... I'll need more than 50 GB of RAM on each server???
This
Sorry, but I don't agree with this ... Slony is a replication solution ... I
don't need replication ... what will I do when my database grows to 50
GB ... I'll need more than 50 GB of RAM on each server???
This solution is not very realistic for me ...
I need a Cluster solution, not a
Hervé Piedvache wrote:
Dear community,
My company, which I currently represent, is a fervent user of PostgreSQL.
We have been making all our applications using PostgreSQL for more than 5 years.
We usually do classical client/server applications under Linux, and Web
interfaces (PHP, Perl, C/C++). We
On Jan 20, 2005, at 9:36 AM, Hervé Piedvache wrote:
Sorry, but I don't agree with this ... Slony is a replication solution ...
I don't need replication ... what will I do when my database grows to
50 GB ... I'll need more than 50 GB of RAM on each server???
Slony doesn't use much RAM. The
Joshua,
On Thursday 20 January 2005 15:44, Joshua D. Drake wrote:
Hervé Piedvache wrote:
My company, which I currently represent, is a fervent user of PostgreSQL.
We have been making all our applications using PostgreSQL for more than 5
years. We usually do classical client/server applications
No, please do not talk about this again ... I'm looking for a PostgreSQL
solution ... I know RAC ... and I'm not able to pay for a RAC-certified
hardware configuration plus a RAC licence.
What you want does not exist for PostgreSQL. You will either
have to build it yourself or pay somebody to
* Christopher Kings-Lynne ([EMAIL PROTECTED]) wrote:
PostgreSQL has replication, but not partitioning (which is what you want).
It doesn't have multi-server partitioning. It's got partitioning
within a single server (doesn't it? I thought it did, I know it was
discussed w/ the guy from Cox
So what we would like to get is a pool of small servers able to make one
virtual server ... isn't that what is called a Cluster ... no?
I know they are not using PostgreSQL ... but how does a company like Google
get a database so incredible in size with such quick access?
You could use dblink with
* Hervé Piedvache ([EMAIL PROTECTED]) wrote:
I know they are not using PostgreSQL ... but how does a company like Google
get a database so incredible in size with such quick access?
They segment their data across multiple machines and have an algorithm
which tells the application layer which
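A minimal sketch of that kind of segmentation, assuming a hypothetical
shard_map table maintained by the application layer (hashtext() is a
PostgreSQL built-in; the 4-shard layout is purely illustrative):

    CREATE TABLE shard_map (
        shard_id integer PRIMARY KEY,
        conninfo text NOT NULL       -- e.g. 'host=db3 dbname=app'
    );

    -- Which server holds user 12345? The application looks this up,
    -- then opens a connection to the returned conninfo.
    SELECT conninfo FROM shard_map
    WHERE shard_id = abs(hashtext('12345')) % 4;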
Hervé Piedvache wrote:
No ... as I have said ... how will I manage a database with a table of maybe
250 000 000 records? I'll need incredible servers ... to get quick access or
index reading ... no?
So what we would like to get is a pool of small servers able to make one
virtual server ...
then I was thinking. Couldn't he use
multiple databases
over multiple servers with dblink?
It is not exactly how I would want to do it, but it would provide what
he needs I think???
Yes, it seems to be the only solution ... but I'm a little disappointed about
this ... could you explain to me why
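A minimal sketch of the dblink idea, with hypothetical table, column, and
host names (dblink ships in contrib):

    -- Local half of the data, glued to the half held on a second server.
    SELECT id, name FROM users_part1
    UNION ALL
    SELECT id, name
    FROM dblink('host=server2 dbname=app',
                'SELECT id, name FROM users_part2')
         AS remote(id integer, name text);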
No, please do not talk about this again ... I'm looking for a PostgreSQL
solution ... I know RAC ... and I'm not able to pay for a RAC-certified
hardware configuration plus a RAC licence.
Are you totally certain you can't solve your problem with a single server
solution?
How about:
Price out
Google uses something called the Google File System; look it up on
Google. It is a distributed file system.
Dave
On Thursday 20 January 2005 16:14, Steve Wampler wrote:
Once you've got the data partitioned, the question becomes one of
how to enhance performance/scalability. Have you considered RAIDb?
No, but it seems to be very interesting ... close to the explanation of
Joshua ... but automatically done
On Thursday 20 January 2005 16:23, Dave Cramer wrote:
Google uses something called the Google File System; look it up on
Google. It is a distributed file system.
Yes, that's another point I'm working on ... make a cluster of servers using
GFS ... and making PostgreSQL run with it ...
But I
On Thursday 20 January 2005 16:16, Merlin Moncure wrote:
No, please do not talk about this again ... I'm looking for a PostgreSQL
solution ... I know RAC ... and I'm not able to pay for a RAC-certified
hardware configuration plus a RAC licence.
Are you totally certain you can't solve your
Probably by carefully partitioning their data. I can't imagine anything
being fast on a single table in the 250,000,000-tuple range. Nor can I
really imagine any database that efficiently splits a single table
across multiple machines (or even inefficiently unless some internal
partitioning is being
What you want is some kind of huge parallel computing, isn't it? I have heard
that many groups of Japanese PostgreSQL developers did it, but they are
talking on Japanese websites, and of course in Japanese.
I can name one of them, Asushi Mitani, and his website
* [EMAIL PROTECTED] ([EMAIL PROTECTED]) wrote:
I think maybe a SAN in conjunction with tablespaces might be the answer.
Still need one honking server.
That's interesting -- can a PostgreSQL partition be across multiple
tablespaces?
Stephen
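One answer, sketched with hypothetical names: a single logical table can be
spread over several tablespaces via inheritance, the usual 8.0-era
partitioning trick (the tablespaces are assumed to exist already):

    CREATE TABLE log (ts timestamptz, msg text);

    -- Each child lives in its own tablespace (and hence on its own disks).
    CREATE TABLE log_2004 () INHERITS (log) TABLESPACE disks_a;
    CREATE TABLE log_2005 () INHERITS (log) TABLESPACE disks_b;

    -- Queries against "log" see rows from every child, wherever stored:
    SELECT count(*) FROM log;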
Hervé Piedvache wrote:
Sorry, but I don't agree with this ... Slony is a replication solution ... I
don't need replication ... what will I do when my database grows to 50
GB ... I'll need more than 50 GB of RAM on each server???
This solution is not very realistic for me ...
Have you
The problem is very large amounts of data that need to be both read
and updated. If you replicate a system, you will need to
intelligently route the reads to the server that has the data in RAM,
or you will always be hitting disk, which is slow. This kind of routing,
AFAIK, is not possible with
On Thu, 20 Jan 2005 09:33:42 -0800, Darcy Buskermolen
[EMAIL PROTECTED] wrote:
Another option to consider would be pgmemcache. That way you just build the
farm out of lots of large-memory, diskless boxes for keeping the whole
database in memory across the whole cluster. More information on it
On Thursday 20 January 2005 19:09, Bruno Almeida do Lago wrote:
Could you explain to us what you have in mind for that solution? I mean,
forget the PostgreSQL (or any other database) restrictions and explain to us
how this hardware would be. Where would the data be stored?
I've something in mind
On Thu, 20 Jan 2005 12:13:17 -0700, Steve Wampler [EMAIL PROTECTED] wrote:
Mitch Pirtle wrote:
But that's not enough, because you're going to be running separate
postgresql backends on the different hosts, and there are
definitely consistency issues with trying to do that. So far as
I know
Two-way Xeons are as fast as a single Opteron; 150M rows isn't a big
deal.
Clustering isn't really the solution; I fail to see how clustering
actually helps, since it has to slow down file access.
Dave
Hervé Piedvache wrote:
On Thursday 20 January 2005 19:09, Bruno Almeida do Lago wrote:
Regarding the hardware, for the moment we have only a dual Pentium Xeon
2.8GHz with 4 GB of RAM ... and we saw we had bad performance results ... so
we are thinking about a new solution with maybe several servers (server
design may vary from one to another) ... to get a kind of cluster to get