On 12-9-2011 0:44 Anthony Presley wrote:
A few weeks back, we purchased two refurb'd HP DL360 G5s, and were
hoping to set them up with PG 9.0.2, running replicated. These machines
have (2) 5410 Xeons, 36GB of RAM, (6) 10k SAS drives, and are using the
HP SA P400i with 512MB of BBWC. PG is
In relation to my previous thread (about SW RAID vs. HW RAID on a P400), I
was able to narrow down the filesystem speed and in general, our new system
(running PG 9.1) is about 3x - 5x faster on the IO.
In looking at the query plans in more depth, it appears that PG 9.0 and 9.1
are both
On Tue, Sep 13, 2011 at 00:26, Robert Klemme shortcut...@googlemail.com wrote:
In the case of PG this particular example will work:
1. TX inserts new PK row
2. TX tries to insert same PK row => blocks
1. TX commits
2. TX fails with PK violation
2. TX does the update (if the error is caught)
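Written out as plain SQL, the interleaving above looks like this (table and column names are illustrative, not from the original post):

```sql
-- TX 1:
BEGIN;
INSERT INTO t (id, dat) VALUES (1, 'a');   -- takes the PK "lock" on id = 1

-- TX 2, concurrently:
BEGIN;
INSERT INTO t (id, dat) VALUES (1, 'b');   -- blocks on TX 1's uncommitted row

-- TX 1:
COMMIT;

-- TX 2 now fails with:
--   ERROR: duplicate key value violates unique constraint "t_pkey"
-- and, if the application catches that error, it can fall back to:
UPDATE t SET dat = 'b' WHERE id = 1;
COMMIT;
```

Had TX 1 rolled back instead of committing, TX 2's blocked INSERT would have succeeded.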
On Tue, Sep 13, 2011 at 5:13 PM, Marti Raudsepp ma...@juffo.org wrote:
On Tue, Sep 13, 2011 at 00:26, Robert Klemme shortcut...@googlemail.com
wrote:
In the case of PG this particular example will work:
1. TX inserts new PK row
2. TX tries to insert same PK row => blocks
1. TX commits
2. TX
Hi,
(see attachment)
under high concurrency you may expect that your data is already in.
In such a case you had better do nothing at all:
begin
  select dat = a_dat into test from t where id = a_id;
  if test is null then
    begin
      insert into t (id, dat) values (a_id, a_dat);
On Tue, Sep 13, 2011 at 19:34, Robert Klemme shortcut...@googlemail.com wrote:
I don't think so. You only need to catch the error (see attachment).
Or does this create a sub transaction?
Yes, every BEGIN/EXCEPTION block creates a subtransaction -- like a
SAVEPOINT it can roll back to in case
I do not need to do insert updates from many threads. I want to do it from
one thread.
My current MySQL architecture is that I have a table with same layout as the
main one, to hold new and updated objects.
When there is enough objects, I begin a big INSERT SELECT ... ON DUPLICATE
KEY UPDATE and
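PostgreSQL 9.x has no direct equivalent of MySQL's ON DUPLICATE KEY UPDATE, but since the writes here come from a single thread, the same batch merge from a staging table can be done safely with two statements (table and column names are illustrative):

```sql
BEGIN;

-- apply updates for rows that already exist in the main table
UPDATE main m
   SET dat = s.dat
  FROM staging s
 WHERE m.id = s.id;

-- insert the rows that do not exist yet
INSERT INTO main (id, dat)
SELECT s.id, s.dat
  FROM staging s
 WHERE NOT EXISTS (SELECT 1 FROM main m WHERE m.id = s.id);

TRUNCATE staging;
COMMIT;
```

Note this is only correct with a single writer; a concurrent writer could insert a matching row between the UPDATE and the INSERT and cause a PK violation.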
Hi Kevin,
(sorry for late reply, PG forums seem to have problems with my e-mail client,
now trying web mail)
First, thanks for taking the time. I wish I could write back with quick, terse
questions to your detailed reply - but I'm sorry, this is still going to be a
wordy post.
Interesting debate.
2011/9/13 Marti Raudsepp ma...@juffo.org:
Yes, every BEGIN/EXCEPTION block creates a subtransaction -- like a
SAVEPOINT it can roll back to in case of an error.
Are you sure? In theory I always understood that there are no
subtransactions.
In fact when looking at the docs
I'm just beginning the process of benchmarking and tuning a new server.
Something I really haven't done before. I'm using Greg's book as a guide.
I started with bonnie++ (1.96) and immediately got anomalous results (I
think).
Hardware is as follows:
2x quad core xeon 5504 2.0Ghz, 2x4MB cache
On Tue, Sep 13, 2011 at 12:57 PM, Stefan Keller sfkel...@gmail.com wrote:
Are you sure? In theory I always understood that there are no
subtransactions.
A subtransaction is just another way of saying savepoints / rollback.
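The equivalence is easy to see in plain SQL: what a PL/pgSQL BEGIN/EXCEPTION block does implicitly, SAVEPOINT does explicitly (illustrative table t with primary key id):

```sql
BEGIN;
INSERT INTO t (id, dat) VALUES (1, 'a');
SAVEPOINT sp;
INSERT INTO t (id, dat) VALUES (1, 'b');   -- ERROR: duplicate key
ROLLBACK TO SAVEPOINT sp;                  -- undoes only the failed insert
UPDATE t SET dat = 'b' WHERE id = 1;
COMMIT;                                    -- the first insert and the update commit
```

Without the ROLLBACK TO SAVEPOINT, the error would abort the whole transaction.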
--
Sent via pgsql-performance mailing list
On 13.09.2011 20:11, Marti Raudsepp wrote:
On Tue, Sep 13, 2011 at 19:34, Robert Klemme shortcut...@googlemail.com wrote:
I don't think so. You only need to catch the error (see attachment).
Or does this create a sub transaction?
Yes, every BEGIN/EXCEPTION block creates a subtransaction --
On 13.09.2011 20:57, Stefan Keller wrote:
Interesting debate.
Indeed.
2011/9/13 Marti Raudsepp ma...@juffo.org:
Yes, every BEGIN/EXCEPTION block creates a subtransaction -- like a
SAVEPOINT it can roll back to in case of an error.
Are you sure? In theory I always understood that there are
Robert Klemme wrote:
On 12.09.2011 19:22, Andy Colson wrote:
There are transaction isolation levels, but they are like playing
with fire. (in my opinion).
You make them sound like witchcraft. But they are clearly defined
- even standardized.
Yeah, for decades. Developing concurrency
On Mon, Sep 12, 2011 at 11:26:10PM +0200, Robert Klemme wrote:
You make them sound like witchcraft. But they are clearly defined -
even standardized. Granted, different RDBMS might implement them in
different ways - here's PG's view of TX isolation:
Carlo Stonebanks wrote:
max_connections = 300
Too high. Both throughput and latency should improve with correct
use of a connection pooler.
Even for 300 stateful applications that can remain connected for
up to a week, continuously distilling data (imports)?
Absolutely.
A good
The doc at http://www.postgresql.org/docs/current/interactive/indexes-types.html
says: "Caution: Hash index operations are not presently WAL-logged, so
hash indexes might need to be rebuilt with REINDEX after a database
crash. They are also not replicated over streaming or file-based
replication."
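In practical terms, that caution means crash recovery has to treat hash indexes specially (index and table names below are illustrative):

```sql
-- hash indexes are never created by default; they must be asked for:
CREATE INDEX t_dat_hash ON t USING hash (dat);

-- because hash index changes are not WAL-logged (as of the 9.x docs
-- quoted above), the index may be corrupt after a crash and must be
-- rebuilt manually:
REINDEX INDEX t_dat_hash;
```

On a streaming-replication standby the index simply never receives updates, so queries there cannot rely on it at all.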
On Tue, Sep 13, 2011 at 12:13 PM, Samuel Gendler
sgend...@ideasculptor.comwrote:
I'm just beginning the process of benchmarking and tuning a new server.
Something I really haven't done before. I'm using Greg's book as a guide.
I started with bonnie++ (1.96) and immediately got anomalous
On 14 September 2011 00:04, Stefan Keller sfkel...@gmail.com wrote:
Has this been verified on a recent release? I can't believe that hash
performs so badly on all these points. Theory tells me otherwise and
http://en.wikipedia.org/wiki/Hash_table seems to be a success.
Hash indexes have been
Peter Geoghegan pe...@2ndquadrant.com writes:
On 14 September 2011 00:04, Stefan Keller sfkel...@gmail.com wrote:
Has this been verified on a recent release? I can't believe that hash
performs so badly on all these points. Theory tells me otherwise and
http://en.wikipedia.org/wiki/Hash_table
On Tue, Sep 13, 2011 at 1:22 AM, Arjen van der Meijden
acmmail...@tweakers.net wrote:
On 12-9-2011 0:44 Anthony Presley wrote:
A few weeks back, we purchased two refurb'd HP DL360 G5s, and were
hoping to set them up with PG 9.0.2, running replicated. These machines
have (2) 5410
Ok, connection pooler it is. As I understand it, even if there are no idle
connections available we'll get the benefit of putting a turnstile on the
butcher's door.
I also ordered the book as soon as you mentioned - the title alone was enough
to sell me on it! The book won't be for the
On 09/14/2011 02:56 AM, Carlo Stonebanks wrote:
Even for 300 stateful applications that can remain connected for up to a
week, continuously distilling data (imports)?
If they're all doing active work all that time you can still benefit
from a pooler.
Say your server can service 50
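For example, with PgBouncer the 300 client connections can be funnelled into a pool sized to what the server can actually service at once (all values below are illustrative, not recommendations):

```ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction    ; a server connection is held only per-transaction
default_pool_size = 50     ; roughly what the server can service concurrently
max_client_conn = 300      ; all 300 stateful applications stay connected
```

The clients connect to port 6432 instead of 5432 and otherwise notice nothing, except that under load their transactions queue briefly instead of thrashing the server.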
Craig Ringer wrote:
I agreed with almost your entire post, but there is one sentence
with which I take issue.
However, it will also increase service latency for those
workers because they may have to wait a while before their
transaction runs, even though their transaction will