In relation to my previous thread (about SW RAID vs. HW RAID on a P400), I
was able to narrow down the filesystem speed issue, and in general our new
system (running PG 9.1) is about 3x - 5x faster on I/O.
In looking at the query plans in more depth, it appears that PG 9.0 and 9.1
are both preferring
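(For anyone comparing planner behaviour across the two versions: a generic way to see which plan each release prefers, and how much real I/O it does, is EXPLAIN with the ANALYZE and BUFFERS options, available from 9.0 on. The query and table below are only hypothetical placeholders.)

    explain (analyze, buffers)
    select * from orders where customer_id = 42;  -- hypothetical table/column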
On Tue, Sep 13, 2011 at 00:26, Robert Klemme wrote:
> In the case of PG this particular example will work:
> 1. TX inserts new PK row
> 2. TX tries to insert same PK row => blocks
> 1. TX commits
> 2. TX fails with PK violation
> 2. TX does the update (if the error is caught)
That goes against th
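A minimal PL/pgSQL sketch of the sequence Robert describes above, assuming a hypothetical table t(id, dat) with id as primary key: try the INSERT first and, if a concurrent transaction wins the race, catch the PK violation and fall back to an UPDATE.

    create or replace function upsert_t(a_id integer, a_dat text) returns void as $$
    begin
        insert into t (id, dat) values (a_id, a_dat);
    exception when unique_violation then
        -- another transaction committed the same key first (step 2 above);
        -- do the update instead
        update t set dat = a_dat where id = a_id;
    end;
    $$ language plpgsql;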
On Tue, Sep 13, 2011 at 5:13 PM, Marti Raudsepp wrote:
> On Tue, Sep 13, 2011 at 00:26, Robert Klemme wrote:
>> In the case of PG this particular example will work:
>> 1. TX inserts new PK row
>> 2. TX tries to insert same PK row => blocks
>> 1. TX commits
>> 2. TX fails with PK violation
>> 2
Hi,
> (see attachment)
under high concurrency you may expect that your data is already in.
In such a case you are better off doing nothing at all:
begin
  select dat = a_dat from t where id = a_id into test;
  if test is null then
    begin
      insert into t (id, dat) values (a_id, a_dat);
    exception
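A hedged guess at how the truncated block above might continue, using the same hypothetical t(id, dat): the existence check runs first, so in the common high-concurrency case where the row is already in, no insert and no subtransaction are entered at all.

    create or replace function insert_if_absent(a_id integer, a_dat text) returns void as $$
    declare
        test boolean;
    begin
        select dat = a_dat into test from t where id = a_id;
        if test is null then
            begin
                insert into t (id, dat) values (a_id, a_dat);
            exception when unique_violation then
                null;  -- a concurrent transaction inserted it in the meantime
            end;
        end if;
    end;
    $$ language plpgsql;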
On Tue, Sep 13, 2011 at 19:34, Robert Klemme wrote:
> I don't think so. You only need to catch the error (see attachment).
> Or does this create a sub transaction?
Yes, every BEGIN/EXCEPTION block creates a subtransaction -- like a
SAVEPOINT it can roll back to in case of an error.
> Yes, I men
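Roughly what that subtransaction amounts to, spelled out as explicit SQL savepoints (hypothetical table t; the INSERT is assumed to hit a duplicate key):

    begin;
    savepoint s1;
    insert into t (id, dat) values (1, 'x');  -- fails with unique_violation
    rollback to savepoint s1;                 -- only the subtransaction is undone
    update t set dat = 'x' where id = 1;      -- the outer transaction carries on
    commit;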
I do not need to do insert updates from many threads. I want to do it from
one thread.
My current MySQL architecture is that I have a table with the same layout as the
main one, to hold new and updated objects.
When there are enough objects, I begin a big INSERT ... SELECT ... ON DUPLICATE
KEY UPDATE and
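A possible PostgreSQL counterpart of that staging-table pattern, assuming hypothetical tables main(id, dat) and staging(id, dat): the 9.x releases discussed here have no ON DUPLICATE KEY equivalent, so the bulk upsert is done as two statements in one transaction (safe here because the staging table is filled from a single thread).

    begin;
    update main m
       set dat = s.dat
      from staging s
     where m.id = s.id;

    insert into main (id, dat)
    select s.id, s.dat
      from staging s
     where not exists (select 1 from main m where m.id = s.id);

    truncate staging;
    commit;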
Hi Kevin,
(sorry for late reply, PG forums seem to have problems with my e-mail client,
now trying web mail)
First, thanks for taking the time. I wish I could write back with quick, terse
questions to your detailed reply - but I'm sorry, this is still going to be a
wordy post.
>> ma
Interesting debate.
2011/9/13 Marti Raudsepp:
> Yes, every BEGIN/EXCEPTION block creates a subtransaction -- like a
> SAVEPOINT it can roll back to in case of an error.
Are you sure? In theory I always understood that there are no
"subtransactions".
In fact when looking at the docs there is cha
I'm just beginning the process of benchmarking and tuning a new server.
Something I really haven't done before. I'm using Greg's book as a guide.
I started with bonnie++ (1.96) and immediately got anomalous results (I
think).
Hardware is as follows:
2x quad-core Xeon 5504 2.0GHz, 2x4MB cache
1
On Tue, Sep 13, 2011 at 12:57 PM, Stefan Keller wrote:
> Are you sure? In theory I always understood that there are no
> "subtransactions".
"subtransaction" is just another way of saying save points / rollback.
On 13.09.2011 20:11, Marti Raudsepp wrote:
On Tue, Sep 13, 2011 at 19:34, Robert Klemme wrote:
I don't think so. You only need to catch the error (see attachment).
Or does this create a sub transaction?
Yes, every BEGIN/EXCEPTION block creates a subtransaction -- like a
SAVEPOINT it can roll
On 13.09.2011 20:57, Stefan Keller wrote:
Interesting debate.
Indeed.
2011/9/13 Marti Raudsepp:
Yes, every BEGIN/EXCEPTION block creates a subtransaction -- like a
SAVEPOINT it can roll back to in case of an error.
Are you sure? In theory I always understood that there are no
"subtransacti
Robert Klemme wrote:
> On 12.09.2011 19:22, Andy Colson wrote:
>> There are transaction isolation levels, but they are like playing
>> with fire (in my opinion).
> You make them sound like witchcraft. But they are clearly defined
> - even standardized.
Yeah, for decades. Developing concurr
On Mon, Sep 12, 2011 at 11:26:10PM +0200, Robert Klemme wrote:
> You make them sound like witchcraft. But they are clearly defined -
> even standardized. Granted, different RDBMS might implement them in
> different ways - here's PG's view of TX isolation:
>
> http://www.postgresql.org/docs/8.4/i
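For reference, requesting a stricter isolation level in PostgreSQL is a one-liner at the start of the transaction; the table here is hypothetical, and a serialization failure (SQLSTATE 40001) simply means the whole transaction has to be retried.

    begin;
    set transaction isolation level serializable;
    select balance from accounts where id = 1;                -- hypothetical table
    update accounts set balance = balance - 100 where id = 1;
    commit;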
Carlo Stonebanks wrote:
>> max_connections = 300
> Too high. Both throughput and latency should improve with correct
> use of a connection pooler.
> Even for 300 stateful applications that can remain connected for
> up to a week, continuously distilling data (imports)?
Absolutely.
A good c
The doc at http://www.postgresql.org/docs/current/interactive/indexes-types.html
says: "Caution: Hash index operations are not presently WAL-logged, so
hash indexes might need to be rebuilt with REINDEX after a database
crash. They are also not replicated over streaming or file-based
replication. F
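In practice that caveat looks like this (hypothetical table; hash indexes only help equality lookups):

    create table sessions (token text, payload text);
    create index sessions_token_hash on sessions using hash (token);

    -- on the releases discussed here the index is not WAL-logged, so after a
    -- crash (or on a streaming-replication standby) it has to be rebuilt:
    reindex index sessions_token_hash;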
On Tue, Sep 13, 2011 at 12:13 PM, Samuel Gendler wrote:
> I'm just beginning the process of benchmarking and tuning a new server.
> Something I really haven't done before. I'm using Greg's book as a guide.
> I started with bonnie++ (1.96) and immediately got anomalous results (I
> think).
>
> H
On 14 September 2011 00:04, Stefan Keller wrote:
> Has this been verified on a recent release? I can't believe that hash
> performs so badly on all these points. Theory tells me otherwise, and
> http://en.wikipedia.org/wiki/Hash_table seems to be a success.
Hash indexes have been improved since 20
Peter Geoghegan writes:
> On 14 September 2011 00:04, Stefan Keller wrote:
>> Has this been verified on a recent release? I can't believe that hash
>> performs so badly on all these points. Theory tells me otherwise, and
>> http://en.wikipedia.org/wiki/Hash_table seems to be a success.
> Hash ind
On Tue, Sep 13, 2011 at 1:22 AM, Arjen van der Meijden <acmmail...@tweakers.net> wrote:
>
> On 12-9-2011 0:44 Anthony Presley wrote:
>
>> A few weeks back, we purchased two refurbished HP DL360 G5s, and were
>> hoping to set them up with PG 9.0.2, running replicated. These machines
>> have (2) 5
OK, connection pooler it is. As I understand it, even if there are no idle
connections available, we'll still get the benefit of putting a turnstile on the
butcher's door.
I also ordered the book as soon as you mentioned it - the title alone was enough
to sell me on it! The book won't be for the erra
On 09/14/2011 02:56 AM, Carlo Stonebanks wrote:
Even for 300 stateful applications that can remain connected for up to a
week, continuously distilling data (imports)?
If they're all doing active work all that time you can still benefit
from a pooler.
Say your server can service 50 connectio
Craig Ringer wrote:
I agreed with almost your entire post, but there is one sentence
with which I take issue.
> However, it will also increase latency for service for those
> workers because they may have to wait a while before their
> transaction runs, even though their transaction will compl
2011/9/14 Tom Lane:
> (...) I think that
> the current state of affairs is still what depesz said, namely that
> there might be cases where they'd be a win to use, except the lack of
> WAL support is a killer. I imagine somebody will step up and do that
> eventually.
Should I open a ticket?
Ste