Ali,
> You can save the source as partition.c and use:
>
> gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
> -Wformat-security -fno-strict-aliasing -fwrapv -fpic -DREFINT_VERBOSE -I.
> -I. -I"/usr/pgsql-9.2/include/server
> Date: Thu, 17 Jan 2013 15:38:14 +0100
> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table
> From: alipou...@gmail.com
> To: charle...@outlook.com
> CC: pgsql-performance@postgresql.org
>
>
2012/12/27 Charles Gomes
> So far that's what I got http://www.widesol.com/~charles/pgsql/partition.c
> I had some issues, as he uses HeapTuples and on 9.2 I see a Slot.
>
Hi Charles,
I copied your C code partition.c and am trying to test it.
For compiling you suggest :
...
gcc -I "./" -fpic -c
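Rather than hand-maintaining that flag list, a PGXS Makefile is the usual way to build a server-side C file; this minimal sketch (module name assumed from partition.c) pulls the correct include paths from whichever pg_config is on PATH:

```makefile
# Minimal PGXS Makefile: builds partition.c into partition.so.
# Assumes pg_config for the target 9.2 installation is first on PATH.
MODULES = partition
PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
```

Then `make && make install` compiles and installs the shared library.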
On Thursday, December 20, 2012, Scott Marlowe wrote:
>
> 3: Someone above mentioned rules being faster than triggers. In my
> experience they're WAY slower than triggers but maybe that was just on
> the older pg versions (8.3 and lower) we were doing this on. I'd be
> interested in seeing some b
* Jeff Janes (jeff.ja...@gmail.com) wrote:
> I had thought that too, but the catch is that the target expressions do not
> need to be constants when the function is created. Indeed, they can even
> be volatile.
Right, any optimization in this regard would only work in certain
instances- eg: when
2012/12/28 Vitalii Tymchyshyn :
> Why so? Basic form "case lvalue when rvalue then out ... end" is much like
> switch.
Sorry, to be honest, I missed that distinction and didn't expect that to
work as-is, yet apparently it does. Does it currently perform the same
as an if/elsif tree or is it imple
2012/12/28 Vitalii Tymchyshyn :
> Why so? Basic form "case lvalue when rvalue then out ... end" is much like
> switch.
> The "case when condition then out ... end" is different, more complex beast,
> but first one is essentially a switch. If it is now transformed into
> "case when lvalue = rvalue1
It's a pity. Why is it not listed in the "Compatibility" section of the
create trigger documentation? I think this makes "for each statement"
triggers incompatible with SQL99.
Why so? Basic form "case lvalue when rvalue then out ... end" is much like
switch.
The "case when condition then out ... end" is different, more complex
beast, but first one is essentially a switch. If it is now transformed into
"case when lvalue = rvalue1 then out1 when lvalue=rvalue2 then out2 ..
Hello
>
> Also, for bulk insert, have you tried "for each statement" triggers instead
> of "for each row"?
> This would look like a lot of inserts and would not be fast in
> single-row-insert case, but can give you benefit for huge inserts.
> It should look like
> insert into quotes_2012_09_10 sel
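Worth noting: on 9.2 a statement-level trigger has no way to see the inserted rows, so the quoted idea cannot be written directly there; transition tables only arrived in PostgreSQL 10. On 10+ it could look like the following sketch (table and column names are assumptions, and the parent is assumed to hold no rows of its own):

```sql
CREATE OR REPLACE FUNCTION quotes_route_stmt() RETURNS trigger AS $$
BEGIN
    -- One set-oriented insert per partition touched by the load:
    INSERT INTO quotes_2012_09_10
        SELECT * FROM newrows
        WHERE received_time >= DATE '2012-09-10'
          AND received_time <  DATE '2012-09-11';
    -- ...repeat for the other partitions...
    DELETE FROM ONLY quotes;  -- drop the routed copies from the parent
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER quotes_route
    AFTER INSERT ON quotes
    REFERENCING NEW TABLE AS newrows
    FOR EACH STATEMENT EXECUTE PROCEDURE quotes_route_stmt();
```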
Vitalii,
* Vitalii Tymchyshyn (tiv...@gmail.com) wrote:
> There is switch-like sql case:
[...]
> It should work like C switch statement.
It does and it doesn't. It behaves generally like a C switch statement,
but is much more flexible and therefore can't be optimized like a C
switch statement ca
BTW: If "select count(*) from new" is fast, you can even choose the
strategy in the trigger depending on the insert size.
There is switch-like sql case:
39.6.2.4. Simple CASE
CASE search-expression
WHEN expression [, expression [ ... ]] THEN
statements
[ WHEN expression [, expression [ ... ]] THEN
statements
... ]
[ ELSE
statements ]
END CASE;
It should work like C switch statement.
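As a concrete illustration, a routing trigger written with this simple-CASE form might look like the sketch below (all table, column, and partition names are assumptions, not Charles's actual schema). Note that whether this runs like a C jump table is exactly the question raised in this thread: PL/pgSQL evaluates the branches one by one.

```sql
CREATE OR REPLACE FUNCTION quotes_route_row() RETURNS trigger AS $$
BEGIN
    CASE NEW.received_time::date
        WHEN DATE '2012-09-10' THEN
            INSERT INTO quotes_2012_09_10 VALUES (NEW.*);
        WHEN DATE '2012-09-11' THEN
            INSERT INTO quotes_2012_09_11 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for %', NEW.received_time;
    END CASE;
    RETURN NULL;  -- the row went to a child, not the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER quotes_route
    BEFORE INSERT ON quotes
    FOR EACH ROW EXECUTE PROCEDURE quotes_route_row();
```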
ve Emmanuel for posting his trigger code that I will hack into my own :P
Thanks Emmanuel.
* Jeff Janes (jeff.ja...@gmail.com) wrote:
> If the main goal is to make it faster, I'd rather see all of plpgsql get
> faster, rather than just a special case of partitioning triggers. For
> example, right now a CASE statement with 100 branches is about
> the same speed as an equivalent list of
On Monday, December 24, 2012, Charles Gomes wrote:
> By the way, I've just re-wrote the code to target the partitions
> individually and I've got almost 4 times improvement.
> Shouldn't it be faster to process the trigger, I would understand if there
> was no CPU left, but there is lots of cpu to
On Wednesday, December 26, 2012, Pavel Stehule wrote:
> 2012/12/27 Jeff Janes :
> >
> > More automated would be nice (i.e. one operation to make both the check
> > constraints and the trigger, so they can't get out of sync), but would
> not
> > necessarily mean faster.
>
>
Native implementation
way to speedup unless the insert code is
partition aware.
On Thursday, December 20, 2012, Charles Gomes wrote:
> True, that's the same I feel, I will be looking to translate the trigger
> to C if I can find good examples, that should accelerate.
>
I think your performance bottleneck is almost certainly the dynamic SQL.
Using C to generate that dynamic
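The two styles being contrasted, as fragments of a PL/pgSQL trigger body (the partition naming scheme and column name are assumptions):

```sql
-- Dynamic: the statement text changes per row, so the SQL engine
-- re-parses and re-plans it on every execution.
EXECUTE 'INSERT INTO quotes_'
        || to_char(NEW.received_time, 'YYYY_MM_DD')
        || ' SELECT ($1).*'
    USING NEW;

-- Static: PL/pgSQL caches the prepared plan after the first execution.
INSERT INTO quotes_2012_09_10 VALUES (NEW.*);
```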
On Thursday, December 20, 2012, Charles Gomes wrote:
> Without hyperthreading CPU still not a bottleneck, while I/O is only 10%
> utilization.
>
> top - 14:55:01 up 27 min, 2 users, load average: 0.17, 0.19, 0.14
> Tasks: 614 total, 17 running, 597 sleeping, 0 stopped, 0 zombie
> Cpu(s): 73
id use partitioning with a trigger in C and I don't have the know-how
without examples.
On Thursday, December 20, 2012, Charles Gomes wrote:
> Jeff,
>
> The 8288 writes are fine, as the array has a BBU, it's fine. You see about
> 4% of the utilization.
>
BBU is great for latency, but it doesn't do much for throughput, unless it
is doing write combining behind the scenes. Is it HDD
Charles Gomes writes:
> Using rules would be totally bad as I'm partitioning daily and after one year
> having 365 lines of IF won't be fun to maintain.
You should probably rethink that plan anyway. The existing support for
partitioning is not meant to support hundreds of partitions; you're
goi
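The machinery Tom is referring to: with 9.2-era inheritance partitioning, each child carries a CHECK constraint, and the planner's constraint_exclusion pass must examine every partition's constraint for each query, which is why hundreds of daily partitions get expensive. One daily child might be declared like this sketch (names assumed):

```sql
-- Child table for one day; the CHECK constraint is what lets
-- constraint_exclusion prune this partition at plan time.
CREATE TABLE quotes_2012_09_10 (
    CHECK (received_time >= DATE '2012-09-10'
       AND received_time <  DATE '2012-09-11')
) INHERITS (quotes);
```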
Hi,
On 21 December 2012 04:29, Charles Gomes wrote:
> When I target the MASTER table on all the inserts and let
> the trigger decide what partition to choose from it takes 4 hours.
>
> If I target the partitioned table directly during the
> insert I can get 4 times better performance. It takes 1
o share I would
love to start from it and share with other people so everyone can benefit.
Charles,
* Charles Gomes (charle...@outlook.com) wrote:
> I’m doing 1.2 billion inserts into a table partitioned into 15.
Do you end up having multiple threads writing to the same, underlying,
tables..? If so, I've seen that problem before. Look at pg_locks while
things are running and see if t
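A quick way to do what Stephen suggests while the load is running, using the standard pg_locks catalog view:

```sql
-- Ungranted locks indicate the writer threads blocking one another:
SELECT locktype, relation::regclass AS rel, mode, granted, pid
FROM pg_locks
WHERE NOT granted;
```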
> On Thu, Dec 20, 2012 at
On Thu, Dec 20, 2012 at 10:29 AM, Charles Gomes wrote:
> Hello guys
>
> I’m doing 1.2 Billion inserts into a table partitioned in
> 15.
>
> When I target the MASTER table on all the inserts and let
> the trigger decide what partition to choose from it takes 4 hours.
>
> If I target the partitioned
Hello guys
I’m doing 1.2 billion inserts into a table partitioned into 15.
When I target the MASTER table on all the inserts and let
the trigger decide what partition to choose from it takes 4 hours.
If I target the partitioned table directly during the
insert I can get 4 times better perfor
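The two load paths being compared, sketched as SQL (the staging table and column names are assumptions):

```sql
-- 1) Through the parent: every row pays the routing-trigger cost.
INSERT INTO quotes SELECT * FROM staging;

-- 2) Partition-aware: the loader targets each child directly and
--    bypasses the trigger entirely, which is where the ~4x gain
--    comes from.
INSERT INTO quotes_2012_09_10
    SELECT * FROM staging
    WHERE received_time >= DATE '2012-09-10'
      AND received_time <  DATE '2012-09-11';
```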