Hello
2010/10/28 Divakar Singh :
> So another question pops up: What method in PostgreSQL does the stored proc
> use when I issue multiple insert (for loop for 100 thousand records) in the
> stored proc?
Nothing special - but it runs in-process inside the server backend. There
is no data casting, the
On Wed, Oct 27, 2010 at 21:08, Divakar Singh wrote:
> So another question pops up: What method in PostgreSQL does the stored proc
> use when I issue multiple insert (for loop for 100 thousand records) in the
> stored proc?
It uses prepared statements (unless you are using execute). There is
also
On Wed, Oct 27, 2010 at 6:55 PM, Robert Haas wrote:
> On Wed, Oct 27, 2010 at 12:41 AM, Rob Wultsch wrote:
>> On Tue, Oct 26, 2010 at 7:25 AM, Robert Haas wrote:
>>> On Tue, Oct 26, 2010 at 10:13 AM, Rob Wultsch wrote:
The double write buffer is one of the few areas where InnoDB does more
Robert Haas writes:
> Gee, I wonder if it would be possible for PG to automatically do an
> asynchronous commit of any transaction which touches only temp tables.
Hmm ... do we need a commit at all in such a case? If our XID has only
gone into temp tables, I think we need to write to clog, but we d
So another question pops up: What method in PostgreSQL does the stored proc use
when I issue multiple insert (for loop for 100 thousand records) in the stored
proc?
It takes half the time compared to the consecutive "insert" using libpq.
In the backend, does it use COPY or prepared statement? or
Greg Smith writes:
heard privately from two people who have done similar experiments on
Linux and found closer to 8GB to be the point where performance started
So on a machine with 72GB, is 8GB still the recommended value?
Usually have only 10 to 20 connections.
On Wed, Oct 27, 2010 at 6:13 AM, Ivan Voras wrote:
> On 10/26/10 17:41, Merlin Moncure wrote:
>> On Tue, Oct 26, 2010 at 11:08 AM, Leonardo Francalanci
>> wrote:
temp tables are not wal logged or
synced. Periodically they can be flushed to a permanent table.
>>>
>>>
>>> What do you
On Wed, Oct 27, 2010 at 12:41 AM, Rob Wultsch wrote:
> On Tue, Oct 26, 2010 at 7:25 AM, Robert Haas wrote:
>> On Tue, Oct 26, 2010 at 10:13 AM, Rob Wultsch wrote:
>>> The double write buffer is one of the few areas where InnoDB does more
>>> IO (in the form of fsynch's) than PG. InnoDB also has
On 10/28/2010 02:38 AM, Trenta sis wrote:
Hi,
I have a Linux Server (Debian) with Postgres 8.3 and I have problems
with a massive update, about 40 updates/inserts.
If I execute about 10 it seems all ok, but when I execute 40, I
have the same problem with or without a transaction (I
Brad Nicholson writes:
> On 10/27/2010 4:10 PM, Tom Lane wrote:
>> AFAICT from googling, AIX does have posix_fadvise, though maybe it
>> doesn't do anything useful ...
> If there is an easy way to check if it does do anything useful? If so,
> I can check it out.
If you don't see any performanc
On 10/27/2010 4:10 PM, Tom Lane wrote:
Greg Smith writes:
André Volpato wrote:
I disabled effective_io_concurrency on AIX but it made no difference to bitmap
index times.
Brad's point is that it probably doesn't do anything at all on AIX, and
is already disabled accordingly.
AFAICT from googl
Jon Nelson writes:
> I'd like to zoom out a little bit and, instead of focusing on the
> specifics, ask more general questions:
> - does the table being temporary affect anything? Another lister
> emailed me and wondered if ANALYZE on a temporary table might behave
> differently.
Well, the autov
On Wed, Oct 27, 2010 at 4:45 PM, Tom Lane wrote:
> Jon Nelson writes:
>> The sequence goes exactly like this:
>
>> BEGIN;
>> CREATE TEMPORARY TABLE (20 columns, mostly text, a few int).
>> COPY (approx 8 million rows, ~900 MB)[1]
>> UPDATE (2.8 million of the rows)
>> UPDATE (7 rows)
>> UPDATE (2
Kenneth Marshall writes:
> Just keeping the hope alive for faster compression.
Is there any evidence that that's something we should worry about?
I can't recall ever having seen a code profile that shows the
pg_lzcompress.c code high enough to look like a bottleneck compared
to other query costs.
Jon Nelson writes:
> The sequence goes exactly like this:
> BEGIN;
> CREATE TEMPORARY TABLE (20 columns, mostly text, a few int).
> COPY (approx 8 million rows, ~900 MB)[1]
> UPDATE (2.8 million of the rows)
> UPDATE (7 rows)
> UPDATE (250 rows)
> UPDATE (3500 rows)
> UPDATE (3100 rows)
> a bunch
----- Original message -----
| André Volpato wrote:
| > |
| > | If it is being spent in the bitmap index scan, try setting
| > | effective_io_concurrency to 0 for Linux, and see what effect that
| > | has.
| >
| > I disabled effective_io_concurrency on AIX but it made no difference to
| > bitmap inde
Kenneth Marshall, 27.10.2010 22:41:
Different algorithms have been discussed before. A quick search turned
up:
quicklz - GPL or commercial
fastlz - MIT, works with BSD okay
zippy - Google - no idea about the licensing
lzf - BSD-type
lzo - GPL or commercial
zlib - current algorithm
Of these lzf c
On Wed, Oct 27, 2010 at 09:52:49PM +0200, Pierre C wrote:
>> Even if somebody had a
>> great idea that would make things smaller without any other penalty,
>> which I'm not sure I believe either.
>
> I'd say that the only things likely to bring an improvement significant
> enough to warrant the (q
On Wed, Oct 27, 2010 at 2:43 PM, Tom Lane wrote:
> Jon Nelson writes:
>> The most recent experiment shows me that, unless I create whatever
>> indexes I would like to see used *before* the large (first) update,
>> then they just don't get used. At all.
>
> You're making a whole lot of assertions
"Pierre C" wrote:
> in-page compression
How would that be different from the in-page compression done by
TOAST now? Or are you just talking about being able to make it
more aggressive?
-Kevin
Greg Smith writes:
> André Volpato wrote:
>> I disabled effective_io_concurrency on AIX but it made no difference to bitmap
>> index times.
> Brad's point is that it probably doesn't do anything at all on AIX, and
> is already disabled accordingly.
AFAICT from googling, AIX does have posix_fadvi
Ivan Voras wrote:
FWIW, yes - once the IO is fast enough or not necessary (e.g. the
read-mostly database fits in RAM), RAM bandwidth *is* the next
bottleneck and it really, really can be observed in actual loads.
This is exactly what I've concluded, after many rounds of correlating
memory spe
Scott Carey writes:
> Why does hashjoin behave poorly when the inner relation is not
> uniformly distributed and the outer is?
Because a poorly distributed inner relation leads to long hash chains.
In the very worst case, all the keys are on the same hash chain and it
degenerates to a nested-loop
Even if somebody had a
great idea that would make things smaller without any other penalty,
which I'm not sure I believe either.
I'd say that the only things likely to bring an improvement significant
enough to warrant the (quite large) hassle of implementation would be :
- read-only / archi
On 2010-10-27 20:51, Merlin Moncure wrote:
Yes, I am quite aware of how the o/s page cache works. All else being
equal, a more compact database obviously would be preferred. However,
'all else' is not necessarily equal. I can mount my database on a bzip
volume, that must make it faster, right?
On Wed, Oct 27, 2010 at 08:00, Divakar Singh wrote:
> I am attaching my code below.
> Is any optimization possible in this?
> Do prepared statements help in cutting down the insert time to half for this
> kind of inserts?
In half? Not for me. Optimization possible? Sure, using the code you
paste
Jon Nelson writes:
> The most recent experiment shows me that, unless I create whatever
> indexes I would like to see used *before* the large (first) update,
> then they just don't get used. At all.
You're making a whole lot of assertions here that don't square with
usual experience. I think the
On Wed, Oct 27, 2010 at 1:52 PM, Jon Nelson wrote:
> On Wed, Oct 27, 2010 at 1:32 PM, Reid Thompson wrote:
>> On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:
>>> set it to 500 and restarted postgres.
>>
>> did you re-analyze?
>
> Not recently. I tried that, initially, and there was no improv
André Volpato wrote:
|
| If it is being spent in the bitmap index scan, try setting
| effective_io_concurrency to 0 for Linux, and see what effect that has.
I disabled effective_io_concurrency on AIX but it made no difference to bitmap
index times.
Brad's point is that it probably doesn't do
On Oct 26, 2010, at 8:48 PM, Tom Lane wrote:
> Robert Haas writes:
>> I'm also a bit suspicious of the fact that the hash condition has a
>> cast to text on both sides, which implies, to me anyway, that the
>> underlying data types are not text. That might mean that the query
>> planner doesn't
On Wed, Oct 27, 2010 at 12:28 PM, Scott Marlowe wrote:
> On Wed, Oct 27, 2010 at 12:03 PM, Josh Berkus wrote:
>> On 10/26/10 6:14 PM, Scott Marlowe wrote:
>>> There was an earlier thread with
>>> Greg and I in it where we posted the memory bandwidth numbers for that
>>> machine and it was insan
On Wed, Oct 27, 2010 at 1:32 PM, Reid Thompson wrote:
> On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:
>> set it to 500 and restarted postgres.
>
> did you re-analyze?
Not recently. I tried that, initially, and there was no improvement.
I'll try it again now that I've set the stats to 500.
Trenta sis wrote:
>
> Hi,
>
> I have a Linux Server (Debian) with Postgres 8.3 and I have problems with a
> massive update, about 40 updates/inserts.
Updates or Inserts?
> If I execute about 10 it seems all ok, but when I execute 40, I have
> the same problem with or without a tr
On Wed, Oct 27, 2010 at 2:42 PM, Jesper Krogh wrote:
> On 2010-10-27 20:28, Merlin Moncure wrote:
>>
>> Postgres indexes are pretty compact, and oracle (internals I am not
>> familiar with) also has to do MVCC type management, so I am suspecting
>> your measurement is off (aka, operator error) or
On 10-10-27 02:14 PM, Divakar Singh wrote:
Yes, this is a very clearly visible problem.
The difference between Oracle and PG increases with more rows:
when Oracle takes 3 GB, PG takes around 6 GB.
I only use varchar.
I will try to use your tips on "smart table layout, toast compression".
Assuming thes
On 2010-10-27 20:28, Merlin Moncure wrote:
Postgres indexes are pretty compact, and oracle (internals I am not
familiar with) also has to do MVCC type management, so I am suspecting
your measurement is off (aka, operator error) or oracle is cheating
somehow by optimizing away storage requirements
Hi,
I have a Linux Server (Debian) with Postgres 8.3 and I have problems with a
massive update, about 40 updates/inserts.
If I execute about 10 it seems all ok, but when I execute 40, I have
the same problem with or without a transaction (I need to do it with a
transaction) increase memor
On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:
> set it to 500 and restarted postgres.
did you re-analyze?
On Wed, Oct 27, 2010 at 2:14 PM, Divakar Singh wrote:
> Yes, this is a very clearly visible problem.
> The difference between Oracle and PG increases with more rows:
> when Oracle takes 3 GB, PG takes around 6 GB.
> I only use varchar.
> I will try to use your tips on "smart table layout, toast compres
On Wed, Oct 27, 2010 at 12:03 PM, Josh Berkus wrote:
> On 10/26/10 6:14 PM, Scott Marlowe wrote:
>> There was an earlier thread with
>> Greg and I in it where we posted the memory bandwidth numbers for that
>> machine and it was insane how much data all 48 cores could pump into /
>> out of memor
On Wed, Oct 27, 2010 at 12:59 PM, Jon Nelson wrote:
> On Wed, Oct 27, 2010 at 12:44 PM, Mladen Gogala
> wrote:
>> On 10/27/2010 1:29 PM, Jon Nelson wrote:
>> How big is your default statistics target? The default is rather small, it
>> doesn't produce very good or usable histograms.
>
> Currently
Yes, this is a very clearly visible problem.
The difference between Oracle and PG increases with more rows:
when Oracle takes 3 GB, PG takes around 6 GB.
I only use varchar.
I will try to use your tips on "smart table layout, toast compression".
Assuming these suggested options do not have any performan
On Wed, Oct 27, 2010 at 2:06 PM, Mladen Gogala
wrote:
> Scott, I find this very hard to believe. If you are inserting into a
> temporary table and then into the target table, you will do 2 inserts
> instead of just one. What you are telling me is that it is faster for me to
> drive from NYC to Was
On Wed, Oct 27, 2010 at 12:44 PM, Mladen Gogala
wrote:
> On 10/27/2010 1:29 PM, Jon Nelson wrote:
> How big is your default statistics target? The default is rather small, it
> doesn't produce very good or usable histograms.
Currently, default_statistics_target is 50.
I note that if I create a i
On 10/27/2010 1:48 PM, Scott Carey wrote:
It is almost always significantly faster than a direct bulk load into a table.
* The temp table has no indexes, the final table usually does, bulk operations
on indexes are faster than per row operations.
* The final table might require both updates and
On Tue, Oct 26, 2010 at 11:10 PM, Divakar Singh wrote:
> Dear All,
> Thanks for your inputs on the insert performance part.
> Any suggestion on storage requirement?
> VACUUM is certainly not an option, because this is something related to
> maintenance AFTER insertion.
> I am talking about the pla
On 10/26/10 6:14 PM, Scott Marlowe wrote:
> There was an earlier thread with
> Greg and I in it where we posted the memory bandwidth numbers for that
> machine and it was insane how much data all 48 cores could pump into /
> out of memory at the same time.
Well, the next step then is to do some
On Oct 26, 2010, at 2:54 PM, Mladen Gogala wrote:
> On 10/26/2010 5:27 PM, Jon Nelson wrote:
>> start loop:
>> populate rows in temporary table
>> insert from temporary table into permanent table
>> truncate temporary table
>> loop
>>
>> I do something similar, where I COPY data to a tem
On 10/27/2010 1:29 PM, Jon Nelson wrote:
I have an app which imports a lot of data into a temporary table, does
a number of updates, creates some indexes, and then does a bunch more
updates and deletes, and then eventually inserts some of the columns
from the transformed table into a permanent ta
Hi Steve and other friends,
Some information you would be interested in:
I did some further tests using libpq in my code.
I used a stored proc to insert 100 thousand rows in a table; it took 25 sec
(almost the same as the time taken by Oracle PL/SQL and the OCI interface).
The same inserts through libpq take 70
Hello
>
> The 8.4 machines have more memory than the 8.3.7 and are in general much
> better machines.
> 8.4 settings
> Shared_buffers 18GB
> effective_cache_size 18GB
>
> Machines have 72GB of RAM
> Tried turning off sequential scan on the 8.4.5 and that did not help.
>
> Any ideas/suggestions?
>
CentOS 5.4 and 5.5
Query
SELECT sum(usramt) as givensum,
sum (case when usedid > 0 then usramt else 0 end) as usedsum
FROM argrades
WHERE userstatus in (5) and
membid in (select distinct members.membid from members, cards
where members.membid =
On 10-10-26 05:04 PM, André Volpato wrote:
----- Original message -----
| On 10-10-25 03:26 PM, André Volpato wrote:
|> | On Mon, Oct 25, 2010 at 2:21 PM, André Volpato
|> | wrote:
(...)
|> |> These times keep repeating after the second run, and I can
|> |> ensure AIX isn't touching t
On 10/26/10 17:41, Merlin Moncure wrote:
> On Tue, Oct 26, 2010 at 11:08 AM, Leonardo Francalanci
> wrote:
>>> temp tables are not wal logged or
>>> synced. Periodically they can be flushed to a permanent table.
>>
>>
>> What do you mean with "Periodically they can be flushed to
>> a permanen
One last note. Our vendor at the time we ordered our quad 12 core
machines could only provide that mobo in a 1U chassis. Consequently
we bought all external arrays for that machine. Since you're looking
at a dual 8 core machine, you should be able to get a mobo like that
in almost any chassis yo
On Wed, Oct 27, 2010 at 1:37 AM, Yeb Havinga wrote:
> Scott Marlowe wrote:
>>
>> There was an earlier thread with
>> Greg and I in it where we posted the memory bandwidth numbers for that
>> machine and it was insane how much data all 48 cores could pump into /
>> out of memory at the same time.
>
Scott Marlowe wrote:
There was an earlier thread with
Greg and I in it where we posted the memory bandwidth numbers for that
machine and it was insane how much data all 48 cores could pump into /
out of memory at the same time.
Yeah, it was insane. Building an economical 'that generation optero