scott.marlowe wrote:
>On Wed, 14 Jan 2004, Adam Alkins wrote:
>
>>scott.marlowe wrote:
>>
>>>A few tips from an old PHP/Apache/PostgreSQL developer.
>>>
>>>1: Avoid pg_pconnect unless you are certain you have load tested the
>>>system and it will behave properly. pg_pconnect often …
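Persistent connections hold one idle backend per Apache child, so they can pin far more backends than the site's real concurrency needs. A quick way to watch this during a load test (a generic check, not part of the truncated tip above; assumes the stats collector is enabled):

    -- how many backends are connected right now
    SELECT count(*) FROM pg_stat_activity;

    -- the configured ceiling those persistent connections count against
    SHOW max_connections;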
We've found these tools
http://scsirastools.sourceforge.net/ and
http://www.seagate.com/support/seatools/ (for Seagate drives)
useful for checking the settings of SCSI disks and for changing settings
on Seagate drives.
What are people using for IDE disks?
Are you all using hdparm on Linux?
http://freshmeat.net …
"Anjan Dave" <[EMAIL PROTECTED]> writes:
> Question is, does the 80MB buffer allocation correspond to ~87MB per
> postmaster instance? (with about 100 instances of postmaster, that will
> be about 100 x 80MB =3D 8GB??)
Most likely, top is counting some portion of the shared memory block
against ea
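The shared block exists only once, so summing top's per-process figures counts it repeatedly. Using only the numbers from the question above (the ~7MB private figure is simply 87MB minus the 80MB shared block, for illustration):

    naive sum from top:  100 x ~87MB                      ~ 8.7GB (80MB counted 100 times)
    actual footprint:    80MB shared + 100 x ~7MB private ~ 780MB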
On Thursday 15 January 2004 22:49, Anjan Dave wrote:
> Gurus,
>
> I have defined the following values on a db:
>
> shared_buffers = 10240 # 10240 = 80MB
> max_connections = 100
> sort_mem = 1024 # 1024KB is 1MB per operation
> effective_cache_size = 262144 # equals 2GB for 8k pages
Gurus,

I have defined the following values on a db:

shared_buffers = 10240          # 10240 = 80MB
max_connections = 100
sort_mem = 1024                 # 1024KB is 1MB per operation
effective_cache_size = 262144   # equals 2GB for 8k pages

Rest of the values are …
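A quick sanity check of the arithmetic in those comments (assuming the stock 8kB page size; shared_buffers and effective_cache_size are counted in pages, sort_mem in kB):

    -- 10240 pages * 8kB = 80MB; 262144 pages * 8kB = 2GB; 1024kB = 1MB
    SELECT 10240 * 8 / 1024          AS shared_buffers_mb,
           262144 * 8 / 1024 / 1024  AS effective_cache_size_gb,
           1024 / 1024               AS sort_mem_mb;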
On 16/01/2004, at 2:44 AM, Tom Lane wrote:
...
As noted elsewhere, it's highly likely that this has nothing to do with
the OS, and everything to do with write caching in the disks being
used.
I assume you are benchmarking small individual transactions (one insert
per xact). In such scenarios it's …
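With honest disks and fsync on, every COMMIT must reach the platter, so grouping inserts into one transaction pays that cost once per batch instead of once per row. A generic sketch of the comparison (table t is hypothetical, not from the thread):

    -- one transaction per insert: one forced WAL flush per row
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (2);

    -- batched: one forced WAL flush for the whole group
    BEGIN;
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (2);
    -- ... many more rows ...
    COMMIT;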
Hello Richard Huxton,
On Thursday, 15 January 2004 at 17:10 you wrote:
RH> On Thursday 15 January 2004 13:13, pginfo wrote:
>> Hi,
>>
>> I am using pg 7.4.1 and have created a trigger on a table with 3M rows.
>> If I start a massive update on this table, pg executes this trigger on
>> every row and dramatically slows the system. …
"Rigmor Ukuhe" <[EMAIL PROTECTED]> writes:
> query: select "NP_ID" from a WHERE "NP_ID" > '0' [is slow]
>
> query: select "NP_ID" from a WHERE "NP_ID" > '1' [is fast]
>
> There are about 37K rows and only about 100 of then are not "NP_ID" = 0
Yeah, it's scanning over all the zero values when you
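With ~37K zeros and only ~100 interesting rows, one common remedy for this kind of skew (my suggestion, not necessarily where the truncated reply was heading; assumes "NP_ID" is an integer, and the index name is made up) is a partial index that leaves the dominant value out:

    -- index only the ~100 rows the slow query actually wants
    CREATE INDEX a_np_id_nonzero ON a ("NP_ID") WHERE "NP_ID" > 0;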
On Thursday 15 January 2004 13:13, pginfo wrote:
> Hi,
>
> I am using pg 7.4.1 and have created a trigger on a table with 3M rows.
> If I start a massive update on this table, pg executes this trigger on
> every row and dramatically slows the system.
> Is there any way in pg to have the trigger execute …
Almost identical queries have very different execution speeds (PostgreSQL
7.2.4).
query: select "NP_ID" from a WHERE "NP_ID" > '0'
Index Scan using NP_ID_a on a (cost=0.00..13.01 rows=112 width=4) (actual
time=16.89..18.11 rows=93 loops=1)
Total runtime: 18.32 msec
--
Syd <[EMAIL PROTECTED]> writes:
> However, with postgres 7.4 on Mac OSX 10.2.3, we're getting an amazing
> 500 inserts per second.
> We can only put this down to the OS.
As noted elsewhere, it's highly likely that this has nothing to do with
the OS, and everything to do with write caching in the …
Hi,
I am using pg 7.4.1 and have created a trigger on a table with 3M rows.
If I start a massive update on this table, pg executes this trigger on
every row and dramatically slows the system.
Is there any way in pg to have the trigger execute only when certain
fields change?
For example I …
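7.4 has no conditional clause on CREATE TRIGGER itself, so the usual workaround is to return early from the trigger function when the watched columns are unchanged. A minimal sketch (table, column, and the expensive work are all hypothetical; NULLs in the watched column would need extra handling):

    CREATE OR REPLACE FUNCTION skip_unchanged() RETURNS trigger AS '
    BEGIN
        -- bail out cheaply when the watched field did not change
        IF NEW.some_field = OLD.some_field THEN
            RETURN NEW;
        END IF;
        -- ... expensive per-row work goes here ...
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER big_table_changes BEFORE UPDATE ON big_table
        FOR EACH ROW EXECUTE PROCEDURE skip_unchanged();

The per-row call overhead remains, but the body becomes nearly free for rows whose watched columns are untouched.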
> On a variety of hardware with Redhat, and versions of postgres, we're
> not getting much better than 50 inserts per second. This is prior to
> moving WAL to another disk, and fsync is on.
>
> However, with postgres 7.4 on Mac OSX 10.2.3, we're getting an amazing
> 500 inserts per second.
>
> We can only put this down to the OS. …
I've read most of the threads on insert speed in this list and wanted
to share some interesting observations and a question.

We've been benchmarking some dbs to implement Bayesian processing on an
email server. This involves frequent inserts and updates to the
following table:

create table baye …
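The table definition is cut off above, so only a general note: for insert-heavy workloads like token counting, COPY loads rows far faster than individual INSERTs. A generic illustration in psql syntax, with tab-separated data (the table and columns here are invented stand-ins for the truncated definition):

    COPY bayes_tokens (token, spam_count, ham_count) FROM STDIN;
    hello	1	3
    viagra	42	0
    \.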