On Wed, 11.07.2007, at 22:19:58 +0200, Andreas Kretschmer wrote the following:
> > Also, part of the problem here looks to be an overestimate of the number
> > of rows matching ab = 347735. It might help to increase the statistics
> > target for that column.
>
> I will try this tomorrow and report back with info.
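A minimal sketch of that change (the table name here is hypothetical; the column ab is the one from the plan above, and 10 is the 8.x default target):
ALTER TABLE some_table ALTER COLUMN ab SET STATISTICS 100;
ANALYZE some_table;
-- raises the per-column histogram/MCV detail from the default of 10,
-- then regathers stats so the planner actually sees the new target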
All this talk of WAL writing lately has me wondering about something I
haven't spent enough time looking at the source to figure out myself this
week: are there any good rules of thumb for estimating WAL volume? I'm
used to just measuring it via benchmarking, but it strikes me that a
formula would be nice.
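One rough way to measure it, short of a full benchmark harness (a sketch; pg_current_xlog_location() exists as of 8.2):
SELECT pg_current_xlog_location();  -- note the position, e.g. 0/1A2B3C4D
-- ... run the workload under test ...
SELECT pg_current_xlog_location();  -- note it again
-- Positions are logid/offset in hexadecimal; the byte difference between
-- the two readings is the WAL generated (each segment file is 16 MB).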
Greg Smith writes:
During peak operation there will be about 5 to 20 updates per second
with a handful of reads.
There really is no reason you need to be concerned about WAL from a
performance perspective if this is your expected workload.
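For rough scale: assuming (an assumption, not a measurement) about 1 KB of WAL per update, 20 updates/s is roughly 20 KB/s, or under 2 GB/day. Actual volume varies with row width and the number of indexes touched, but it stays far below what even a single disk can sustain.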
I was able to get the second controller with battery backup.
On Wed, 11 Jul 2007, Francisco Reyes wrote:
I am re-working the specs of the machine to try to get a 4-port 3ware with
the battery backup.
That's really not necessary, it just would be better (and obviously more
expensive). The warnings you've been getting here have been to let you
know the risks involved.
"Adriaan van Os" <[EMAIL PROTECTED]> writes:
> Recently, I have been doing extensive profiling of a version 8.1.4 Postgres DB
> with about 175 tables and 5 GB of data (the server running on Fedora Linux and
> the clients on Windows XP). Surprisingly, one of the bottlenecks is TRUNCATE
> TABLE and that command is really slow as compared to other operations.
Adriaan van Os <[EMAIL PROTECTED]> writes:
> Surprisingly, one of the bottlenecks is TRUNCATE TABLE and that
> command is really slow as compared to other operations.
When you don't quantify that statement at all, it's hard to make an
intelligent comment on it, but TRUNCATE per se shouldn't be slow.
Recently, I have been doing extensive profiling of a version 8.1.4 Postgres DB with about 175
tables and 5 GB of data (the server running on Fedora Linux and the clients on Windows XP).
Surprisingly, one of the bottlenecks is TRUNCATE TABLE and that command is really slow as compared
to other operations.
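One easy way to quantify it from psql (a sketch; the table name is hypothetical):
\timing
TRUNCATE TABLE some_table;  -- psql now prints "Time: ... ms" after each statement
-- comparing against a DELETE FROM on an identical copy makes the claim concrete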
Ok thanks.
iostat confirmed it's an IO bottleneck.
Will add some discs to the RAID unit.
Used 4 Raptor discs in RAID 10 until now.
best regards,
patric
Tom Lane wrote:
Patric de Waha <[EMAIL PROTECTED]> writes:
Postgres is running on a dedicated server P4 DualCore, 4 Gig Ram.
Marc Cousin <[EMAIL PROTECTED]> writes:
> Nevertheless, shouldn't the third estimate be smaller or equal to the sum of
> the two others ?
The planner's estimation for subplan conditions is pretty primitive
compared to joinable conditions. When you add the OR, it's no longer
possible to treat the two conditions separately.
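A common workaround, sketched here with the placeholder names from the report quoted above: split the OR into a UNION so that each arm gets estimated on its own.
SELECT field FROM table WHERE condition1
UNION
SELECT field FROM table WHERE condition2;
-- UNION also removes rows matching both conditions; use UNION ALL
-- only if duplicates are acceptable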
Joshua D. Drake writes:
Without a BBU you are guaranteed at some point to have catastrophic
failure unless you turn off write cache, which would then destroy your
performance.
I am re-working the specs of the machine to try to get a 4-port 3ware with
the battery backup.
On 7/11/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
On 7/11/07, A. Kretschmer <[EMAIL PROTECTED]> wrote:
> On Wed, 11.07.2007, at 14:52:01 -0400, Alex Deucher wrote the following:
> > >Okay, i got a really different plan, but i expected _NOT_ a
> > >performance-boost like this. I expected the opposite.
Tom Lane <[EMAIL PROTECTED]> wrote:
Thank you and Alex for the response.
> > PostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5
> > (Debian 1:3.3.5-13)
>
> You need a newer one.
I know ;-)
>
> This is simply a stupid choice on the part of choose_bitmap_and() ---
> it's
Andreas Kretschmer <[EMAIL PROTECTED]> writes:
> Okay, i know, not really a recent version:
> PostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5 (Debian
> 1:3.3.5-13)
You need a newer one.
> -> BitmapAnd (cost=1217.69..1217.69 rows=39 width=0) (actual
> time=163.68
On 7/11/07, A. Kretschmer <[EMAIL PROTECTED]> wrote:
On Wed, 11.07.2007, at 14:52:01 -0400, Alex Deucher wrote the following:
> >Okay, i got a really different plan, but i expected _NOT_ a
> >performance-boost like this. I expected the opposite.
> >
> >
> >It's not really a problem, i just played with this. But i'm confused
> >about this...
Francisco Reyes wrote:
Alan Hodgson writes:
I would spring for a 4-port with a BBU, though, and then put the WAL
on the drives with the OS.
The machine is already over budget. :-(
I will check the price difference, but it's unlikely I will get approval.
Without a BBU you are guaranteed at some point to have catastrophic
failure unless you turn off write cache, which would then destroy your
performance.
Alan Hodgson writes:
I would spring for a 4-port with a BBU, though, and then put the WAL on the
drives with the OS.
The machine is already over budget. :-(
I will check the price difference, but it's unlikely I will get approval.
On Wed, 11.07.2007, at 14:52:01 -0400, Alex Deucher wrote the following:
> >Okay, i got a really different plan, but i expected _NOT_ a
> >performance-boost like this. I expected the opposite.
> >
> >
> >It's not really a problem, i just played with this. But i'm confused
> >about this...
On Wed, 11 Jul 2007, Alan Hodgson wrote:
Presumably he'll have the 2-port configured for write-through operation.
This is the real key to his question. In order to get acceptable
performance for the operating system, Francisco may very well need the OS
disks to be configured in write-back mode.
On 7/11/07, Andreas Kretschmer <[EMAIL PROTECTED]> wrote:
Hi,
Okay, i know, not really a recent version:
PostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5 (Debian
1:3.3.5-13)
I have a fresh ANALYZED table with some indexes.
scholl=*# set enable_bitmapscan=1;
SET
scholl=*#
On 11-Jul-07, at 2:35 PM, Greg Smith wrote:
On Wed, 11 Jul 2007, Jim Nasby wrote:
I suppose an entirely in-memory database might be able to swamp a
2 drive WAL as well.
You can really generate a whole lot of WAL volume on an EMC SAN if
you're doing UPDATEs fast enough on data that is mostly in-memory.
On Wed, 11 Jul 2007, Jim Nasby wrote:
I suppose an entirely in-memory database might be able to swamp a 2
drive WAL as well.
You can really generate a whole lot of WAL volume on an EMC SAN if you're
doing UPDATEs fast enough on data that is mostly in-memory. Takes a
fairly specific type of workload.
Can you list others that seemed out of place?
Thanks.
Regards,
Jignesh
Tom Lane wrote:
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
Heikki Linnakangas wrote:
May I ask you why you set max_prepared_transactions to 450, while
you're apparently not using prepared transactions, according to this quote:
Hi,
Okay, i know, not really a recent version:
PostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5 (Debian
1:3.3.5-13)
I have a fresh ANALYZED table with some indexes.
scholl=*# set enable_bitmapscan=1;
SET
scholl=*# explain analyse select sum(flaeche) from bde_meldungen where
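The comparison run then looks like this (the rest of the WHERE clause is elided above, so it is elided here as well):
scholl=*# set enable_bitmapscan=0;
SET
scholl=*# explain analyse select sum(flaeche) from bde_meldungen where ...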
On Wed, Jul 11, 2007 at 01:39:39PM -0400, Chris Browne wrote:
> load causes. A fallout of this is that those disks are likely to be
> worked harder than the disk used for storing "plain old data," with
> the result that if you devote disk to WAL, you'll likely burn thru
> replacement drives faster
On Jul 11, 2007, at 12:39 PM, Chris Browne wrote:
- Split off a set (6?) for WAL
In my limited testing, 6 drives for WAL would be complete overkill in
almost any case. The only example I've ever seen where WAL was able
to swamp 2 drives was the DBT testing that Mark Wong was doing at
OSDL.
[EMAIL PROTECTED] (Dave Cramer) writes:
> On 11-Jul-07, at 10:05 AM, Gregory Stark wrote:
>
>> "Dave Cramer" <[EMAIL PROTECTED]> writes:
>>
>>> Assuming we have 24 73G drives is it better to make one big
>>> metalun and carve
>>> it up and let the SAN manage where everything is, or is it
>>> better to specify which spindles are where.
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
> Heikki Linnakangas wrote:
>> May I ask you why you set max_prepared_transactions to 450, while
>> you're apparently not using prepared transactions, according to this
>> quote:
> Yep.. one of the things that we didn't revert back and got strayed out.
On Wed, Jul 11, 2007 at 12:10:55PM -0400, Tom Lane wrote:
> Well, on some platforms (ie consumer-grade PCs) explain analyze can be a
> lot slower than just running the query,
Yes, I suppose I exaggerated when I said "can't get any faster", but
given that the OP was talking on the order of hours for the query.
I don't think so, because DB2 is running on a Sun SPARC T1 processor
(http://www.sun.com/processors/UltraSPARC-T1/), which implements many
more features, like thread-level parallelism, than the AMD Opteron.
the DB2 installation:
J2EE AppServer HW (SUT hardware)
Hardware Vendor: Sun Microsystems
On Wed, Jul 11, 2007 at 05:35:33PM +0200, Patric de Waha wrote:
> Mainly updates on 1 tuple.
Are you vacuuming that table enough?
> And more or less complex SELECT statements.
>I noticed that the overall performance of postgres is decreasing
> when one or more long
> readers are present
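One quick way to check (8.1-era tools; the table name is hypothetical):
VACUUM VERBOSE hot_table;  -- reports how many dead row versions were removed
SELECT relname, n_tup_upd, n_tup_del
  FROM pg_stat_user_tables WHERE relname = 'hot_table';
-- heavy update/delete counts with infrequent vacuums mean table bloat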
Patric de Waha <[EMAIL PROTECTED]> writes:
>Postgres is running on a dedicated server P4 DualCore, 4 Gig Ram.
When you don't even mention your disk hardware, that's a bad sign.
In a database server the disk is usually more important than the CPU.
>Why do long readers influence the rest of the queries?
Heikki Linnakangas wrote:
May I ask you why you set max_prepared_transactions to 450, while
you're apparently not using prepared transactions, according to this
quote:
Recoverable 2-phase transactions were used to coordinate the
interaction between
the database server and JMS server usi
"Andrew Sullivan" <[EMAIL PROTECTED]> writes:
> On Wed, Jul 11, 2007 at 08:21:40AM -0700, smiley2211 wrote:
>>
>> How can I get the time it takes a query to execute - explain analyze is
>> taking over 5 hours to complete
>
> You can't get it any faster than what explain analyse does: it runs
> the query.
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Wed, Jul 11, 2007 at 08:21:40AM -0700, smiley2211 wrote:
>> How can I get the time it takes a query to execute - explain analyze is
>> taking over 5 hours to complete
> You can't get it any faster than what explain analyse does: it runs
> the query.
On Wednesday 11 July 2007 08:36, Andrew Sullivan <[EMAIL PROTECTED]>
wrote:
> Put the WAL where the battery is. Even if it's slower (and I don't
> know whether it will be), I assume that having the right data more
> slowly is better than maybe not having the data at all, quickly.
>
Presumably he'll have the 2-port configured for write-through operation.
Jignesh K. Shah wrote:
Hello all,
I think this result will be useful for performance discussions of
postgresql against other databases.
http://www.spec.org/jAppServer2004/results/res2007q3/
More on Josh Berkus's blog:
http://blogs.ittoolbox.com/database/soup/archives/postgresql-publishes-fi
Hi,
I'm having a weird problem on a query :
I've simplified it to get the significant part (see end of message).
The point is I've got a simple
SELECT field FROM table WHERE 'condition1'
Estimated returned rows : 5453
Then
SELECT field FROM table WHERE 'condition2'
Estimated returned rows : 705
Then the same SELECT with 'condition1' OR 'condition2' gives an estimated
row count larger than the sum of the two.
We have a few tables that we need to pull relatively accurate aggregate
counts from, and we found the performance of SELECT COUNT(*) to be
unacceptable. We solved this by creating triggers on insert and delete to
update counts in a secondary table which we join to when we need the count
information.
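For the archives, a minimal sketch of that trigger approach (all names hypothetical; note that the single counter row serializes concurrent writers, which is the main cost of this scheme):
CREATE TABLE row_counts (table_name text PRIMARY KEY, n bigint NOT NULL);

CREATE FUNCTION maintain_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET n = n + 1 WHERE table_name = TG_RELNAME;
    ELSE  -- DELETE
        UPDATE row_counts SET n = n - 1 WHERE table_name = TG_RELNAME;
    END IF;
    RETURN NULL;  -- AFTER ROW trigger, so the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER keep_count AFTER INSERT OR DELETE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE maintain_count();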
It's hard to do a direct comparison since that one used a different
commercial application server than the PostgreSQL test.. As we do more
tests with the help of the community, hopefully we can understand where
the CPU cycles are spent and see how to make them more efficient...
Stay tuned!!
-Jignesh
On Wed, Jul 11, 2007 at 08:21:40AM -0700, smiley2211 wrote:
>
> How can I get the time it takes a query to execute - explain analyze is
> taking over 5 hours to complete
You can't get it any faster than what explain analyse does: it runs
the query. How else would you get the answer?
> ...can I use \timing???
On Wed, Jul 11, 2007 at 10:48:04AM -0400, Francisco Reyes wrote:
> The question is, would I be better off putting WAL on the second, OS,
> controller or in the 8 port controller? Specially since the 2 port will not
> have battery (3ware does not have 2 ports with battery).
Put the WAL where the battery is.
Hi,
I have two questions for which I haven't really found answers on the web.
Intro:
I've a Website with some traffic.
2 Million queries a day, during daylight.
Postgres is running on a dedicated server P4 DualCore, 4 Gig Ram.
Mainly updates on 1 tuple. And more or less complex SELECT statements.
On Wed, Jul 11, 2007 at 09:03:27AM -0400, Dave Cramer wrote:
> Problem with dedicating the spindles to each array is that we end up
> wasting space. Are the SAN's smart enough to do a better job if I
> create one large metalun and cut it up ?
In my experience, this largely depends on your SAN
How can I get the time it takes a query to execute - explain analyze is
taking over 5 hours to complete...can I use \timing??? I don't get any time
when using the \timing option...
Thanks...Marsha
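For the record, \timing is a psql meta-command: it is toggled inside psql rather than attached to a query, and once it is on, psql prints the elapsed time after every statement (the session below is illustrative, with a hypothetical table):
db=# \timing
Timing is on.
db=# select count(*) from some_table;
 (result rows elided)
Time: 123.456 ms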
In my sporadic benchmark testing, the only consistent 'trick' I found
was that the best thing I could do for sequential performance was
allocating a bunch of mirrored-pair LUNs and striping them with software
RAID. This made a huge difference (~2X) in sequential performance, and a
little in random access as well.
Setting the spec for a PostgreSQL server.
The hard drive distribution is going to be
8 x 750GB Seagate on a 3ware 9650SE RAID 6
2 x 160GB Seagate on a 3ware 2 port
The question is, would I be better off putting WAL on the second, OS,
controller or on the 8-port controller? Especially since the 2-port will
not have battery (3ware does not have 2 ports with battery).
Am I wrong, or is DB2 9.1 faster on less powerful hardware?
On Monday 09 July 2007 at 11:57 -0400, Jignesh K. Shah wrote:
> Hello all,
>
> I think this result will be useful for performance discussions of
> postgresql against other databases.
>
> http://www.spec.org/jAppServer2004/results/
On 11-Jul-07, at 10:05 AM, Gregory Stark wrote:
"Dave Cramer" <[EMAIL PROTECTED]> writes:
Assuming we have 24 73G drives is it better to make one big
metalun and carve
it up and let the SAN manage where everything is, or is it
better to
specify which spindles are where.
This is quite a controversial question with proponents of both approaches.
On Tue, Jul 10, 2007 at 08:17:05PM +0530, Gauri Kanekar wrote:
> Is there anyway so as to indicate the Query Analyser not to use the
> plan which it is using regularly, and use a new plan ?
You can't dictate the query plan but you can influence the planner's
decisions with various configuration settings.
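A few examples of the kind of settings meant here (session-local; the values are illustrative, not recommendations):
SET enable_nestloop = off;            -- discourage one plan node type
SET random_page_cost = 2.0;           -- make index scans look relatively cheaper
SET default_statistics_target = 100;  -- improves estimates after the next ANALYZE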
"Dave Cramer" <[EMAIL PROTECTED]> writes:
> Assuming we have 24 73G drives is it better to make one big metalun and carve
> it up and let the SAN manage where everything is, or is it better to
> specify which spindles are where.
This is quite a controversial question with proponents of both approaches.
We do something similar here. We use Netapp and I carve one aggregate
per data volume. I generally keep the pg_xlog on the same "data" LUN,
but I don't mix other databases on the same aggregate.
In the NetApp world, because they use RAID-DP (dual parity), you have a
higher wastage of drives.
Assuming we have 24 73G drives is it better to make one big metalun
and carve it up and let the SAN manage where everything is, or is
it better to specify which spindles are where.
Currently we would require 3 separate disk arrays.
one for the main database, a second one for WAL logs, and a third one.