Pretty much everyone thinks their requirements are exceptional. It's
funny how infrequently that's actually true. The techniques that favor
index use aren't that unique: collect better statistics, set the basic
parameters correctly, adjust random_page_cost, and investigate plans
that don't do what you want.
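A minimal sketch of that checklist in SQL (the table and column names
"orders"/"customer_id" are hypothetical, and the cost value is only
illustrative):

ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
ANALYZE orders;                    -- collect the better statistics
SET random_page_cost = 2.0;        -- closer to reality on fast or cached storage
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;  -- did the plan switch to the index?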
Fernando Hevia wrote:
I justified my first choice in that WAL writes are sequential, and OS
writes pretty much are too, so a RAID 1 would probably hold its ground
against a 12-disc RAID 10 with random writes.
The problem with this theory is that when PostgreSQL does WAL writes and
asks to sync the data
2010/1/15 Pierre Frédéric Caillaud:
> On Thu, 14 Jan 2010 22:28:07 +0100, fka...@googlemail.com
> wrote:
>
>> Pierre Frédéric Caillaud:
>>
>>> > 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.
>>>
>>> Big CPU and slow disk...
>>>
>>> You should add another disk just for the WAL -- disks are pretty cheap
>>> these days.
Carlo Stonebanks wrote:
1) Which RAID level would you recommend
It looks like you stepped over a critical step, which is "will the
server have a good performing RAID card?". Your whole upgrade could
underperform if you make a bad mistake on that part. It's really
important to nail that down.
On 15/01/2010 6:35 AM, Dave Crooke wrote:
I'll bite
1. In general, RAID-10 is the only suitable RAID configuration for a
database. The decision making comes in how many drives, and splitting
stuff up into LUNs (like putting pg_xlog on its own LUN).
The biggest problem with Postgres on
On Thu, 14 Jan 2010 22:28:07 +0100, fka...@googlemail.com
wrote:
Pierre Frédéric Caillaud:
> 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.
Big CPU and slow disk...
You should add another disk just for the WAL -- disks are pretty cheap
these days.
Writing the WAL on a second disk is the first thing to do on a
configuration like this.
I'll bite
1. In general, RAID-10 is the only suitable RAID configuration for a
database. The decision making comes in how many drives, and splitting stuff
up into LUNs (like putting pg_xlog on its own LUN).
2. None of the above - you're asking the wrong question really. PostgreSQL
is open
My bad - I thought I had, so it has been re-posted with a (v2) disclaimer in
the title... like THAT will stop the flaming!
Thanks for your patience!
"Craig James" wrote in message
news:4b4f8a49.7010...@emolecules.com...
Carlo Stonebanks wrote:
Guys, I want to thank you for all of the advice
My client just informed me that new hardware is available for our DB server.
. Intel Core 2 Quads Quad
. 48 GB RAM
. 4 Disk RAID drive (RAID level TBD)
I have put the ugly details of what we do with our DB below, as well as the
postgres.conf settings. But, to summarize: we have a PostgreSQL 8.3.
On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks
wrote:
> My client just informed me that new hardware is available for our DB server.
>
> . Intel Core 2 Quads Quad
> . 48 GB RAM
> . 4 Disk RAID drive (RAID level TBD)
>
> I have put the ugly details of what we do with our DB below, as well as the
> postgres.conf settings.
Carlo Stonebanks wrote:
Guys, I want to thank you for all of the advice - my client has just
made a surprise announcement that he would like to start from
scratch with a new server, so I am afraid that all of this great advice
has to be seen in the context of whatever decision is made on that.
> -Original Message-
> From: fka...@googlemail.com
> Nevertheless: If your explanation covers all what can be said
> about it then replacing the hard disk by a faster one should
> increase the performance here (I'll try to check that out).
>
Moving the pg_xlog directory to the OS drive
Pierre Frédéric Caillaud:
> > 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.
>
> Big CPU and slow disk...
>
> You should add another disk just for the WAL -- disks are pretty cheap
> these days.
> Writing the WAL on a second disk is the first thing to do on a
> configuration like this.
Florian Weimer:
> > Do you have any further idea why 16MB/s seems to be the limit here?
>
> BYTEA deserialization is very slow, and this could be a factor here.
> Have you checked that you are in fact I/O bound?
Could you elaborate that a bit? It sounds interesting but I
do not get what you mean.
Aidan Van Dyk:
> So, your SATA disk can do 43MB/s of sequential writes, but your example
> is doing:
> 1) Sequential writes to WAL
> 2) Random writes to your index
> 3) Sequential writes to table heap
> 4) Sequential writes to table's toast heap
> 5) Any other OS-based FS overhead
Ok, I see. Thanks.
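(Rough arithmetic from that list: each 8 MB bytea is written at least
twice, once to the WAL and once to the toast heap, plus index and
filesystem overhead, so 16 MB/s of payload already implies more than
32 MB/s of physical writes, which is not far from the 43 MB/s the
drive can sustain sequentially.)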
On Thu, Jan 14, 2010 at 1:03 PM, Fernando Hevia wrote:
> Hi all,
>
> I've just received this new server:
> 1 x XEON 5520 Quad Core w/ HT
> 8 GB RAM 1066 MHz
> 16 x SATA II Seagate Barracuda 7200.12
> 3ware 9650SE w/ 256MB BBU
>
> It will run an Ubuntu 8.04 LTS Postgres 8.4 dedicated server. Its database
> will be getting between 100 and 1000 inserts per second.
Regarding the EA-200 card, here are the specs.
It seems it has support for SAS disks, so it is most probable that we are
using the embedded/default controller.
http://h18000.www1.hp.com/products/quickspecs/12460_div/12460_div.html
http://h18000.www1.hp.com/products/quickspecs/12460_div/12460_div.p
Regarding the hardware the system is running on:
It's an HP Proliant DL-180 G5 server.
Here are the specs... our actual configuration only has one CPU, and 16G of
RAM.
The model of the 2 disks I will post later today, when I get to the server.
I was busy with many things, sorry.
http://h18000.www1.hp
Thanks a lot for your reply.
Hannu Krosing:
> > 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.
>
> try inserting the same data using 4 parallel connections or even 8
> parallel ones.
Interesting idea -- I forgot to mention though that 2-3
cores will be occupied soon with other tasks.
Felix
Thanks a lot for the detailed reply.
Matthew Wakeling:
> On Thu, 14 Jan 2010, fka...@googlemail.com wrote:
> > This takes about 50s, so, 800MB/50s = 16MB/s.
> >
> > However the harddisk (sata) could write 43 MB/s in the worst
> > case! Why is write performance limited to 16 MB/s?
>
> Several reasons:
Thank You for your reply.
Ivan Voras:
> Are you doing it locally or over a network? If you are accessing the
> server over a network then it could be the location of the bottleneck.
All is done locally (for now).
Felix
Hi all,
I've just received this new server:
1 x XEON 5520 Quad Core w/ HT
8 GB RAM 1066 MHz
16 x SATA II Seagate Barracuda 7200.12
3ware 9650SE w/ 256MB BBU
It will run an Ubuntu 8.04 LTS Postgres 8.4 dedicated server. Its database
will be getting between 100 and 1000 inserts per second (those
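A minimal sketch of the usual advice for sustaining that insert rate
(the table "events" and its columns are hypothetical):

BEGIN;
INSERT INTO events (ts, payload) VALUES (now(), 'first');
INSERT INTO events (ts, payload) VALUES (now(), 'second');
-- ...many rows per transaction: each COMMIT costs one WAL fsync, so
-- batching raises the sustainable insert rate on ordinary disks
COMMIT;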
On Thu, 14 Jan 2010 14:17:13 -0500, "Carlo Stonebanks"
wrote:
> My client just informed me that new hardware is available for our DB
> server.
>
> . Intel Core 2 Quads Quad
> . 48 GB RAM
> . 4 Disk RAID drive (RAID level TBD)
>
> I have put the ugly details of what we do with our DB below, as well as the
> postgres.conf settings.
Guys, I want to thank you for all of the advice - my client has just made a
surprise announcement that he would like to start from scratch with a
new server, so I am afraid that all of this great advice has to be seen in
the context of whatever decision is made on that. I am out there,
hat-
My client just informed me that new hardware is available for our DB server.
. Intel Core 2 Quads Quad
. 48 GB RAM
. 4 Disk RAID drive (RAID level TBD)
I have put the ugly details of what we do with our DB below, as well as the
postgres.conf settings. But, to summarize: we have a PostgreSQL 8.3
Kevin Grittner wrote:
Greg Smith wrote:
There's a whole list of FAQs that are documented on the wiki but
not in the main FAQ yet leftover from before the main FAQ was
hosted there. You can see them all at
http://wiki.postgresql.org/wiki/Frequently_Asked_Questions
I just haven't had time
Greg Smith wrote:
> There's a whole list of FAQs that are documented on the wiki but
> not in the main FAQ yet leftover from before the main FAQ was
> hosted there. You can see them all at
> http://wiki.postgresql.org/wiki/Frequently_Asked_Questions
>
> I just haven't had time to merge those a
Andreas Kretschmer wrote:
> > they are being done over all rows. In this particular example you
> > happen to get the same result, but that's just because "avg(foo) over
> > partition by foo" is a dumb example --- it will necessarily just yield
> > identically foo. In more realistic computations
Andy Colson wrote:
So if there is very little io, or if there is way way too much, then
the scheduler really doesn't matter. So there is a slim middle ground
where the io is within a small percent of the HD capacity where the
scheduler might make a difference?
That's basically how I see it.
Kevin Grittner wrote:
Greg Smith wrote:
The content was already there, just not linked into the main FAQ
yet:
http://wiki.postgresql.org/wiki/Slow_Counting
For a question asked this frequently, it should probably be in the
FAQ. I'll add a link from there to the more thorough write-up.
On 1/14/2010 12:07 PM, Greg Smith wrote:
Andy Colson wrote:
On 1/13/2010 11:36 PM, Craig Ringer wrote:
Yes. My 3ware 8500-8 on a Debian Sarge box was so awful that launching a
terminal would go from a 1/4 second operation to a 5 minute operation
under heavy write load by one writer. I landed up having to modify the
driver.
Greg Smith wrote:
> The content was already there, just not linked into the main FAQ
> yet:
> http://wiki.postgresql.org/wiki/Slow_Counting
For a question asked this frequently, it should probably be in the
FAQ. I'll add a link from there to the more thorough write-up.
-Kevin
Andy Colson wrote:
On 1/13/2010 11:36 PM, Craig Ringer wrote:
Yes. My 3ware 8500-8 on a Debian Sarge box was so awful that launching a
terminal would go from a 1/4 second operation to a 5 minute operation
under heavy write load by one writer. I landed up having to modify the
driver to partially
Tom Lane wrote:
> Andreas Kretschmer writes:
> > Tom Lane wrote:
> >> I see no bug here. Your second query asks for a much more complicated
> >> computation, it's not surprising it takes longer.
>
> > But sorry, I disagree. It is the same query with the same result. I can't
> > see how the queries should return different results.
Kevin Grittner wrote:
Matthew Wakeling wrote:
This is an FAQ.
I just added it to the wiki FAQ page:
http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F
The content was already there, just not linked into the main FAQ yet:
http://wiki.postgresql.org/wiki/Slow_Counting
Andreas Kretschmer writes:
> Tom Lane wrote:
>> I see no bug here. Your second query asks for a much more complicated
>> computation, it's not surprising it takes longer.
> But sorry, I disagree. It is the same query with the same result. I can't see
> how the queries should return different results.
Tom Lane wrote:
> Andreas Kretschmer writes:
> > No question, this is a silly query, but the problem is the 2nd query: it
> > is obviously not possible for the planner to put the where-condition
> > into the subquery.
>
> Well, yeah: it might change the results of the window functions.
> I see no bug here.
Andreas Kretschmer writes:
> No question, this is a silly query, but the problem is the 2nd query: it
> is obviously not possible for the planner to put the where-condition
> into the subquery.
Well, yeah: it might change the results of the window functions.
I see no bug here. Your second query asks for a much more complicated
computation, it's not surprising it takes longer.
Hi,
version: 8.4.2
I have a table called values:
test=*# \d values
    Table "public.values"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
 value  | real    |
Indexes:
    "idx_id" btree (id)
The table contains 10 random rows and is analysed.
And I have
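The query text is cut off above; judging from the discussion that
follows, the two statements were presumably of this general shape (a
reconstruction, not the original text):

-- filter applied alongside the window function
SELECT id, avg(id) OVER (PARTITION BY id) FROM "values" WHERE id = 50;

-- same filter outside a subquery: the planner cannot push the WHERE
-- clause down past the window function, because that could change the
-- window's input rows, so the whole table is read first
SELECT *
FROM (SELECT id, avg(id) OVER (PARTITION BY id) AS a FROM "values") s
WHERE id = 50;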
"high CPU usage" It might very well be "high IO usage".
Try this :
Copy (using explorer, the shell, whatever) a huge file.
This will create load similar to ALTER TABLE.
Measure throughput, how much is it ?
If your server blows up just like it did on ALTER TAB
However the harddisk (sata) could write 43 MB/s in the worst
case! Why is write performance limited to 16 MB/s?
Some more hints on what I do:
I use PQexecParams() and the INSERT ... $1 notation so as NOT to
additionally create a real escaped string from the data, but to
use a pointer to the 8MB data buffer.
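For reference, the STORAGE EXTERNAL change mentioned elsewhere in the
thread is done like this ("images" and "data" are hypothetical names
for the real table and bytea column):

ALTER TABLE images ALTER COLUMN data SET STORAGE EXTERNAL;
-- EXTERNAL still stores the bytea out of line in the toast table but
-- skips compression, saving CPU on incompressible binary payloads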
Ivan Voras wrote:
> Maybe you could add a short note why an estimation like from the
> pg_class table is usually enough.
OK. Will do.
-Kevin
Kevin Grittner wrote:
Matthew Wakeling wrote:
This is an FAQ.
I just added it to the wiki FAQ page:
http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F
Maybe you could add a short note why an estimation like from the
pg_class table is usually enough.
So, pgBouncer is pretty good. It doesn't appear to be as good as
limiting TCON and using pconnect, but since we can't limit TCON in a
production environment, we may not have a choice.
Actually, you can: use lighttpd and php/fastcgi.
Lighttpd handles the network stuff, and funnels/queues
Matthew Wakeling wrote:
> This is an FAQ.
I just added it to the wiki FAQ page:
http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F
-Kevin
In response to tom:
> Hi,
>
> === Problem ===
>
> I have a db-table "data_measurand" with about 60,000,000 (60 million)
> rows and the following query takes about 20-30 seconds (with psql):
>
> mydb=# select count(*) from data_measurand;
>   count
> ----------
>  60846187
> (1 row)
>
>
> ==
> Do you have any further idea why 16MB/s seems to be the limit here?
BYTEA deserialization is very slow, and this could be a factor here.
Have you checked that you are in fact I/O bound?
You can speed things up by sending the data in binary, by passing
appropriate parameters to PQexecParams().
On Thu, 14 Jan 2010, tom wrote:
I have a db-table "data_measurand" with about 60,000,000 (60 million)
rows and the following query takes about 20-30 seconds (with psql):
mydb=# select count(*) from data_measurand;
  count
----------
 60846187
(1 row)
Sounds pretty reasonable to me. Looking at you
* fka...@googlemail.com [100114 09:29]:
> This takes about 50s, so, 800MB/50s = 16MB/s.
>
> However the harddisk (sata) could write 43 MB/s in the worst
> case! Why is write performance limited to 16 MB/s?
> I altered the binary column to STORAGE EXTERNAL.
>
> Some experiments with postgresq
On Thu, 14 Jan 2010, fka...@googlemail.com wrote:
This takes about 50s, so, 800MB/50s = 16MB/s.
However the harddisk (sata) could write 43 MB/s in the worst
case! Why is write performance limited to 16 MB/s?
Several reasons:
The data needs to be written first to the WAL, in order to provide
crash safety.
Hi,
=== Problem ===
I have a db-table "data_measurand" with about 60,000,000 (60 million)
rows and the following query takes about 20-30 seconds (with psql):
mydb=# select count(*) from data_measurand;
  count
----------
 60846187
(1 row)
=== Question ===
- What can I do to improve the performance?
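The estimation approach referenced in the FAQ discussion looks like
this; it is near-instant because it only reads the statistics kept in
pg_class, at the price of being approximate:

SELECT reltuples::bigint AS estimate
FROM pg_class
WHERE relname = 'data_measurand';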
fka...@googlemail.com wrote:
Hello everyone,
I need to increase the write performance when inserting
bytea of 8MB. I am using 8.2.4 on Windows with libpq.
This takes about 50s, so, 800MB/50s = 16MB/s.
However the harddisk (sata) could write 43 MB/s in the worst
case! Why is write performance limited to 16 MB/s?
On 1/13/2010 11:36 PM, Craig Ringer wrote:
Robert Haas wrote:
I'm kind of surprised that there are disk I/O subsystems that are so
bad that a single thread doing non-stop I/O can take down the whole
server. Is that normal?
No.
Does it happen on non-Windows operating
systems?
Yes. My 3ware 8500-8 on a Debian Sarge box was so awful that launching a
terminal would go from a 1/4 second operation to a 5 minute operation
under heavy write load by one writer.
Hello everyone,
I need to increase the write performance when inserting
bytea of 8MB. I am using 8.2.4 on Windows with libpq.
The test setting is simple:
I write 100 times a byte array (bytea) of 8 MB random data
into a table having a binary column (and oids and 3 other
int columns, oids are in
Bob Dusek writes:
> So, pgBouncer is pretty good. It doesn't appear to be as good as
> limiting TCON and using pconnect, but since we can't limit TCON in a
> production environment, we may not have a choice.
You can still use pconnect() with pgbouncer, in transaction mode, if
your application is
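A minimal sketch of the pgbouncer side of that setup (the database
name is a placeholder and everything not shown keeps its defaults):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20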