On 09/11/2011 09:44 AM, Claudio Freire wrote:
And Andy is right, you'll have a lot less space. If RAID 10 doesn't
give you enough room, just leave two spare drives for a RAID 0
temporary partition. That will be at least twice as fast as doing
temporary tables on the RAID 6.
Alternatively,
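For reference, the scratch setup being suggested boils down to two
statements once the RAID 0 pair is mounted (path hypothetical; the
directory must be empty and owned by the postgres user):

  -- one-time setup, as a superuser
  CREATE TABLESPACE scratch LOCATION '/mnt/raid0/pg_scratch';
  -- per session, or via postgresql.conf: route temp tables and
  -- sort/hash spill files to the fast array
  SET temp_tablespaces = 'scratch';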
On 09/11/2011 12:02 PM, Marti Raudsepp wrote:
Which brings me to another important point: don't do lots of small
write transactions, SAVEPOINTs or PL/pgSQL subtransactions. Besides
being inefficient, they introduce a big maintenance burden.
I'd like to second this. Before a notable
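The point is easy to see in miniature: per-statement commits each pay
for a WAL flush at commit time, while one wrapping transaction pays
only once (table name hypothetical):

  -- slow: every INSERT is its own transaction, and each commit
  -- waits for WAL to reach disk
  INSERT INTO log (msg) VALUES ('a');
  INSERT INTO log (msg) VALUES ('b');

  -- faster: one commit, one flush, for the whole batch
  BEGIN;
  INSERT INTO log (msg) VALUES ('a');
  INSERT INTO log (msg) VALUES ('b');
  COMMIT;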
On 09/11/2011 09:27 AM, Claudio Freire wrote:
I have used slony to do database migration. It is a pain to set up,
but it saves you hours of downtime.
I've had to shoot this option down in two separate upgrade scenarios in
two different companies. Theoretically it's possible, but slony is
On Sun, Sep 11, 2011 at 6:17 PM, Tomas Vondra t...@fuzzy.cz wrote:
On 12.9.2011 00:44, Anthony Presley wrote:
We've currently got PG 8.4.4 running on a whitebox hardware setup,
with (2) 5410 Xeons and 16GB of RAM. It's also got (4) 7200RPM
SATA drives, using the onboard IDE
On Tue, Aug 30, 2011 at 12:55 AM, Venkat Balaji venkat.bal...@verse.in wrote:
Thanks to all for your very helpful replies!
As Greg Smith rightly said, I faced a problem of missing connections between
the runs. I even ran the cron job at sub-second intervals, but still that
would become too many
On 11.09.2011 22:10, Scott Marlowe wrote:
Another data point. We had a big Oracle installation at my last job,
and OLAP queries were killing it midday, so I built a simple
replication system to grab rows from the big iron Oracle SUN box and
shove them into a single-core P IV 2.xGHz machine with 4
On 11.09.2011 19:02, Marti Raudsepp wrote:
On Sun, Sep 11, 2011 at 17:23, Andy Colson a...@squeakycode.net wrote:
On 09/11/2011 08:59 AM, Igor Chudov wrote:
By the way, does that INSERT UPDATE functionality or something like this exist
in Postgres?
You have two options:
1) write a function
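For the function option, the stock recipe is the retry loop from the
PL/pgSQL documentation's UPSERT example, roughly like this (table and
column names as in the docs example):

  CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

  CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS $$
  BEGIN
      LOOP
          -- first try to update an existing row
          UPDATE db SET b = data WHERE a = key;
          IF found THEN
              RETURN;
          END IF;
          -- no row yet: try to insert; if a concurrent session
          -- inserts the same key first, catch the error and retry
          BEGIN
              INSERT INTO db (a, b) VALUES (key, data);
              RETURN;
          EXCEPTION WHEN unique_violation THEN
              -- do nothing, and loop to try the UPDATE again
          END;
      END LOOP;
  END;
  $$ LANGUAGE plpgsql;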
On 9/12/2011 12:15 PM, Robert Klemme wrote:
On 11.09.2011 19:02, Marti Raudsepp wrote:
On Sun, Sep 11, 2011 at 17:23, Andy Colson a...@squeakycode.net wrote:
On 09/11/2011 08:59 AM, Igor Chudov wrote:
By the way, does that INSERT UPDATE functionality or something like
this exist in Postgres?
The recent data warehouse thread made me think about how I use
work_mem for some of my big queries. So I tried SET work_mem = '4GB'
for a session and got
ERROR: 4194304 is outside the valid range for parameter work_mem (64
.. 2097151)
A bit of searching turned up the Allow sorts to use
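For context, work_mem is expressed in kilobytes by default, so both
numbers in that error are kB: the ceiling of 2097151 kB is just under
2GB, and 4GB works out to the rejected 4194304 kB. So on a build with
that limit:

  SET work_mem = '2GB';       -- fails the same way (2097152 kB)
  SET work_mem = '2097151kB'; -- the largest value accepted here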
On 9/12/2011 12:33 PM, Robert Schnabel wrote:
The recent data warehouse thread made me think about how I use
work_mem for some of my big queries. So I tried SET work_mem = '4GB' for
a session and got
ERROR: 4194304 is outside the valid range for parameter work_mem (64
.. 2097151)
A bit of
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use. I believe each step can
use that much, and each backend can use it for multiple bits. So if
you had two backends, each doing 2 sorts, you'd use 2 backends * 2
sorts = 4 sorts * 2GB = 8GB.
Exactly. Find a big query somewhere in
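One low-risk way to act on that advice is to raise work_mem only
around the one big query instead of globally (query hypothetical):

  BEGIN;
  SET LOCAL work_mem = '1GB';  -- reverts automatically at COMMIT
  SELECT * FROM big_table ORDER BY some_column;
  COMMIT;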
On 9/12/2011 12:57 PM, Shaun Thomas wrote:
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use. I believe each step can
use that much, and each backend can use it for multiple bits. So if
you had two backends, each doing 2 sorts, you'd use 2 backends * 2
sorts = 4 sorts * 2GB = 8GB.
On 9/12/2011 1:22 PM, Robert Schnabel wrote:
On 9/12/2011 12:57 PM, Shaun Thomas wrote:
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use. I believe each step can
use that much, and each backend can use it for multiple bits. So if
you had two backends, each
On Mon, Sep 12, 2011 at 11:04 AM, Robert Klemme
shortcut...@googlemail.com wrote:
On 11.09.2011 22:10, Scott Marlowe wrote:
Another data point. We had a big Oracle installation at my last job,
and OLAP queries were killing it midday, so I built a simple
replication system to grab rows from
I think you may add a ramdisk as a tablespace for temporary tables.
This should work similarly to a bigger work_mem.
2011/9/12, Robert Schnabel schnab...@missouri.edu:
On 9/12/2011 12:57 PM, Shaun Thomas wrote:
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use.
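If anyone wants to try the ramdisk route, a minimal sketch (mount
point hypothetical; remember a tmpfs vanishes at reboot, so point only
temporary objects at it):

  -- assumes something like this was done first, as root:
  --   mount -t tmpfs -o size=8G tmpfs /mnt/pg_ram
  --   chown postgres /mnt/pg_ram
  CREATE TABLESPACE ram_tmp LOCATION '/mnt/pg_ram';
  SET temp_tablespaces = 'ram_tmp';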
On 09/12/2011 02:48 PM, Scott Marlowe wrote:
I put it to you that your hardware has problems if you have a pg db
that's corrupting from having too much vacuum activity.
What? No. We optimized by basically forcing autovacuum to never run
during our active periods. We never actually
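For what it's worth, there is no single off-hours switch for
autovacuum; one way to get that effect is per-table settings plus a
scheduled manual vacuum in the quiet window (table name hypothetical):

  -- keep autovacuum away from the hot table during active hours
  ALTER TABLE trade_log SET (autovacuum_enabled = false);
  -- and run maintenance from cron in the nightly window instead
  VACUUM ANALYZE trade_log;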
On Mon, Sep 12, 2011 at 10:22 AM, Shaun Thomas stho...@peak6.com wrote:
On 09/11/2011 12:02 PM, Marti Raudsepp wrote:
Which brings me to another important point: don't do lots of small
write transactions, SAVEPOINTs or PL/pgSQL subtransactions. Besides
being inefficient, they introduce a big
On Mon, Sep 12, 2011 at 23:04, Shaun Thomas stho...@peak6.com wrote:
I was alluding to the fact that if a DBA had their system running for a week
at our transaction level, and PG didn't have forced autovacuum, and their
maintenance lapsed even slightly, they could end up with a corrupt database.
On Mon, Sep 12, 2011 at 2:04 PM, Shaun Thomas stho...@peak6.com wrote:
On 09/12/2011 02:48 PM, Scott Marlowe wrote:
I put it to you that your hardware has problems if you have a pg db
that's corrupting from having too much vacuum activity.
What? No. We optimized by basically forcing
On 09/12/2011 03:44 PM, Scott Marlowe wrote:
The PostgreSQL team works REALLY hard to prevent any kind of
corruption scenario from rearing its ugly head, so when the word
corruption pops up I start to wonder about the system (hardware-wise)
someone is using,
You've apparently never used
On Mon, Sep 12, 2011 at 11:33 AM, Robert Schnabel
schnab...@missouri.edu wrote:
The recent data warehouse thread made me think about how I use work_mem
for some of my big queries. So I tried SET work_mem = '4GB' for a session
and got
ERROR: 4194304 is outside the valid range for parameter
On Mon, Sep 12, 2011 at 2:55 PM, Shaun Thomas stho...@peak6.com wrote:
On 09/12/2011 03:44 PM, Scott Marlowe wrote:
The PostgreSQL team works REALLY hard to prevent any kind of
corruption scenario from rearing its ugly head, so when the word
corruption pops up I start to wonder about the
On 12.09.2011 19:22, Andy Colson wrote:
On 9/12/2011 12:15 PM, Robert Klemme wrote:
On 11.09.2011 19:02, Marti Raudsepp wrote:
On Sun, Sep 11, 2011 at 17:23, Andy Colson a...@squeakycode.net wrote:
On 09/11/2011 08:59 AM, Igor Chudov wrote:
By the way, does that INSERT UPDATE functionality or
On 9/12/2011 3:58 PM, Scott Marlowe wrote:
On Mon, Sep 12, 2011 at 11:33 AM, Robert Schnabel
schnab...@missouri.edu wrote:
The recent data warehouse thread made me think about how I use work_mem
for some of my big queries. So I tried SET work_mem = '4GB' for a session
and got
ERROR: 4194304
apologies for such a late response to this thread, but there is something
I think is _really_ dangerous here.
On Thu, 18 Aug 2011, Aidan Van Dyk wrote:
On Thu, Aug 18, 2011 at 1:35 AM, Craig Ringer ring...@ringerc.id.au wrote:
On 18/08/2011 11:48 AM, Ogden wrote:
Isn't this very dangerous?
Robert Schnabel schnab...@missouri.edu writes:
On 9/12/2011 3:58 PM, Scott Marlowe wrote:
On Mon, Sep 12, 2011 at 11:33 AM, Robert Schnabel
schnab...@missouri.edu wrote:
The recent data warehouse thread made me think about how I use work_mem
for some of my big queries. So I tried SET
I have a production Postgres v8.2 database on Linux and a test Postgres v9.0
database on a test Linux server.
I am going to do the migration but do not want to do that before making sure
the performance of the new test Postgres 9.0 database is as good as the
current production Postgres 8.2
On Mon, Sep 12, 2011 at 6:57 PM, da...@lang.hm wrote:
The barrier is the Linux fs/block way of saying "these writes need
to be on persistent media before I can depend on them". On typical
spinning media disks, that means out of the disk cache (which is not
persistent) and on platters. The
On Mon, 12 Sep 2011, Aidan Van Dyk wrote:
On Mon, Sep 12, 2011 at 6:57 PM, da...@lang.hm wrote:
The barrier is the Linux fs/block way of saying "these writes need
to be on persistent media before I can depend on them". On typical
spinning media disks, that means out of the disk cache (which
So, today, I did the following:
- Swapped out the 5410s (2.3GHz) for 5470s (3.33GHz)
- Set the ext4 mount options to be noatime,barrier=0,data=writeback
- Installed PG 9.1 from the yum repo
Item one:
With the accelerator cache set to 0/100 (all 512MB for writing), loading
the db /
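For anyone copying that setup, the matching /etc/fstab entry would be
something like this (device and mount point hypothetical; barrier=0 is
only sane behind a battery-backed write cache):

  /dev/sdb1  /var/lib/pgsql  ext4  noatime,barrier=0,data=writeback  0  2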
* Robert Schnabel (schnab...@missouri.edu) wrote:
And getting back to the to-do list entry and reading the related
posts, it appears that even if you could set work_mem that high it
would only use 2GB anyway. I guess that was the second part of my
question. Is that true?
Yes and no.
* Robert Schnabel (schnab...@missouri.edu) wrote:
And getting back to the to-do list entry and reading the related
posts, it appears that even if you could set work_mem that high it
would only use 2GB anyway. I guess that was the second part of my
question. Is that true?
Errr, and to get
On Mon, Sep 12, 2011 at 8:47 PM, da...@lang.hm wrote:
The XFS FAQ goes over much of it, starting at Q24:
http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_problem_with_the_write_cache_on_journaled_filesystems.3F
So, for pure performance, on a battery-backed controller, nobarrier is
the
You may want to try pgreplay ... we've tried it for a similar scenario, and
so far, it's pretty promising.
I do wish it were able to load from a pgfouine-formatted log file, or
from another db ... but that's OK.
--
Anthony Presley
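One note for anyone trying pgreplay: it replays the server's own
statement log, so the source database must log enough detail first.
Per its README, the postgresql.conf settings are roughly these (check
the README for the authoritative list):

  log_min_messages = error
  log_min_error_statement = log
  log_connections = on
  log_disconnections = on
  log_line_prefix = '%m|%u|%d| '   # for stderr-format logs
  log_statement = 'all'
  lc_messages = 'C'                # pgreplay parses English messages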
On Mon, Sep 12, 2011 at 6:28 PM, Hany ABOU-GHOURY
Hi Anthony,
I will try that thank you very much for your help
Cheers
Hany
On Tue, Sep 13, 2011 at 2:49 PM, Anthony Presley anth...@resolution.com wrote:
You may want to try pgreplay ... we've tried it for a similar scenario, and
so far, it's pretty promising.
I do wish it were able to
On 12/09/11 15:10, mark wrote:
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Anthony Presley
Sent: Sunday, September 11, 2011 4:45 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?