Have you tried the archive search tool for this very mailing list?
There's a wealth of information imparted all the time, and tuning is
generally about knowledge of what's happening, not blindly following
the output of some program.
On Mar 17, 2008, at 8:46 AM, sathiya psql wrote:
On Tue, 9 Dec 2008, Robert Haas wrote:
I don't believe the thesis. The gap between disk speeds and memory
speeds may narrow over time, but I doubt it's likely to disappear
altogether any time soon, and certainly not for all users.
I think the "not for all users" is the critical part. In 2
On Wed, 11 Feb 2009, Scott Carey wrote:
On a large partitioned database, ordinary vacuum is a very very difficult
option.
Most of the time on such a system, most tables are dormant with respect to
writes and never need to be vacuumed. A 'daily vacuum' is likely to take a
full day to run on
On Mon, 20 Apr 2009, da...@lang.hm wrote:
one huge advantage of putting the sql into the configuration is the ability
to work around other users of the database.
+1 on this. We've always found tools much easier to work with when they
could be adapted to our schema, as opposed to changing our
Scott Otis wrote:
I am seeking advice on what the best setup for the following would be.
My company provides a hosted web calendaring solution for school
districts. For each school district we have a separate database.
Each database has 57 tables.
Over the next couple of
On Nov 24, 2009, at 9:23 AM, Matthew Wakeling wrote:
We're about to purchase a new server to store some of our old
databases, and I was wondering if someone could advise me on a RAID
card. We want to make a 6-drive SATA RAID array out of 2TB drives,
and it will be RAID 5 or 6 because there
We're looking to upgrade our database hardware so that it can sustain
us while we re-architect some of the more fundamental issues with our
applications. The first thing to spend money on is usually disks, but
our database currently lives almost entirely on flash storage, so
that's already
(Apologies if this ends up coming through multiple times - my first attempts
seem to have gotten stuck.)
We recently took the much-needed step of moving from 8.1.19 to 8.4.2. We took
the downtime opportunity to also massively upgrade our hardware. Overall, this
has been the major improvement
On Feb 15, 2010, at 7:59 AM, Kevin Grittner wrote:
Could you show the query, along with table definitions (including
indexes)?
Oh, yeah, I suppose that would help. :)
http://wood.silentmedia.com/bench/query_and_definitions
(I'd paste them here for posterity but I speculate the reason my
Awesome, that did the trick. Thanks Tom! So I understand better, why is my case
not the normal, better case?
(I assume the long-term fix is post-9.0, right?)
On Feb 15, 2010, at 9:26 AM, Tom Lane wrote:
Ben Chobot be...@silentmedia.com writes:
On Feb 15, 2010, at 7:59 AM, Kevin Grittner
On Feb 15, 2010, at 11:59 AM, Rose Zhou wrote:
Good day,
I have a PostgreSQL 8.4 database installed on WinXP x64 with very heavy
writing and updating on a partitioned table. Sometimes within one minute,
there are tens of files with size=1,048,576kb (such as
I'm having problems with another one of my queries after moving from 8.1.19 to
8.4.2. On 8.1.19, the plan looked like this:
http://wood.silentmedia.com/bench/8119
That runs pretty well. On 8.4.2, the same query looks like this:
http://wood.silentmedia.com/bench/842_bad
If I turn off mergejoin
On Feb 16, 2010, at 1:29 PM, Ben Chobot wrote:
I'm having problems with another one of my queries after moving from 8.1.19
to 8.4.2. On 8.1.19, the plan looked like this:
http://wood.silentmedia.com/bench/8119
That runs pretty well. On 8.4.2, the same query looks like this:
http
On Feb 15, 2010, at 1:25 PM, Rose Zhou wrote:
Thanks Ben:
I will adjust the auto vacuum parameters. It is on now, maybe not frequently
enough.
How to get the disk space back to OS? Will a Vacuum Full Verbose get the disk
space back to OS?
Yes, but it might bloat your indexes. Do
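As a sketch of the sequence implied above (the table name here is hypothetical): on 8.4, VACUUM FULL returns the space to the OS but tends to bloat the indexes, so a REINDEX afterwards rebuilds them compactly.

```sql
-- Reclaim dead space and return it to the OS. On 8.4 this moves tuples
-- around, which is what bloats the indexes in the process.
VACUUM FULL VERBOSE my_table;

-- Rebuild the now-bloated indexes compactly.
REINDEX TABLE my_table;
```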
On Feb 23, 2010, at 2:23 PM, Kevin Grittner wrote:
Here are the values from our two largest and busiest systems (where
we found the pg_xlog placement to matter so much). It looks to me
like a more aggressive bgwriter would help, yes?
cir= select * from pg_stat_bgwriter ;
-[ RECORD 1
On Feb 26, 2010, at 11:23 AM, Tory M Blue wrote:
On Fri, Feb 26, 2010 at 5:09 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Tory M Blue wrote:
2010-02-25 22:53:13 PST LOG: checkpoint starting: time
2010-02-25 22:53:17 PST postgres postgres [local] LOG: unexpected
EOF on client
On Feb 27, 2010, at 2:29 PM, Chris wrote:
Hi, I'm having an issue where a postgres process is taking too much
memory when performing many consecutive inserts and updates from a PHP
[snip]
In your postgresql.conf file, what are the settings for work_mem and
shared_buffers?
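For reference, a minimal postgresql.conf fragment showing the two settings in question (the values are illustrative, not recommendations):

```
# postgresql.conf (illustrative values only)
shared_buffers = 512MB   # shared page cache for all backends
work_mem = 16MB          # per sort/hash operation, per backend -- many
                         # concurrent operations each get this much
```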
We've enjoyed our FusionIO drives very much. They can do 100k iops without
breaking a sweat. Just make sure you shut them down cleanly - it can take up to
30 minutes per card to recover from a crash/plug pull test.
I also have serious questions about their longevity and failure mode when the
flash
On Mar 8, 2010, at 12:50 PM, Greg Smith wrote:
Ben Chobot wrote:
We've enjoyed our FusionIO drives very much. They can do 100k iops without
breaking a sweat. Just make sure you shut them down cleanly - it can take up
to 30 minutes per card to recover from a crash/plug pull test.
Yeah...I
On Mar 10, 2010, at 6:22 PM, Paul McGarry wrote:
Hi there,
I'm after a little bit of advice on the shared_buffers setting (I have
read the various docs on/linked from the performance tuning wiki page,
some very helpful stuff there so thanks to those people).
I am setting up a 64bit Linux
Autovacuum is your friend for minimal downtime. It is configurable to let you
adjust how invasive it will be, and you can have different settings per table
if you wish.
As for the reindex, why do you think you will be reindexing regularly?
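The per-table tuning mentioned above can be sketched with storage parameters (the table name is hypothetical; values are illustrative):

```sql
-- Have autovacuum visit this busy table sooner than the default
-- (after ~5% of rows change instead of 20%).
ALTER TABLE busy_table SET (autovacuum_vacuum_scale_factor = 0.05);

-- Throttle how hard each autovacuum pass hits the disks on this table.
ALTER TABLE busy_table SET (autovacuum_vacuum_cost_delay = 20);
```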
On Mar 15, 2010, at 10:30 PM, Meena_Ramkumar wrote:
On Mar 16, 2010, at 6:04 PM, Rob Wultsch wrote:
Lets say I have a large table bigTable to which I would like to add
two btree indexes. Is there a more efficient way to create indexes
than:
CREATE INDEX idx_foo on bigTable (foo);
CREATE INDEX idx_baz on bigTable (baz);
Or
CREATE INDEX
On Mar 17, 2010, at 7:41 AM, Brad Nicholson wrote:
As an aside, some folks in our Systems Engineering department here did
do some testing of FusionIO, and they found that the helper daemons were
inefficient and placed a fair amount of load on the server. That might
be something to watch of
I've got a sql language function which does a fairly simple select from a
table. If I give it one value, it performs quickly (half a ms). If I give it
another value, it does not (1.1 seconds). When I run the equivalent select
outside of the function, both values perform roughly the same (even
On May 24, 2010, at 4:25 AM, Konrad Garus wrote:
Do shared_buffers duplicate contents of OS page cache? If so, how do I
know if 25% RAM is the right value for me? Actually it would not seem
to be true - the less redundancy the better.
You can look into the pg_buffercache contrib module.
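As a sketch, pg_buffercache exposes one row per shared buffer, so you can see how much of shared_buffers each relation currently occupies and judge whether the cache is being used well:

```sql
-- pg_buffercache is a contrib module and must be installed first.
-- Top relations by number of shared buffers held in this database.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```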
On Jun 30, 2010, at 11:12 AM, t...@exquisiteimages.com wrote:
I read a post
earlier today that mentioned in passing that it was better to have a
faster processor than more cores.
This really depends on your workload and how much you value latency vs.
throughput. If you tend to have a lot of
On Jul 8, 2010, at 12:37 PM, Ryan Wexler wrote:
One thing I don't understand is why BBU will result in a huge performance
gain. I thought BBU was all about power failures?
When you have a working BBU, the raid card can safely do write caching. Without
it, many raid cards are good about
On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:
But none of this explains why a 4-disk raid 10 is slower than a 1 disk
system. If there is no write-back caching on the RAID, it should still be
similar to the one disk setup.
Many raid controllers are smart enough to always turn off write
On Jul 15, 2010, at 12:40 PM, Ryan Wexler wrote:
On Wed, Jul 14, 2010 at 7:50 PM, Ben Chobot be...@silentmedia.com wrote:
On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:
But none of this explains why a 4-disk raid 10 is slower than a 1 disk
system. If there is no write-back caching
On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:
Many raid controllers are smart enough to always turn off write caching on
the drives, and also disable the feature on their own buffer without a BBU.
Add a BBU, and the cache on the controller starts getting used, but *not*
the cache on the
On Jul 15, 2010, at 2:40 PM, Ryan Wexler wrote:
On Thu, Jul 15, 2010 at 12:35 PM, Ben Chobot be...@silentmedia.com wrote:
On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:
Many raid controllers are smart enough to always turn off write caching on
the drives, and also disable the feature
On Jul 15, 2010, at 8:16 PM, Scott Carey wrote:
On Jul 15, 2010, at 12:35 PM, Ben Chobot wrote:
On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:
Many raid controllers are smart enough to always turn off write caching on
the drives, and also disable the feature on their own buffer without
On Jul 24, 2010, at 12:20 AM, Yeb Havinga wrote:
The problem in this scenario is that even when the SSD showed no data
loss and the rotating disk did a few times, a dozen tests without
failure isn't actually proof that the drive can write its complete buffer to
disk after power
On Oct 7, 2010, at 4:38 PM, Steve Crawford wrote:
I'm weighing options for a new server. In addition to PostgreSQL, this
machine will handle some modest Samba and Rsync load.
I will have enough RAM so the virtually all disk-read activity will be
cached. The average PostgreSQL read
On Nov 12, 2010, at 8:14 AM, Kyriacos Kyriacou wrote:
We are still using PostgreSQL 8.2.4. We are running a 24x7 system and
database size is over 200Gb so upgrade is not an easy decision!
This is why we have slony, so you can slowly upgrade your 200Gb while you're
live and then only suffer a
On Dec 23, 2010, at 11:58 AM, Andy wrote:
Somewhat tangential to the current topics, I've heard that FusionIO uses
internal cache and hence is not crash-safe, and if the cache is turned off
performance will take a big hit. Is that your experience?
It does use an internal cache, but it
On Jan 4, 2011, at 8:48 AM, Merlin Moncure wrote:
most flash drives, especially mlc flash, use huge blocks anyways on
physical level. the numbers claimed here
(http://www.fusionio.com/products/iodrive/) (141k write iops) are
simply not believable without write buffering. i didn't see
On Feb 3, 2011, at 1:50 PM, Mladen Gogala wrote:
So, will I have to go back on my decision to use Postgres and re-consider
MySQL? I would rather throw away the effort invested in studying Postgres than
risk an unfixable application downtime. I am not sure about the world
domination
On May 9, 2011, at 1:32 PM, Chris Hoover wrote:
1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a raid 10)
Be careful here. What if the entire card hiccups, instead of just a device on
it? (We've had that happen to us before.) Depending on how you've done your
raid 10, either all your
On Sep 26, 2011, at 10:52 AM, Maria L. Wilson wrote:
Our first try to solve this problem has been to convert these triggers into a
constraint trigger which allows for DEFERRABLE INITIALLY DEFERRED flags.
This, we are finding, is forcing the trigger function to run after the
triggering
On Sep 27, 2011, at 6:37 PM, Craig Ringer wrote:
On 09/27/2011 12:54 PM, Ben Chobot wrote:
My memory is fuzzy but as I recall, a possible downside to using
deferred constraints was increased memory usage
That's right. PostgreSQL doesn't currently support spilling of pending
constraint
On Sep 30, 2011, at 12:07 PM, bricklen wrote:
I've been informed that this type of operation is called symmetric
difference[1], and can be represented by A ∆ B. A couple of
alternative names were proposed, array_symmetric_difference and
array_xor.
Does anyone have a preference for the
On Oct 3, 2011, at 6:52 AM, 姜头 wrote:
How can I get a record by data block instead of by SQL?
I want to read and write lots of data by data block, writing a record to a
specified data block and reading it back,
so I can form a disk-resident tree by recording the block addresses. But I
don't know how to
On Nov 14, 2011, at 4:42 PM, Cody Caughlan wrote:
We have anywhere from 60-80 background worker processes connecting to
Postgres, performing a short task and then disconnecting. The lifetime
of these tasks averages 1-3 seconds.
[snip]
Is this something that I should look into or is it not
I have two queries in PG 9.1. One uses an index like I would like, the other
does not. Is this expected behavior? If so, is there any way around it?
postgres=# explain analyze select min(id) from delayed_jobs where
strand='sis_batch:account:15' group by strand;
On Nov 17, 2011, at 5:20 PM, Steve Atkins wrote:
I don't think you want the group by in that first query.
Heh, I tried to simplify the example, but in reality that = becomes an IN
clause of multiple values. So the GROUP BY is needed.
postgres=# explain analyze select min(id) from
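A workaround sometimes used for this shape of query (the names follow the example above; the second strand value is made up for illustration) is to probe the index once per value rather than asking for a grouped min across the whole IN list:

```sql
-- One index probe per strand; each inner min() can use the
-- index's "first matching row" optimization.
SELECT s.strand,
       (SELECT min(id) FROM delayed_jobs d WHERE d.strand = s.strand)
FROM unnest(ARRAY['sis_batch:account:15',
                  'sis_batch:account:16']) AS s(strand);
```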
On Nov 30, 2012, at 8:06 AM, Shaun Thomas wrote:
I say that because you mentioned you're using Ubuntu 12.04, and we were
having some problems with PG on that platform. With shared_buffers over
4GB, it starts doing really weird things to the memory subsystem.
Whatever it does causes the kernel
On Jan 29, 2013, at 6:24 AM, Alex Vinnik wrote:
random_page_cost=1 might be not what you really want.
it would mean that random reads are as fast as sequential reads, which
probably is true only for SSD
What random_page_cost would be more appropriate for EC2 EBS Provisioned
volume that
On Apr 12, 2013, at 9:45 AM, Rodrigo Barboza wrote:
Hi guys.
I compiled my postgres server (9.1.4) with default segment size (16MB).
Should it be enough? Should I increase this size in compilation?
Unlike some default values in the configuration file, the compiled-in defaults
work well for
On Sep 9, 2015, at 3:43 AM, anil7385 wrote:
>
> Hi,
> We have a table which consists of 3 millions of records and when we try to
> delete them and run VACUUM VERBOSE ANALYZE on it in production environment ,
> it takes 6/7 hours to process.
You make it sound like you are
On Aug 31, 2016, at 3:01 PM, Bobby Mozumder wrote:
>
> Is it possible to find the number of disk IOs performed for a query? EXPLAIN
> ANALYZE looks like it shows number of sequential rows scanned, but not number
> of IOs.
Postgres knows the number of rows it will need