Friendly greetings!
According to:
http://developer.postgresql.org/pgdocs/postgres/storage-page-layout.html
Every table and index is stored as an array of pages of a fixed size
(usually 8 kB, although a different page size can be selected when
compiling the server).
Is there any usage/interest
I have the opportunity to set up a new postgres server for our
production database. I've read several times in various postgres
lists about the importance of separating logs from the actual database
data to avoid disk contention.
Can someone suggest a typical partitioning scheme for a postgres
On Tue, Apr 28, 2009 at 10:56 AM, Whit Armstrong
armstrong.w...@gmail.com wrote:
I have the opportunity to set up a new postgres server for our
production database. I've read several times in various postgres
lists about the importance of separating logs from the actual database
data to avoid
Thanks, Scott.
Just to clarify you said:
postgres. So, my pg_xlog and all OS and logging stuff goes on the
RAID-10 and the main store for the db goes on the RAID-10.
Is that meant to be that the pg_xlog and all OS and logging stuff go
on the RAID-1 and the real database (the
I have an unloaded development server running 8.4b1 that is returning
from a 'select * from pg_locks' in around 5 ms. While the time itself
is not a big deal, I was curious and tested querying locks on a fairly
busy (200-500 tps sustained) running 8.2 on inferior hardware. This
returned (after
On Tue, Apr 28, 2009 at 11:56:25AM -0600, Scott Marlowe wrote:
On Tue, Apr 28, 2009 at 11:48 AM, Whit Armstrong
armstrong.w...@gmail.com wrote:
Thanks, Scott.
Just to clarify you said:
postgres. So, my pg_xlog and all OS and logging stuff goes on the
RAID-10 and the main store for
Whit Armstrong wrote:
I have the opportunity to set up a new postgres server for our
production database. I've read several times in various postgres
lists about the importance of separating logs from the actual database
data to avoid disk contention.
Can someone suggest a typical partitioning
On Tuesday 28 April 2009, Whit Armstrong armstrong.w...@gmail.com wrote:
Additionally are there any clear choices w/ regard to filesystem
types? Our choices would be xfs, ext3, or ext4.
xfs consistently delivers much higher sequential throughput than ext3 (up to
100%), at least on my
Kenneth Marshall wrote:
Additionally are there any clear choices w/ regard to filesystem
types? Our choices would be xfs, ext3, or ext4.
Well, there's a lot of people who use xfs and ext3. XFS is generally
rated higher than ext3 both for performance and reliability. However,
we run Centos 5
Craig James craig_ja...@emolecules.com wrote:
After reading various articles, I thought that noop was the
right choice when you're using a battery-backed RAID controller.
The RAID controller is going to cache all data and reschedule the
writes anyway, so the kernel scheduler is irrelevant
echo noop > /sys/block/hdx/queue/scheduler
can this go into /etc/init.d somewhere?
or does that change stick between reboots?
-Whit
On Tue, Apr 28, 2009 at 2:16 PM, Craig James craig_ja...@emolecules.com wrote:
Kenneth Marshall wrote:
Additionally are there any clear choices w/ regard to
Whit Armstrong armstrong.w...@gmail.com wrote:
echo noop > /sys/block/hdx/queue/scheduler
can this go into /etc/init.d somewhere?
We set the default for the kernel in the /boot/grub/menu.lst file. On
a kernel line, add elevator=xxx (where xxx is your choice of
scheduler).
-Kevin
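Kevin's suggestion can be sketched as follows. This is an illustrative grub-legacy `menu.lst` fragment, not taken from the thread; the title, kernel version, and device paths are assumptions to adapt to your system:

```shell
# /boot/grub/menu.lst (grub-legacy) -- sketch only.
# Adding elevator=noop to the kernel line sets the default I/O
# scheduler for all block devices at boot.
title  Linux
root   (hd0,0)
kernel /vmlinuz-2.6.18 ro root=/dev/sda1 elevator=noop
```

Unlike writing to /sys/block at runtime, this setting survives reboots and applies before any services start.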
On Tue, Apr 28, 2009 at 12:06 PM, Kenneth Marshall k...@rice.edu wrote:
On Tue, Apr 28, 2009 at 11:56:25AM -0600, Scott Marlowe wrote:
On Tue, Apr 28, 2009 at 11:48 AM, Whit Armstrong
armstrong.w...@gmail.com wrote:
Thanks, Scott.
Just to clarify you said:
postgres. So, my pg_xlog
I see.
Thanks for everyone for replying. The whole discussion has been very helpful.
Cheers,
Whit
On Tue, Apr 28, 2009 at 3:13 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Whit Armstrong armstrong.w...@gmail.com wrote:
echo noop > /sys/block/hdx/queue/scheduler
can this go into
On Tue, Apr 28, 2009 at 12:37 PM, Whit Armstrong
armstrong.w...@gmail.com wrote:
echo noop > /sys/block/hdx/queue/scheduler
can this go into /etc/init.d somewhere?
or does that change stick between reboots?
I just stick it in /etc/rc.local
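The rc.local approach might look like the sketch below; the device names are assumptions, and on systems where the setting was not fixed on the kernel command line this simply reapplies it at every boot:

```shell
#!/bin/sh
# Hypothetical /etc/rc.local fragment: reselect the noop scheduler
# for each data disk at boot (device names are illustrative).
for dev in sda sdb; do
    echo noop > /sys/block/$dev/queue/scheduler
done
```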
On Tue, Apr 28, 2009 at 12:40 PM, Kenneth Marshall k...@rice.edu wrote:
On Tue, Apr 28, 2009 at 01:30:59PM -0500, Kevin Grittner wrote:
Craig James craig_ja...@emolecules.com wrote:
After reading various articles, I thought that noop was the
right choice when you're using a battery-backed
Merlin Moncure mmonc...@gmail.com writes:
I have an unloaded development server running 8.4b1 that is returning
from a 'select * from pg_locks' in around 5 ms. While the time itself
is not a big deal, I was curious and tested querying locks on a fairly
busy (200-500 tps sustained) running 8.2
On Tue, Apr 28, 2009 at 5:41 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Merlin Moncure mmonc...@gmail.com writes:
I have an unloaded development server running 8.4b1 that is returning
from a 'select * from pg_locks' in around 5 ms. While the time itself
is not a big deal, I was curious and tested
On Tue, Apr 28, 2009 at 5:42 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Apr 28, 2009 at 5:41 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Merlin Moncure mmonc...@gmail.com writes:
I have an unloaded development server running 8.4b1 that is returning
from a 'select * from pg_locks' in around
Merlin Moncure mmonc...@gmail.com writes:
On Tue, Apr 28, 2009 at 5:41 PM, Tom Lane t...@sss.pgh.pa.us wrote:
[squint...] AFAICS the only *direct* cost component in pg_lock_status
is the number of locks actually held or awaited. If there's a
noticeable component that depends on
On 4/28/09 11:16 AM, Craig James craig_ja...@emolecules.com wrote:
Kenneth Marshall wrote:
Additionally are there any clear choices w/ regard to filesystem
types? Our choices would be xfs, ext3, or ext4.
Well, there's a lot of people who use xfs and ext3. XFS is generally
rated higher than
server information:
Dell PowerEdge 2970, 8 core Opteron 2384
6 1TB hard drives with a PERC 6i
64GB of ram
We're running a similar configuration: PowerEdge 8 core, PERC 6i, but we have
8 of the 2.5" 10K RPM 384GB disks.
When I asked the same question on this forum, I was advised to just put
are there any other xfs settings that should be tuned for postgres?
I see this post mentions allocation groups. does anyone have
suggestions for those settings?
http://archives.postgresql.org/pgsql-admin/2009-01/msg00144.php
what about raid stripe size? does it really make a difference? I
Thanks, Scott.
So far, I've followed a pattern similar to Scott Marlowe's setup. I
have configured 2 disks as a RAID 1 volume, and 4 disks as a RAID 10
volume. So, the OS and xlogs will live on the RAID 1 vol and the data
will live on the RAID 10 vol.
I'm running the memtest on it now, so we
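The layout described above could be expressed roughly as the following /etc/fstab sketch; the md device names, mount points, and filesystem choices are assumptions for illustration, with pg_xlog kept on the RAID-1 volume (e.g. symlinked out of the data directory):

```shell
# Illustrative /etc/fstab for a 2-disk RAID-1 + 4-disk RAID-10 split:
# RAID-1: OS, system logs, and pg_xlog
/dev/md0   /                      ext3   defaults    0 1
# RAID-10: the main PostgreSQL data store
/dev/md1   /var/lib/postgresql    xfs    noatime     0 2
```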
On Tue, Apr 28, 2009 at 5:58 PM, Scott Carey sc...@richrelevance.com wrote:
1. If everything is on the same partition/file system, fsyncs from the
xlogs may cross-pollute to the data. Ext3 is notorious for this, though
data=writeback limits the effect. You especially might not want
On 4/28/09 5:02 PM, Whit Armstrong armstrong.w...@gmail.com wrote:
are there any other xfs settings that should be tuned for postgres?
I see this post mentions allocation groups. does anyone have
suggestions for those settings?
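For the allocation-group question, a filesystem-creation sketch might look like this. The device name, agcount value, and mount options are assumptions, not a recommendation from the thread; in particular, skipping write barriers is only sensible behind a battery-backed RAID cache:

```shell
# Sketch: create xfs with an explicit allocation group count,
# then mount it for the Postgres data volume (values illustrative).
mkfs.xfs -f -d agcount=16 /dev/sdb1
mount -o noatime,nobarrier /dev/sdb1 /var/lib/postgresql
```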
On 4/28/09 5:10 PM, Whit Armstrong armstrong.w...@gmail.com wrote:
Thanks, Scott.
So far, I've followed a pattern similar to Scott Marlowe's setup. I
have configured 2 disks as a RAID 1 volume, and 4 disks as a RAID 10
volume. So, the OS and xlogs will live on the RAID 1 vol and the data