On 13/03/2013 19:23, Steve Crawford wrote:
On 03/13/2013 09:15 AM, John Lister wrote:
On 13/03/2013 15:50, Greg Jaskiewicz wrote:
SSDs have a much shorter life than spinning drives, so what do you do
when one inevitably fails in your system?
Define much shorter? I accept they have a limited no
On 13/03/2013 15:50, Greg Jaskiewicz wrote:
SSDs have a much shorter life than spinning drives, so what do you do when one
inevitably fails in your system?
Define much shorter? I accept they have a limited no. of writes, but that
depends on load. You can actively monitor the drive's "health" level
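For anyone wanting to watch that health level, here is a rough sketch using smartmontools. The attribute name is vendor-specific (Media_Wearout_Indicator is Intel's; Samsung uses Wear_Leveling_Count), and the device path and threshold below are illustrative assumptions:

```shell
# Requires smartmontools (apt-get install smartmontools on Ubuntu).
# The normalized VALUE column counts down from 100 as the NAND wears.
smartctl -A /dev/sda | grep -i -E 'wearout|wear_leveling|media_wear'

# Cron-friendly check: warn when the normalized value drops below 10.
WEAR=$(smartctl -A /dev/sda | awk '/Media_Wearout_Indicator/ {print $4}')
if [ -n "$WEAR" ] && [ "$WEAR" -lt 10 ]; then
    echo "WARNING: /dev/sda wear indicator at $WEAR" | mail -s "SSD wear" root
fi
```

The same attributes can be fed into munin/nagios, which is what "actively monitor" usually amounts to in practice.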
On 12/03/2013 21:41, Gregg Jaskiewicz wrote:
Whilst on the hardware subject, someone mentioned throwing SSDs into
the mix, i.e. combining spinning HDs with SSDs; apparently some RAID
cards can use small-ish (80GB+) SSDs as external caches. Any
experiences with that?
The new LSI/Dell cards do
On 06/12/2012 09:33, Andrea Suisani wrote:
Which kind of SSD disks do you have?
Maybe they are the same type Shaun Thomas is having problems with here:
http://archives.postgresql.org/pgsql-performance/2012-12/msg00030.php
Yeah, I saw that post. I'm running the same version of Ubuntu with the
3
On 05/12/2012 18:28, Shaun Thomas wrote:
Hey guys,
This isn't a question, but a kind of summary of a ton of investigation
I've been doing since a recent "upgrade". Anyone else out there with
"big iron" might want to confirm this, but it seems pretty reproducible.
This seems to affect the lates
on this box:
In brief: the box is a Dell PowerEdge R720 with 16GB of RAM,
the CPU is a Xeon 5620 with 6 cores, the OS is installed on a RAID
(SATA disks, 7.2k rpm), and PGDATA is on a separate RAID 1 array
(SAS, 15k rpm); the controller is a PERC H710 (BBWC with a cache
of 512 MB). (Ubuntu 1
On 29/11/2012 17:33, Merlin Moncure wrote:
One thing that immediately jumps out here is that your WAL volume
could be holding you up, so it's possible we may want to move WAL to
the SSD volume. If you can scrounge up a 9.2 pgbench, we can gather
more evidence for that by running pgbench with t
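Moving WAL onto a separate volume in that era (before `initdb --waldir` could help an existing cluster) was typically done with a symlink. A sketch, assuming a Debian/Ubuntu 9.2 layout and the SSD mounted at /ssd (both assumptions):

```shell
# Stop the cluster first; relocating pg_xlog under a running server
# will corrupt it.
pg_ctl -D /var/lib/postgresql/9.2/main stop

# Move the WAL directory to the SSD volume and leave a symlink behind
# so the server finds it at the usual path.
mv /var/lib/postgresql/9.2/main/pg_xlog /ssd/pg_xlog
ln -s /ssd/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog

pg_ctl -D /var/lib/postgresql/9.2/main start
```

This separates the sequential fsync-heavy WAL writes from the random data-file I/O, which is the effect being hypothesized above.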
On 29/11/2012 17:33, Merlin Moncure wrote:
Since we have some idle CPU% here, we can probably eliminate pgbench as
a bottleneck by messing around with the -j switch. Another thing we
want to test is the "-N" switch -- this doesn't update the tellers and
branches tables, which in high concurrency s
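Concretely, the two experiments look something like this (scale factor, client count, and database name are illustrative assumptions):

```shell
# Initialize a scale-100 test database (assumed name: pgbench).
pgbench -i -s 100 pgbench

# Baseline: 32 clients driven by a single pgbench worker thread.
pgbench -c 32 -j 1 -T 60 pgbench

# Same load with 8 worker threads, to see whether pgbench itself
# was the bottleneck.
pgbench -c 32 -j 8 -T 60 pgbench

# -N skips the updates to the small tellers/branches tables, which
# otherwise serialize on row locks at high concurrency.
pgbench -c 32 -j 8 -T 60 -N pgbench
```

If tps jumps with -j 8, the client was the limit; if it jumps with -N, row-lock contention on the small tables was.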
On 28/11/2012 19:21, Merlin Moncure wrote:
On Wed, Nov 28, 2012 at 12:37 PM, John Lister wrote:
Hi, I've just been benchmarking a new box I've got, and running pgbench
yields what I thought was a slow tps count. It is difficult to find
comparisons online of other benchmark results, I
Hi, I've just been benchmarking a new box I've got, and running pgbench
yields what I thought was a slow tps count. It is difficult to find
comparisons online of other benchmark results; I'd like to see if I have
the box set up reasonably well.
I know Oracle et al. prohibit benchmark results, bu
On 24/07/2012 21:12, Claudio Freire wrote:
On Tue, Jul 24, 2012 at 3:41 PM, Claudio Freire wrote:
On Tue, Jul 24, 2012 at 3:36 PM, John Lister wrote:
Do you have a suggestion about how to do that? I'm running Ubuntu 12.04 and
PG 9.1, I've modified pg_ctlcluster to cause pg_ct
On Tue, Jul 18, 2012 at 2:38 AM, Claudio Freire wrote:
>It must have been said already, but I'll repeat it just in case:
>I think postgres has an easy solution. Spawn the postmaster with
>"interleave", to allocate shared memory, and then switch to "local" on
>the backends.
Do you have a suggesti
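The "interleave for the postmaster" part of Claudio's suggestion maps onto numactl roughly like this (a sketch; the binary and data paths are Debian/Ubuntu 9.1 defaults and may differ on your system):

```shell
# Start the postmaster under an interleaved NUMA policy so the large
# shared_buffers segment is spread evenly across memory nodes instead
# of filling one node and forcing remote accesses from the others.
numactl --interleave=all \
    /usr/lib/postgresql/9.1/bin/pg_ctl -D /var/lib/postgresql/9.1/main start
```

Backends inherit the postmaster's policy, which is why the thread talks about patching pg_ctlcluster and switching the backends back to a "local" policy; the interleave alone, though, already avoids the worst single-node imbalance for shared memory.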
Hi, I was wondering if there are any recommended ways or tools for
calculating the planner cost constants? Also, do the absolute values
matter, or is it simply the ratio between them? I'm about to configure a
new server and can probably do a rough job of calculating them based on
supposed speeds
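For reference, these are the constants in question with their stock defaults; the usual answer is that only the ratios matter, since the planner only compares plan costs against each other, never against wall-clock time. A sketch, assuming a Debian/Ubuntu 9.1 layout (paths and the tuned values are illustrative, not recommendations):

```shell
# Append tuned planner constants (stock defaults shown in comments).
# Scaling all of them by the same factor changes nothing; only the
# ratios between them affect plan choice.
cat >> /etc/postgresql/9.1/main/postgresql.conf <<'EOF'
seq_page_cost = 1.0           # baseline: one sequential page fetch
random_page_cost = 2.0        # default 4.0; lower when data is cached or on SSD
cpu_tuple_cost = 0.01         # default
cpu_index_tuple_cost = 0.005  # default
cpu_operator_cost = 0.0025    # default
effective_cache_size = 48GB   # not a cost, but guides index-plan choice;
                              # size it to RAM minus OS/app overhead
EOF
pg_ctlcluster 9.1 main reload
```

The seq/random ratio is the one most worth measuring on real hardware; the CPU costs are rarely worth touching.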
We've reached the point where we would like to try SSDs. We've got a
central DB, currently 414 GB in size and increasing. The working set no
longer fits into our 96GB RAM server.
So, the main question is what to take. Here's what we've got:
1) Intel 320. Good, but slower than current-generation s
On 03/05/2012 16:46, Craig James wrote:
On Thu, May 3, 2012 at 6:42 AM, Jan Nielsen wrote:
Hi John,
On Thu, May 3, 2012 at 12:54 AM, John Lister
wrote:
I was wondering if it would be better to put the xlog on the same disk as
the OS? Apart from the occasional log writes I'd have thought
On 03/05/2012 03:10, Jan Nielsen wrote:
300GB RAID 10 2x15k drive for OS on local storage
/dev/sda1 RA: 4096
/dev/sda1 FS: ext4
/dev/sda1 MO:
600GB RAID 10 8x15k drive for $PGDATA on SAN
IO Scheduler sda: n
On 24/04/2012 20:32, Shaun Thomas wrote:
I'm not sure if you've done metrics or not, but XFS performance is
highly dependent on your init and mount options. I can give you some
guidelines there, but one of the major changes is that the Linux 3.x
kernels have some impressive performance improv
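The init/mount options being alluded to usually look something like the following sketch. The RAID geometry (256k stripe unit, 8 data disks), device, and mount point are assumptions you would replace with your own:

```shell
# mkfs: align XFS allocation groups to the RAID stripe
# (su = stripe unit per disk, sw = number of data-bearing disks).
mkfs.xfs -f -d su=256k,sw=8 -l size=128m /dev/sdb1

# Mount: skip atime updates and enlarge the in-memory log buffers.
# nobarrier is only safe behind a battery-backed write cache (BBWC);
# without one it risks corruption on power loss.
mount -o noatime,nodiratime,logbufs=8,logbsize=256k,nobarrier \
    /dev/sdb1 /var/lib/postgresql
```

Getting the stripe alignment wrong at mkfs time is the expensive mistake, since it cannot be fixed without recreating the filesystem.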