[PERFORM] random_page_cost vs ssd?

2009-03-11 Thread Jeff
I've got a couple x25-e's in production now and they are working like a champ. (In fact, I've got another box being built with all x25s in it. It's going to smoke!) Anyway, I was just reading another thread on here and that made me wonder about random_page_cost in the world of an ssd where
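For reference, the planner costs under discussion live in postgresql.conf; a sketch with the stock defaults (the SSD-oriented values in the comment are illustrative, not recommendations from this thread):

```ini
# postgresql.conf -- planner page-cost settings (PostgreSQL defaults shown)
seq_page_cost = 1.0       # cost of a sequential page fetch
random_page_cost = 4.0    # default assumes spinning disks; on an SSD the
                          # random/sequential gap shrinks, so values closer
                          # to seq_page_cost (roughly 1.0-2.0) get tried
```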

Re: [PERFORM] random_page_cost vs ssd?

2009-03-11 Thread Grzegorz Jaśkiewicz
On Wed, Mar 11, 2009 at 1:46 PM, Jeff wrote: > I've got a couple x25-e's in production now and they are working like a > champ.  (In fact, I've got another box being built with all x25s in it. its > going to smoke!) > > Anyway, I was just reading another thread on here and that made me wonder > ab

Re: [PERFORM] random_page_cost vs ssd?

2009-03-11 Thread Scott Carey
At 8k block size, you can do more iops sequential than random. A X25-M I was just playing with will do about 16K iops reads at 8k block size with 32 concurrent threads. That is about 128MB/sec. Sequential reads will do 250MB/sec. At 16k block size it does about 220MB/sec and at 32k block size t
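As a quick sanity check on those figures (reading "16K iops" as 16384, which is an assumption; 16000 would give ~125 MiB/s):

```shell
# 16Ki random-read IOPS at an 8 KiB block size works out to exactly the
# ~128MB/sec quoted above.
iops=16384
block=8192                          # 8 KiB block size in bytes
mib=$(( iops * block / 1048576 ))   # bytes/sec -> MiB/sec
echo "${mib} MB/sec"
```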

Re: [PERFORM] random_page_cost vs ssd?

2009-03-11 Thread Andrej
2009/3/12 Scott Carey : > [...snip...].All tests start with 'cat 3 > /proc/sys/vm/drop_caches', and > work on > a 32GB data set (40% of the disk). What's the content of '3' above? -- Please don't top post, and don't use HTML e-Mail :} Make your quotes concise. http://www.american.edu/econ

[PERFORM] Full statement logging problematic on larger machines?

2009-03-11 Thread Frank Joerdens
Greetings. We're having trouble with full logging since we moved from an 8-core server with 16 GB memory to a machine with double that spec and I am wondering if this *should* be working or if there is a point on larger machines where logging and scheduling seeks of background writes - or something
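For context, "full logging" here means statement logging along these lines in postgresql.conf (a sketch of 8.3-era settings; the thread does not show Frank's exact configuration):

```ini
# postgresql.conf -- log every statement via the logging collector
logging_collector = on            # 8.3 name; 'redirect_stderr' in 8.2
log_destination = 'stderr'
log_statement = 'all'             # full statement logging
# alternative: log_min_duration_statement = 0 logs statements with durations
```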

Re: [PERFORM] random_page_cost vs ssd?

2009-03-11 Thread Scott Carey
Google > “linux drop_caches” first result: http://www.linuxinsight.com/proc_sys_vm_drop_caches.html To be sure a test is going to disk and not file system cache for everything in linux, run: ‘sync; cat 3 > /proc/sys/vm/drop_caches’ On 3/11/09 11:04 AM, "Andrej" wrote: 2009/3/12 Scott Carey : >

Re: [PERFORM] random_page_cost vs ssd?

2009-03-11 Thread hubert depesz lubaczewski
On Wed, Mar 11, 2009 at 12:28:56PM -0700, Scott Carey wrote: > Google > “linux drop_caches” first result: > http://www.linuxinsight.com/proc_sys_vm_drop_caches.html > To be sure a test is going to disk and not file system cache for everything > in linux, run: > ‘sync; cat 3 > /proc/sys/vm/drop_cac

Re: [PERFORM] random_page_cost vs ssd?

2009-03-11 Thread Kevin Grittner
Scott Carey wrote: > On 3/11/09 11:04 AM, "Andrej" wrote: >> 2009/3/12 Scott Carey : >>> All tests start with 'cat 3 > /proc/sys/vm/drop_caches' >> What's the content of '3' above? > Google > *linux drop_caches* first result: > http://www.linuxinsight.com/proc_sys_vm_drop_caches.html > > T

Re: [PERFORM] Full statement logging problematic on larger machines?

2009-03-11 Thread Scott Marlowe
On Wed, Mar 11, 2009 at 1:27 PM, Frank Joerdens wrote: > Greetings. We're having trouble with full logging since we moved from > an 8-core server with 16 GB memory to a machine with double that > spec and I am wondering if this *should* be working or if there is a > point on larger machines where

Re: [PERFORM] random_page_cost vs ssd?

2009-03-11 Thread Scott Carey
Echo. It was a typo. On 3/11/09 11:40 AM, "Kevin Grittner" wrote: Scott Carey wrote: > On 3/11/09 11:04 AM, "Andrej" wrote: >> 2009/3/12 Scott Carey : >>> All tests start with 'cat 3 > /proc/sys/vm/drop_caches' >> What's the content of '3' above? > Google > *linux drop_caches* first resul
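With the typo resolved, the cache-dropping incantation the tests used is the standard one; shown as a dry run here, since actually writing to /proc/sys/vm/drop_caches requires root:

```shell
# Correct form ('echo', not 'cat'): sync flushes dirty pages first, then
# writing 3 drops the page cache plus dentries and inodes
# (1 = page cache only, 2 = dentries and inodes only).
drop_caches='sync; echo 3 > /proc/sys/vm/drop_caches'
echo "run as root before each benchmark: $drop_caches"
```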

Re: [PERFORM] Full statement logging problematic on larger machines?

2009-03-11 Thread Tom Lane
Frank Joerdens writes: > Greetings. We're having trouble with full logging since we moved from > an 8-core server with 16 GB memory to a machine with double that > spec and I am wondering if this *should* be working or if there is a > point on larger machines where logging and scheduling seeks of

[PERFORM] Proposal of tunable fix for scalability of 8.4

2009-03-11 Thread Jignesh K. Shah
Hello All, As you know, one of the things I have been constantly doing is using benchmark kits to see how we can scale PostgreSQL on the UltraSPARC T2 based 1 socket (64 threads) and 2 socket (128 threads) servers that Sun sells. During last PgCon 2008 http://www.pgcon.org/2008/schedul

Re: [PERFORM] Proposal of tunable fix for scalability of 8.4

2009-03-11 Thread Kevin Grittner
>>> "Jignesh K. Shah" wrote:
> Rerunning similar tests on a 64-thread UltraSPARC T2plus based server config (IO is not a problem... all in RAM .. no disks):
> Time : Users : Type : TPM : Response Time
> 60 : 100 : Medium Throughput: 10552.000  Avg Medium Resp: 0.006
> 120 : 200 : Medium Throughput: 228

Re: [PERFORM] Full statement logging problematic on larger machines?

2009-03-11 Thread Frank Joerdens
On Wed, Mar 11, 2009 at 8:46 PM, Tom Lane wrote: > Frank Joerdens writes: >> Greetings. We're having trouble with full logging since we moved from >> an 8-core server with 16 GB memory to a machine with double that >> spec and I am wondering if this *should* be working or if there is a >> point o

Re: [PERFORM] Full statement logging problematic on larger machines?

2009-03-11 Thread Guillaume Smet
On Wed, Mar 11, 2009 at 8:27 PM, Frank Joerdens wrote: > This works much better but once we are at about 80% of peak load - > which is around 8000 transactions per second currently - the server goes > into a tailspin in the manner described above and we have to switch off full > logging. First, d

Re: [PERFORM] Proposal of tunable fix for scalability of 8.4

2009-03-11 Thread Jignesh K. Shah
On 03/11/09 18:27, Kevin Grittner wrote: "Jignesh K. Shah" wrote: Rerunning similar tests on a 64-thread UltraSPARC T2plus based server config (IO is not a problem... all in RAM .. no disks): Time:Users:Type:TPM: Response Time 60: 100: Medium Throughput: 10552.000 Avg Medi

Re: [PERFORM] Proposal of tunable fix for scalability of 8.4

2009-03-11 Thread Tom Lane
"Kevin Grittner" writes: > I'm wondering about the testing methodology. Me too. This test case seems much too far away from real world use to justify diddling low-level locking behavior; especially a change that is obviously likely to have very negative effects in other scenarios. In particular

Re: [PERFORM] Full statement logging problematic on larger machines?

2009-03-11 Thread Tom Lane
Guillaume Smet writes: > I don't know if the logging integrated into PostgreSQL can bufferize > its output. Andrew? It uses fwrite(), and normally sets its output into line-buffered mode. For a high-throughput case like this it seems like using fully buffered mode might be an acceptable tradeoff.
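The tradeoff Tom Lane describes (line-buffered vs fully buffered stdio) can be tried from a shell with GNU coreutils' stdbuf, used here as a rough stand-in for setvbuf(3) inside the server; the scenario is illustrative:

```shell
# Pass 1000 "log lines" through a line-buffered and a fully buffered copy.
# Line-buffered mode issues roughly one write(2) per line; the 1 MiB fully
# buffered mode issues far fewer, larger writes (visible under strace), at
# the cost of lines reaching the log in bursts.
n_line=$(seq 1 1000 | stdbuf -oL  cat | wc -l)   # line-buffered stdout
n_full=$(seq 1 1000 | stdbuf -o1M cat | wc -l)   # fully buffered, 1 MiB
echo "line-buffered: $n_line lines, fully buffered: $n_full lines"
```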

Re: [PERFORM] Proposal of tunable fix for scalability of 8.4

2009-03-11 Thread Scott Carey
On 3/11/09 3:27 PM, "Kevin Grittner" wrote: I'm a lot more interested in what's happening between 60 and 180 than over 1000, personally. If there was a RAID involved, I'd put it down to better use of the numerous spindles, but when it's all in RAM it makes no sense. If there is enough lock cont

Re: [PERFORM] Proposal of tunable fix for scalability of 8.4

2009-03-11 Thread Jignesh K. Shah
Tom Lane wrote: "Kevin Grittner" writes: I'm wondering about the testing methodology. Me too. This test case seems much too far away from real world use to justify diddling low-level locking behavior; especially a change that is obviously likely to have very negative effects in oth

Re: [PERFORM] Proposal of tunable fix for scalability of 8.4

2009-03-11 Thread Tom Lane
Scott Carey writes: > If there is enough lock contention and a common lock case is a short lived > shared lock, it makes perfect sense. Fewer readers are blocked waiting > on writers at any given time. Readers can 'cut' in line ahead of writers > within a certain scope (only up to the n

Re: [PERFORM] Proposal of tunable fix for scalability of 8.4

2009-03-11 Thread Jignesh K. Shah
Tom Lane wrote: Scott Carey writes: If there is enough lock contention and a common lock case is a short lived shared lock, it makes perfect sense. Fewer readers are blocked waiting on writers at any given time. Readers can 'cut' in line ahead of writers within a certain scope (