I've got a couple x25-e's in production now and they are working like
a champ. (In fact, I've got another box being built with all x25s in
it. It's going to smoke!)
Anyway, I was just reading another thread on here and that made me
wonder about random_page_cost in the world of an ssd where
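For what it's worth, random_page_cost defaults to 4.0, and on an SSD where a
random read costs about the same as a sequential one it arguably belongs much
closer to seq_page_cost (1.0). A quick session-level experiment to see whether
a lower value changes the plan for a given query (the database, table and the
1.5 below are just made-up examples, not anything from this thread):

  psql -d mydb -c "SHOW random_page_cost;"
  psql -d mydb -c "SET random_page_cost = 1.5; EXPLAIN SELECT * FROM orders WHERE customer_id = 42;"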
On Wed, Mar 11, 2009 at 1:46 PM, Jeff wrote:
> I've got a couple x25-e's in production now and they are working like a
> champ. (In fact, I've got another box being built with all x25s in it. It's
> going to smoke!)
>
> Anyway, I was just reading another thread on here and that made me wonder
> about random_page_cost in the world of an ssd where
At 8k block size, you can do more iops sequential than random.
An X25-M I was just playing with will do about 16K read iops at 8k block size
with 32 concurrent threads.
That is about 128MB/sec. Sequential reads will do 250MB/sec. At 16k block
size it does about 220MB/sec and at 32k block size t
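For reference the arithmetic checks out: 16,000 iops x 8kB is about 128MB/sec.
If anyone wants to repeat that sort of measurement, something like the fio run
below should be in the right ballpark (fio is just my assumption for the tool,
and the file path, size and runtime are placeholders):

  # 8k random reads, O_DIRECT to bypass the page cache, 32 requests in flight
  fio --name=randread-8k --filename=/mnt/ssd/testfile --size=32g \
      --rw=randread --bs=8k --direct=1 --ioengine=libaio --iodepth=32 \
      --runtime=60 --time_based --group_reporting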
2009/3/12 Scott Carey :
> [...snip...] All tests start with 'cat 3 > /proc/sys/vm/drop_caches', and
> work on a 32GB data set (40% of the disk).
What's the content of '3' above?
--
Please don't top post, and don't use HTML e-Mail :} Make your quotes concise.
http://www.american.edu/econ
Greetings. We're having trouble with full logging since we moved from
an 8-core server with 16 GB memory to a machine with double that
spec and I am wondering if this *should* be working or if there is a
point on larger machines where logging and scheduling seeks of
background writes - or something
Google > “linux drop_caches” first result:
http://www.linuxinsight.com/proc_sys_vm_drop_caches.html
To be sure a test is going to disk and not file system cache for everything in
linux, run:
‘sync; cat 3 > /proc/sys/vm/drop_caches’
On 3/11/09 11:04 AM, "Andrej" wrote:
2009/3/12 Scott Carey :
>
On Wed, Mar 11, 2009 at 12:28:56PM -0700, Scott Carey wrote:
> Google > “linux drop_caches” first result:
> http://www.linuxinsight.com/proc_sys_vm_drop_caches.html
> To be sure a test is going to disk and not file system cache for everything
> in linux, run:
> ‘sync; cat 3 > /proc/sys/vm/drop_caches’
Scott Carey wrote:
> On 3/11/09 11:04 AM, "Andrej" wrote:
>> 2009/3/12 Scott Carey :
>>> All tests start with 'cat 3 > /proc/sys/vm/drop_caches'
>> What's the content of '3' above?
> Google > *linux drop_caches* first result:
> http://www.linuxinsight.com/proc_sys_vm_drop_caches.html
>
> To be sure a test is going to disk and not file system cache for everything in linux, run: ‘sync; cat 3 > /proc/sys/vm/drop_caches’
On Wed, Mar 11, 2009 at 1:27 PM, Frank Joerdens wrote:
> Greetings. We're having trouble with full logging since we moved from
> an 8-core server with 16 GB memory to a machine with double that
> spec and I am wondering if this *should* be working or if there is a
> point on larger machines where
Echo. It was a typo.
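For the record, the working form (run as root) is:

  sync
  echo 3 > /proc/sys/vm/drop_caches

where echoing 1 drops the page cache, 2 drops dentries and inodes, and 3 drops
both; the sync first writes out dirty pages so they can actually be freed.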
On 3/11/09 11:40 AM, "Kevin Grittner" wrote:
Scott Carey wrote:
> On 3/11/09 11:04 AM, "Andrej" wrote:
>> 2009/3/12 Scott Carey :
>>> All tests start with 'cat 3 > /proc/sys/vm/drop_caches'
>> What's the content of '3' above?
> Google > *linux drop_caches* first result:
Frank Joerdens writes:
> Greetings. We're having trouble with full logging since we moved from
> an 8-core server with 16 GB memory to a machine with double that
> spec and I am wondering if this *should* be working or if there is a
> point on larger machines where logging and scheduling seeks of
Hello All,
As you know, one of the things I have constantly been doing is using
benchmark kits to see how we can scale PostgreSQL on the
UltraSPARC T2 based 1 socket (64 threads) and 2 socket (128 threads)
servers that Sun sells.
During last PgCon 2008
http://www.pgcon.org/2008/schedul
>>> "Jignesh K. Shah" wrote:
> Rerunning similar tests on a 64-thread UltraSPARC T2plus based
> server config
> (IO is not a problem... all in RAM .. no disks):
> Time:Users:Type:TPM: Response Time
> 60: 100: Medium Throughput: 10552.000 Avg Medium Resp: 0.006
> 120: 200: Medium Throughput: 228
On Wed, Mar 11, 2009 at 8:46 PM, Tom Lane wrote:
> Frank Joerdens writes:
>> Greetings. We're having trouble with full logging since we moved from
>> an 8-core server with 16 GB memory to a machine with double that
>> spec and I am wondering if this *should* be working or if there is a
>> point on larger machines where logging and scheduling seeks of
>> background writes - or something
On Wed, Mar 11, 2009 at 8:27 PM, Frank Joerdens wrote:
> This works much better but once we are at about 80% of peak load -
> which is around 8000 transactions per second currently - the server goes
> into a tailspin in the manner described above and we have to switch off full
> logging.
First, d
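A cheap way to put a number on what the logging itself costs (just a sketch;
the scale factor, client count and transaction count below are placeholders,
not Frank's real workload) is to drive the same pgbench load twice, once with
log_min_duration_statement = 0 (log everything) and once with it at -1 (off)
in postgresql.conf, and compare the reported tps:

  createdb bench
  pgbench -i -s 100 bench
  pgbench -c 32 -t 10000 bench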
On 03/11/09 18:27, Kevin Grittner wrote:
"Jignesh K. Shah" wrote:
Rerunning similar tests on a 64-thread UltraSPARC T2plus based
server config
(IO is not a problem... all in RAM .. no disks):
Time:Users:Type:TPM: Response Time
60: 100: Medium Throughput: 10552.000 Avg Medium Resp: 0.006
"Kevin Grittner" writes:
> I'm wondering about the testing methodology.
Me too. This test case seems much too far away from real world use
to justify diddling low-level locking behavior; especially a change
that is obviously likely to have very negative effects in other
scenarios. In particular
Guillaume Smet writes:
> I don't know if the logging integrated into PostgreSQL can bufferize
> its output. Andrew?
It uses fwrite(), and normally sets its output into line-buffered mode.
For a high-throughput case like this it seems like using fully buffered
mode might be an acceptable tradeoff.
On 3/11/09 3:27 PM, "Kevin Grittner" wrote:
I'm a lot more interested in what's happening between 60 and 180 than
over 1000, personally. If there was a RAID involved, I'd put it down
to better use of the numerous spindles, but when it's all in RAM it
makes no sense.
If there is enough lock cont
Tom Lane wrote:
"Kevin Grittner" writes:
I'm wondering about the testing methodology.
Me too. This test case seems much too far away from real world use
to justify diddling low-level locking behavior; especially a change
that is obviously likely to have very negative effects in other scenarios. In particular
Scott Carey writes:
> If there is enough lock contention and a common lock case is a short-lived
> shared lock, it makes perfect sense. Fewer readers are blocked waiting
> on writers at any given time. Readers can 'cut' in line ahead of writers
> within a certain scope (only up to the n
Tom Lane wrote:
Scott Carey writes:
If there is enough lock contention and a common lock case is a short-lived
shared lock, it makes perfect sense. Fewer readers are blocked waiting
on writers at any given time. Readers can 'cut' in line ahead of writers
within a certain scope (