On Sun, Jan 25, 2009 at 2:54 PM, Thomas Finneid wrote:
> Scott Marlowe wrote:
>>
>> On Sun, Jan 25, 2009 at 1:14 AM, Thomas Finneid wrote:
>>>
>>> Scott Marlowe wrote:
>> So I don't think you've found the cause of your problem with the smaller
>> index.
>
> Ok I understand, but why don't you think the index is the problem?
Greg Smith wrote:
> I'm not sure what is going on with your system, but the advice showing
> up earlier in this thread is well worth heeding here: if you haven't
> thoroughly proven that your disk setup works as expected on simple I/O
> tests such as dd and bonnie++, you shouldn't be running pgbench
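
As a rough illustration of the kind of baseline check being suggested
here (this sketch is not from the thread; the file path, block size,
and file size are assumptions, and a real dd or bonnie++ run remains
the better tool), something in the spirit of
"dd if=/dev/zero of=/tmp/ddtest bs=8k count=..." can be approximated in
Python:

    # Crude sequential-write throughput check. Path and sizes are
    # illustrative; the test file should be larger than RAM so the
    # OS page cache doesn't flatter the result.
    import os, time

    PATH = "/tmp/ddtest"       # assumed scratch location
    BLOCK = 8 * 1024           # 8 kB blocks, same as PostgreSQL pages
    TOTAL = 2 * 1024 ** 3      # 2 GB total

    buf = b"\0" * BLOCK
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(TOTAL // BLOCK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # force the data out to disk
    elapsed = time.time() - start
    print("%.1f MB/s sequential write" % (TOTAL / elapsed / 1024 ** 2))
    os.remove(PATH)

If that number is far below what the drive should sustain, there is no
point benchmarking the database on top of it.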
Scott Marlowe wrote:
> On Sun, Jan 25, 2009 at 1:14 AM, Thomas Finneid wrote:
>> Scott Marlowe wrote:
> So I don't think you've found the cause of your problem with the smaller
> index.
Ok I understand, but why don't you think the index is the problem?
If so, I did the test with both indexes on exac
On Sun, Jan 25, 2009 at 2:21 PM, A B wrote:
> So, the eternal problem with what hardware to buy. I really miss a
> hardware buying guide for database servers now that I'm about to buy
> one.
> Some general guidelines mixed with ranked lists of what hardware is
> best, shouldn't that be on the wiki?
So, the eternal problem with what hardware to buy. I really miss a
hardware buying guide for database servers now that I'm about to buy
one.
Some general guidelines mixed with ranked lists of what hardware is
best, shouldn't that be on the wiki?
This is of course very difficult to advise a
On Sun, 25 Jan 2009, M. Edward (Ed) Borasky wrote:
> I started out running pgbench on the same machine but just
> moved the driver to another one trying to get better results.
That normally isn't necessary until you get to the point where you're
running thousands of transactions per second. The
[snip]
I'm actually doing some very similar testing and getting very similar
results. My disk is a single Seagate Barracuda 7200 RPM SATA (160 GB).
The OS is openSUSE 11.1 (2.6.27 kernel) with the "stock" PostgreSQL
8.3.5 RPM. I started out running pgbench on the same machine but just
moved the driver to another one trying to get better results.
Greg Smith wrote:
> On Thu, 22 Jan 2009, Alvaro Herrera wrote:
>
>> Also, I think you should set the "scale" in the prepare step (-i) at
>> least as high as the number of clients you're going to use. (I dimly
>> recall some recent development in this area that might mean I'm wrong.)
>
> The idea
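
For background on Alvaro's suggestion: pgbench -i creates one
pgbench_branches row per unit of scale, and every standard pgbench
transaction updates one branch row, so with more clients than branches
the clients queue up on row locks. A rough Python/psycopg2 sketch of
that hot-row effect (the connection string, table name, and counts are
all assumptions for illustration, not anything from the thread):

    # Hot-row contention demo: with spread=False every client updates
    # the same row (like clients > scale); with spread=True each client
    # gets its own row (like scale >= clients). DSN, table name, and
    # counts are illustrative assumptions.
    import threading, time
    import psycopg2

    DSN = "dbname=test"        # assumed connection string
    CLIENTS, UPDATES = 8, 200

    setup = psycopg2.connect(DSN)
    cur = setup.cursor()
    cur.execute("DROP TABLE IF EXISTS branches")
    cur.execute("CREATE TABLE branches (bid int PRIMARY KEY, bal int)")
    cur.execute("INSERT INTO branches "
                "SELECT g, 0 FROM generate_series(1, %s) g", (CLIENTS,))
    setup.commit()
    setup.close()

    def worker(bid):
        conn = psycopg2.connect(DSN)
        c = conn.cursor()
        for _ in range(UPDATES):
            c.execute("UPDATE branches SET bal = bal + 1 WHERE bid = %s",
                      (bid,))
            conn.commit()
        conn.close()

    def run(spread):
        threads = [threading.Thread(target=worker,
                                    args=(i + 1 if spread else 1,))
                   for i in range(CLIENTS)]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("spread=%s: %.1f s" % (spread, time.time() - start))

    run(False)   # everyone fights over branch 1
    run(True)    # one branch per client

The spread=False run measures lock queueing rather than anything about
the disk, which is exactly the distortion the scale advice avoids.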
On Sun, 25 Jan 2009, Gregory Stark wrote:
> da...@lang.hm writes:
>> they currently have it do a backup immediately on power loss (which is a
>> safe choice as the contents won't be changing without power), but it then
>> powers off (which is not good for startup time afterwards)
> So if you have a situat
da...@lang.hm writes:
> they currently have it do a backup immediately on power loss (which is a
> safe choice as the contents won't be changing without power), but it then
> powers off (which is not good for startup time afterwards)
So if you have a situation where it's power cycling rapidly e
On Sun, Jan 25, 2009 at 1:14 AM, Thomas Finneid wrote:
> Scott Marlowe wrote:
>>
>> I wrote a
>> simple test case for this and on a table with 100,000 entries already
>> in it, then inserting 10,000 in a transaction and 10,000 outside of a
>> transaction, I get insert rates of 0.1 ms and 0.5 ms respectively.
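
A test along those lines might look like the following sketch
(Python/psycopg2; the connection string, table, and row counts are
assumptions, and the absolute numbers will depend on the disk, since
the gap is mostly the per-commit fsync being paid once per row instead
of once per batch):

    # Timing per-row INSERTs inside one transaction vs. one commit per
    # row. Connection string and table name are illustrative assumptions.
    import time
    import psycopg2

    conn = psycopg2.connect("dbname=test")   # assumed DSN
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS t")
    cur.execute("CREATE TABLE t (id serial PRIMARY KEY, v int)")
    conn.commit()

    N = 10000

    start = time.time()
    for i in range(N):                       # one transaction, one commit
        cur.execute("INSERT INTO t (v) VALUES (%s)", (i,))
    conn.commit()
    print("in txn:     %.3f ms/insert" % ((time.time() - start) * 1000.0 / N))

    conn.autocommit = True                   # each INSERT commits by itself
    start = time.time()
    for i in range(N):
        cur.execute("INSERT INTO t (v) VALUES (%s)", (i,))
    print("autocommit: %.3f ms/insert" % ((time.time() - start) * 1000.0 / N))
    conn.close()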
Scott Marlowe wrote:
> Also, what other kinds of usage patterns are going on?
For this test there was nothing else going on; it was just that one
writer. The complete usage pattern is that there is one writer that
writes this data, about 2 rows per second, and then a small number
of reader