j...@commandprompt.com ("Joshua D. Drake") writes:
> On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote:
>> Greg Smith wrote:
>> > Note that not all of the Sandforce drives include a capacitor; I hope
>> > you got one that does! I wasn't aware any of the SF drives with a
>> > capacitor on them
g...@2ndquadrant.com (Greg Smith) writes:
> Yeb Havinga wrote:
>> * What filesystem to use on the SSD? To minimize writes and maximize
>> chance for seeing errors I'd choose ext2 here.
>
> I don't consider there to be any reason to deploy any part of a
> PostgreSQL database on ext2. The potential
On Wed, 2010-08-04 at 22:03 +0300, Hannu Krosing wrote:
> On Wed, 2010-08-04 at 21:41 +0300, Hannu Krosing wrote:
> > On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:
>
> > > regression=# select name, setting from pg_settings where name like
> > > '%cost';
> > > name | setting
On Wed, 2010-08-04 at 15:16 -0400, Greg Smith wrote:
> Hannu Krosing wrote:
> > There was ample space for keeping the indexes in linux cache (it has 1GB
> > cached currently) though the system may have decided to start writing it
> > to disk, so I suspect that most of the time was spent copying ran
Greg Smith wrote:
> Was trying to demonstrate the general ability of pg_stat_bgwriter
> snapshots at points in time to directly measure the buffer
> activity Hannu was theorizing about, not necessarily show a useful
> benchmark of any sort with that.
Ah, OK. Sorry I didn't pick up on that; I
Hannu Krosing wrote:
> This seems to indicate about 1 ms for moving pages over
> user/system boundary. (Intel Core2 Duo T7500 @ 2.20GHz, Ubuntu
> 9.10, 4GB RAM)
Using Greg's test script on a box with two cores like this:
Intel(R) Pentium(R) D CPU 3.40GHz
Linux kgrittn-desktop 2.6.31-22-gener
Hannu Krosing wrote:
> Do you mean "written to disk", or written out from shared_buffers to
> disk cache ?
The latter turns into the former eventually, so both really. The kernel
will do some amount of write combining for you if you're lucky. But not
in all cases; it may decide to write somet
Kevin Grittner wrote:
> Assuming this is real, and not just "in the noise" -- what conclusions
> would you draw from this?
Was trying to demonstrate the general ability of pg_stat_bgwriter
snapshots at points in time to directly measure the buffer activity
Hannu was theorizing about, not necessar
Greg Smith wrote:
> What I do to quantify that is...well, the attached shows it better
> than I can describe; only works on 9.0 or later as it depends on a
> feature I added for this purpose there. It measures exactly how
> much buffer cache churn happened during a test, in this case
> creating
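Greg's snapshot technique can be sketched roughly as follows. This is a minimal illustration, not his attached script, and it assumes the 9.0-era columns of the pg_stat_bgwriter view:

```sql
-- Snapshot pg_stat_bgwriter before the test run.
CREATE TEMP TABLE bgw_before AS SELECT * FROM pg_stat_bgwriter;

-- ... run the workload being measured ...

-- Diff a fresh snapshot against the saved one: the deltas show how much
-- buffer cache churn happened during the test, split by who did the writes.
SELECT cur.buffers_checkpoint - prev.buffers_checkpoint AS checkpoint_writes,
       cur.buffers_clean      - prev.buffers_clean      AS bgwriter_writes,
       cur.buffers_backend    - prev.buffers_backend    AS backend_writes,
       cur.buffers_alloc      - prev.buffers_alloc      AS buffers_allocated
FROM pg_stat_bgwriter cur, bgw_before prev;
```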
On Aug 3, 2010, at 9:27 AM, Merlin Moncure wrote:
>
> 2) I've heard that some SSD have utilities that you can use to query
> the write cycles in order to estimate lifespan. Does this one, and is
> it possible to publish the output (an approximation of the amount of
> work along with this would b
On Aug 2, 2010, at 7:26 AM, Merlin Moncure wrote:
> On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga wrote:
>> After a week testing I think I can answer the question above: does it work
>> like it's supposed to under PostgreSQL?
>>
>> YES
>>
>> The drive I have tested is the $435,- 50GB OCZ Verte
On Jul 26, 2010, at 12:45 PM, Greg Smith wrote:
> Yeb Havinga wrote:
>> I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
>> read/write test. (scale 300) No real winners or losers, though ext2
>> isn't really faster and the manual need for fix (y) during boot makes
>> it i
Hannu Krosing writes:
> There was ample space for keeping the indexes in linux cache (it has 1GB
> cached currently) though the system may have decided to start writing it
> to disk, so I suspect that most of the time was spent copying random
> index pages back and forth between shared buffers and
This time with attachment...
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com www.2ndQuadrant.us
fsync-stress.sh
Description: Bourne shell script
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On Wed, 2010-08-04 at 21:41 +0300, Hannu Krosing wrote:
> On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:
> > regression=# select name, setting from pg_settings where name like '%cost';
> >          name         | setting
> > ----------------------+---------
> >  cpu_index_tuple_cost | 0.005
>
On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:
> Hannu Krosing writes:
> > Of course there are more variables than just *_page_cost, so if you nail
> > down any other one, you may end with less than 1 for both page costs.
>
> > I have always used seq_page_cost = 1 in my thinking and adjusted
Hannu Krosing writes:
> Of course there are more variables than just *_page_cost, so if you nail
> down any other one, you may end with less than 1 for both page costs.
> I have always used seq_page_cost = 1 in my thinking and adjusted others
> relative to it.
Right, seq_page_cost = 1 is sort of
On Wed, 2010-08-04 at 09:14 -0400, Robert Haas wrote:
> On Tue, Aug 3, 2010 at 3:03 AM, Hannu Krosing wrote:
> > In case of fully cached database it is closer to 1.
>
> In the case of a fully cached database I believe the correct answer
> begins with a decimal point.
The number 1 here was sugges
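The cached-database tuning being debated here can be sketched as follows; the values are hypothetical, for illustration only, not a recommendation from the thread:

```sql
-- For a database that fits entirely in RAM, a random page fetch costs about
-- the same as a sequential one, so the gap between the two page costs
-- shrinks toward nothing relative to the cpu_* cost settings.
SET seq_page_cost = 1.0;
SET random_page_cost = 1.1;   -- close to 1, rather than the default 4.0

-- Inspect the result, as in Tom's example upthread.
SELECT name, setting FROM pg_settings WHERE name LIKE '%cost';
```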
Tom Lane wrote:
> Ah. Well, the real fix for that is also in 8.4: we got rid of the
> separate ~=~ operator, so a text_pattern_ops index is now usable
> for plain =.
Nice!
Thanks,
-Kevin
"Kevin Grittner" writes:
> To be clear, though, the problem isn't that it didn't turn a LIKE
> with no wildcard characters into an equality test, it's that it
> would have been three orders of magnitude faster (because of an
> available index with an opclass specification) if it had treated an
> e
Tom Lane wrote:
> "Kevin Grittner" writes:
>> We have been using the C locale for everything at our site, but
>> there is occasionally talk of supporting characters outside the
>> ASCII7 set. In playing around with indexing, to see what the
>> impact of that would be, I stumbled across something
"Kevin Grittner" writes:
> We have been using the C locale for everything at our site, but
> there is occasionally talk of supporting characters outside the
> ASCII7 set. In playing around with indexing, to see what the impact
> of that would be, I stumbled across something which was mildly
> surp
We have been using the C locale for everything at our site, but
there is occasionally talk of supporting characters outside the
ASCII7 set. In playing around with indexing, to see what the impact
of that would be, I stumbled across something which was mildly
surprising.
In the C locale, if you wan
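The index setup under discussion can be sketched like this (hypothetical table and column names). A btree index with the text_pattern_ops opclass supports prefix LIKE searches even in a non-C locale, and per Tom's note, since 8.4 it is also usable for plain equality:

```sql
-- Hypothetical example table.
CREATE TABLE users (name text);

-- text_pattern_ops makes LIKE 'abc%' prefix scans indexable
-- regardless of the database locale.
CREATE INDEX users_name_pattern ON users (name text_pattern_ops);

-- The planner can now use an index scan for a prefix search.
EXPLAIN SELECT * FROM users WHERE name LIKE 'abc%';
```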
Peter Hussey wrote:
> I already had effective_cache_size set to 500MB.
That seems awfully small. You do realize that this setting does not
cause PostgreSQL to allocate any memory; it merely advises how much
disk space is likely to be cached. It should normally be set to the
sum of your share
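As a rough sketch of the sizing rule Kevin is describing (the numbers here are made up for illustration):

```sql
-- effective_cache_size allocates nothing; it only advises the planner how
-- much of the database is likely to be cached. A common rule of thumb is
-- shared_buffers plus the OS page cache reported by tools like free.
SET effective_cache_size = '6GB';   -- e.g. 2GB shared_buffers + ~4GB OS cache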
On Mon, Jul 26, 2010 at 5:26 PM, Gerald Fontenay wrote:
> Thank you for your response. So if I query only my target child table, this
> should be "just like" any other single table wrt planning right? I have
> thousands of these tables. (I suppose that I'm only using inheritance for
> the sake of
On Tue, Aug 3, 2010 at 3:03 AM, Hannu Krosing wrote:
> In case of fully cached database it is closer to 1.
In the case of a fully cached database I believe the correct answer
begins with a decimal point.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
-
On Mon, 2010-08-02 at 14:23 -0700, Peter Hussey wrote:
> I already had effective_cache_size set to 500MB.
>
> I experimented with lowering random_page_cost to 3 then 2.
In case of fully cached database it is closer to 1.
> 2) Why is the setting of work_mem something left to the admin and/or
>
On Tue, 2010-08-03 at 10:40 +0200, Yeb Havinga wrote:
> Please note that the 10% was on a slower CPU. On a more recent CPU the
> difference was 47%, based on tests that ran for an hour.
I am not surprised at all that reading and writing almost twice as much
data from/to disk takes 47% longer. If less
On 29 Jul 2010, at 23:58, Dino Vliet wrote:
> CREATE OR REPLACE FUNCTION agenttype1(a character)
>   RETURNS integer AS
> $$ ... $$  -- function body elided in the original message
>   LANGUAGE 'plpgsql' VOLATILE
>   COST 100;
>
>
> CREATE OR REPLACE FUNCTION agenttype2(a character)
>   RETURNS integer AS
> $$ ... $$  -- function body elided in the original message
>   LANGUAGE 'plpgsql' VOLATILE
>   COST 100;
>
>
> CREATE
We are running 8.3.10 64-bit.
Compare the plans below.
They all do the same thing: delete from a table named work_active (about
500 rows), which is a subset of work_unit (about 50M rows).
I want to introduce range partitions on the work_unit.id column (serial PK), and I
want constraint exclusio
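The partitioning scheme being proposed could be sketched like this for 8.3, where partitioning is done by hand with inheritance and CHECK constraints (table names and range bounds are hypothetical):

```sql
-- Child tables carry CHECK constraints on the id range they hold,
-- so constraint exclusion can skip partitions that cannot match.
CREATE TABLE work_unit_p1 (CHECK (id >= 1        AND id < 10000000))
  INHERITS (work_unit);
CREATE TABLE work_unit_p2 (CHECK (id >= 10000000 AND id < 20000000))
  INHERITS (work_unit);

-- In 8.3 constraint_exclusion is on/off; the 'partition' setting arrives in 8.4.
SET constraint_exclusion = on;
```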
On 07/29/10 2:58 PM, Dino Vliet wrote:
> Dear postgresql list,
> I have some trouble generating data for an analysis task at hand.
> I have a table (table A) containing 5 million records and 28 attributes.
> This table is 461MB if I copy it to a csv file.
> I want to create another
Dear postgresql list,
I have some trouble generating data for an analysis task at hand.
I have a table (table A) containing 5 million records and 28 attributes.
This table is 461MB if I copy it to a csv file.
I want to create another table (table B) based o
Hello,
I'm running PostgreSQL 8 with pgpool-II in a Solaris 10 zone.
After updating to Solaris 10 u7, queries on a timestamp column are very slow.
System: SPARC, 2GB RAM.
This DB is a greylist DB used to fight spam.
500 connections should be easy,
but 16 connections consume 10 sec/query.
On another system (SPARC) it takes only 1 sec.
See detai