On Wed, Apr 27, 2011 at 8:40 PM, Kevin Grittner wrote:
> Sok Ann Yap wrote:
>> Kevin Grittner wrote:
>
>>> Please show us your overall configuration and give a description
>>> of the hardware (how many of what kind of cores, how much RAM,
>>> what sort of storage system).
>
>> Here's the configuration
Dear all
When database files are on a VxFS filesystem, the VxFS documentation says that
performance can be significantly improved by setting the VX_CONCURRENT cache
advisory on the file. My question is: has anyone tested this?
#include <sys/fs/vx_ioctl.h>  /* assumed header for the VxFS cache-advisory ioctls */
ioctl(fd, VX_SETCACHE, VX_CONCURRENT);  /* fd is an open file descriptor on the data file */
Regards
HSIEN WEN
On Thu, Apr 28, 2011 at 7:23 AM, Kevin Grittner wrote:
> Sok Ann Yap wrote:
>
>> Anyway, the overhead of spawning 44 extra queries means that it is
>> still better off for me to stick with the original query and tune
>> PostgreSQL to choose index scan.
>
> Maybe, but what is *best* for you is to
On Wed, Apr 27, 2011 at 5:37 AM, Kevin Grittner wrote:
> Sok Ann Yap wrote:
>
>> So, index scan wins by a very small margin over sequential scan
>> after the tuning. I am a bit puzzled because index scan is more
>> than 3000 times faster in this case, but the estimated costs are
>> about the same
Joseph Shraibman wrote:
> In a 52 gig table I have, a "select id from table order by id desc limit 1"
> returns instantly, but as soon as you declare a child table
> it tries to seq scan all the tables.
This is probably the limitation that's fixed in PostgreSQL 9.1 by this
commit (following a few
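For readers following along, the shape of the problem is easy to reproduce with a throwaway parent/child pair (the table and index names below are invented for illustration). Before 9.1 the planner cannot combine the per-child indexes to satisfy an ORDER BY ... LIMIT across the inheritance tree, so it falls back to appending and sorting every table:
CREATE TABLE parent (id bigint PRIMARY KEY);
CREATE TABLE child () INHERITS (parent);
CREATE INDEX child_id_idx ON child (id);  -- indexes are not inherited; each child needs its own
EXPLAIN SELECT id FROM parent ORDER BY id DESC LIMIT 1;
-- pre-9.1: Append of full scans plus a Sort; 9.1 can instead merge one row from each index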
On Wed, Apr 27, 2011 at 2:11 PM, Joseph Shraibman wrote:
> On 04/27/2011 04:32 PM, Robert Haas wrote:
> > In the first case, PostgreSQL evidently thinks that using the indexes
> > will be slower than just ignoring them. You could find out whether
>> it's right by trying it with enable_seqscan=off.
Sok Ann Yap wrote:
> Anyway, the overhead of spawning 44 extra queries means that it is
> still better off for me to stick with the original query and tune
> PostgreSQL to choose index scan.
Maybe, but what is *best* for you is to tune PostgreSQL so that your
costs are accurately modeled, at w
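One low-risk way to act on that advice is to adjust the cost settings for a single session and re-check the plan before changing postgresql.conf. The values below are placeholders rather than recommendations, and the query is borrowed from the other thread in this digest as a stand-in:
SHOW random_page_cost;
SHOW effective_cache_size;
SET random_page_cost = 2.0;        -- move toward seq_page_cost when most random reads are cached
SET effective_cache_size = '2GB';  -- a planner estimate only; no memory is allocated
EXPLAIN ANALYZE SELECT * FROM vis WHERE id > 10747;  -- re-check which plan wins and at what cost
RESET random_page_cost;
RESET effective_cache_size;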
Tomas Vondra wrote:
Hmmm, just wondering - what would be needed to build such 'workload
library'? Building it from scratch is not feasible IMHO, but I guess
people could provide their own scripts (as simple as 'set up a bunch
of tables, fill it with data, run some queries') and there's a pile o
On 04/27/2011 04:32 PM, Robert Haas wrote:
> In the first case, PostgreSQL evidently thinks that using the indexes
> will be slower than just ignoring them. You could find out whether
> it's right by trying it with enable_seqscan=off.
My point is that this is just a problem with inherited tables.
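A quick way to run that experiment without affecting other sessions, using the query quoted later in this digest:
BEGIN;
SET LOCAL enable_seqscan = off;  -- heavily penalizes, but does not strictly forbid, seq scans
EXPLAIN ANALYZE SELECT * FROM vis WHERE id > 10747;
ROLLBACK;  -- the SET LOCAL vanishes with the transaction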
On Wed, Apr 27, 2011 at 10:27 PM, Robert Haas wrote:
>
> What if the user is using an SSD or ramdisk?
>
> Admittedly, in many cases, we could probably get somewhat useful
> numbers this way. But I think it would be pretty expensive.
> gettimeofday() is one of the reasons why running EXPLAIN ANALYZE
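The overhead being described is easy to observe from psql by comparing wall-clock times with and without instrumentation (the table is just a stand-in):
\timing on
SELECT count(*) FROM vis;                  -- baseline execution time
EXPLAIN ANALYZE SELECT count(*) FROM vis;  -- adds timer calls around every row at every plan node
\timing off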
On 27.4.2011 20:56, Kevin Grittner wrote:
> Greg Smith wrote:
>
>> The reason no work can be done in this area is because there are
>> no standardized benchmarks of query execution in PostgreSQL being
>> run regularly right now. Bringing up ideas for changing the
>> computation is easy; p
On Fri, Apr 1, 2011 at 2:41 AM, Joseph Shraibman wrote:
> When I do a query on a table with child tables on certain queries pg
> uses indexes and on others it doesn't. Why does this happen? For example:
>
>
> [local]:playpen=> explain analyze select * from vis where id > 10747 ;
>
On Tue, Apr 26, 2011 at 9:49 AM, Claudio Freire wrote:
> On Tue, Apr 26, 2011 at 7:30 AM, Robert Haas wrote:
>> On Apr 14, 2011, at 2:49 AM, Claudio Freire wrote:
>>> This particular factor is not about an abstract and opaque "Workload"
>>> the server can't know about. It's about cache hit rate,
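The planner has no direct knowledge of the hit rate, but it can at least be observed after the fact from the statistics views; a sketch (this only sees shared_buffers hits, so the OS cache remains invisible):
SELECT relname,
       heap_blks_hit,
       heap_blks_read,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_hit + heap_blks_read DESC
LIMIT 10;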
On Tue, Apr 26, 2011 at 8:54 PM, Tomas Vondra wrote:
> I wonder if there are cases where an increase of random_page_cost would
> hurt performance, and another increase would improve it ... And I'm not
> talking about individual queries, I'm talking about overall performance.
I don't think there a
Greg Smith wrote:
> The reason no work can be done in this area is because there are
> no standardized benchmarks of query execution in PostgreSQL being
> run regularly right now. Bringing up ideas for changing the
> computation is easy; proving that such a change is positive on
> enough workl
t...@fuzzy.cz wrote:
> Anyway I'm not an expert in this field, but AFAIK something like this
> already happens - btw that's the purpose of effective_cache_size.
effective_cache_size probably doesn't do as much as you suspect. It is
used for one of the computations for whether an index is small enough
Thanks for all the valuable insights. I decided to drop the idea of adding an
additional column and
will just rely on the Date column for all ordering.
Tom - thanks for the clear answer on the issue I was concerned about.
Maciek, Kevin -
thanks for the ideas and the hint on generate_series() - I will have to go through cpl
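For anyone curious about the generate_series() hint, here is a minimal sketch of the kind of test it enables (the table and column names are invented): load a table with dates, index the Date column, and confirm the ordering can come straight from it with no helper integer column.
CREATE TABLE events (event_date date);
INSERT INTO events (event_date)
SELECT d::date
FROM generate_series('2011-01-01'::date, '2011-12-31'::date, interval '1 day') AS g(d);
CREATE INDEX events_date_idx ON events (event_date);
ANALYZE events;
EXPLAIN SELECT * FROM events ORDER BY event_date DESC LIMIT 10;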
On Thu, Apr 28, 2011 at 12:17 AM, Kevin Grittner wrote:
>
> Dhimant Patel wrote:
>
> > I am a newcomer to the Postgres world and now using it for some
> > serious (at least for me) projects. I have a need where I am
> > running some analytical + aggregate functions on data where
> > ordering is done
Dhimant Patel wrote:
> I am a newcomer to the Postgres world and now using it for some
> serious (at least for me) projects. I have a need where I am
> running some analytical + aggregate functions on data where
> ordering is done on a Date type column.
>
> From my initial read of the documentation I believe
> This makes me wonder: would it do any good to create an additional column of
> Integer type, update it as data gets added, and use this integer column for
> all ordering purposes in my SQL - or should I not hesitate to use the Date type
> directly in my SQL for ordering?
Keep in mind what Mic
Dhimant Patel writes:
> From my initial read of the documentation I believe internally a date type is
> represented by an integer type. This makes me wonder: would it do any
> good to create an additional column of Integer type and update it as data gets
> added and use this integer column for all
Hi All,
I am a newcomer to the Postgres world and now using it for some serious (at
least for me) projects. I have a need where I am running some analytical +
aggregate functions on data where ordering is done on a Date type column.
From my initial read of the documentation I believe internally a date type
Sok Ann Yap wrote:
> Kevin Grittner wrote:
>> Please show us your overall configuration and give a description
>> of the hardware (how many of what kind of cores, how much RAM,
>> what sort of storage system).
> Here's the configuration (this is just a low end laptop):
> version | PostgreSQL
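For reference, a compact way to pull the version line and the settings most relevant to this thread out of a running server (the list of names is only a sample):
SELECT version();
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size',
               'random_page_cost', 'seq_page_cost', 'cpu_tuple_cost')
ORDER BY name;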
On Wed, Apr 27, 2011 at 9:22 AM, Claudio Freire wrote:
> The problem there, I think, is that the planner is doing a full join,
> instead of a semi-join.
Or, rather, computing cost as if it was a full join. I'm not sure why.
On Wed, Apr 27, 2011 at 3:04 AM, Merlin Moncure wrote:
> The very first thing to check is effective_cache_size and to set it to
> a reasonable value.
>
The problem there, I think, is that the planner is doing a full join,
instead of a semi-join.