On Fri, Jan 8, 2010 at 2:23 PM, Tom Lane wrote:
>> If the other plan does turn out to be faster (and I agree with Tom
>> that there is no guarantee of that), then one thing to check is
>> whether seq_page_cost and random_page_cost are set too high. If the
>> data is all cached, the default values
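(For context, the point about cached data can be sketched as a session-level experiment; the values below are illustrative, not a recommendation, and `random_page_cost` defaults to 4.0 while `seq_page_cost` defaults to 1.0:)

```sql
-- testing sketch: when the working set is fully cached, random and
-- sequential page fetches cost about the same, so lowering
-- random_page_cost toward seq_page_cost better models reality
SET random_page_cost = 1.1;   -- default is 4.0
SET seq_page_cost = 1.0;      -- default is 1.0
-- then re-run EXPLAIN ANALYZE on the query and compare plans
```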
Robert Haas writes:
> On Fri, Jan 8, 2010 at 1:27 PM, Tom Lane wrote:
>> 11000 index probes aren't exactly free. If they take more than about
>> 1msec apiece, the planner picked the right plan.
> The OP could try setting enable_hashjoin to false (just for testing,
> never for production) and do
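(A sketch of that test, with the query itself elided since it isn't shown in the thread:)

```sql
-- testing-only sketch: force the planner away from a hash join for
-- this session so the alternative plan's real cost can be measured
SET enable_hashjoin = off;   -- never do this in production
EXPLAIN ANALYZE
SELECT ... ;                 -- the original query goes here
RESET enable_hashjoin;       -- restore the default afterwards
```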
On Fri, Jan 8, 2010 at 1:27 PM, Tom Lane wrote:
> Kaloyan Iliev Iliev writes:
>> My question is why the planner didn't do the index scan first on ms_data
>> to reduce the rows to ~ 11000 and then use the PK index on
>> ms_commands_history.
>
> 11000 index probes aren't exactly free. If they take more than about
> 1msec apiece, the planner picked the right plan.
Kaloyan Iliev Iliev writes:
> My question is why the planner didn't do the index scan first on ms_data
> to reduce the rows to ~ 11000 and then use the PK index on
> ms_commands_history.
11000 index probes aren't exactly free. If they take more than about
1msec apiece, the planner picked the right plan.
"Carlo Stonebanks" wrote:
> In order for me to validate that rows would have been updated, I
> had to run a SELECT with the same WHERE clause in PgAdminIII first
> to see how many rows would have qualified. But this was for
> testing purposes only. The SELECT statement does not exist in the
> c
Carlo Stonebanks wrote:
This is hosted on a new server the client set up so I am waiting for
the exact OS and hardware config. PG Version is PostgreSQL 8.3.6,
compiled by Visual C++ build 1400, OS appears to be Windows 2003 x64
Server.
More than anything, I am more concerned with the long-term
Hi,
I have a simple query with two tables.
ms_data ~ 450 rows
ms_commands_history ~ 50 rows
I have done analyze and there are indexes.
My question is why the planner didn't do the index scan first on ms_data
to reduce the rows to ~ 11000 and then use the PK index on
ms_commands_history
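(For what it's worth, the planner's choice and its row estimates can be inspected directly; this is a hypothetical sketch, since the original query isn't shown in the thread, and the join column name `command_id` is invented for illustration:)

```sql
-- sketch: compare the planner's estimated row counts against the
-- actual counts to see why it didn't pick the index-scan-first plan
EXPLAIN ANALYZE
SELECT *
FROM ms_data d
JOIN ms_commands_history h
  ON h.command_id = d.command_id;   -- join column is assumed
```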
On Fri, Jan 08, 2010 at 12:38:46PM -0500, Carlo Stonebanks wrote:
>> I thought that post mentioned that the plan
>> was one statement in an iteration, and that the cache would have
>> been primed by a previous query checking whether there were any rows
>> to update. If that was the case, it might be worthwhile to look at
>> the entire flow of an iteration.
I thought that post mentioned that the plan
was one statement in an iteration, and that the cache would have
been primed by a previous query checking whether there were any rows
to update. If that was the case, it might be worthwhile to look at
the entire flow of an iteration.
This is the only
Carlo Stonebanks wrote:
> Interestingly, the total index size is 148GB, twice that of the table,
> which may be an indication of where the performance bottleneck is.
Maybe a sign for massive index-bloat?
Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.
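(One quick way to check the table-vs-index size ratio mentioned above; the table name `audit_table` is a placeholder, since the real name isn't given in the thread:)

```sql
-- sketch: heap size vs. everything else (indexes + TOAST) for one table;
-- an index footprint much larger than the heap can suggest index bloat
SELECT pg_size_pretty(pg_relation_size('audit_table'))        AS heap_size,
       pg_size_pretty(pg_total_relation_size('audit_table')
                      - pg_relation_size('audit_table'))      AS indexes_and_toast;
```

If bloat is confirmed, REINDEX (or CREATE INDEX CONCURRENTLY plus a swap) can reclaim the space.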
crank it up more and delay the checkpoints as much as possible during
these updates. 64 segments is already 1024M.
We have 425M rows, total table size is 78GB, so we can imagine a worst case
UPDATE write is less than 200 bytes * number of rows specified in the update
(is that logic correct?).
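(The arithmetic above seems plausible: 64 segments * 16 MB is indeed 1024 MB, and the worst case can be checked with a one-liner; the 200-byte figure is the thread's own assumption:)

```sql
-- worked check of the thread's worst-case estimate:
-- 425M rows * ~200 bytes of WAL per updated row
SELECT pg_size_pretty(425000000::bigint * 200);   -- roughly 79 GB
```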
On Fri, Jan 8, 2010 at 11:06 AM, Rui Carvalho wrote:
> Hi
>
> well it's pretty simple
>
> if you want to see if there are elements in common then instead of "any" use
> "&&"
> if you want to see if they are equal just use " = " that will give you true
> or false
you also have the option of expand
Hi
well it's pretty simple
if you want to see if there are elements in common then instead of "any" use
"&&"
if you want to see if they are equal just use " = " that will give you true
or false
you can check array functions in here
http://www.postgresql.org/docs/8.2/static/functions-array.html
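(A minimal sketch of the two operators described above, using literal arrays:)

```sql
-- && tests for overlap: true when the arrays share any element
SELECT ARRAY[1,2,3] && ARRAY[3,4,5];   -- true (3 is shared)
SELECT ARRAY[1,2]   && ARRAY[3,4];     -- false

-- = tests for equality, and is order-sensitive
SELECT ARRAY[1,2] = ARRAY[1,2];        -- true
SELECT ARRAY[1,2] = ARRAY[2,1];        -- false
```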
"Carlo Stonebanks" wrote:
> Already done in an earlier post
Perhaps I misunderstood; I thought that post mentioned that the plan
was one statement in an iteration, and that the cache would have
been primed by a previous query checking whether there were any rows
to update. If that was the case, it might be worthwhile to look at
the entire flow of an iteration.
At 08:17 07/01/2010, Carlo Stonebanks wrote:
>Our DB has an audit table which is 500M rows and growing. (FYI the objects
>being audited are grouped semantically, not individual field values).
>
>Recently we wanted to add a new feature and we altered the table to add a new
>column. We are backfill
Hi,
I've got a Fusion IO disk (actually a HP StorageWorks IO Accelerator)
that I'm going to performance test with postgresql on windows. I'm
guessing people here may be interested in the results.
So, does there exist any simple way to find some... comparable numbers
for a server - should I use pg