Joseph Shraibman [EMAIL PROTECTED] writes:
My question: why can't it go through the first index, get a list of
pages in the table, then go through the second index, union the result
with the results from the first index, and then go into the table?
See TODO list ...
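For readers landing here later: the plan shape being asked about is what later PostgreSQL releases (8.1 and up) implement as bitmap index scans combined with a BitmapOr node. A hedged sketch, with a hypothetical table and indexes:

```sql
-- Assumes a table "orders" with separate indexes on customer_id and
-- product_id (both names are illustrative, not from the thread).
-- Each index scan produces a bitmap of matching heap pages; the
-- bitmaps are OR-ed together, and the table itself is then visited
-- only once per matching page.
SELECT *
FROM orders
WHERE customer_id = 42
   OR product_id = 7;
```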
Joseph Shraibman [EMAIL PROTECTED] writes:
I have a table that is never updated, only INSERTED into. Is there a way
I can prevent vacuum from wasting time on this table
What makes you think vacuum is wasting much time on this table? AFAICS
it will only update any unfixed hint bits ...
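A quick way to check how much work vacuum actually does on such a table (the table name below is hypothetical):

```sql
-- VACUUM VERBOSE reports pages scanned and tuples removed per table.
-- On an insert-only table there should be few or no removable tuples.
VACUUM VERBOSE append_only_log;
```

If the output shows zero removable tuples, vacuum's pass over this table is mostly just settling hint bits, as noted above.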
On Fri, 30 Apr 2004, Gary Doades wrote:
I should have also pointed out that MSSQL reported that same index scan
as taking 65% of the overall query time. It was just faster. The
overall query took 103ms in MSSQL.
Are your results based on a single client accessing the database and no
Hi
I'd LOVE to contribute on this but I don't have vmstat and I'm not running
linux.
How can I help?
Regards
On Wed, 28 Apr 2004, Robert Creager wrote:
Date: Wed, 28 Apr 2004 18:57:53 -0600
From: Robert Creager [EMAIL PROTECTED]
To: Josh Berkus [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED],
On Thu, 29 Apr 2004 13:36:47 -0400, Rod Taylor [EMAIL PROTECTED] wrote:
The reason for the function is that the sort routines (hash aggregation
included) will not stop in mid-sort
Good point.
Servus
Manfred
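A concrete illustration of that point (the query is hypothetical): even with a LIMIT, a sort-based plan must consume its entire input before the first row can be returned, because the rows that sort first may arrive last.

```sql
-- The Sort node below has to read the whole table before LIMIT can
-- return anything; it cannot stop mid-sort.
EXPLAIN SELECT * FROM orders ORDER BY amount DESC LIMIT 10;
```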
Rob,
I would like to see the same, as I have a system that exhibits the same
behavior
on a production db that's running 7.4.1.
If you checked the thread follow-ups, you'd see that *decreasing*
spins_per_delay was not beneficial. Instead, try increasing it, one step
at a time:
(take
I guess the real question is, why maintain index flags and not simply
drop the index entry altogether?
A more interesting case would be to have the backend process record
index tuples that it would invalidate (if committed), then on commit
send that list to a garbage collection process.
It's also entirely possible your indices are using inaccurate
statistical information. Have you ANALYZEd recently?
In this example the statistics don't matter. The plans used were the same for
MSSQL and Postgres. I was trying to eliminate the difference in plans
between the two, which
On 29 Apr 2004 at 13:54, Josh Berkus wrote:
Gary,
It's also quite possible that MSSQL simply has a more efficient index
scanning implementation than we do. They've certainly had incentive; their
storage system sucks big time for random lookups and they need those fast
indexes.
(just
Gary Doades [EMAIL PROTECTED] writes:
In this example the statistics don't matter.
Don't they?
A prior poster mentioned that he thought MSSQL tries to keep all its
indexes in memory. I wonder whether you are giving Postgres a fair
chance to do the same. What postgresql.conf settings are you
Vitaly,
I am in the middle of going through them all; so far, disabling
enable_mergejoin has really helped.
In that case, your random_page_cost is probably too low. Check the ratio of
per-tuple times on index vs. seqscan seeks.
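A hedged sketch of that check, with a stand-in query: rather than leaving enable_mergejoin off permanently, re-enable it and nudge random_page_cost while comparing EXPLAIN ANALYZE output.

```sql
SET enable_mergejoin = on;    -- undo the workaround
SET random_page_cost = 3.0;   -- default is 4.0; this value is illustrative
EXPLAIN ANALYZE
SELECT o.*
FROM orders o
JOIN customers c ON c.id = o.customer_id;  -- stand-in for the problem query
```

The per-tuple actual times in the EXPLAIN ANALYZE output show which cost estimate is off.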
--
-Josh Berkus
Aglio Database Solutions
San Francisco
Having picked out an index scan as being the highest time user I
concentrated on that in this case and compared the same index scan on
MSSQL. At least MSSQL reported it as an index scan on the same index
for the same number of rows.
I should have also pointed out that MSSQL reported
How does the analyzer/planner deal with rows clustered together? Does
it just assume that if this col is clustered on then the actual data
will be clustered? What if the data in the table happens to be close
together because it was inserted together originally?
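What the planner actually consults here is the per-column correlation statistic that ANALYZE gathers: it measures how well the physical row order tracks the column's value order, so data that merely happened to be inserted in order looks the same to the planner as a CLUSTERed table. It can be inspected directly (table name hypothetical):

```sql
-- correlation near 1.0 or -1.0 means rows are physically ordered by
-- this column; near 0 means they are scattered.
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'orders';
```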
Josh Berkus [EMAIL PROTECTED] writes:
Certainly the fact that MSSQL is essentially a single-user database makes
things easier for them.
Our recent testing (cf the Xeon thread) says that the interlocking we
do to make the world safe for multiple backends has a fairly high cost
(at least on some
Joseph Shraibman wrote:
I have a table that is never updated, only INSERTED into. Is there a way
I can prevent vacuum from wasting time on this table besides vacuuming
each table in the db by itself and omitting this table?
How feasible would it be to have a marker somewhere in pg that is
updated
Or even better, an offset into the data table for the earliest deleted
row, so if you have a table where you update each row shortly after
insert and then never touch it, vacuum can skip most of the table
(inserts are done at the end of the table, right?)
Inserts are done at the end of the
On Mon, 26 Apr 2004 08:08:16 -0700, Sailesh Krishnamurthy
[EMAIL PROTECTED] wrote:
A Bi-Level Bernoulli Scheme for Database Sampling
Peter Haas, Christian Koenig (SIGMOD 2004)
Does this apply to our problem? AFAIK with Bernoulli sampling you don't
know the sample size in advance.
Anyway,
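The property questioned above is easy to see in a one-line sketch (table name hypothetical): with Bernoulli sampling, each row is kept independently with probability p, so the sample size is binomially distributed rather than fixed in advance.

```sql
-- Keeps each row with probability 0.01; the number of rows returned
-- varies from run to run.
SELECT * FROM orders WHERE random() < 0.01;
```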