Hi all,
When pgfoundry was opened, there was some talk about moving the projects
from gborg there. This has not, to date, happened. Is there any chance
of this happening now, even if only for the specific project? I feel
really bad about releasing a new version of OLE DB, with the news of the
Tom Lane wrote:
I'd go with PlannerState. QueryState for some reason sounds more like
execution-time state.
Well, not to me :) It just makes sense to me that QueryState is the
working state associated with a Query. Not sure it makes a big
difference, though.
Pulling the planner internal stuff
On 19/03/2005 12:57 -0500, Tom Lane wrote:
It's already true that the individual buffers, as opposed to the buffer
descriptors, are allocated only as needed; which makes the overhead
of a large local_buffers setting pretty small if you don't actually do
much with temp tables in a given session.
Markus Bertheau wrote:
On Sat, 19/03/2005 at 12:57 -0500, Tom Lane wrote:
It's already true that the individual buffers, as opposed to the buffer
descriptors, are allocated only as needed; which makes the overhead
of a large local_buffers setting pretty small
On Sat, 2005-03-19 at 12:57 -0500, Tom Lane wrote:
That means we can go ahead with providing a GUC variable to make the
array size user-selectable. I was thinking of calling it either
local_buffers (in contrast to shared_buffers) or temp_buffers
(to emphasize the fact that they're used for
On Sat, 2005-03-19 at 18:53 -0500, Tom Lane wrote:
Currently, in places like heap_drop_with_catalog, we issue a
FlushRelationBuffers() call followed by smgrscheduleunlink().
The latter doesn't actually do anything right away, but schedules
a file unlink to occur after transaction commit.
It
--On Samstag, März 19, 2005 11:05:39 -0500 Tom Lane [EMAIL PROTECTED]
wrote:
Jaime Casanova [EMAIL PROTECTED] writes:
On Fri, 18 Mar 2005 23:31:26 -0500, Tom Lane [EMAIL PROTECTED] wrote:
Why do you not define the problem as when we decide a view is
updateable and create the needed rules for it,
Simon Riggs [EMAIL PROTECTED] writes:
ISTM that buffers belonging to the victim relation would not necessarily
stay in memory.
Right. They'd be unpinned and therefore candidates for being written
out and recycled. So we *might* write them before they are dropped.
That's still better than
Neil Conway [EMAIL PROTECTED] writes:
I've got most of this finished; I'll post a patch soon. One issue I ran
into is how to handle query_tree_mutator() and query_tree_walker(): they
both expect to be able to traverse a Query's in_info_list, which my
patch moves into the QueryState struct.
Markus Bertheau [EMAIL PROTECTED] writes:
It's already true that the individual buffers, as opposed to the buffer
descriptors, are allocated only as needed; which makes the overhead
of a large local_buffers setting pretty small if you don't actually do
much with temp tables in a given session.
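If the proposed GUC goes in, session usage would presumably look something like the following sketch. This assumes the variable ends up spelled temp_buffers (one of the two names floated in the thread) and that it is settable per session; the value and table are invented for illustration:

```sql
-- Sketch only: assumes the GUC is named temp_buffers and is per-session.
SET temp_buffers = 2000;   -- buffer slots for this session's temp tables

CREATE TEMP TABLE scratch (id int, payload text);
-- Since the buffers themselves are allocated lazily, sessions that never
-- touch temp tables pay almost nothing for a large setting.
```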
I'm currently working on consolidating some of the content on the developer
site with the current web code cvs and am wondering what to do with
http://developer.postgresql.org/docs/pgsql/src/tools/backend/index.html.
This link actually comes right out of the postgresql sources, but it is
If only the CVS/Mailing lists are needed, and nothing that is in the
database, then this shouldn't be too hard ... go to pgfoundry, submit for
the new project ... once it is approved, create the various mailing lists
that you have on gborg, and then, *before* you do anything else on either,
I
Robert Treat [EMAIL PROTECTED] writes:
I'm currently working on consolidating some of the content on the developer
site with the current web code cvs and am wondering what to do with
http://developer.postgresql.org/docs/pgsql/src/tools/backend/index.html.
This link actually comes right out
After some further digging, I think we have 3 problems.
1. On Windows gettext wants to hijack printf and friends, as below. This
strikes me as rather unfriendly behaviour by a library header file.
Anyway, mercifully libintl.h is included in our source in exactly one
spot, so I think the thing to
On 7.4 and up you may have to set enable_hashagg = off to force a
Sort/GroupAggregate plan instead of HashAggregate.
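For reference, forcing the planner away from HashAggregate for a comparison run would look something like this (table and column names are invented for illustration):

```sql
SET enable_hashagg = off;  -- planner falls back to Sort + GroupAggregate
EXPLAIN ANALYZE
SELECT category, count(*)
FROM items                 -- hypothetical table/column
GROUP BY category;
RESET enable_hashagg;      -- restore the default for the session
```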
O.k. on FC2 7.4.6 64bit I get:
-
HashAggregate (cost=80.00..82.50
Marc G. Fournier wrote:
If only the CVS/Mailing lists are needed, and nothing that is in the
database, then this shouldn't be too hard ... go to pgfoundry, submit
for the new project ... once it is approved, create the various mailing
lists that you have on gborg, and then, *before* you do
On Sun, 20 Mar 2005, Thomas Hallgren wrote:
Marc G. Fournier wrote:
If only the CVS/Mailing lists are needed, and nothing that is in the
database, then this shouldn't be too hard ... go to pgfoundry, submit for
the new project ... once it is approved, create the various mailing lists
that you
I think having to be on a specific server to get automatic updates on
the front page is the problem. Moving it is not the correct solution. There
are many PostgreSQL-related projects that don't live on pgfoundry, or
even gborg.
Why is this a necessity?
Can't we set up some sort of interface to the
Marc G. Fournier wrote:
On Sun, 20 Mar 2005, Thomas Hallgren wrote:
Marc G. Fournier wrote:
Once I've copied both over, I'll get Chris to mark the gborg project
as being 'disabled' so that nobody will see it over there anymore ...
Ok, I submitted a request for the project under pgfoundry. Same
Shachar Shemesh wrote:
To summarize, just give me read only access to the old project's data
and I'm set.
I second that.
- thomas
Joshua D. Drake [EMAIL PROTECTED] writes:
O.k. I got 7.3.9 to operate as expected on FC2 (64bit) and these are my
results:
enable_hashagg on:
HashAggregate (cost=80.00..82.50 rows=1000 width=404) (actual
time=209.746..209.750 rows=1 loops=1)
You got confused somewhere along the line,
Tom Lane wrote:
That's a bit nasty. I'm fairly sure that I added in_info_list to the
walker recursion because I had to; I don't recall the exact scenario,
but I think it needs to be possible to reassign relation numbers
within that data structure if we are doing it elsewhere in a query
tree.
It
Neil Conway [EMAIL PROTECTED] writes:
Here's one idea to fix this: when planning a Query, transform the Query
into a PlannedQuery. This would essentially be the same as the
QueryState we discussed earlier, except that we would also walk through
the Query and adjust references to nested
Tom Lane [EMAIL PROTECTED] writes:
If your objection is that it's not being maintained, then that is no
solution. Once it's out of the source code CVS it is *guaranteed* to
not get updated to track source-code changes.
Is it possible that we insert some tags (like doc++ does) into source
Tom Lane [EMAIL PROTECTED] writes:
It strikes me that the FlushRelationBuffers call is unnecessary and
causes useless I/O, namely writing out pages into a file that's
about to be deleted anyway. If we simply removed it then any buffers
belonging to the victim relation would stay in memory
One of the RBL lists you (Bruno and Tom) use seems to be poorly maintained. My
new IP address is listed in their database as a dynamic address (it's not, it
even reverse resolves). I've notified this list multiple times that this IP
address is listed improperly and never received any reply. RBL
On Wed, Mar 16, 2005 at 11:20:12PM -0500, Bruce Momjian wrote:
Basically we do open_datasync -> fdatasync -> fsync. This is
empirically what we found to be fastest on most operating systems, and
we default to the first one that exists on the operating system.
Notice we never default to
On PostgreSQL 8.0.0 I've faced a *really* weird behavior.
A simple query (single table - simple function call - no index)
makes the postgres process grow about as much as the memory size required
to keep ALL rows in memory.
The invoked procedure call doesn't leak.
It's IMMUTABLE.
Calls other
Qingqing Zhou [EMAIL PROTECTED] writes:
What if AtEOXact_Inval() fails (though the chance is slim)? Does that mean
that smgrDoPendingDeletes() -> DropRelFileNodeBuffers can never get
executed, which means we can never drop the buffers belonging to the
victim relation without writing them? So when
[EMAIL PROTECTED] (Rod Taylor) wrote:
It's a fairly limited case and by no means removes the requirement for
regular vacuums, but for an update heavy structure perhaps it would be
worthwhile? Even if it could only keep indexes clean, it would help.
The problem that persists with this is that
The problem that persists with this is that it throws in extra
processing at the time that the system is the _most_ busy doing
updates, thereby worsening latency at times when the system may
already be reeling at the load.
I think, as a result, that VACUUM will _have_ to be done
asynchronously.
Tom Lane wrote:
Miroslav Šulc [EMAIL PROTECTED] writes:
seriously, I am far below this level of knowledge. But I can contribute
a test that (maybe) can help. I have rewritten the query so it JOINs the
varchar() fields (in fact all fields except the IDPK) at the last INNER
I have asked him for the data and played with his queries, and obtained
massive speedups with the following queries:
http://boutiquenumerique.com/pf/miroslav/query.sql
http://boutiquenumerique.com/pf/miroslav/query2.sql
http://boutiquenumerique.com/pf/miroslav/materialize.sql
Note that my
On my machine (Laptop with Pentium-M 1.6 GHz and 512MB DDR333) I get the
following timings:
Big joins query with all the fields and no ORDER BY (I just put a SELECT
* on the first table), yielding about 6k rows:
= 12136.338 ms
Replacing the SELECT * from the table with many fields by
In article [EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] writes:
One possible approach is to do the invalidation on a sufficiently coarse
grain that we don't care. For example, I would be inclined to make any
change in a table's schema invalidate all plans that use that table at
all; that
Tom Lane wrote:
So I have some results. I have tested the query on both PostgreSQL 8.0.1
and MySQL 4.1.8 with LIMIT set to 30 and OFFSET set to 6000. PostgreSQL
result is 11,667.916 ms, MySQL result is 448.4 ms.
That's a fairly impressive discrepancy :-(, and even the slot_getattr()
patch
Tom Lane wrote:
Miroslav Šulc [EMAIL PROTECTED] writes:
As there are a lot of varchar(1) in the AdDevicesSites table, wouldn't
it be helpful to change them to char(1)? Would it solve the variable-width
problem at least for some fields and speed the query up?
No, because
Tom Lane wrote:
I wrote:
Since ExecProject operations within a nest of joins are going to be
dealing entirely with Vars, I wonder if we couldn't speed matters up
by having a short-circuit case for a projection that is only Vars.
Essentially it would be a lot like execJunk.c, except able to cope
Miroslav Šulc wrote:
Tom Lane wrote:
...
I think the reason this is popping to the top of the runtime is that the
joins are so wide (an average of ~85 columns in a join tuple according
to the numbers above). Because there are lots of variable-width columns
involved, most of the time the fast path
Tom Lane wrote:
...
I think the reason this is popping to the top of the runtime is that the
joins are so wide (an average of ~85 columns in a join tuple according
to the numbers above). Because there are lots of variable-width columns
involved, most of the time the fast path for field access
Tom Lane [EMAIL PROTECTED] writes:
So it'll get an error ... this scenario doesn't strike me as any worse
than any other problem occurring in post-commit cleanup. The locks left
around by the not-cleaned-up transaction would probably be a bigger
issue, for example.
Yes, the result is
Tom Lane wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
O.k. I got 7.3.9 to operate as expected on FC2 (64bit) and these are my
results:
enable_hashagg on:
HashAggregate (cost=80.00..82.50 rows=1000 width=404) (actual
time=209.746..209.750 rows=1 loops=1)
You got
Tom Lane wrote:
I'd view this as a postmaster state that propagates to backends.
Probably you'd enable it by means of a postmaster option, and the
only way to get out of it is to shut down and restart the postmaster
without the option.

I've created a patch to make a
Hi,
I want to add support in plpgsql for exceptions as they are
supported in Oracle. Mainly I want to add custom exception
support, like in Oracle where we use
EXCEPTION myexp
Can anybody help me?
Regards,
Asif Ali.
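For what it's worth, PL/pgSQL's BEGIN ... EXCEPTION block is the closest existing analogue to Oracle's user-defined exceptions. A minimal sketch (the function name, message text, and sentinel return value are all invented for illustration):

```sql
CREATE FUNCTION check_positive(n integer) RETURNS integer AS $$
BEGIN
    IF n <= 0 THEN
        -- There are no named user-defined conditions yet; a message
        -- string stands in for Oracle's "EXCEPTION myexp".
        RAISE EXCEPTION 'myexp: value must be positive, got %', n;
    END IF;
    RETURN n;
EXCEPTION
    WHEN raise_exception THEN
        RETURN -1;  -- handler for our "custom" exception
END;
$$ LANGUAGE plpgsql;
```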
Harald Fuchs [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
One possible approach is to do the invalidation on a sufficiently coarse
grain that we don't care. For example, I would be inclined to make any
change in a table's schema invalidate all plans that use that table at
Hi,
I want to add support in plpgsql for exceptions as they are
supported in Oracle. Mainly I want to add custom exception
support, like in Oracle where we use
EXCEPTION myexp
Can anybody help me?
Hello