Greg Smith [EMAIL PROTECTED] writes:
On Tue, 26 Jun 2007, Heikki Linnakangas wrote:
How much of the buffer cache do you think we should try to keep clean? And
how large a percentage of the buffer cache do you think has usage_count=0 at
any given point in time?
What I discovered is that
It might be worth backpatching the Makefile.global.in patch (ie, the
ifndef addition) to the 8.2 branch, which would allow us to say 8.2.5
or later instead of 8.3 or later, and would bring correspondingly
nearer the time when people can actually use the feature without
thinking much. Comments?
On 6/26/07, ITAGAKI Takahiro [EMAIL PROTECTED] wrote:
Hi,
I'm testing the HOT patches, applied to CVS HEAD.
Thanks a lot for your tests. I am posting a revised patch on -patches.
Please use that for further testing.
In the last few days, many people have reviewed the patch including
Simon,
Am Dienstag, 26. Juni 2007 16:12 schrieb Tom Lane:
True. OK, then let's add the ifndef to Makefile.global and change the
existing extension makefiles to
PG_CONFIG := pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
Any objections?
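For illustration, a minimal extension makefile following that pattern might look like the sketch below (the module name is hypothetical, not from this thread). Since PG_CONFIG is a plain make variable, pointing the build at a particular installation is then just a matter of overriding it on the command line.

```makefile
MODULES = my_extension          # hypothetical module name

PG_CONFIG := pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
```

Usage against a specific installation would be, e.g., make PG_CONFIG=/usr/local/pgsql/bin/pg_config install.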
Yes. I think that
But why do you need them to be different at all? Just make it
russian Russian_Russia
russian ru_RU
Does that not work for some reason?
I'd like configuration names to be unique. So, if a user sets the GUC variable or
calls a function with a configuration's name, then postgres should not have
Tom Lane wrote:
We could possibly re-allow that (see the comments in AlterTable()) but
it seems like an ugly and inefficient technique that we shouldn't be
encouraging. (The implications for system catalog bloat alone seem
enough reason to not recommend this.) Isn't there a cleaner way to
Dear Peter,
What was the problem with just making all uses of pg_config in
Makefile.global use a hardcoded bindir directly?
Because bindir is given by pg_config :-)
ISTM that the underlying issue, which was not foreseen in the initial pgxs
and fixed later, is that some distributions use a
Hello everyone,
I have created a new data type mychar. How can I specify a limit for it?
This (unlimited version) works fine:
create table table_a(col_a mychar);
This gives an error:
create table table_a(col_a mychar(10));
ERROR: syntax error at or near ( at character 34
LINE 1: create table
Fabien COELHO [EMAIL PROTECTED] writes:
What was the problem with just making all uses of pg_config in
Makefile.global use a hardcoded bindir directly?
Because bindir is given by pg_config :-)
ISTM that the underlying issue, which was not foreseen in the initial pgxs
and fixed later, is
Peter Eisentraut [EMAIL PROTECTED] writes:
Am Dienstag, 26. Juni 2007 16:12 schrieb Tom Lane:
PG_CONFIG := pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
Any objections?
Yes. I think that solution is wrong. It merely creates other possibilities
Tom Lane [EMAIL PROTECTED] writes:
I don't really see why it's overkill.
Well I think it may be overkill in that we'll be writing out buffers that
still have a decent chance of being hit again. Effectively what we'll be doing
in the approximated LRU queue is writing out any buffer that
Gregory Stark [EMAIL PROTECTED] writes:
If we find it's overkill then what we should consider doing is raising
BM_MAX_USAGE_COUNT. That's effectively tuning the percentage of the lru chain
that we decide we try to keep clean.
Yeah, I don't believe anyone has tried to do performance testing for
On Wed, Jun 27, 2007 at 02:08:43PM +0200, Michael Enke wrote:
Hello everyone,
I have created a new data type mychar. How can I specify a limit for it?
This (unlimited version) works fine:
create table table_a(col_a mychar);
What you want is called user-defined typmod and I don't believe any
Stephen Frost wrote:
* Florian Pflug ([EMAIL PROTECTED]) wrote:
Gregory Stark wrote:
All that really has to happen is that dblink should by default not be
callable by any user other than Postgres. DBAs should be required to
manually run GRANT EXECUTE ON dblink_connect(text) TO public; if
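That kind of lockdown is just privilege configuration. A sketch of what a DBA would run, assuming a hypothetical "trusted_role" (the role name is not from this thread):

```sql
-- Lock dblink down: remove the default public execute privilege.
REVOKE EXECUTE ON FUNCTION dblink_connect(text) FROM PUBLIC;
-- Re-open it selectively; "trusted_role" is a hypothetical role.
GRANT EXECUTE ON FUNCTION dblink_connect(text) TO trusted_role;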
In GiST, for each new data type we support we're expected to provide
(among other things) a function to determine whether a query is
consistent with a particular index entry (given an operator/
strategy). I haven't been able to figure out when the query value
being passed (arg 1 in the
Martijn van Oosterhout wrote:
On Wed, Jun 27, 2007 at 02:08:43PM +0200, Michael Enke wrote:
Hello everyone,
I have created a new data type mychar. How can I specify a limit for it?
This (unlimited version) works fine:
create table table_a(col_a mychar);
What you want is called user defined
Joshua D. Drake wrote:
Martijn van Oosterhout wrote:
On Wed, Jun 27, 2007 at 02:08:43PM +0200, Michael Enke wrote:
Hello everyone,
I have created a new data type mychar. How can I specify a limit
for it?
This (unlimited version) works fine:
create table table_a(col_a mychar);
What
Michael Enke wrote:
My primary goal is to get quasi-numeric ordering on a text column, e.g.
1
2
10
Normal order with varchar would be
1
10
2
You don't need a custom type for that. A custom operator class with
custom comparison operators is enough.
--
Heikki Linnakangas
EnterpriseDB
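The comparison such an operator class would need can be sketched quickly. This is only an illustration of the quasi-numeric ordering in Python (the real support function would be written in SQL or C; the function name here is made up):

```python
import re

def natural_key(s):
    """Split a string into alternating text and digit runs,
    converting the digit runs to ints so they compare numerically."""
    return [int(part) if part.isdigit() else part
            for part in re.split(r"(\d+)", s)]

values = ["1", "10", "2"]
print(sorted(values))                   # plain text order
print(sorted(values, key=natural_key))  # quasi-numeric order: 1, 2, 10
```

The key turns "10" into ['', 10, ''], so 2 sorts before 10 even though "10" sorts before "2" as text.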
On Wed, Jun 27, 2007 at 09:32:13AM -0700, Eric wrote:
In GiST, for each new data type we support we're expected to provide
(among other things) a function to determine whether a query is
consistent with a particular index entry (given an operator/
strategy). I haven't been able to figure out
On Wed, Jun 27, 2007 at 11:37:01AM +0200, Manera, Villiam wrote:
To better explain my problem I attach one of my functions that is easy
to understand.
For each of my products I must have one main supplier and I may also
have some secondary suppliers.
Therefore for each of my articles
I
* Florian Pflug ([EMAIL PROTECTED]) wrote:
Stephen Frost wrote:
Uh, have the admin create appropriate views.
I meant letting them use it to connect to arbitrary databases and hosts, not
executing only predefined queries. My wording wasn't clear in that regard,
though.
Perhaps I wasn't clear.
Is there a way within the existing installation mechanisms to capture
the files generated by an execution of make, compiling from source,
that are copied to their locations during the make install? For
example, I had considered executing a make -n install, capturing that
output, and turning it
Is anyone currently working on this TODO item?
During index creation, pre-sort the tuples to improve build speed
http://archives.postgresql.org/pgsql-hackers/2007-03/msg01199.php
A few of us would like to tackle it and see if we can add some value here.
Tom,
Shreya
[EMAIL PROTECTED] wrote:
Is anyone currently working on this TODO item?
During index creation, pre-sort the tuples to improve build speed
http://archives.postgresql.org/pgsql-hackers/2007-03/msg01199.php
A few of us would like to tackle it and see if we can add some value here.
That
Doug Knight wrote:
Is there a way within the existing installation mechanisms to capture
the files generated by an execution of make, compiling from source,
that are copied to their locations during the make install? For
example, I had considered executing a make -n install, capturing that
On Wed, 27 Jun 2007, Gregory Stark wrote:
I was seeing 90% dirty+usage_count>0 in the really ugly spots.
You keep describing this as ugly but it sounds like a really good situation to
me. The higher that percentage the better your cache hit ratio is.
If your entire buffer cache is mostly
Heikki Linnakangas [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] wrote:
Is anyone currently working on this TODO item?
During index creation, pre-sort the tuples to improve build speed
If you want to work on hash indexes, though, this TODO item seems more
important to me at least:
Add WAL
On Wed, Jun 27, 2007 at 08:36:54PM -0400, Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] wrote:
Is anyone currently working on this TODO item?
During index creation, pre-sort the tuples to improve build speed
If you want to work on hash indexes, though,
Tom,
That sounds good, but there are corner cases where it wouldn't work ---
consider a page containing a single maximum-length tuple.
Certainly any mature upgrade-in-place tool will require a checker, run
first, that determines whether you have a prohibitive corner case.
Besides, I
Greg Smith [EMAIL PROTECTED] wrote:
If your entire buffer cache is mostly filled with dirty buffers with high
usage counts, you are in for a long wait when you need new buffers
allocated and your next checkpoint is going to be traumatic.
Do you need to increase shared_buffers in such a case?