[HACKERS] How to make the user-defined function or type being accessed in the global database server?

2011-02-20 Thread 页公倒龙
Hello, I defined a user-defined function and type in database A as user
Root. When I tried to access that function and type from another database B as
the same user Root, I found that I couldn't. How can I make a user-defined
function or type accessible in every database on the server?


I am looking forward to hearing from you soon. Many thanks.


Yours sincerely

Re: [HACKERS] How to make the user-defined function or type being accessed in the global database server?

2011-02-20 Thread Heikki Linnakangas

On 20.02.2011 08:03,  wrote:

Hello, I defined a user-defined function and type in database A as user
Root. When I tried to access that function and type from another database B as
the same user Root, I found that I couldn't. How can I make a user-defined
function or type accessible in every database on the server?


You can't. The user-defined function and type need to be created 
separately in each database.


You can, however, create them in the 'template1' database *before* 
creating the other databases. That way they are automatically copied into 
each database you create afterwards. See 
http://www.postgresql.org/docs/9.0/interactive/manage-ag-templatedbs.html.
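Heikki's template1 approach can be sketched like this (the names `mytype`, `myfunc`, and database `b` are illustrative, and the `\c` lines are psql meta-commands):

```sql
-- Create the shared objects in template1 first.
\c template1
CREATE TYPE mytype AS (a integer, b text);
CREATE FUNCTION myfunc(x integer) RETURNS integer
    AS $$ SELECT x + 1 $$ LANGUAGE sql;

-- A database created afterwards starts as a copy of template1,
-- so it already contains mytype and myfunc.
CREATE DATABASE b;
\c b
SELECT myfunc(41);
```

Note that databases created *before* the objects were added to template1 are unaffected; in those, the objects still have to be created by hand.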


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Void binary patch

2011-02-20 Thread Radosław Smogura
Just a patch adding the missing send/recv procedures for the void type.

Regards,
Radek
diff --git a/src/backend/utils/adt/pseudotypes.c b/src/backend/utils/adt/pseudotypes.c
index d9329f8..614eb98 100644
--- a/src/backend/utils/adt/pseudotypes.c
+++ b/src/backend/utils/adt/pseudotypes.c
@@ -212,7 +212,20 @@ void_out(PG_FUNCTION_ARGS)
 	PG_RETURN_CSTRING(pstrdup(""));
 }
 
+Datum
+void_recv(PG_FUNCTION_ARGS)
+{
+	PG_RETURN_VOID();
+}
 
+Datum
+void_send(PG_FUNCTION_ARGS)
+{
+	StringInfoData buf;
+
+	pq_begintypsend(&buf);		/* nice machinery to send nothing */
+	PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
+}
 /*
  * trigger_in		- input routine for pseudo-type TRIGGER.
  */
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 0894985..0711474 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -4226,6 +4226,11 @@ DATA(insert OID = 2502 (  anyarray_recv		   PGNSP PGUID 12 1 0 0 f f f t f s 1 0
 DESCR("I/O");
 DATA(insert OID = 2503 (  anyarray_send		   PGNSP PGUID 12 1 0 0 f f f t f s 1 0 17 2277 _null_ _null_ _null_ _null_	anyarray_send _null_ _null_ _null_ ));
 DESCR("I/O");
+DATA(insert OID = 3120 (  void_recv		   PGNSP PGUID 12 1 0 0 f f f t f s 1 0 2278 2281 _null_ _null_ _null_ _null_ void_recv _null_ _null_ _null_ ));
+DESCR("I/O");
+DATA(insert OID = 3121 (  void_send		   PGNSP PGUID 12 1 0 0 f f f t f s 1 0 17 2278 _null_ _null_ _null_ _null_	void_send _null_ _null_ _null_ ));
+DESCR("I/O");
+
 
 /* System-view support functions with pretty-print option */
 DATA(insert OID = 2504 (  pg_get_ruledef	   PGNSP PGUID 12 1 0 0 f f f t f s 2 0 25 26 16 _null_ _null_ _null_ _null_	pg_get_ruledef_ext _null_ _null_ _null_ ));
diff --git a/src/include/catalog/pg_type.h b/src/include/catalog/pg_type.h
index 0f7312e..9baed6c 100644
--- a/src/include/catalog/pg_type.h
+++ b/src/include/catalog/pg_type.h
@@ -615,7 +615,7 @@ DATA(insert OID = 2276 ( any			PGNSP PGUID  4 t p P f t \054 0 0 0 any_in any_ou
 #define ANYOID			2276
 DATA(insert OID = 2277 ( anyarray		PGNSP PGUID -1 f p P f t \054 0 0 0 anyarray_in anyarray_out anyarray_recv anyarray_send - - - d x f 0 -1 0 0 _null_ _null_ ));
 #define ANYARRAYOID		2277
-DATA(insert OID = 2278 ( void			PGNSP PGUID  4 t p P f t \054 0 0 0 void_in void_out - - - - - i p f 0 -1 0 0 _null_ _null_ ));
+DATA(insert OID = 2278 ( void			PGNSP PGUID  4 t p P f t \054 0 0 0 void_in void_out void_recv void_send - - - i p f 0 -1 0 0 _null_ _null_ ));
 #define VOIDOID			2278
 DATA(insert OID = 2279 ( trigger		PGNSP PGUID  4 t p P f t \054 0 0 0 trigger_in trigger_out - - - - - i p f 0 -1 0 0 _null_ _null_ ));
 #define TRIGGEROID		2279
diff --git a/src/include/utils/builtins.h b/src/include/utils/builtins.h
index 8392be6..8652ba0 100644
--- a/src/include/utils/builtins.h
+++ b/src/include/utils/builtins.h
@@ -504,6 +504,8 @@ extern Datum anyenum_in(PG_FUNCTION_ARGS);
 extern Datum anyenum_out(PG_FUNCTION_ARGS);
 extern Datum void_in(PG_FUNCTION_ARGS);
 extern Datum void_out(PG_FUNCTION_ARGS);
+extern Datum void_recv(PG_FUNCTION_ARGS);
+extern Datum void_send(PG_FUNCTION_ARGS);
 extern Datum trigger_in(PG_FUNCTION_ARGS);
 extern Datum trigger_out(PG_FUNCTION_ARGS);
 extern Datum language_handler_in(PG_FUNCTION_ARGS);



Re: [HACKERS] using a lot of maintenance_work_mem

2011-02-20 Thread Devrim GÜNDÜZ
On Wed, 2011-02-16 at 23:24 +0200, Peter Eisentraut wrote:
 
  But before expending time on that, I'd want to see some evidence that
  it's actually helpful for production situations.  I'm a bit dubious
  that you're going to gain much here.
 
 If you want to build an index on a 500GB table and you have 1TB RAM,
 then being able to use more than 1GB of maintenance_work_mem can only
 be good, no?

That would also probably speed up Slony (or similar) replication engines
in initial replication phase. I know that I had to wait a lot while
creating big indexes on a machine which had enough ram.
-- 
Devrim GÜNDÜZ
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz


signature.asc
Description: This is a digitally signed message part


[HACKERS] Cannot *start* server because of a typo in pg_hba.conf

2011-02-20 Thread Devrim GÜNDÜZ

Yesterday I spent about half an hour digging into an issue on my laptop -- a
9.0 instance did not start, and there was nothing in the logs. Eventually I
straced the postmaster on start and noticed an extra character in
pg_hba.conf, which caused the postmaster not to start.

Is it possible to write something to a (startup) log file mentioning
the syntax error in the hba file?

(FWIW, I saw the same issue in a Greenplum 4.0.3 instance 2 months
before or so.)

-- 
Devrim GÜNDÜZ
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [HACKERS] Sync Rep v17

2011-02-20 Thread Florian Pflug
On Feb20, 2011, at 08:12 , Jaime Casanova wrote:
 considering that synchronous_replication to on means that we *want*
 durability, and that synchronous_commit to off means we don't *care*
 about durability. Then the real question here is: in the presence of
 synchronous_replication to on (which means we want durability), are we
 allowed to assume we can lose data?

From that angle, shouldn't we turn synchronous_replication=on into a third
possible state of synchronous_commit?

We'd then have

synchronous_commit=off #Same as now
synchronous_commit=local #Same as synchronous_commit=on,
 #synchronous_replication=off
synchronous_commit=standby #Same as synchronous_commit=on,
   #synchronous_replication=on
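Florian's proposal amounts to replacing two booleans with one three-valued setting. A minimal sketch of what that might look like internally (the enum and function names here are hypothetical, not taken from any patch):

```c
#include <assert.h>

/* Hypothetical three-state replacement for the synchronous_commit
 * boolean, folding in synchronous_replication as a third level.
 * Ordered from weakest to strongest durability guarantee. */
typedef enum SyncCommitLevel
{
    SYNC_COMMIT_OFF,        /* commit without waiting for local WAL flush */
    SYNC_COMMIT_LOCAL,      /* wait for local WAL flush only */
    SYNC_COMMIT_STANDBY     /* wait for local flush and standby ack */
} SyncCommitLevel;

/* With an ordered enum, "do we wait for the standby?" becomes a
 * simple comparison instead of a check of two separate GUCs. */
static int
wait_for_standby(SyncCommitLevel level)
{
    return level >= SYNC_COMMIT_STANDBY;
}
```

One appeal of this shape is that the conflicting combination Jaime worries about (synchronous_replication=on with synchronous_commit=off) simply cannot be expressed.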

 one way to manage that is simply disallow that combination with an
 error, maybe that is a bit strict but we haven't to make assumptions;
 the other option is to keep safe which means keep durability so if you
 want to risk some data then you should disable synchronous_replication
 as well as synchronous_commit... maybe sending a message to the log
 when you detect the conflicting situation.

The question is where we'd raise the error, though. Doing it on GUC
assignment makes the behaviour depend on assignment order. That's a
bad idea I believe, since it possibly breaks ALTER ROLE/DATABASE SET ...

best regards,
Florian Pflug




Re: [HACKERS] Update PostgreSQL shared memory usage table for 9.0?

2011-02-20 Thread Kevin Grittner
Bruce Momjian  wrote:
 
 This documentation
 
This page was missed in the SSI documentation update.  I don't want
to get distracted by it today, but we'll need to get this done for
9.1.  I guess I need to grep the sgml files for other mentions of
max_locks_per_transaction for other possible misses.
 
-Kevin



Re: [HACKERS] using a lot of maintenance_work_mem

2011-02-20 Thread Bruce Momjian
Devrim GÜNDÜZ wrote:
 On Wed, 2011-02-16 at 23:24 +0200, Peter Eisentraut wrote:
  
   But before expending time on that, I'd want to see some evidence that
   it's actually helpful for production situations.  I'm a bit dubious
   that you're going to gain much here.
  
  If you want to build an index on a 500GB table and you have 1TB RAM,
  then being able to use more than 1GB of maintenance_work_mem can only
  be good, no?
 
 That would also probably speed up Slony (or similar) replication engines
 in initial replication phase. I know that I had to wait a lot while
 creating big indexes on a machine which had enough ram.

Well, I figure it will be hard to allow larger maximums, but can we make
the GUC variable maximums be more realistic?  Right now it is
MAX_KILOBYTES (INT_MAX).
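For scale: on builds with a 64-bit size_t, MAX_KILOBYTES is defined as INT_MAX kilobytes (the definition Bruce is quoting), which puts the nominal GUC ceiling just under 2 TB. A quick sketch of the arithmetic (the macro here mirrors my reading of guc.h; on 32-bit builds the limit is INT_MAX/1024 instead):

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* Mirrors the 64-bit branch of MAX_KILOBYTES in src/include/utils/guc.h. */
#define MAX_KILOBYTES_SKETCH ((int64_t) INT_MAX)

/* Nominal ceiling of a memory GUC such as maintenance_work_mem, in bytes. */
static int64_t
guc_ceiling_bytes(void)
{
    return MAX_KILOBYTES_SKETCH * 1024;   /* just under 2 TB */
}
```

So the declared maximum promises roughly 2 TB, far beyond what some code paths can actually exploit, which is the mismatch being discussed.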

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] work_mem / maintenance_work_mem maximums

2011-02-20 Thread Bruce Momjian
Josh Berkus wrote:
 
  Is this a TODO?  Can we easily fix the tuplesort.c code?
 
 Easily, no.  But that's not a reason for it to not be a TODO.
 
 I, too, would like to be able to make use of 32GB of work_mem effectively.

[ repost to the right thread.]

Well, I figure it will be hard to allow larger maximums, but can we make
the GUC variable maximums be more realistic?  Right now it is
MAX_KILOBYTES (INT_MAX).

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] using a lot of maintenance_work_mem

2011-02-20 Thread Bernd Helmle



--On 20. Februar 2011 09:32:02 -0500 Bruce Momjian br...@momjian.us wrote:


Well, I figure it will be hard to allow larger maximums, but can we make
the GUC variable maximums be more realistic?  Right now it is
MAX_KILOBYTES (INT_MAX).


This is something I proposed some time ago, too. At least, it will stop us 
from promising something which maintenance_work_mem is not able to deliver.


--
Thanks

Bernd



Re: [HACKERS] using a lot of maintenance_work_mem

2011-02-20 Thread Bernd Helmle



--On 20. Februar 2011 15:48:06 +0100 Bernd Helmle maili...@oopsware.de 
wrote:



Well, I figure it will be hard to allow larger maximums, but can we make
the GUC variable maximums be more realistic?  Right now it is
MAX_KILOBYTES (INT_MAX).


This is something I proposed some time ago, too. At least, it will stop
us from promising something which maintenance_work_mem is not able to
deliver.


Hmm, on further reflection a better option might be to just document this 
behavior in more detail. I could imagine that giving maintenance_work_mem 
a hard upper limit would break countless SQL scripts, where it was 
set just high enough in the hope of a speed increase...


--
Thanks

Bernd



Re: [HACKERS] using a lot of maintenance_work_mem

2011-02-20 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Well, I figure it will be hard to allow larger maximums, but can we make
 the GUC variable maximums be more realistic?  Right now it is
 MAX_KILOBYTES (INT_MAX).

You seem to be confusing one limitation in one code path with the
overall meaning of maintenance_work_mem.

regards, tom lane



Re: [HACKERS] Update PostgreSQL shared memory usage table for 9.0?

2011-02-20 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Bruce Momjian wrote:
 Can someone update the PostgreSQL shared memory usage table for 9.0?
 
 http://www.postgresql.org/docs/9.0/static/kernel-resources.html#SYSVIPC
 
 Right now it says "Approximate shared memory bytes required (as of
 8.3)".

 This documentation still says as of 8.3. If they are unchanged, can I
 just remove the version mention?

Mentioning the version was intentional, so that people could gauge how
obsolete the information might be.  I certainly think it'd be
inappropriate to remove that caution flag right now, when the SSI patch
has likely blown those numbers all to heck.

regards, tom lane



Re: [HACKERS] Cannot *start* server because of a typo in pg_hba.conf

2011-02-20 Thread Tom Lane
Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= dev...@gunduz.org writes:
 Yesterday I spent about half an hour digging into an issue on my laptop -- a
 9.0 instance did not start, and there was nothing in the logs. Eventually I
 straced the postmaster on start and noticed an extra character in
 pg_hba.conf, which caused the postmaster not to start.

 Is it possible to write something to a (startup) log file mentioning
 the syntax error in the hba file?

That information would certainly be logged somewhere.  Maybe you looked
in the wrong log file?  Or you're using one of those configurations
where early output to stderr goes to /dev/null?

regards, tom lane



Re: [HACKERS] SR standby hangs

2011-02-20 Thread Greg Stark
On Fri, Feb 18, 2011 at 6:59 PM, Andrew Dunstan amduns...@nc.rr.com wrote:
 The server is running as a warm standby, and the client's application tries
 to connect to both the master and the slave, accepting whichever lets it
 connect (hence hot standby is not turned on).
...
   #2  0x005de645 in LockBufferForCleanup () at bufmgr.c:2432
   #3  0x00463733 in heap_xlog_clean (lsn=value optimized out,

Hm, if hot standby isn't turned on why are we bothering with locks at
all? But that said is there *anything* else happening in the database
aside from recovery? Are there any errors in the database log?

But this still shouldn't block. It's either blocked locking the buffer
or blocked waiting for the buffer to become unpinned. It would be nice
to get a backtrace from a debugging build which wouldn't have some of
the functions inlined. It would be really nice to see the pin count on
the buffer in question -- perhaps it has gotten out of sync or
underflowed?

Hmm I wonder, does Tom's bootstrapping hack to allow the
postmaster to use the real user and database tables cause pages in
those tables to become pinned? What happens if it plays a cleanup
recovery record on one of those tables while someone is trying to log
in?


-- 
greg



Re: [HACKERS] Cannot *start* server because of a typo in pg_hba.conf

2011-02-20 Thread Devrim GÜNDÜZ
On Sun, 2011-02-20 at 10:39 -0500, Tom Lane wrote:
 
  Is it possible to drop something to (startup) log file and mention
  about the syntax error in hba file?
 
 That information would certainly be logged somewhere.  Maybe you
 looked in the wrong log file?  Or you're using one of those
 configurations where early output to stderr goes to /dev/null? 

I tried to start postmaster with:

* pg_ctl -D data start
* postmaster -D data
* pg_ctl -D data -l logfile start

None of them logged anything to anywhere. You can reproduce it on your
machine, too.
-- 
Devrim GÜNDÜZ
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [HACKERS] Cannot *start* server because of a typo in pg_hba.conf

2011-02-20 Thread Tom Lane
Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= dev...@gunduz.org writes:
 On Sun, 2011-02-20 at 10:39 -0500, Tom Lane wrote:
 That information would certainly be logged somewhere.  Maybe you
 looked in the wrong log file?  Or you're using one of those
 configurations where early output to stderr goes to /dev/null? 

 None of them logged anything to anywhere. You can reproduce it on your
 machine, too.

No, I can't.  I get something like this on stderr:

LOG:  invalid connection type "zlocal"
CONTEXT:  line 82 of configuration file 
/home/postgres/testversion/data/pg_hba.conf
FATAL:  could not load pg_hba.conf

regards, tom lane



Re: [HACKERS] Cannot *start* server because of a typo in pg_hba.conf

2011-02-20 Thread Devrim GÜNDÜZ
On Sun, 2011-02-20 at 12:45 -0500, Tom Lane wrote:
 
  None of them logged anything to anywhere. You can reproduce it on
 your
  machine, too.
 
 No, I can't.  I get something like this on stderr:
 
 LOG:  invalid connection type "zlocal" 

Err, please try putting an extra char before the first comment line,
like:

w# PostgreSQL Client Authentication Configuration File

-- 
Devrim GÜNDÜZ
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [HACKERS] Cannot *start* server because of a typo in pg_hba.conf

2011-02-20 Thread Thom Brown
2011/2/20 Devrim GÜNDÜZ dev...@gunduz.org:
 On Sun, 2011-02-20 at 12:45 -0500, Tom Lane wrote:

  None of them logged anything to anywhere. You can reproduce it on
 your
  machine, too.

 No, I can't.  I get something like this on stderr:

 LOG:  invalid connection type "zlocal"

 Err, please try putting an extra char before the first comment line,
 like:

 w# PostgreSQL Client Authentication Configuration File

That mistake looks familiar ;)

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935



Re: [HACKERS] Cannot *start* server because of a typo in pg_hba.conf

2011-02-20 Thread Tom Lane
Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= dev...@gunduz.org writes:
 On Sun, 2011-02-20 at 12:45 -0500, Tom Lane wrote:
 No, I can't.  I get something like this on stderr:
 
  LOG:  invalid connection type "zlocal" 

 Err, please try putting an extra char before the first comment line,
 like:

 w# PostgreSQL Client Authentication Configuration File

Given exactly that mistake, I get

LOG:  invalid connection type "w"
CONTEXT:  line 1 of configuration file 
/home/postgres/testversion/data/pg_hba.conf
FATAL:  could not load pg_hba.conf

Are you sure you're looking in the right place?

One thought that comes to mind is that I've seen cases where SELinux
prevented the postmaster from writing anything at all to its stderr.
I didn't think that could still happen in any current SELinux versions,
but it might be worth checking the kernel log for avc denials.

regards, tom lane



Re: [HACKERS] SQL/MED - file_fdw

2011-02-20 Thread Tom Lane
Shigeru HANADA han...@metrosystems.co.jp writes:
 [ 20110218-file_fdw.patch ]

I've adjusted this to fit the extensions infrastructure and the
committed version of the FDW API patch, and applied it.

 * You might forget some combinations of options, or unspecified options, in
 file_fdw_validator().
 For example, the format == NULL or !csv && header cases. I've not tested all
 cases, but please recheck the validations used in BeginCopy().

 Right, I've revised validation based on BeginCopy(), and added
 regression tests about validation.

This approach struck me as entirely unmaintainable.  I modified the core
COPY code to allow its option validation code to be called directly.

 If so, we need an alternative
 solution in estimate_costs(). We adjust statistics with the runtime relation
 size in estimate_rel_size(). Also, we use get_rel_data_width() for tables
 that have not been analyzed. We could use a similar technique in file_fdw, too.

 Ah, using get_relation_data_width(), the exported version of
 get_rel_data_width(), seems to help the estimation.  I'll look into
 it a little more.  By the way, is adding ANALYZE support for foreign
 tables a reasonable idea for this issue?

I did some quick hacking so that the numbers are at least a little bit
credible, but of course without ANALYZE support the qualification
selectivity estimates are likely to be pretty bogus.  I am not sure
whether there's much of a use-case for supporting ANALYZE though.
I would think that if someone is going to read the same file in multiple
queries, they'd be far better off importing the data into a real table.
In any case, it's too late to worry about that for 9.1.  I suggest
waiting to see what sort of real-world usage file_fdw gets before we
worry about whether it needs ANALYZE support.

regards, tom lane



Re: [HACKERS] Fix for fuzzystrmatch

2011-02-20 Thread Tom Lane
Alexander Korotkov aekorot...@gmail.com writes:
 Hacker,
 I found two issues in fuzzystrmatch contrib.
 1) Incorrect s_data shift in levenshtein calculation with threshold with
 multibyte characters. i index was used instead of start_column.
 2) Missing dependency of fuzzystrmatch.o on levenshtein.c

 Patch is attached.

Applied, thanks.

regards, tom lane



Re: [HACKERS] pg_basebackup and wal streaming

2011-02-20 Thread Dimitri Fontaine
Hi,

Magnus Hagander mag...@hagander.net writes:
 Better late than never (or?), here's the final cleanup of
 pg_streamrecv for moving into the main distribution, per discussion
 back in late dec or early jan. It also includes the stream logs in
 parallel to backup part that was not completed on pg_basebackup.

And that's something I've been so interested in!  It's only fair
that I spend time reviewing it after insisting on having it :)

The programs (pg_basebackup and pg_receivexlog) both work as expected,
and show in the pg_stat_replication system view.

pg_basebackup's -x option should be revised so that it's easier to tell
the difference between streaming WAL while the base backup is ongoing
and fetching it at the end, which risks corrupting the whole
backup as soon as wal_keep_segments is undersized.

  -x, --xlog[=stream]   include required WAL files in backup

It could be --xlog=stream|fetch or something that reads better.

The sent patch includes a binary called pg_receivexlog\(X\), but Magnus
told me on IRC that this is fixed already in his branch (a missing $ at
several places in the Makefile).

Now, on to the code itself.

I wonder if the segment_callback() routine would be better as a while
loop rather than a recursive construct.  Also, it looks like a library
function, but it's doing exit(1)…

Unfortunately I can't comment (or won't risk learning enough details
tonight to try to be smart here) on FindStreamingStart() implementation,
that seems crucial.

 Other than that, the only changes to pg_basebackup are the moving of a
 couple of functions into streamutil.c to make them usable from both,
 and the progress format output fix Fujii-san mentioned recently.

Check.

 Should be complete except for Win32 support (needs thread/fork thing
 for the  background streaming feature. Shouldn't be too hard, and I
 guess that falls on me anyway..) and the reference documentation.

Yeah, StartLogStreamer() is still using fork() at the moment, I guess
you will have to change that prior to commit.  Maybe you can reuse the
code found in src/bin/pg_dump/pg_backup_archiver.c, spawn_restore.


I can't wait to see the new streaming replication setup docs using that
integrated tool.  Even if, barring another step here, we still need to
rely on wal_keep_segments (for switching from the base backup to the
live standby), the critical window is much reduced… and it's now
possible to prepare the standby using a single integrated command line.


Will the server refrain from recycling a WAL file until every receiver's
sent_location is known to be past the positions it contains?
If that's the case, the documentation should talk about pg_receivexlog
as an alternative to archiving, relying on libpq.  If that's not the
case, is there a good reason for it not being the case? (even if
it's not for this patch to fix).

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] using a lot of maintenance_work_mem

2011-02-20 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Tom Lane wrote:
 You seem to be confusing one limitation in one code path with the
 overall meaning of maintenance_work_mem.

 Oh, OK, so sorts are limited, but not hash sizes?  Are there any other
 uses?  Should this be documented somehow?  What is the actual sort
 limit?

The particular complaint that's being made here is about tuplesort.c's
array of SortTuples, which isn't all (or even the largest part) of its
memory consumption.  The tuples themselves eat significantly more in
nearly all cases.  I don't think there's any very easy way to document
what the largest useful maintenance_work_mem for sorting is based on
that --- you'd have to pull a number for tuple size out of the air.
But it's certainly possible to use up lots of gigabytes when sorting
wide tuples.  I think the original complaint in this thread was about
building an index, which probably had relatively small tuples so the
SortTuple constraint was more pressing.

In any case, this is the sort of thing that'd be far better to fix than
document.
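To put a rough number on the SortTuple constraint Tom mentions: a single palloc chunk cannot exceed MaxAllocSize (1 GB - 1), so the array of per-tuple headers caps out well below what a multi-gigabyte maintenance_work_mem could otherwise hold. The struct below is a simplified stand-in for tuplesort.c's SortTuple with assumed LP64 field sizes, not the real definition:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_ALLOC_SIZE  ((int64_t) 0x3fffffff)  /* 1 GB - 1, as MaxAllocSize */

/* Simplified stand-in for tuplesort.c's per-tuple header. */
typedef struct SortTupleSketch
{
    int64_t datum1;    /* first sort key, passed by value (a Datum) */
    int     tupindex;  /* run number during merging */
    void   *tuple;     /* the tuple proper, allocated separately */
} SortTupleSketch;     /* about 24 bytes with LP64 alignment */

/* Upper bound on header-array entries in a single maximum-size allocation. */
static int64_t
max_sort_tuples(void)
{
    return MAX_ALLOC_SIZE / (int64_t) sizeof(SortTupleSketch);
}
```

Under these assumptions the header array tops out around 44 million entries no matter how high maintenance_work_mem is set, while the tuples themselves, as Tom notes, can still consume many more gigabytes.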

regards, tom lane



Re: [HACKERS] review: FDW API

2011-02-20 Thread Tom Lane
I wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Fri, Feb 18, 2011 at 10:47 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 The main downside of that is that relation relkinds would have
 to become fixed (because there would be no practical way of updating
 RTEs in stored rules), which means the convert relation to view hack
 would have to go away.

 That actually sounds like a possible problem, because it's possible to
 create views with circular dependencies using CORV, and they won't
 dump-and-reload properly without that hack.

 Urgh.  That's problematic, because even if we changed pg_dump (which
 would not be that hard I think), we'd still have to cope with dump files
 emitted by existing versions of pg_dump.

 [ thinks a bit ... ]  But we can probably hack our way around that:
 teach the rule rewriter to update relkind in any RTE it brings in from a
 stored rule.  We already do something similar in some other cases where
 a stored parsetree node contains information that could become obsolete.

I did a bit of poking around here, and came to the following
conclusions:

1. We don't want to add another RTEKind.  RTE_RELATION basically has the
semantics of anything with a pg_class OID, so it ought to include
foreign tables.  Therefore the fix ought to be to add relkind to
RangeTblEntry.  (BTW, so far as I can tell, RTE_SPECIAL is obsolete:
there are assorted switch cases that handle it, but no place that can
generate the value.  I'm inclined to delete it while we are messing
with this.)

2. In the current code layout, to make sense of relkind you need to
#include catalog/pg_class.h where the values for relkind are #defined.
I dislike the idea of that being true for a field of such a widely-known
struct as RangeTblEntry.  Accordingly, I suggest that we move those
values into parsenodes.h.  (Perhaps we could convert them to an enum,
too, though still keeping the same ASCII values.)

3. We can have the rewriter update an RTE's stored value of relkind
during AcquireRewriteLocks: it opens the rel for each RTE_RELATION entry
anyway, so copying over the relkind is essentially free.  While it's not
instantly obvious that that is soon enough, I think that it is, since
up to the point of acquiring a lock there we can't assume that the rel
isn't being changed or dropped underneath us --- that is, any earlier
test on an RTE's relkind might be testing just-obsoleted state anyway.

4. I had hoped that we might be able to get rid of some pre-existing
syscache lookups, but at least so far as the parse/plan/execute chain
is concerned, there don't seem to be any other places that need to
fetch a relkind based on just an RTE entry.

So point #4 is a bit discouraging, but other that, this seems like
a fairly straightforward exercise.  I'm inclined to go ahead and do it,
unless there are objections.

regards, tom lane



Re: [HACKERS] Sync Rep v17

2011-02-20 Thread Jaime Casanova
On Sun, Feb 20, 2011 at 7:20 AM, Florian Pflug f...@phlo.org wrote:
 On Feb20, 2011, at 08:12 , Jaime Casanova wrote:
 considering that synchronous_replication to on means that we *want*
 durability, and that synchronous_commit to off means we don't *care*
 about durability. Then the real question here is: in the presence of
 synchronous_replication to on (which means we want durability), are we
 allowed to assume we can lose data?

 From that angle, shouldn't we turn synchronous_replication=on into a third
 possible state of synchronous_commit?

 We'd then have

 synchronous_commit=off #Same as now
 synchronous_commit=local #Same as synchronous_commit=on,
                         #synchronous_replication=off
 synchronous_commit=standby #Same as synchronous_commit=on,
                           #synchronous_replication=on


that would be a little confusing and difficult to document. at least i
know that as soon as we say something like "to activate
synchronous replication, set synchronous commit to standby", users
somehow will have the impression that locally the commit is
asynchronous (ok, I have had bad experiences with Ecuadorian users ;)

 one way to manage that is simply disallow that combination with an
 error, maybe that is a bit strict but we haven't to make assumptions;
 the other option is to keep safe which means keep durability so if you
 want to risk some data then you should disable synchronous_replication
 as well as synchronous_commit... maybe sending a message to the log
 when you detect the conflicting situation.

 The question is where we'd raise the error, though. Doing it on GUC
 assignment makes the behaviour depend on assignment order. That's a
 bad idea I believe, since it possibly breaks ALTER ROLE/DATABASE SET 


well, yeah... maybe it's just too much worthless work... but we can check
before commit and send a NOTICE message

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: PostgreSQL support and training



Re: [HACKERS] using a lot of maintenance_work_mem

2011-02-20 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Well, I figure it will be hard to allow larger maximums, but can we make
  the GUC variable maximums be more realistic?  Right now it is
  MAX_KILOBYTES (INT_MAX).
 
 You seem to be confusing one limitation in one code path with the
 overall meaning of maintenance_work_mem.

Oh, OK, so sorts are limited, but not hash sizes?  Are there any other
uses?  Should this be documented somehow?  What is the actual sort
limit?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Sync Rep v17

2011-02-20 Thread Florian Pflug
On Feb21, 2011, at 00:09 , Jaime Casanova wrote:
 On Sun, Feb 20, 2011 at 7:20 AM, Florian Pflug f...@phlo.org wrote:
 On Feb20, 2011, at 08:12 , Jaime Casanova wrote:
 considering that synchronous_replication to on means that we *want*
 durability, and that synchronous_commit to off means we don't *care*
 about durability. Then the real question here is: in the presence of
 synchronous_replication to on (which means we want durability), are we
 allowed to assume we can lose data?
 
 From that angle, shouldn't we turn synchronous_replication=on into a third
 possible state of synchronous_commit?
 
 We'd then have
 
 synchronous_commit=off #Same as now
 synchronous_commit=local #Same as synchronous_commit=on,
 #synchronous_replication=off
 synchronous_commit=standby #Same as synchronous_commit=on,
   #synchronous_replication=on
 
 
 that would be a little confusing and difficult to document. at least i
 know that as soon as we say something like "to activate
 synchronous replication, set synchronous commit to standby", users
 somehow will have the impression that locally the commit is
 asynchronous (ok, I have had bad experiences with Ecuadorian users ;)


I figured we'd say something like

'To guarantee durability of transactions except in the fail-over case,
set synchronous_commit to local. To additionally guarantee durability
even in the case of a fail-over to a standby node, set synchronous_commit
to standby. This causes a COMMIT to only report success once the commit
record has been acknowledged by the current synchronous standby node, and
will therefore induce a higher latency of COMMIT requests.'
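
Spelled out as configuration, the proposal might look like this (a
hypothetical sketch --- the "standby" value does not exist as of this
writing):

```
synchronous_commit = off      # don't wait for local WAL flush at commit
synchronous_commit = local    # wait for local WAL flush only
synchronous_commit = standby  # also wait for the synchronous standby's ack
```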

Other possible names for the standby option that come to mind are
"replication", "replica" or "replicate". Or maybe "cluster", but that
seems confusing since we sometimes call the collection of databases managed
by one postgres instance a cluster.

best regards,
Florian Pflug




Re: [HACKERS] SSI bug?

2011-02-20 Thread Kevin Grittner
Kevin Grittner wrote:
 
 I'm proceeding on this basis.
 
Result attached. I found myself passing around the tuple xmin value
just about everywhere that the predicate lock target tag was being
passed, so it finally dawned on me that this logically belonged as
part of the target tag. That simplified the changes, and the net
result of following Heikki's suggestion here is the reduction of
total lines of code by 178 while adding coverage for missed corner
cases and fixing bugs.
 
Thanks again, Heikki!
 
I will test this some more tomorrow.  So far I haven't done more than
ensure it passes the standard regression tests and the isolation
tests added for SSI.  The latter took a while because the hash_any
function was including uninitialized bytes past the length of the tag
in its calculations.  We should probably either fix that or document
it.  I had to add another couple bytes to the tag to get it to a four
byte boundary to fix it.  Easy once you know that's how it works...
 
The attached patch can also be viewed here:
 
http://git.postgresql.org/gitweb?p=users/kgrittn/postgres.git;a=commitdiff;h=46fd5ea6728b566c521ec83048bc00a207289dd9
 
If this stands up to further testing, the only issue I know of for
the 9.1 release is to update the documentation of shared memory usage
to include the new structures.
 
-Kevin


ssi-multi-update-1.patch
Description: Binary data



[HACKERS] contrib/unaccent regression test failed

2011-02-20 Thread Itagaki Takahiro
I found that make installcheck for contrib/unaccent on master fails
on my machine; the regression.diffs is attached.

I'm not sure the reason because BuildFarm machines are all OK,
but we might need some checks in addition to database encoding.

* OS:
Linux 2.6.35.10-74.fc14.x86_64 #1 SMP Thu Dec 23 16:04:50 UTC 2010
x86_64 x86_64 x86_64 GNU/Linux
* PG DB:
        Name        |  Owner   | Encoding | Collate | Ctype | Access privileges
--------------------+----------+----------+---------+-------+-------------------
 contrib_regression | postgres | UTF8     | C       | C     |

-- 
Itagaki Takahiro


regression.diffs
Description: Binary data



Re: [HACKERS] Update PostgreSQL shared memory usage table for 9.0?

2011-02-20 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Bruce Momjian wrote:
  Can someone update the PostgreSQL shared memory usage table for 9.0?
  
  http://www.postgresql.org/docs/9.0/static/kernel-resources.html#SYSVIPC
  
  Right now it says Approximate shared memory bytes required (as of
  8.3).
 
  This documentation still says "as of 8.3". If they are unchanged, can I
  just remove the version mention?
 
 Mentioning the version was intentional, so that people could gauge how
 obsolete the information might be.  I certainly think it'd be
 inappropriate to remove that caution flag right now, when the SSI patch
 has likely blown those numbers all to heck.

OK, thanks. I was hoping that either the numbers were stable, or that we
were always going to update them for every major release, but alas, it
seems what we do now is the best option.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Void binary patch

2011-02-20 Thread David Fetter
On Sun, Feb 20, 2011 at 11:20:22AM +0100, Radosław Smogura wrote:
 Just patch for missing procedures for void send/recv
 
 Regards,
 Radek

Thanks! :)

Style note: the project doesn't use // as a comment-to-end-of-line in
C because some supported compilers don't understand it.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] using a lot of maintenance_work_mem

2011-02-20 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Tom Lane wrote:
  You seem to be confusing one limitation in one code path with the
  overall meaning of maintenance_work_mem.
 
  Oh, OK, so sorts are limited, but not hash sizes?  Are there any other
  uses?  Should this be documented somehow?  What is the actual sort
  limit?
 
 The particular complaint that's being made here is about tuplesort.c's
 array of SortTuples, which isn't all (or even the largest part) of its
 memory consumption.  The tuples themselves eat significantly more in
 nearly all cases.  I don't think there's any very easy way to document
 what the largest useful maintenance_work_mem for sorting is based on
 that --- you'd have to pull a number for tuple size out of the air.
 But it's certainly possible to use up lots of gigabytes when sorting
 wide tuples.  I think the original complaint in this thread was about
 building an index, which probably had relatively small tuples so the
 SortTuple constraint was more pressing.
 
 In any case, this is the sort of thing that'd be far better to fix than
 document.
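
For a rough sense of the cap being discussed: palloc allocations are
limited to MaxAllocSize (just under 1 GB), and the SortTuple array must fit
in a single allocation. Assuming a 24-byte SortTuple on a 64-bit build (the
exact size is version-dependent), a back-of-envelope sketch:

```python
# Back-of-envelope: how many SortTuple entries fit in one palloc chunk.
MAX_ALLOC_SIZE = 0x3FFFFFFF      # palloc ceiling, just under 1 GB
SORT_TUPLE_SIZE = 24             # assumed 64-bit SortTuple size

max_sort_tuples = MAX_ALLOC_SIZE // SORT_TUPLE_SIZE
print(max_sort_tuples)           # 44739242, i.e. ~44.7 million entries
```

As the reply above notes, the tuples themselves usually dominate, so this
is only a ceiling on the array of SortTuple headers, not on total sort
memory.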

Added to TODO:

Allow sorts to use more available memory

* http://archives.postgresql.org/pgsql-hackers/2007-11/msg01026.php
* http://archives.postgresql.org/pgsql-hackers/2010-09/msg01123.php
* http://archives.postgresql.org/pgsql-hackers/2011-02/msg01957.php 

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] COPY ENCODING revisited

2011-02-20 Thread Itagaki Takahiro
On Fri, Feb 18, 2011 at 20:12, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
 + extern char *pg_any_to_server(const char *s, int len, int encoding);
 + extern char *pg_server_to_any(const char *s, int len, int encoding);

I applied the version with the additional code for file_fdw.

-- 
Itagaki Takahiro
