Re: [HACKERS] foreign key locks

2013-01-19 Thread Simon Riggs
On 18 January 2013 21:01, Tom Lane t...@sss.pgh.pa.us wrote:
 Andres Freund and...@2ndquadrant.com writes:
 On 2013-01-18 15:37:47 -0500, Tom Lane wrote:
 I doubt it ever came up before.  What use is logging only the content of
 a buffer page?  Surely you'd need to know, for example, which relation
 and page number it is from.

 It only got to be a length of 0 because the data got removed due to
 a logged full page write. And the backup block contains the data about
 which blocks it is logging in itself.

 And if the full-page-image case *hadn't* been invoked, what then?  I
 still don't see a very good argument for xlog records with no fixed
 data.

 I wonder if the check shouldn't just check write_len instead, directly
 below the loop that adds backup blocks.

 We're not changing this unless you can convince me that the read-side
 error check mentioned in the comment is useless.


There's some confusion here, I think. Alvaro wasn't proposing a WAL
record that had no fixed data.

The current way XLogRecData works is that if you put data and buffer
together on the same chunk then we optimise the data away if we take a
backup block.

Alvaro chose what looks like the right way to do this when you have
simple data: use one XLogRecData chunk and let the data part get
optimised away. But doing that results in a WAL record that has a
backup block, plus data of 0 length, which then fails the later check.
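For illustration, a minimal sketch of that single-chunk pattern, written
against the then-current XLogRecData/XLogInsert interface (the helper name and
the rmid/info arguments are invented for the example, not taken from Alvaro's
patch):

#include "access/xlog.h"        /* XLogRecData, XLogInsert */

/*
 * Hypothetical helper: log a small fixed struct together with the buffer it
 * modifies, as a single XLogRecData chunk.  If XLogInsert() decides to take
 * a full-page image of "buffer", the data part is optimised away and the
 * record's main data length becomes zero -- the case being discussed.
 */
static XLogRecPtr
log_simple_change(RmgrId rmid, uint8 info,
                  void *fixed_data, Size len, Buffer buffer)
{
    XLogRecData rdata;

    rdata.data = (char *) fixed_data;   /* record-specific fixed data */
    rdata.len = len;
    rdata.buffer = buffer;              /* ties the data to a possible backup block */
    rdata.buffer_std = true;
    rdata.next = NULL;

    return XLogInsert(rmid, info, &rdata);
}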

All current code gets around this by including multiple XLogRecData
chunks, which results in including additional data that is superfluous
when the backup block is present. Alvaro was questioning this; I
didn't understand that either when he said it, but I do now.

The zero length check should stay, definitely.

What this looks like is that further compression of the WAL is
possible, but given it's alongside backup blocks, that's on the order
of a 1% saving, so probably isn't worth considering right now. The way
to do that would be to include a small token to show the record has been
optimised, rather than leaving it zero length.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Passing connection string to pg_basebackup

2013-01-19 Thread Magnus Hagander
On Fri, Jan 18, 2013 at 1:05 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 On 18.01.2013 13:41, Amit Kapila wrote:

 On Friday, January 18, 2013 3:46 PM Heikki Linnakangas wrote:

 On 18.01.2013 08:50, Amit Kapila wrote:

 I think currently user has no way to specify TCP keepalive settings

 from

 pg_basebackup, please let me know if there is any such existing way?


 I was going to say you can just use keepalives_idle=30 in the
 connection string. But there's no way to pass a connection string to
 pg_basebackup on the command line! The usual way to pass a connection
 string is to pass it as the database name, and PQconnect will expand
 it,
 but that doesn't work with pg_basebackup because it hardcodes the
 database name as replication. D'oh.

 You could still use environment variables and a service file to do it,
 but it's certainly more cumbersome. It clearly should be possible to
 pass a full connection string to pg_basebackup, that's an obvious
 oversight.


 So to solve this problem below can be done:
 1. Support connection string in pg_basebackup and mention keepalives or
 connection_timeout
 2. Support recv_timeout separately to provide a way for users who are not
 comfortable with TCP keepalives

 a. 1 can be done alone
 b. 2 can be done alone
 c. both 1 and 2.


 Right. Let's do just 1 for now. A general application-level, non-TCP
 keepalive message at the libpq level might be a good idea, but that's a much
 larger patch, definitely not 9.3 material.

+1 for doing 1 now. But actually, I think we can just keep it that way
in the future as well. If you need to specify these fairly advanced
options, using a connection string really isn't a problem.
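For what it's worth, here is what "keepalives_idle=30 in the connection
string" means at the libpq level; this is only an illustration of an ordinary
client doing it (host and dbname are placeholders), since the whole point of
the thread is that pg_basebackup currently has no way to accept such a string:

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* keepalives, keepalives_idle, keepalives_interval and keepalives_count
     * are standard libpq connection parameters. */
    PGconn *conn = PQconnectdb(
        "host=db.example.com dbname=postgres "
        "keepalives=1 keepalives_idle=30 keepalives_interval=10 keepalives_count=3");

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));

    PQfinish(conn);
    return 0;
}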

I think it would be more worthwhile to go through the rest of the
tools in bin/ and make sure they *all* support connection strings.
And, an important point,  do it the same way.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Passing connection string to pg_basebackup

2013-01-19 Thread Magnus Hagander
On Fri, Jan 18, 2013 at 2:43 PM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Heikki Linnakangas hlinnakan...@vmware.com writes:
 You could still use environment variables and a service file to do it, but
 it's certainly more cumbersome. It clearly should be possible to pass a full
 connection string to pg_basebackup, that's an obvious oversight.

 FWIW, +1. I would consider it a bugfix (backpatch, etc).

While it's a feature I'd very much like to see, I really don't think
you can consider it a bugfix. It's functionality that was left out -
it's not like we tried to implement it and it didn't work. We pushed
the whole implementation to the next version (and then forgot about
actually putting it in the next version, until now).


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More subtle issues with cascading replication over timeline switches

2013-01-19 Thread Amit kapila
On Friday, January 18, 2013 5:27 PM Heikki Linnakangas wrote:


 Indeed, looking at the pg_xlog, it's not there (I did a couple of extra 
 timeline switches:

 ~/pgsql.master$ ls -l data-master/pg_xlog/
 total 131084
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00010001
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00010002
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00010003
 -rw--- 1 heikki heikki   41 Jan 18 13:38 0002.history
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00020003
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00020004
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00020005
 -rw--- 1 heikki heikki   83 Jan 18 13:38 0003.history
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00030005
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00030006
 drwx-- 2 heikki heikki 4096 Jan 18 13:38 archive_status
 ~/pgsql.master$ ls -l data-standbyB/pg_xlog/
 total 81928
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00010001
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00010002
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00020003
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00020004
 -rw--- 1 heikki heikki   83 Jan 18 13:38 0003.history
 -rw--- 1 heikki heikki 16777216 Jan 18 13:38 00030005
 drwx-- 2 heikki heikki 4096 Jan 18 13:38 archive_status

 This can be thought of as another variant of the same issue that was 
 fixed by commit 60df192aea0e6458f20301546e11f7673c102101. When standby B 
 scans for the latest timeline, it finds it to be 3, and it reads the 
 timeline history file for 3. After that patch, it also saves it in 
 pg_xlog. It doesn't save the timeline history file for timeline 2, 
 because that's included in the history of timeline 3. However, when 
 standby C connects, it will try to fetch all the history files that it 
 doesn't have, including 0002.history, which throws the error.

  Is the file 0002.history really required by standby C for any useful
purpose?
  Can we change the current design such that when standby C connects,
even if some old history file (like 0002.history)
  is not present, it ignores it and continues?

With Regards,
Amit Kapila.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] allowing privileges on untrusted languages

2013-01-19 Thread Simon Riggs
On 19 January 2013 13:45, Kohei KaiGai kai...@kaigai.gr.jp wrote:

 I think it is time to investigate separating database superuser privileges
 into several fine-grained capabilities, as operating systems do.
 https://github.com/torvalds/linux/blob/master/include/uapi/linux/capability.h

 In the case of Linux, the latest kernel has 36 kinds of capabilities that each
 reflect a part of the root privileges, such as the privilege to open a listen
 port below 1024, the privilege to override DAC permissions, and so on. The
 traditional root behaves as a user who has all the capabilities by default.

Sounds like the best way to go. The reasoning that led to that change
works for us as well.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] How to build contrib module separately in PostgreSQL on windows

2013-01-19 Thread 朱冯贶天
Hi,
I changed some code of the cube extension in PostgreSQL to do some experiments.
The problem is that I want to make these code changes take effect, and I am not
sure how to build it on Windows.
After downloading the source code, I entered postgresql-9.2.2\contrib\cube and
typed 'nmake' in a VS2010 command environment. However, the Makefile is not
compatible with VS2010.
Next, I downloaded MinGW and used it to build the cube extension, but that also
failed as above.
What I have found on the Internet so far only covers building contrib modules on
Linux, Unix and other platforms, but not Windows. I know that there is a way to
build the whole PostgreSQL from source code, but it is not convenient.
So, the question is: how do I build a contrib module separately in PostgreSQL on
Windows? Perhaps it is a simple question, but I can't find the answer.
Thank you for any suggestions.

Re: [HACKERS] How to build contrib module separately in PostgreSQL on windows

2013-01-19 Thread Andrew Dunstan

On 01/19/2013 10:09 AM, 朱冯贶天 wrote:
 Hi,

 I change some code of extension cube in PostgreSQL to do some
 experiments.


 The problem is that I want to make these changed code effective, and I
 am not sure how to build it on windows.


 After downloading the source code, I enter the
 postgresql-9.2.2\contrib\cube to type 'nmake' with VS2010 command
 environment. However, the Makefile is not compatible with vs2010.


 Next, I download mingw and use it to build the extension cube. But
 it also failed as above.


 What I have found on Internet so far is just build contrib module on
 Linux , Unix and other platform, but no windows.

 I know that there is a way to build the whole PostgreSQL from source
 code. But it is not convenient.


 So, the question is : How to build contrib module separately in
 PostgreSQL on windows? Perhaps it is a simple question , but I can't
 find the answer.




Either use mingw or use VS2010. You don't use them together.


See http://www.postgresql.org/docs/current/static/install-windows.html

cheers

andrew



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Query to help in debugging

2013-01-19 Thread Bruce Momjian
On Sat, Jan 19, 2013 at 11:20:19AM -0500, Kevin Grittner wrote:
 Bruce Momjian wrote:
 
  I am wondering if we should make this query more widely used, perhaps by
  putting it in our docs about reporting bugs, or on our website.
 
 http://wiki.postgresql.org/wiki/Server_Configuration
 
 http://wiki.postgresql.org/wiki/Guide_to_reporting_problems#Things_you_need_to_mention_in_problem_reports
 
 Feel free to make any adjustments you feel are needed.  :-)

Oh, so we already have it documented.  Great.  I adjusted it slightly to
be clearer.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Query to help in debugging

2013-01-19 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 I am wondering if we should make this query more widely used, perhaps by
 putting it in our docs about reporting bugs, or on our website.

I find the manual exclusion list to be poor style, and not at all
future-proof.  Maybe we could use

select name, setting, source from pg_settings
where source not in ('default', 'override');

This would print a few not-all-that-interesting settings made by initdb,
but not having to adjust the exclusion list for different versions is
easily worth that.  I think the source column is potentially useful when
we're casting this type of fishing net, too.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Contrib PROGRAM problem

2013-01-19 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 *sigh*. Don't post after midnight. Of course, this isn't relevant to a 
 cross-compiling environment, where repeated invocations of make 
 repeatedly build the executables.

 The question is whether we care enough about this case to fix it.

I think we certainly need the $(X) inside the command, so that the
correct files get built.  I'm not especially excited about whether a
repeat invocation of make will do useless work --- that would only
really matter to a PG developer, but who'd do development in a
cross-compilation environment, where testing would be highly
inconvenient?  So I'm prepared to sacrifice that aspect of it for
not cluttering the makefiles.

YMMV of course ...

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] My first patch! (to \df output)

2013-01-19 Thread Phil Sorber
On Jan 19, 2013 10:55 AM, Jon Erdman postgre...@thewickedtribe.net
wrote:


 I did realize that since I moved it to + the doc should change, but I
didn't address that. I'll get on it this weekend.

 As far as the column name and displayed values go, they're taken from the
CREATE FUNCTION syntax, and were recommended by Magnus, Bruce, and Fetter,
who were all sitting next to me day after pgconf.eu Prague. I personally
have no strong feelings either way, I just want to be able to see the info
without having to directly query pg_proc. Whatever you all agree on is fine
by me.

Sounds like you have a +4/-1 on the names then. I'd keep them as is.


Re: [HACKERS] Passing connection string to pg_basebackup

2013-01-19 Thread Dimitri Fontaine
Magnus Hagander mag...@hagander.net writes:
 FWIW, +1. I would consider it a bugfix (backpatch, etc).

 While it's a feature I'd very much like to see, I really don't think
 you can consider it a bugfix. It's functionality that was left out -
 it's not like we tried to implement it and it didn't work. We pushed
 the whole implementation to next version (and then forgot about
 actually putting it in the next version, until now)

Thanks for reminding me about that, I completely forgot about all that.

On the other hand, discrepancies between command line argument
processing in our tools are already not helping our users (even if
pg_dump -d seems to have been fixed over the years); so much so that
I'm having a hard time finding any upside in having a different set of
command line argument capabilities for the same tool depending on the
major version.

We are not talking about a new feature per se, but exposing a feature
that about every other command line tool we ship has. So I think I'm
standing by my position that it should get backpatched as a fix.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Query to help in debugging

2013-01-19 Thread Kevin Grittner
Tom Lane wrote:

 I find the manual exclusion list to be poor style, and not at all
 future-proof. Maybe we could use
 
 select name, setting, source from pg_settings
 where source not in ('default', 'override');
 
 This would print a few not-all-that-interesting settings made by initdb,
 but not having to adjust the exclusion list for different versions is
 easily worth that. I think the source column is potentially useful when
 we're casting this type of fishing net, too.

Done.

-Kevin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: fix corner use case of variadic fuctions usage

2013-01-19 Thread Pavel Stehule
Hello

2013/1/18 Tom Lane t...@sss.pgh.pa.us:
 Stephen Frost sfr...@snowman.net writes:
 [ quick review of patch ]

 On reflection it seems to me that this is probably not a very good
 approach overall.  Our general theory for functions taking ANY has
 been that the core system just computes the arguments and leaves it
 to the function to make sense of them.  Why should this be different
 in the one specific case of VARIADIC ANY with a variadic array?


I am not sure I understand the question well.

The reason why VARIADIC "any" is different from VARIADIC anyarray is
simple - we have only single-type arrays - there is no any[] - and we
use a combination of the FunctionCallInfoData structure for the data and
parser expression nodes for the type specification. And why do we use
VARIADIC "any" functions? Because of our coercion rules - which try to
find a common coercion type for any unknown (polymorphic) parameters.
This coercion can be acceptable - and then people can use VARIADIC
anyarray - or unacceptable - and then people should use VARIADIC "any" -
for example so we would not lose type info for boolean types. The next
motivation for VARIADIC "any" is that the implementation is very simple
and fast - nobody has to do packing and unpacking of an array - just
push values into the nargs, arg and argnull fields of the
FunctionCallInfoData structure. No operations on the data are necessary.
There is only one issue - this corner case.

 The approach is also inherently seriously inefficient.  Not only
 does ExecMakeFunctionResult have to build a separate phony Const
 node for each array element, but the variadic function itself can
 have no idea that those Consts are all the same type, which means
 it's going to do datatype lookup over again for each array element.
 (format(), for instance, would have to look up the same type output
 function over and over again.)  This might not be noticeable on toy
 examples with a couple of array entries, but it'll be a large
 performance problem on large arrays.

Yes, format() actually does that - in all cases. And it is not ideal,
but most of the overhead is masked by using the syscaches.

What is important - for this use case - is that there is a simple and
effective possible optimization: when a variadic "any" function is called
in the non-variadic manner, all variadic parameters share the same type.
Inside the variadic function I have no information that this is the
situation at the moment, but I can just remember the last used type and
reuse it when a parameter's type is the same as the previous parameter's.

So there is no performance problem.
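To sketch what I mean (illustrative only, not the patch; the struct and
function names are invented), the cache can live in fn_extra and the
output-function lookup is repeated only when the argument type changes:

#include "postgres.h"
#include "fmgr.h"
#include "utils/lsyscache.h"

typedef struct LastTypeCache
{
    Oid         typid;          /* InvalidOid until first use */
    Oid         typoutput;      /* output function for typid */
    bool        typisvarlena;
} LastTypeCache;

static char *
variadic_arg_to_cstring(FunctionCallInfo fcinfo, Datum value, Oid argtype)
{
    LastTypeCache *cache = (LastTypeCache *) fcinfo->flinfo->fn_extra;

    if (cache == NULL)
    {
        cache = MemoryContextAllocZero(fcinfo->flinfo->fn_mcxt,
                                       sizeof(LastTypeCache));
        fcinfo->flinfo->fn_extra = cache;
    }

    /* look up the output function only when the argument type changes */
    if (cache->typid != argtype)
    {
        getTypeOutputInfo(argtype, &cache->typoutput, &cache->typisvarlena);
        cache->typid = argtype;
    }

    return OidOutputFunctionCall(cache->typoutput, value);
}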


 The patch also breaks any attempt that a variadic function might be
 making to cache info about its argument types across calls.  I suppose
 that the copying of the FmgrInfo is a hack to avoid crashes due to
 called functions supposing that their flinfo->fn_extra data is still
 valid for the new argument set.  Of course that's another pretty
 significant performance hit, compounding the per-element hit.  Whereas
 an ordinary variadic function that is given N arguments in a particular
 query will probably do N datatype lookups and cache the info for the
 life of the query, a variadic function called with this approach will
 do one datatype lookup for each array element in each row of the query;
 and there is no way to optimize that.


I believe such a cache of the last used datatype and the related function
can be very effective and enough for these possible performance issues.


 But large arrays have a worse problem: the approach flat out does
 not work for arrays of more than FUNC_MAX_ARGS elements, because
 there's no place to put their values in the FunctionCallInfo struct.
 (As submitted, if the array is too big the patch will blithely stomp
 all over memory with user-supplied data, making it not merely a crash
 risk but probably an exploitable security hole.)

Agreed - FUNC_MAX_ARGS should be tested - it is a significant security
bug and should be fixed.
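The guard itself can be as simple as the following sketch (nelems here is a
stand-in for the number of elements in the expanded VARIADIC array; this is
not the actual patch):

    if (nelems > FUNC_MAX_ARGS)
        ereport(ERROR,
                (errcode(ERRCODE_TOO_MANY_ARGUMENTS),
                 errmsg("cannot pass more than %d arguments to a function",
                        FUNC_MAX_ARGS)));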


 This last problem is, so far as I can see, unfixable within this
 approach; surely we are not going to accept a feature that only works
 for small arrays.  So I am going to mark the CF item rejected not just
 RWF.


I disagree - a call in the non-variadic manner should not be used to work
around the FUNC_MAX_ARGS limit. So a big array should not be passed that way.

If somebody needs to pass a big array to a function, then he should use a
usual non-variadic function with a usual array-type parameter. He should
not use a VARIADIC parameter. We didn't design variadic functions for
exceeding the FUNC_MAX_ARGS limit.

So I strongly disagree with rejection for this argument. By contrast, the
fact that we don't check the array size when a variadic function is not
called as a variadic function is a bug.

Any function - variadic or not variadic, in any use case - has to have at
most FUNC_MAX_ARGS arguments. Our internal variadic functions - which are
+/- VARIADIC "any" functions - have the FUNC_MAX_ARGS limit.


 I believe that a workable approach would require having the function
 itself act differently when the VARIADIC flag is set.  Currently there
 

Re: [HACKERS] could not create directory ...: File exists

2013-01-19 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
 I committed this with a couple of changes:

Great, thanks!

 * I used GetLatestSnapshot rather than GetTransactionSnapshot.  Since
 we don't allow these operations inside transaction blocks, there
 shouldn't be much difference, but in principle GetTransactionSnapshot
 is the wrong thing; in a serializable transaction it could be quite old.

Makes sense.
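For reference, a simplified sketch of that pattern (not the committed code)
for a pg_tablespace scan: take a fresh MVCC snapshot, register it, and scan
under it instead of SnapshotNow before doing any filesystem work.

#include "postgres.h"
#include "access/heapam.h"
#include "catalog/pg_tablespace.h"
#include "utils/snapmgr.h"

static void
scan_tablespaces_with_snapshot(void)
{
    Relation    rel = heap_open(TableSpaceRelationId, AccessShareLock);
    Snapshot    snapshot = RegisterSnapshot(GetLatestSnapshot());
    HeapScanDesc scan = heap_beginscan(rel, snapshot, 0, NULL);
    HeapTuple   tuple;

    while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
    {
        /* ... do the filesystem operations implied by this tuple ... */
    }

    heap_endscan(scan);
    UnregisterSnapshot(snapshot);
    heap_close(rel, AccessShareLock);
}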

 * After reviewing the other uses of SnapshotNow in dbcommands.c, I
 decided we'd better make the same type of change in remove_dbtablespaces
 and check_db_file_conflict, because those are likewise doing filesystem
 operations on the strength of what they find in pg_tablespace.

Thanks for that.  I had noticed the other places where we were using
SnapshotNow, but I hadn't run down if they needed to be changed or not.

 I also ended up deciding to back-patch to 8.3 as well.

Excellent, thanks again.

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] My first patch! (to \df output)

2013-01-19 Thread Stephen Frost
* Phil Sorber (p...@omniti.com) wrote:
 Stephen, I think Jon's column name and values make a lot of sense.

a'ight.  I can't think of anything better.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] proposal: fix corner use case of variadic fuctions usage

2013-01-19 Thread Stephen Frost
Pavel,

  While I certainly appreciate your enthusiasm, I don't think this is
  going to make it into 9.3, which is what we're currently focused on.

  I'd suggest that you put together a wiki page or similar which
  outlines how this is going to work and be implemented and it can be
  discussed for 9.4 in a couple months.  I don't think writing any more
  code is going to be helpful for this right now.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Query to help in debugging

2013-01-19 Thread Bruce Momjian
On Sat, Jan 19, 2013 at 12:58:35PM -0500, Kevin Grittner wrote:
 Tom Lane wrote:
 
  I find the manual exclusion list to be poor style, and not at all
  future-proof. Maybe we could use
  
  select name, setting, source from pg_settings
  where source not in ('default', 'override');
  
  This would print a few not-all-that-interesting settings made by initdb,
  but not having to adjust the exclusion list for different versions is
  easily worth that. I think the source column is potentially useful when
  we're casting this type of fishing net, too.
 
 Done.

Here is my very wide output:

            name             |                                         current_setting                                          |        source
-----------------------------+---------------------------------------------------------------------------------------------------+----------------------
 version                     | PostgreSQL 9.3devel on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.4.5-8) 4.4.5, 64-bit  | version()
 application_name            | psql                                                                                              | client
 client_encoding             | UTF8                                                                                              | client
 DateStyle                   | ISO, MDY                                                                                          | configuration file
 default_text_search_config  | pg_catalog.english                                                                                | configuration file
 lc_messages                 | en_US.UTF-8                                                                                       | configuration file
 lc_monetary                 | en_US.UTF-8                                                                                       | configuration file
 lc_numeric                  | en_US.UTF-8                                                                                       | configuration file
 lc_time                     | en_US.UTF-8                                                                                       | configuration file
 log_timezone                | US/Eastern                                                                                        | configuration file
 max_connections             | 100                                                                                               | configuration file
 max_stack_depth             | 2MB                                                                                               | environment variable
 shared_buffers              | 128MB                                                                                             | configuration file
 TimeZone                    | US/Eastern                                                                                        | configuration file
Is there an easy way to wrap the 'version' value to a 40-character width?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: fix corner use case of variadic fuctions usage

2013-01-19 Thread Pavel Stehule
2013/1/19 Stephen Frost sfr...@snowman.net:
 Pavel,

   While I certainly appreciate your enthusiasm, I don't think this is
   going to make it into 9.3, which is what we're currently focused on.

   I'd suggest that you put together a wiki page or similar which
   outlines how this is going to work and be implemented and it can be
   discussed for 9.4 in a couple months.  I don't think writing any more
   code is going to be helpful for this right now.

if we don't define a solution now, then we probably won't define a
solution for 9.4 either. Moving it to the next release solves nothing.
Personally, I can live with committing in 9.4 - people who use it
and need it can use the existing patch - but I would like to have a clean
proposal for this issue, because I spent a lot of time on this
relatively simple issue and I am not happy with it. So I would like to
write down what will (probably) be committed, and I don't want to
return to this open issue again.

Regards

Pavel



 Thanks,

 Stephen


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Passing connection string to pg_basebackup

2013-01-19 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 I don't think that argument holds any water at all.  There would still
 be differences in command line argument capabilities out there ---
 they'd just be between minor versions not major ones.  That's not any
 easier for people to deal with.  And what will you say to someone whose
 application got broken by a minor-version update?

Fair enough, I suppose.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Strange Windows problem, lock_timeout test request

2013-01-19 Thread Andrew Dunstan


On 01/19/2013 02:36 AM, Boszormenyi Zoltan wrote:




Cross-compiling is not really a supported platform. Why don't you 
just build natively? This is known to work as shown by the buildfarm
animals doing it successfully.


Because I don't have a mingw setup on Windows. (Sorry.)



A long time ago I had a lot of sympathy with this answer, but these days 
not so much. Getting a working mingw/msys environment sufficient to 
build a bare bones PostgreSQL from scratch is both cheap and fairly 
easy. The improvements that mingw has made in its install process, and 
the presence of cheap or free windows instances in the cloud combine to 
make this pretty simple.  But since it's still slightly involved, here is
how I constructed one such environment this morning:


 * Create an Amazon t1.micro spot instance of
   Windows_Server-2008-SP2-English-64Bit-Base-2012.12.12 (ami-554ac83c)
   (current price $0.006 / hour)
 * get the credentials and log in using:
 o rdesktop -g 80%  -u Administrator -p 'password' amazon-hostname
 * turn off annoying IE security enhancements, and fire up IE
 * go to
   http://sourceforge.net/projects/mingw/files/Installer/mingw-get-inst/
   and download latest mingw-get-inst
 * run this - make sure to select the Msys and the developer toolkit in
   addition to the C compiler
 * navigate in explorer or a command window to \Mingw\Msys\1.0 and run
   msys.bat
 * run this command to install some useful packages:
 o mingw-get install msys-wget msys-rxvt msys-unzip
 * close that window
 * open a normal command window and run the following to create an
   unprivileged user and open an msys window as that user:
 o net user pgrunner SomePw1234 /add
 o runas /user:pgrunner cmd /c \mingw\msys\1.0\msys.bat --rxvt
 * in the rxvt window run:
 o wget
   http://ftp.postgresql.org/pub/snapshot/dev/postgresql-snapshot.tar.gz
 o tar -z -xf postgresql-snapshot.tar.gz
 o wget
   
http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Automated%20Builds/mingw-w64-bin_i686-mingw_20111220.zip/download;
 o mkdir /mingw64
 o cd /mingw64
 o unzip ~/mingw-w64-bin_i686-mingw_20111220.zip
 o cd ~/postgresql-9.3devel
 o export PATH=/mingw64/bin:$PATH
  o ./configure --host=x86_64-w64-mingw32 --without-zlib && make &&
    make check
 + ( make some coffee and do the crossword or read War and
   Peace - this can take a while)

Of course, for something faster you would pick a rather more expensive 
instance than t1.micro (say, m1.large, which usually has a spot price of 
$0.03 to $0.06 per hour), and if you're going to do this a lot you'll 
stash your stuff on an EBS volume that you can remount as needed,  or 
zip it up and put it on S3, and then just pull it and unpack it in one 
hit from there. And there's barely enough room left on the root file 
system to do what's above anyway. But you can get the idea  from this. 
Note that we no longer need to look elsewhere for extra tools like flex 
and bison as we once did - the ones that come with modern Msys should 
work just fine.


If you want more build features (openssl, libxml, libxslt, perl, python 
etc) then there is extra work to do in getting hold of those. But then 
cross-compiling for those things might not be easy either.


Somebody more adept at automating Amazon than I am should be able to 
automate most or all of this.


Anyway that should be just about enough info for just about any 
competent developer to get going, even if they have never touched 
Windows. (No doubt one or two people will want to quibble with what I've 
done. That's fine - this is a description, not a prescription.)


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: fix corner use case of variadic fuctions usage

2013-01-19 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
 2013/1/18 Tom Lane t...@sss.pgh.pa.us:
 The approach is also inherently seriously inefficient. ...

 What is important - for this use case - there is simple and perfect
 possible optimization - in this case non variadic manner call of
 variadic any function all variadic parameters will share same type.
 Inside variadic function I have not information so this situation is
 in this moment, but just I can remember last used type - and I can
 reuse it, when parameter type is same like previous parameter.

 So there no performance problem.

Well, if we have to hack each variadic function to make it work well in
this scenario, that greatly weakens the major argument for the proposed
patch, namely that it provides a single-point fix for VARIADIC behavior.

BTW, I experimented with lobotomizing array_in's caching of I/O function
lookup behavior, by deleting the if-test at arrayfuncs.c line 184.  That
seemed to make it about 30% slower for a simple test involving
converting two-element float8 arrays.  So while failing to cache this
stuff isn't the end of the world, arguing that it's not worth worrying
about is simply wrong.

 But large arrays have a worse problem: the approach flat out does
 not work for arrays of more than FUNC_MAX_ARGS elements, because
 there's no place to put their values in the FunctionCallInfo struct.
 This last problem is, so far as I can see, unfixable within this
 approach; surely we are not going to accept a feature that only works
 for small arrays.  So I am going to mark the CF item rejected not just
 RWF.

 disagree - non variadic manner call should not be used for walk around
 FUNC_MAX_ARGS limit. So there should not be passed big array.

That's utter nonsense.  Why wouldn't people expect concat(), for
example, to work for large (or even just moderate-sized) arrays?

This problem *is* a show stopper for this patch.  I suggested a way you
can fix it without having such a limitation.  If you don't want to go
that way, well, it's not going to happen.

I agree the prospect that each variadic-ANY function would have to deal
with this case for itself is a tad annoying.  But there are only two of
them in the existing system, and it's not like a variadic-ANY function
isn't a pretty complicated beast anyway.

 You now propose something that you rejected four months ago.

Well, at the time it wasn't apparent that this approach wouldn't work.
It is now, though.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Query to help in debugging

2013-01-19 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Tom Lane wrote:
 select name, setting, source from pg_settings
 where source not in ('default', 'override');

 Here is my very wide output:

Why are you insisting on cramming version() into this?  It could
just as easily be a different query.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Query to help in debugging

2013-01-19 Thread Bruce Momjian
On Sat, Jan 19, 2013 at 03:29:36PM -0500, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Tom Lane wrote:
  select name, setting, source from pg_settings
  where source not in ('default', 'override');
 
  Here is my very wide output:
 
 Why are you insisting on cramming version() into this?  It could
 just as easily be a different query.

I am fine with that:

SELECT  version();
SELECT  name, current_setting(name), source
FROM    pg_settings
WHERE   source NOT IN ('default', 'override');

Output:

test=> SELECT  version();
                                             version
--------------------------------------------------------------------------------------------------
 PostgreSQL 9.3devel on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.4.5-8) 4.4.5, 64-bit
(1 row)

test=> SELECT  name, current_setting(name), source
test-> FROM    pg_settings
test-> WHERE   source NOT IN ('default', 'override');
name|  current_setting   |source
++--
 application_name   | psql   | client
 client_encoding| UTF8   | client
 DateStyle  | ISO, MDY   | configuration file
 default_text_search_config | pg_catalog.english | configuration file
 lc_messages| en_US.UTF-8| configuration file
 lc_monetary| en_US.UTF-8| configuration file
 lc_numeric | en_US.UTF-8| configuration file
 lc_time| en_US.UTF-8| configuration file
 log_timezone   | US/Eastern | configuration file
 max_connections| 100| configuration file
 max_stack_depth| 2MB| environment variable
 shared_buffers | 128MB  | configuration file
 TimeZone   | US/Eastern | configuration file
(13 rows)

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: Doc patch making firm recommendation for setting the value of commit_delay

2013-01-19 Thread Noah Misch
On Wed, Nov 14, 2012 at 08:44:26PM +, Peter Geoghegan wrote:
 http://commit-delay-results-ssd-insert.staticloud.com
 http://commit-delay-stripe-insert.staticloud.com
 http://commit-delay-results-stripe-tpcb.staticloud.com
 http://commit-delay-results-ssd-insert.staticloud.com/19/pg_settings.txt

staticloud.com seems to be gone.  Would you repost these?


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: fix corner use case of variadic fuctions usage

2013-01-19 Thread Pavel Stehule
2013/1/19 Tom Lane t...@sss.pgh.pa.us:
 Pavel Stehule pavel.steh...@gmail.com writes:
 2013/1/18 Tom Lane t...@sss.pgh.pa.us:
 The approach is also inherently seriously inefficient. ...

 What is important - for this use case - there is simple and perfect
 possible optimization - in this case non variadic manner call of
 variadic any function all variadic parameters will share same type.
 Inside variadic function I have not information so this situation is
 in this moment, but just I can remember last used type - and I can
 reuse it, when parameter type is same like previous parameter.

 So there no performance problem.

 Well, if we have to hack each variadic function to make it work well in
 this scenario, that greatly weakens the major argument for the proposed
 patch, namely that it provides a single-point fix for VARIADIC behavior.

 BTW, I experimented with lobotomizing array_in's caching of I/O function
 lookup behavior, by deleting the if-test at arrayfuncs.c line 184.  That
 seemed to make it about 30% slower for a simple test involving
 converting two-element float8 arrays.  So while failing to cache this
 stuff isn't the end of the world, arguing that it's not worth worrying
 about is simply wrong.

 But large arrays have a worse problem: the approach flat out does
 not work for arrays of more than FUNC_MAX_ARGS elements, because
 there's no place to put their values in the FunctionCallInfo struct.
 This last problem is, so far as I can see, unfixable within this
 approach; surely we are not going to accept a feature that only works
 for small arrays.  So I am going to mark the CF item rejected not just
 RWF.

 disagree - non variadic manner call should not be used for walk around
 FUNC_MAX_ARGS limit. So there should not be passed big array.

 That's utter nonsense.  Why wouldn't people expect concat(), for
 example, to work for large (or even just moderate-sized) arrays?

 This problem *is* a show stopper for this patch.  I suggested a way you
 can fix it without having such a limitation.  If you don't want to go
 that way, well, it's not going to happen.


 I agree the prospect that each variadic-ANY function would have to deal
 with this case for itself is a tad annoying.  But there are only two of
 them in the existing system, and it's not like a variadic-ANY function
 isn't a pretty complicated beast anyway.

 You now propose something that you rejected four months ago.

 Well, at the time it wasn't apparent that this approach wouldn't work.
 It is now, though.

I have no problem rewrite patch, I'll send new version early.

Regards

Pavel


 regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-19 Thread Jeff Janes
On Thursday, January 17, 2013, Magnus Hagander wrote:




  Would it help to step up a few developers and create a second line of
  committers? The commits by the second-line committers will still be
  reviewed by the first-line committers before they make it into the product,
  but maybe at a later stage or when we are in beta.


Perhaps this is because I've never needed or played with prepared
transactions; but to me a commit is a commit is a commit.  If it isn't in
the product, then whatever it was was not a commit.

 ..

While we can certainly do that, it would probably help just to have a
 second line of reviewers. Basically a set of more senior reviewers -
 so a patch would go submission - reviewer - senior reviewer -
 committer. With the second line of reviewers focusing more on the
 whole how to do things, etc.


That order seems partially backwards to me.  If someone needs to glance at
a patch and say "That whole idea is never going to work" or, more
optimistically, "you need to be making that change in smgr, not lmgr", it is
probably not going to be a novice reviewer who does that.

Sometime this type of high-level summary review does happen, at the senior
person's whim, but is not a formal part of the commit fest process.

What I don't know is how much work it takes for one of those senior people
to make one of those summary judgments, compared to how much it takes for
them to just do an entire review from scratch.

Cheers,

Jeff





Re: [HACKERS] string escaping in tutorial/syscat.source

2013-01-19 Thread Tom Lane
Josh Kupershmidt schmi...@gmail.com writes:
 It seems the queries in ./src/tutorial/syscat.source use string
 escaping with the assumption that standard_conforming_strings is off,
 and thus give wrong results with modern versions. A simple fix is
 attached.

I tweaked the comments a little bit and committed this as far back
as 9.1.  Thanks!

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree mulation (was xlogreader-v4)

2013-01-19 Thread Steve Singer

On 13-01-09 03:07 PM, Tom Lane wrote:

Andres Freund and...@2ndquadrant.com writes:

Well, I *did* benchmark it as noted elsewhere in the thread, but thats
obviously just machine (E5520 x 2) with one rather restricted workload
(pgbench -S -jc 40 -T60). At least its rather palloc heavy.
Here are the numbers:
before:
#101646.763208 101350.361595 101421.425668 101571.211688 101862.172051 
101449.857665
after:
#101553.596257 102132.277795 101528.816229 101733.541792 101438.531618 
101673.400992
So on my system if there is a difference, its positive (0.12%).

pgbench-based testing doesn't fill me with a lot of confidence for this
--- its numbers contain a lot of communication overhead, not to mention
that pgbench itself can be a bottleneck.  It struck me that we have a
recent test case that's known to be really palloc-intensive, namely
Pavel's example here:
http://www.postgresql.org/message-id/CAFj8pRCKfoz6L82PovLXNK-1JL=jzjwat8e2bd2pwnkm7i7...@mail.gmail.com

I set up a non-cassert build of commit
78a5e738e97b4dda89e1bfea60675bcf15f25994 (ie, just before the patch that
reduced the data-copying overhead for that).  On my Fedora 16 machine
(dual 2.0GHz Xeon E5503, gcc version 4.6.3 20120306 (Red Hat 4.6.3-2))
I get a runtime for Pavel's example of 17023 msec (average over five
runs).  I then applied oprofile and got a breakdown like this:

   samples|  %|
--
108409 84.5083 /home/tgl/testversion/bin/postgres
 13723 10.6975 /lib64/libc-2.14.90.so
  3153  2.4579 /home/tgl/testversion/lib/postgresql/plpgsql.so

samples  %symbol name
1096010.1495  AllocSetAlloc
6325  5.8572  MemoryContextAllocZeroAligned
6225  5.7646  base_yyparse
3765  3.4866  copyObject
2511  2.3253  MemoryContextAlloc
2292  2.1225  grouping_planner
2044  1.8928  SearchCatCache
1956  1.8113  core_yylex
1763  1.6326  expression_tree_walker
1347  1.2474  MemoryContextCreate
1340  1.2409  check_stack_depth
1276  1.1816  GetCachedPlan
1175  1.0881  AllocSetFree
1106  1.0242  GetSnapshotData
1106  1.0242  _SPI_execute_plan
1101  1.0196  extract_query_dependencies_walker

I then applied the palloc.h and mcxt.c hunks of your patch and rebuilt.
Now I get an average runtime of 1 ms, a full 2% faster, which is a
bit astonishing, particularly because the oprofile results haven't moved
much:

107642 83.7427 /home/tgl/testversion/bin/postgres
 14677 11.4183 /lib64/libc-2.14.90.so
  3180  2.4740 /home/tgl/testversion/lib/postgresql/plpgsql.so

samples  %symbol name
10038 9.3537  AllocSetAlloc
6392  5.9562  MemoryContextAllocZeroAligned
5763  5.3701  base_yyparse
4810  4.4821  copyObject
2268  2.1134  grouping_planner
2178  2.0295  core_yylex
1963  1.8292  palloc
1867  1.7397  SearchCatCache
1835  1.7099  expression_tree_walker
1551  1.4453  check_stack_depth
1374  1.2803  _SPI_execute_plan
1282  1.1946  MemoryContextCreate
1187  1.1061  AllocSetFree
...
653   0.6085  palloc0
...
552   0.5144  MemoryContextAlloc

The number of calls of AllocSetAlloc certainly hasn't changed at all, so
how did that get faster?

I notice that the postgres executable is about 0.2% smaller, presumably
because a whole lot of inlined fetches of CurrentMemoryContext are gone.
This makes me wonder if my result is due to chance improvements of cache
line alignment for inner loops.
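Roughly, "making palloc a plain function" means replacing a macro of the first
form below with an out-of-line function like the second (a simplified sketch,
not the actual patch):

/* macro form: inlines a CurrentMemoryContext fetch at every call site */
#define palloc(sz)  MemoryContextAlloc(CurrentMemoryContext, (sz))

/* function form: the CurrentMemoryContext fetch happens inside mcxt.c */
void *
palloc(Size size)
{
    return MemoryContextAlloc(CurrentMemoryContext, size);
}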

I would like to know if other people get comparable results on other
hardware (non-Intel hardware would be especially interesting).  If this
result holds up across a range of platforms, I'll withdraw my objection
to making palloc a plain function.

regards, tom lane



Sorry for the delay I only read this thread today.


I just tried Pavel's test on a POWER5 machine with an older version of
gcc (see the grebe buildfarm animal for details)


78a5e738e:   37874.855 (average of 6 runs)
78a5e738 + palloc.h + mcxt.c: 38076.8035

The functions do seem to slightly slow things down on POWER. I haven't bothered 
to run oprofile or tprof to get a breakdown of the functions since Andres has 
already removed this from his patch.

Steve




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Review: Patch to compute Max LSN of Data Pages

2013-01-19 Thread Dickson S. Guedes
2013/1/18 Amit kapila amit.kap...@huawei.com:
 Please find the rebased Patch for Compute MAX LSN.

The function 'remove_parent_refernces' couldn't be called
'remove_parent_references' ?

Why not an extension in PGXN instead of a contrib?

Regards,
-- 
Dickson S. Guedes
mail/xmpp: gue...@guedesoft.net - skype: guediz
http://github.com/guedes - http://guedesoft.net
http://www.postgresql.org.br


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER command reworks

2013-01-19 Thread Kohei KaiGai
2013/1/17 Alvaro Herrera alvhe...@2ndquadrant.com:
 Kohei KaiGai escribió:

 This attached patch is the rebased one towards the latest master branch.

 Great, thanks.  I played with it a bit and it looks almost done to me.
 The only issue I can find is that it lets you rename an aggregate by
 using ALTER FUNCTION, which is supposed to be forbidden.  (Funnily
 enough, renaming a non-agg function with ALTER AGGREGATE does raise an
 error).  Didn't immediately spot the right place to add a check.

 I think these two error cases ought to have regression tests of their
 own.

 I attach a version with my changes.

The patch itself looks to me good.

Regarding ALTER FUNCTION on an aggregate function, why should we raise an
error strictly? Some code allows modifying properties of an aggregate
function specified with the FUNCTION qualifier.

postgres=# COMMENT ON FUNCTION max(int) IS 'maximum number of integer';
COMMENT
postgres=# COMMENT ON AGGREGATE in4eq(int,int) IS 'comparison of integers';
ERROR:  aggregate in4eq(integer, integer) does not exist

I think using AGGREGATE on a regular function is wrong, but it is not
100% wrong to use FUNCTION on an aggregate function.
In addition, aggregate functions and regular functions share the same
namespace, so it never causes a problem even if we allow identifying an
aggregate function using ALTER FUNCTION.

How about your opinion? Thanks,
-- 
KaiGai Kohei kai...@kaigai.gr.jp


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] lazy_vacuum_heap()'s removal of HEAPTUPLE_DEAD tuples

2013-01-19 Thread Jeff Janes
On Wednesday, January 9, 2013, Noah Misch wrote:

 On Thu, Jan 10, 2013 at 02:45:36AM +, Simon Riggs wrote:
  On 8 January 2013 02:49, Noah Misch n...@leadboat.com javascript:;
 wrote:
   There is a bug in lazy_scan_heap()'s
   bookkeeping for the xid to place in that WAL record.  Each call to
   heap_page_prune() simply overwrites vacrelstats-latestRemovedXid, but
   lazy_scan_heap() expects it to only ever increase the value.  I have a
   attached a minimal fix to be backpatched.  It has lazy_scan_heap()
 ignore
   heap_page_prune()'s actions for the purpose of this conflict xid,
 because
   heap_page_prune() emitted an XLOG_HEAP2_CLEAN record covering them.
 
  Interesting. Yes, bug, and one of mine also.
 
  ISTM the right fix is to correctly initialize it at pruneheap.c line 176
  prstate.latestRemovedXid = *latestRemovedXid;
  better to make it work than to just leave stuff hanging.

 That works, too.


As bug fixes don't usually go through the commit-fest process, will someone
be committing one of these two ideas for the back-branches?  And to HEAD,
in case the more invasive patch doesn't make it in?

I have a preliminary nit-pick on the big patch.  It generates a compiler
warning:

vacuumlazy.c: In function ‘lazy_scan_heap’:
vacuumlazy.c:445:9: warning: variable ‘prev_dead_count’ set but not used
[-Wunused-but-set-variable]


Thanks,

Jeff


Re: [HACKERS] pg_dump transaction's read-only mode

2013-01-19 Thread Tom Lane
Pavan Deolasee pavan.deola...@gmail.com writes:
 Sorry for posting on such an old thread. But here is a patch that
 fixes this. I'm also adding to the next commitfest so that we don't
 lose track of it again.

As submitted, this broke pg_dump for dumping from pre-8.0 servers.
(7.4 didn't accept commas in SET TRANSACTION syntax, and versions
before that didn't have the READ ONLY option at all.)  I fixed that
and committed it.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Strange Windows problem, lock_timeout test request

2013-01-19 Thread Andrew Dunstan


On 01/19/2013 02:51 AM, Boszormenyi Zoltan wrote:
Yes it rings a bell. See 
http://people.planetpostgresql.org/andrew/index.php?/archives/264-Cross-compiling-PostgreSQL-for-WIndows.html



I wanted to add a comment to this blog entry but it wasn't accepted.



The blog is closed for comments. I have moved to a new blog, and this is 
just there for archive purposes.




Here it is:

It doesn't work under Wine, see:
http://www.winehq.org/pipermail/wine-users/2013-January/107008.html

But pg_config works so other PostgreSQL clients can also be built 
using the cross compiler.





If you want to target Wine I think you're totally on your own.

cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Identity projection

2013-01-19 Thread Stephen Frost
Kyotaro,

  Are you planning to update this patch based on Heikki's comments?  The
  patch is listed in the commitfest and we're trying to make some
  progress through all of those patches.

Thanks,

Stephen

* Heikki Linnakangas (hlinn...@iki.fi) wrote:
 On 12.11.2012 12:07, Kyotaro HORIGUCHI wrote:
 Hello, This is new version of identity projection patch.
 
 Reverted projectionInfo and ExecBuildProjectionInfo. Identity
 projection is recognized directly in ExecGroup, ExecResult, and
 ExecWindowAgg. nodeAgg is reverted because I couldn't make it
 sane..
 
 The following is the result of performance test posted before in
 order to show the source of the gain.
 
 Hmm, this reminds me of the discussion on removing useless Limit
 nodes: http://archives.postgresql.org/pgsql-performance/2012-12/msg00127.php.
 
 The optimization on Group, WindowAgg and Agg nodes doesn't seem that
 important, the cost of doing the aggregation/grouping is likely
 overwhelming the projection cost, and usually you do projection in
 grouping/aggregation anyway. But makes sense for Result.
 
 For Result, I think you should aim to remove the useless Result node
 from the plan altogether. And do the same for useless Limit nodes.
 
 - Heikki
 


signature.asc
Description: Digital signature


[HACKERS] pg_upgrade and system() return value

2013-01-19 Thread Bruce Momjian
Can someone comment on the attached patch?  pg_upgrade was testing if
system() returned a non-zero value, while I am thinking I should be
adjusting system()'s return value with WEXITSTATUS().  

Is there any possible bug in back branches from just comparing system()'s
return value to non-zero without calling WEXITSTATUS()?  I never saw a bug
related to this.  I am thinking of applying this just to git head.
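A standalone illustration of the difference (POSIX; not the pg_upgrade code
itself): a child that exits with status 1 makes system() return 256 on typical
systems, so comparing against non-zero still detects failure, but any code
that wants the child's actual exit code needs WEXITSTATUS().

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

int
main(void)
{
    int     raw = system("exit 1");

    printf("raw system() value: %d\n", raw);            /* typically 256 */
    if (raw != -1 && WIFEXITED(raw))
        printf("WEXITSTATUS: %d\n", WEXITSTATUS(raw));  /* 1 */
    return 0;
}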

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/contrib/pg_upgrade/exec.c b/contrib/pg_upgrade/exec.c
new file mode 100644
index e326a10..2b3c203
*** a/contrib/pg_upgrade/exec.c
--- b/contrib/pg_upgrade/exec.c
*** exec_prog(const char *log_file, const ch
*** 99,104 
--- 99,106 
  	fclose(log);
  
  	result = system(cmd);
+ 	if (result != -1)
+ 		result = WEXITSTATUS(result);
  
  	umask(old_umask);
  

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS]

2013-01-19 Thread wln
subscribe
end



Re: [HACKERS] logical changeset generation v4

2013-01-19 Thread Steve Singer

On 13-01-14 08:38 PM, Andres Freund wrote:

Hi everyone,

Here is the newest version of logical changeset generation.



2) Currently the logical replication infrastructure assigns a 'slot-id'
when a new replica is setup. That slot id isn't really nice
(e.g. id-321578-3). It also requires that [18] keeps state in a global
variable to make writing regression tests easy.

I think it would be better to make the user specify those replication
slot ids, but I am not really sure about it.


Shortly after trying out the latest version I hit the following scenario
1. I started pg_receivellog but mistyped the name of my plugin
2. It looped and used up all of my logical replication slots

I killed pg_receivellog and restarted it with the correct plugin name 
but it won't do anything because I have no free slots.  I can't free the 
slots with -F because I have no clue what the names of the slots are.
I can figure the names out by looking in pg_llog, but my replication
program can't do that, so it won't be able to clean up from a failed attempt.


I agree with you that we should make the user program specify a slot, we 
eventually might want to provide a view that shows the currently 
allocated slots. For a logical based slony I would just generate the 
slot name based on the remote node id.  If walsender generates the slot 
name then I would need to store a mapping between slot names and slons 
so when a slon restarted it would know which slot to resume using.   I'd 
have to use a table in the slony schema on the remote database for 
this.  There would always be a risk of losing track of a slot id if the 
slon crashed after getting the slot number but before committing the 
mapping on the remote database.






3) Currently no options can be passed to an output plugin. I am thinking
about making INIT_LOGICAL_REPLICATION 'plugin' accept the now widely
used ('option' ['value'], ...) syntax and pass that to the output
plugin's initialization function.


I think we discussed this last CF, I like this idea.


4) Does anybody object to:
-- allocate a permanent replication slot
INIT_LOGICAL_REPLICATION 'plugin' 'slotname' (options);

-- stream data
START_LOGICAL_REPLICATION 'slotname' 'recptr';

-- deallocate a permanent replication slot
FREE_LOGICAL_REPLICATION 'slotname';


+1



5) Currently its only allowed to access catalog tables, its fairly
trivial to extend this to additional tables if you can accept some
(noticeable but not too big) overhead for modifications on those tables.

I was thinking of making that an option for tables, that would be useful
for replication solutions configuration tables.


I think this will make the life of anyone developing a new replication 
system easier.  Slony has a lot of infrastructure for allowing slonik 
scripts to wait for configuration changes to propagate everywhere before
making other configuration changes because you can get race conditions.  
If I were designing a new replication system and I had this feature then 
I would try to use it to come up with a simpler model of propagating 
configuration changes.




Andres Freund




Steve



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] [PATCH] Fix NULL checking in check_TSCurrentConfig()

2013-01-19 Thread Xi Wang
The correct NULL check should use `*newval'; `newval' must be non-null.
---
 src/backend/utils/cache/ts_cache.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/utils/cache/ts_cache.c 
b/src/backend/utils/cache/ts_cache.c
index e688b1a..65a8ad7 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -642,7 +642,7 @@ check_TSCurrentConfig(char **newval, void **extra, 
GucSource source)
free(*newval);
*newval = strdup(buf);
pfree(buf);
-   if (!newval)
+   if (!*newval)
return false;
}
 
-- 
1.7.10.4



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] [PATCH] Fix off-by-one in PQprintTuples()

2013-01-19 Thread Xi Wang
Don't write past the end of tborder; the size is width + 1.
---
 src/interfaces/libpq/fe-print.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/interfaces/libpq/fe-print.c b/src/interfaces/libpq/fe-print.c
index 076e1cc..7ed489a 100644
--- a/src/interfaces/libpq/fe-print.c
+++ b/src/interfaces/libpq/fe-print.c
@@ -706,7 +706,7 @@ PQprintTuples(const PGresult *res,
fprintf(stderr, libpq_gettext(out of 
memory\n));
abort();
}
-   for (i = 0; i <= width; i++)
+   for (i = 0; i < width; i++)
tborder[i] = '-';
tborder[i] = '\0';
fprintf(fout, %s\n, tborder);
-- 
1.7.10.4



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Review: Patch to compute Max LSN of Data Pages

2013-01-19 Thread Amit kapila

On Sunday, January 20, 2013 4:04 AM Dickson S. Guedes wrote:
2013/1/18 Amit kapila amit.kap...@huawei.com:
 Please find the rebased Patch for Compute MAX LSN.

The function 'remove_parent_refernces' couldn't be called
'remove_parent_references' ?

Shall fix this.

 Why not an extension in PGXN instead of a contrib?

This functionality is similar to pg_resetxlog, so we thought of putting it
either in bin or in contrib.
Finally, based on suggestions from other community members, we have added it
to contrib.


With Regards,
Amit Kapila.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] [RFC] overflow checks optimized away

2013-01-19 Thread Xi Wang
Intel's icc and PathScale's pathcc compilers optimize away several
overflow checks, since they consider signed integer overflow as
undefined behavior.  This leads to a vulnerable binary.

Currently we use -fwrapv to disable such (mis)optimizations in gcc,
but not in other compilers.


Examples


1) x + 1 <= 0 (assuming x > 0).

src/backend/executor/execQual.c:3088

Below is the simplified code.

-
void bar(void);
void foo(int this_ndims)
{
    if (this_ndims <= 0)
        return;
    int elem_ndims = this_ndims;
    int ndims = elem_ndims + 1;
    if (ndims <= 0)
        bar();
}
-

$ icc -S -o - sadd1.c
...
foo:
# parameter 1: %edi
..B1.1:
..___tag_value_foo.1:
..B1.2:
ret

2) x + 1 < x

src/backend/utils/adt/float.c:2769
src/backend/utils/adt/float.c:2785
src/backend/utils/adt/oracle_compat.c:1045 (x + C < x)

Below is the simplified code.

-
void bar(void);
void foo(int count)
{
    int result = count + 1;
    if (result < count)
        bar();
}
-

$ icc -S -o - sadd2.c
...
foo:
# parameter 1: %edi
..B1.1:
..___tag_value_foo.1:
ret   
3) x + y <= x (assuming y > 0)

src/backend/utils/adt/varbit.c:1142
src/backend/utils/adt/varlena.c:1001
src/backend/utils/adt/varlena.c:2024
src/pl/plpgsql/src/pl_exec.c:1975
src/pl/plpgsql/src/pl_exec.c:1981

Below is the simplified code.

-
void bar(void);
void foo(int sp, int sl)
{
    if (sp <= 0)
        return;
    int sp_pl_sl = sp + sl;
    if (sp_pl_sl <= sl)
        bar();
}
-

$ icc -S -o - sadd3.c
foo:
# parameter 1: %edi
# parameter 2: %esi
..B1.1:
..___tag_value_foo.1:
..B1.2:
ret   

Possible fixes
==============

* Recent versions of icc and pathcc support gcc's workaround option,
-fno-strict-overflow, to disable some optimizations based on signed
integer overflow.  It's better to add this option to configure.
They don't support gcc's -fwrapv yet.

* This -fno-strict-overflow option cannot help in all cases: it cannot
prevent the latest icc from (mis)compiling the 1st case.  We could also
fix the source code by avoiding signed integer overflows, as follows.

x + y <= 0 (assuming x > 0, y > 0)
--> x > INT_MAX - y

x + y <= x (assuming y > 0)
--> x > INT_MAX - y

I'd suggest to fix the code rather than trying to work around the
compilers since the fix seems simple and portable.
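A minimal sketch of the suggested rewrite (the function name is illustrative):
test for the overflow before doing the signed addition, so no undefined
behavior is involved and the check cannot be optimized away.

#include <limits.h>
#include <stdbool.h>

static bool
add_would_overflow(int x, int y)
{
    /* assumes x > 0 and y > 0, as in the cases above */
    return x > INT_MAX - y;
}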

See two recent compiler bugs of -fwrapv/-fno-strict-overflow as well.

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55883
http://software.intel.com/en-us/forums/topic/358200

* I don't have access to IBM's xlc compiler.  Not sure how it works for
the above cases.

- xi 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers