Re: [HACKERS] SPI and transactions

2015-11-22 Thread Craig Ringer
On 18 November 2015 at 18:18, Konstantin Knizhnik  wrote:


> But now SPI is used not only inside UDFs. It is also used in background
> workers, for example in receiver_raw, written by Michael Paquier (many
> thanks, Michael; understanding logical replication without it would have
> been much more difficult).
> Right now transactions have to be started by the background worker using
> StartTransactionCommand().
> So receiver_raw is not able to preserve the master's transaction semantics
> (certainly this could be implemented).


I doubt the raw receiver approach can ever really lead to a complete
replication solution, so I'm not completely convinced this is a problem
worth solving. That tool is a great demo and learning utility, but that's
very much what I see it as. (Then again, I would say that, wouldn't I? I
have my own work in the running in the same space. Make of that what you
will.)

I suspect you'd need a way to invoke an incomplete SQL parser that can
parse the SQL well enough to give you a TransactionStmt or tell you "I
dunno what it is, but it doesn't look like a TransactionStmt".
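
To illustrate, here's a minimal sketch of that idea using the backend's
own raw_parser() (assuming the 9.x API, where the returned list holds
bare parse nodes; the helper name is made up and this is not code from
receiver_raw):

#include "postgres.h"
#include "nodes/pg_list.h"
#include "nodes/parsenodes.h"
#include "parser/parser.h"

/*
 * Return true if any statement in the SQL string is a transaction
 * control statement (BEGIN/COMMIT/ROLLBACK/PREPARE ...).  Note that
 * raw_parser() throws ERROR on unparsable input, so a real receiver
 * would need to wrap this in PG_TRY().
 */
static bool
contains_transaction_stmt(const char *sql)
{
	List	   *parsetree_list = raw_parser(sql);
	ListCell   *lc;

	foreach(lc, parsetree_list)
	{
		if (IsA((Node *) lfirst(lc), TransactionStmt))
			return true;
	}
	return false;
}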

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

On 18 November 2015 at 18:18, Konstantin Knizhnik  wrote:

> Hello,
>
> SPI was originally developed for executing SQL statements from C
> user-defined functions in the context of an existing transaction.
> This is why it is not possible to execute any transaction manipulation
> statement (BEGIN, COMMIT, PREPARE, ...) using SPI_execute:
> SPI_ERROR_TRANSACTION is returned.
>
> But now SPI is used not only inside UDFs. It is also used in background
> workers, for example in receiver_raw, written by Michael Paquier (many
> thanks, Michael; understanding logical replication without it would have
> been much more difficult).
> Right now transactions have to be started by the background worker using
> StartTransactionCommand().
> So receiver_raw is not able to preserve the master's transaction semantics
> (certainly this could be implemented).
>
> I wonder whether SPI could now be extended to support transaction
> manipulation statements when called outside a transaction context? Or is
> there some fundamental problem with it?
>
> Thanks in advance,
> Konstantin
>
>
>
>
>



-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


[HACKERS] Identify user requested queries

2015-11-22 Thread Praveen M
Hi All,

When a user attempts to make a connection to the database, the code
internally looks into various pg_catalog tables. However, the user can
also query the pg_catalog tables. Is there a way to distinguish
user-requested (typed) queries from the system's internal queries?

Also, which procedure or function in the code indicates that the user can
now issue queries? That is, I want to know where in the code the
connection is created and becomes available for the user to use.
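
One practical way to observe the difference is the log_statement setting:
it logs only statements submitted by clients, while internal catalog
lookups never pass through the SQL parser and so never appear there. A
minimal sketch:

ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();

Client-typed queries enter the backend through exec_simple_query() in
src/backend/tcop/postgres.c; internal catalog access goes through the
syscache/systable machinery instead.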

Please Help!!

Praveen


Re: [HACKERS] custom function for converting human readable sizes to bytes

2015-11-22 Thread Pavel Stehule
2015-11-22 23:54 GMT+01:00 Corey Huinker :

>> What about pg_size_unpretty()?
>>
> I was going to suggest pg_size_ugly(), but unpretty does emphasize the
> inverse (rather than opposite) nature of the function.
>

"unpretty", "ugly" aren't good names

maybe pg_size_bytes or different approach

we can introduce data type - bytesize - default input/output is human
readable text - and conversion to bigint is implicit

Some like

select .. where pg_table_size(oid) > bytesize '3.5GB'

and instead of pg_size_pretty(pg_table_size(oid)) we could write
pg_table_size(oid)::bytesize
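
As a rough illustration of the conversion itself (the name pg_size_bytes
and this simplified unit handling are just a sketch, not a proposed
implementation):

CREATE FUNCTION pg_size_bytes(sz text) RETURNS bigint
LANGUAGE sql IMMUTABLE STRICT AS $$
    SELECT round(substring(sz from '^[0-9.]+')::numeric *
                 CASE upper(trim(substring(sz from '[A-Za-z ]+$')))
                     WHEN 'B'  THEN 1
                     WHEN 'KB' THEN 1024::numeric
                     WHEN 'MB' THEN 1024::numeric ^ 2
                     WHEN 'GB' THEN 1024::numeric ^ 3
                     WHEN 'TB' THEN 1024::numeric ^ 4
                 END)::bigint
$$;

SELECT pg_size_bytes('3.5GB');  -- 3758096384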

Regards

Pavel


Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP

2015-11-22 Thread Andreas Karlsson

On 08/26/2015 07:46 AM, Michael Paquier wrote:

On Wed, Aug 26, 2015 at 12:24 PM, Michael Paquier
 wrote:

On Wed, Aug 26, 2015 at 10:57 AM, Tom Lane  wrote:

[...]
So I think the way to move this forward is to investigate how to hold
the SSL config constant until SIGHUP in an EXEC_BACKEND build.  If we
find out that that's unreasonably difficult, maybe we'll decide that
we can live without it; but I'd like to see the question investigated
rather than ignored.


You have a point here.

In EXEC_BACKEND, parameters updated via SIGHUP are only taken into
account by newly-started backends, right?


Oops. I mistook with PGC_BACKEND here. Sorry for the noise.


Hence, a way to do what we want is to actually copy the data needed to
initialize the SSL context into alternate file(s). When the postmaster
starts up, or when SIGHUP shows up, those alternate files are rewritten
by the postmaster. be-secure-openssl.c also needs to be changed so that
with EXEC_BACKEND the context is loaded from those alternate files. At a
quick glance this seems doable.


Still, the idea would be to use a set of alternate files in global/ to
set the context, basically something like config_exec_ssl_cert_file,
config_exec_ssl_key_file and config_exec_ssl_ca_file. It does not seem
necessary to manipulate [read|write]_nondefault_variables(), as this
metadata is used only when the SSL context is initialized in a backend.
Other thoughts welcome.


Sorry for dropping this patch, but now I have started looking at it again.

I started implementing your suggested solution, but realized that I do
not like the copying of the private key file. The private key might have
been put by the DBA on another file system for security reasons, and
having PostgreSQL copy potentially sensitive data to somewhere under
PGDATA seems like surprising behavior. Especially since this would only
happen on some platforms.


I guess a possible solution would be to read the files into the
postmaster (where we already have the private key today), have OpenSSL
read the keys from memory, and re-implement something like
SSL_CTX_use_certificate_chain_file() in our own code, with similar
replacements for the other functions which now take a path. This seems
like a bit too much work to burden this patch with (and not obviously
something we would want anyway), since the behavior is already different
on Windows in the current code.
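
For illustration, the in-memory approach would look roughly like this
with the OpenSSL BIO API (the helper name is hypothetical; error
handling and the chain/key parts are omitted; a sketch, not the patch):

#include <openssl/bio.h>
#include <openssl/pem.h>
#include <openssl/ssl.h>

/* Load a PEM certificate from a memory buffer instead of a file. */
static int
use_certificate_from_memory(SSL_CTX *ctx, const char *pem, int len)
{
	BIO		   *bio = BIO_new_mem_buf((void *) pem, len);
	X509	   *cert;
	int			ok = 0;

	if (bio == NULL)
		return 0;
	cert = PEM_read_bio_X509(bio, NULL, NULL, NULL);
	if (cert != NULL)
	{
		ok = (SSL_CTX_use_certificate(ctx, cert) == 1);
		X509_free(cert);
	}
	BIO_free(bio);
	return ok;
}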


Thoughts?

I have attached a rebased version of the original patch which applies on 
current master.


Andreas
diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c
index e9bc282..7dda4be 100644
--- a/src/backend/libpq/be-secure-openssl.c
+++ b/src/backend/libpq/be-secure-openssl.c
@@ -77,12 +77,14 @@ static DH  *load_dh_buffer(const char *, size_t);
 static DH  *tmp_dh_cb(SSL *s, int is_export, int keylength);
 static int	verify_cb(int, X509_STORE_CTX *);
 static void info_cb(const SSL *ssl, int type, int args);
-static void initialize_ecdh(void);
+static SSL_CTX *initialize_context(void);
+static bool initialize_ecdh(SSL_CTX *context);
 static const char *SSLerrmessage(void);
 
 static char *X509_NAME_to_cstring(X509_NAME *name);
 
 static SSL_CTX *SSL_context = NULL;
+static bool SSL_initialized = false;
 
 /*  */
 /*		 Hardcoded values		*/
@@ -157,168 +159,39 @@ KWbuHn491xNO25CQWMtem80uKw+pTnisBRF/454n1Jnhub144YRBoN8CAQI=\n\
 /*
  *	Initialize global SSL context.
  */
-void
+int
 be_tls_init(void)
 {
-	struct stat buf;
+	SSL_CTX *context;
 
-	STACK_OF(X509_NAME) *root_cert_list = NULL;
-
-	if (!SSL_context)
-	{
+	if (!SSL_initialized) {
 #if SSLEAY_VERSION_NUMBER >= 0x0907000L
 		OPENSSL_config(NULL);
 #endif
 		SSL_library_init();
 		SSL_load_error_strings();
-
-		/*
-		 * We use SSLv23_method() because it can negotiate use of the highest
-		 * mutually supported protocol version, while alternatives like
-		 * TLSv1_2_method() permit only one specific version.  Note that we
-		 * don't actually allow SSL v2 or v3, only TLS protocols (see below).
-		 */
-		SSL_context = SSL_CTX_new(SSLv23_method());
-		if (!SSL_context)
-			ereport(FATAL,
-	(errmsg("could not create SSL context: %s",
-			SSLerrmessage())));
-
-		/*
-		 * Disable OpenSSL's moving-write-buffer sanity check, because it
-		 * causes unnecessary failures in nonblocking send cases.
-		 */
-		SSL_CTX_set_mode(SSL_context, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);
-
-		/*
-		 * Load and verify server's certificate and private key
-		 */
-		if (SSL_CTX_use_certificate_chain_file(SSL_context,
-			   ssl_cert_file) != 1)
-			ereport(FATAL,
-	(errcode(ERRCODE_CONFIG_FILE_ERROR),
-  errmsg("could not load server certificate file \"%s\": %s",
-		 ssl_cert_file, SSLerrmessage())));
-
-		if (stat(ssl_key_file, &buf) != 0)
-			ereport(FATAL,
-	(errcode_for_file_access(),
-	 errmsg("could not access private key file \"%s\": %m",
-			ssl_key_file)

Re: [HACKERS] Using quicksort for every external sort run

2015-11-22 Thread Peter Geoghegan
On Fri, Nov 20, 2015 at 2:58 PM, Peter Geoghegan  wrote:
> The numbers speak for themselves here. I just want to be clear about
> the disadvantages of what I propose, even if it's well worth it
> overall in most (all?) cases.

There is a paper called "Critical Evaluation of Existing External
Sorting Methods in the Perspective of Modern Hardware":

http://ceur-ws.org/Vol-1343/paper8.pdf

This paper was not especially influential, and I don't agree with every
detail, nor do I think that every recommendation should be adopted in
Postgres. Even so, the paper is the best summary I have seen so far. It
clearly explains why there is plenty to recommend a simple hybrid
sort-merge strategy over replacement selection, despite the fact that
replacement selection was faster on 1970s hardware.

-- 
Peter Geoghegan




Re: [HACKERS] custom function for converting human readable sizes to bytes

2015-11-22 Thread Corey Huinker
>
> What about pg_size_unpretty()?
>
I was going to suggest pg_size_ugly(), but unpretty does emphasize the
inverse (rather than opposite) nature of the function.


Re: [HACKERS] custom function for converting human readable sizes to bytes

2015-11-22 Thread Guillaume Lelarge
On 22 Nov 2015 at 21:29, "Pavel Stehule"  wrote:
>
>
>
> 2015-11-22 21:19 GMT+01:00 Jim Nasby :
>>
>> On 11/22/15 2:11 PM, Pavel Stehule wrote:
>>>
>>> What about pg_size(text), pg_size(value bigint, unit text) ?
>>
>>
>> I like, though I'd make it numeric or float. pg_size(3.5, 'GB')
certainly seems like a reasonable use case...
>
>
> yes, good note.
>

What about pg_size_unpretty()?


Re: [HACKERS] more RLS oversights

2015-11-22 Thread Noah Misch
On Tue, Jul 28, 2015 at 04:04:29PM -0700, Joe Conway wrote:
> On 07/27/2015 05:34 PM, Joe Conway wrote:
> > On 07/27/2015 01:13 PM, Alvaro Herrera wrote:
> >> Hmm, these are not ACL objects, so conceptually it seems cleaner
> >> to use a different symbol for this.  I think the catalog state
> >> and the error messages would be a bit confusing otherwise.
> > 
> > Ok -- done

> Pushed to HEAD and 9.5

I reviewed this commit, f781a0f "Create a pg_shdepend entry for each role in
TO clause of policies."  This commit rendered the
http://www.postgresql.org/docs/devel/static/role-removal.html procedure[1]
incomplete.  Before dropping a role, one must additionally drop each policy
mentioning the role in pg_policy.polroles:

begin;
create role alice;
create table t (c int);
grant select on table t to alice;
create policy p0 on t to alice using (true);
reassign owned by alice to current_user;
drop owned by alice;
drop role alice;
rollback;
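
For reference, the additional step described above - dropping the policy
that mentions the role - is just this, placed before "drop role alice":

drop policy p0 on t;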

shdepDropOwned() ignores SHARED_DEPENDENCY_POLICY entries.  Should it instead
remove the role from polroles, dropping the policy if that would empty
polroles?  (Which should change, the documented role-removal procedure or the
DROP OWNED treatment of policies?)  Independently,
http://www.postgresql.org/docs/devel/static/sql-drop-owned.html deserves an
update since it discusses every other object type having role dependencies.

Thanks,
nm

[1] That page did not exist until 2015-10-07 (commit 1ea0c73), after the
commit I'm reviewing here.




Re: [HACKERS] Freeze avoidance of very large table.

2015-11-22 Thread Jeff Janes
On Sun, Nov 22, 2015 at 8:16 AM, Masahiko Sawada  wrote:

> Thank you for taking the time to review this patch!
> The updated version patch is attached.

I am skeptical about just copying the old page header into the two new
page headers.  I don't know what the implications of this will be for
pd_lsn.  Since pg_upgrade can only run on a cluster that was cleanly
shut down, just copying it from the old page to both new pages it turns
into might be fine.  But pd_checksum will certainly be wrong, breaking
pg_upgrade for cases where checksums are turned on.  It needs to be
recomputed on both new pages.  It looks like there is no precedent for
doing that in pg_upgrade, so this will be breaking new ground.
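
A sketch of the recomputation being suggested (the helper name and where
pg_upgrade would call it are assumptions; checksum_impl.h exists
precisely so programs outside the backend can compute page checksums):

#include "postgres.h"
#include "storage/bufpage.h"
#include "storage/checksum_impl.h"	/* provides pg_checksum_page() */

/* After rewriting a page into the new cluster, stamp a checksum
 * computed for the page's new block number. */
static void
set_new_checksum(char *page, BlockNumber new_blkno)
{
	((PageHeader) page)->pd_checksum = pg_checksum_page(page, new_blkno);
}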

Cheers,

Jeff




Re: [HACKERS] custom function for converting human readable sizes to bytes

2015-11-22 Thread Pavel Stehule
2015-11-22 21:19 GMT+01:00 Jim Nasby :

> On 11/22/15 2:11 PM, Pavel Stehule wrote:
>
>> What about pg_size(text), pg_size(value bigint, unit text) ?
>>
>
> I like, though I'd make it numeric or float. pg_size(3.5, 'GB') certainly
> seems like a reasonable use case...


yes, good note.

Regards

Pavel


>
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
>


Re: [HACKERS] count_nulls(VARIADIC "any")

2015-11-22 Thread Marko Tiikkaja

On 2015-11-22 21:17, Jim Nasby wrote:

On 11/22/15 11:34 AM, Marko Tiikkaja wrote:

On 2015-11-22 18:29, Jim Nasby wrote:

Only if you know how many columns there already are.

Or does this not work if you hand it a row?


It "works" in the sense that it tells you whether the row is NULL or
not.  I.e. the answer will always be 0 or 1.


Hrm, I was hoping something like count_nulls(complex_type.*) would work.


Nope:

=# select num_nulls((f).*) from (select '(1,2,3)'::foo) ss(f);
ERROR:  row expansion via "*" is not supported here


.m




Re: [HACKERS] custom function for converting human readable sizes to bytes

2015-11-22 Thread Jim Nasby

On 11/22/15 2:11 PM, Pavel Stehule wrote:

What about pg_size(text), pg_size(value bigint, unit text) ?


I like, though I'd make it numeric or float. pg_size(3.5, 'GB') 
certainly seems like a reasonable use case...

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] count_nulls(VARIADIC "any")

2015-11-22 Thread Jim Nasby

On 11/22/15 11:34 AM, Marko Tiikkaja wrote:

On 2015-11-22 18:29, Jim Nasby wrote:

Only if you know how many columns there already are.

Or does this not work if you hand it a row?


It "works" in the sense that it tells you whether the row is NULL or
not.  I.e. the answer will always be 0 or 1.


Hrm, I was hoping something like count_nulls(complex_type.*) would work.

I guess one could always create a wrapper function that does 
count_not_nulls() anyway.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] custom function for converting human readable sizes to bytes

2015-11-22 Thread Marko Tiikkaja

On 2015-11-22 18:30, Jim Nasby wrote:

On 11/21/15 12:49 AM, Pavel Stehule wrote:


I propose an inverse of the pg_size_pretty function - something like
pg_human_size.

Usage:

SELECT * FROM pg_class
WHERE pg_table_size(oid) > pg_human_size('2GB');


I'm not a big fan of the name, but +1 on the general idea.

Maybe pg_size_pretty(text)?


pg_ytterp_ezis(text)




Re: [HACKERS] custom function for converting human readable sizes to bytes

2015-11-22 Thread Pavel Stehule
2015-11-22 18:30 GMT+01:00 Jim Nasby :

> On 11/21/15 12:49 AM, Pavel Stehule wrote:
>
>>
>> I propose an inverse of the pg_size_pretty function - something like
>> pg_human_size.
>>
>> Usage:
>>
>> SELECT * FROM pg_class
>>WHERE pg_table_size(oid) > pg_human_size('2GB');
>>
>
> I'm not a big fan of the name, but +1 on the general idea.
>

I am open to any other good name


>
> Maybe pg_size_pretty(text)?
>

I understand your idea, but it could be too strange for a function and
its inverse to share the same name.

What about pg_size(text), pg_size(value bigint, unit text) ?
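
A sketch of what the two-argument form could look like (the name and
bigint signature follow this proposal; it is not an existing function):

CREATE FUNCTION pg_size(value bigint, unit text) RETURNS bigint
LANGUAGE sql IMMUTABLE STRICT AS $$
    SELECT value * CASE upper(unit)
                       WHEN 'B'  THEN 1::bigint
                       WHEN 'KB' THEN 1024
                       WHEN 'MB' THEN 1024 * 1024
                       WHEN 'GB' THEN 1024 * 1024 * 1024
                       WHEN 'TB' THEN 1024::bigint * 1024 * 1024 * 1024
                   END
$$;

SELECT * FROM pg_class WHERE pg_table_size(oid) > pg_size(2, 'GB');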

Regards

Pavel


> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
>


Re: [HACKERS] count_nulls(VARIADIC "any")

2015-11-22 Thread Marko Tiikkaja

On 2015-11-22 18:29, Jim Nasby wrote:

Only if you know how many columns there already are.

Or does this not work if you hand it a row?


It "works" in the sense that it tells you whether the row is NULL or 
not.  I.e. the answer will always be 0 or 1.
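
Spelled out (assuming the composite type foo used elsewhere in this
thread):

=# select num_nulls('(1,2,3)'::foo);  -- one non-NULL argument, so 0
=# select num_nulls(NULL::foo);       -- one NULL argument, so 1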



.m




Re: [HACKERS] custom function for converting human readable sizes to bytes

2015-11-22 Thread Jim Nasby

On 11/21/15 12:49 AM, Pavel Stehule wrote:


I propose an inverse of the pg_size_pretty function - something like
pg_human_size.

Usage:

SELECT * FROM pg_class
   WHERE pg_table_size(oid) > pg_human_size('2GB');


I'm not a big fan of the name, but +1 on the general idea.

Maybe pg_size_pretty(text)?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] count_nulls(VARIADIC "any")

2015-11-22 Thread Jim Nasby

On 11/20/15 11:55 PM, Marko Tiikkaja wrote:

On 2015-11-21 06:52, Jim Nasby wrote:

On 11/20/15 11:12 PM, Marko Tiikkaja wrote:

On 2015-11-21 06:02, I wrote:

Here's a patch implementing this under the name num_nulls().  For
January's CF, of course.


I forgot to update the some references in the documentation.  Fixed in
v3, attached.


I thought there was going to be a not-null equivalent as well? I've
definitely wanted both variations in the past.


I'm not sure that's necessary.  It's quite simple to implement yourself
using the  int - int  operator.


Only if you know how many columns there already are.

Or does this not work if you hand it a row?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Freeze avoidance of very large table.

2015-11-22 Thread Masahiko Sawada
On Sat, Nov 21, 2015 at 6:50 AM, Jeff Janes  wrote:
> On Thu, Nov 19, 2015 at 6:44 AM, Masahiko Sawada  
> wrote:
>> On Thu, Nov 19, 2015 at 5:54 AM, Jeff Janes  wrote:
>>> On Wed, Nov 18, 2015 at 11:18 AM, Jeff Janes  wrote:

 I get an error when running pg_upgrade from 9.4 to 9.6 - this

 error while copying relation "mediawiki.archive"
 ("/tmp/data/base/16414/21043_vm" to
 "/tmp/data_fm/base/16400/21043_vm"): No such file or directory
>>>
>>> OK, so the problem seems to be that rewriteVisibilitymap can get
>>> called with errno already set to a nonzero value.
>>>
>>> It never clears it, and then fails at the end despite that no error
>>> has actually occurred.
>>>
>>> Just setting it to 0 at the top of the function seems the correct
>>> thing to do.  Or does it need to save the old value and restore it?
>>
>> Thank you for testing!
>> I think that the former is better, so attached latest patch.
>>
>>> But now when I want to do the upgrade faster, I run into this:
>>>
>>> "This utility cannot upgrade from PostgreSQL version from 9.5 or
>>> before to 9.6 or later with link mode."
>>>
>>> Is this really an acceptable tradeoff?  Surely we can arrange to
>>> link everything else and rewrite just the _vm, which is a tiny portion
>>> of the data directory.  I don't think that -k promises to link
>>> everything it possibly can.
>>
>> I agree.
>> I've changed the patch accordingly:
>> pg_upgrade creates a new _vm file and rewrites it even when upgrading
>> to 9.6 with link mode.
>
>
> The rewrite code thinks that only the first page of a vm has a header
> of size SizeOfPageHeaderData, and the rest of the pages have a
> zero-size header.  So the resulting _vm is corrupt.
>
> After pg_upgrade, doing a vacuum freeze verbose gives:
>
>
> WARNING:  invalid page in block 1 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 1 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 2 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 2 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 3 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 3 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 4 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 4 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 5 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 5 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 6 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 6 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 7 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 7 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 8 of relation base/16402/22430_vm;
> zeroing out page
> WARNING:  invalid page in block 8 of relation base/16402/22430_vm;
> zeroing out page
>

Thank you for taking the time to review this patch!
The updated version patch is attached.

Regards,

--
Masahiko Sawada


000_add_frozen_bit_into_visibilitymap_v26.patch
Description: Binary data



Re: [HACKERS] [Pgbuildfarm-members] latest buildfarm client release

2015-11-22 Thread Andrew Dunstan



On 11/22/2015 12:47 AM, Tom Lane wrote:

Andrew Dunstan  writes:

I have just released version 4.16 of the PostgreSQL Buildfarm client

I updated my critters to 4.16, and since nothing much was happening in
git, decided to test by doing "run_build.pl --nosend --verbose --force"
manually on prairiedog.  That run went fine, but the cron job firing
run_branches.pl every few minutes was still live, and one of its runs
went a tad nuts even though nothing was happening in git:

Buildfarm member prairiedog failed on REL9_3_STABLE stage pgsql-Git
Buildfarm member prairiedog failed on REL9_4_STABLE stage pgsql-Git
Buildfarm member prairiedog failed on REL9_5_STABLE stage pgsql-Git

That resulted in these reports uploaded to the server:

http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2015-11-21%2018%3A27%3A29
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2015-11-21%2018%3A27%3A19
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2015-11-21%2018%3A26%3A12

which contain the following failure reports, respectively:

Missing checked out branch bf_REL9_5_STABLE:
fatal: Not a git repository (or any of the parent directories): .git

Missing checked out branch bf_REL9_4_STABLE:
fatal: Not a git repository (or any of the parent directories): .git

fatal: unable to read tree 719b1b413b507d0fc86162f6aa45b6e44e6d82a1
Cannot rebase: Your index contains uncommitted changes.
Please commit or stash them.

None of that makes any possible sense, because I certainly wasn't touching
the git tree by hand, and the run_build job was only touching HEAD.
There's nothing really broken on the machine, because the next set of
runs went through fine.

Don't know what to make of this, except that probably the buildfarm
script's concurrent-job interlocks need some attention.





Oh, ouch. Well, that message comes from us just doing "git branch" to
sanity-check what branch we're on, and that happens before anything that
was changed in this release. I had assumed, possibly naively, that git
would lock against itself. Maybe not with multiple workdirs. This only
matters if you're using git_use_workdirs, as you are, since otherwise
the git repos are totally independent, and run_build is definitely
locked against itself on a given branch. I'll look at adding a global
wait lock, held just while git checkout is running, to cover this case.
In normal operation we don't expect this to occur, since run_branches.pl
runs branches one at a time, so I don't think we need to put out an
emergency fix, but you've uncovered a corner case that all my testing
has missed.


Thanks for the report.

cheers

andrew





Re: [HACKERS] WIP: Make timestamptz_out less slow.

2015-11-22 Thread David Rowley
On 12 November 2015 at 13:44, Peter Geoghegan  wrote:

> On Wed, Nov 4, 2015 at 6:30 PM, David Rowley
>  wrote:
> >> Have you thought about *just* having an int64 pg_ltostr()? Are you
> >> aware of an appreciable overhead from having int32 callers just rely
> >> on a promotion to int64?
> >
> > I'd not thought of that. It would certainly add unnecessary overhead
> > on 32bit machines.
> > To be honest, I don't really like the idea of making fsec_t an int64;
> > there are just too many places around the code base that need to be
> > updated. Plus with float datetimes, we'd need to multiply the
> > fractional seconds by 10, which is based on the MAX_TIME_PRECISION
> > setting as defined when HAVE_INT64_TIMESTAMP is undefined.
>
> OK.
>
> >> I would just use a period/full stop here:
> >>
> >> + * Note str is not NUL terminated, callers are responsible for NUL
> >> terminating
> >> + * str themselves.
> >>
> >
> > Do you mean change the comment to "Note str is not NUL terminated.
> > Callers are responsible for NUL terminating str themselves." ?
>
> Yes.
>
> > Yes, that's a bit ugly. How about just Assert(padding > 0) just like
> > above? That gets rid of one Note:
> > The optimized part is probably not that important anyway. I just
> > mentioned it to try and reduce the chances of someone using it when
> > the padding is always too small.
>
> Cool.
>
> >> Maybe it's better to just special-case INT_MIN and then do an Abs()?
> >> Actually, would any likely caller of pg_ltostr_zeropad() actually care
> >> if INT_MIN was a case that you cannot handle, and assert against? You
> >> could perhaps reasonably make it the caller's problem. Haven't made up
> >> my mind if your timestamp_delta.patch is better than that; cannot
> >> immediately see a problem with putting it on the caller.
> >>
> >
> > I can make that case work the same as pg_ltoa() for pg_ltostr(); that's
> > not a problem, I did that in the delta patch. With pg_ltostr_zeropad()
> > I'd need to add some corner-case code to prepend the correct number of
> > zeros onto -2147483648. This is likely not going to look very nice, and
> > it's probably going to end up about the same number of lines of code as
> > what's in the patch already. If you think it's better for performance,
> > I've already done tests that show it's not. I really don't think it'll
> > look very nice either. I quite like my negative number handling, since
> > it's code that would be exercised 50% of the time rather than 1/2^32 of
> > the time, assuming all numbers have an equal chance of being passed.
>
> Fair enough.
>
>
Many thanks for spending time on this Peter.


> >> More generally, maybe routines like EncodeDateTime() ought now to use
> >> the Abs() macro.
> >
> > You might be right about that. Then we can just have pg_ltostr() do
> > unsigned only. The reason I didn't do that is because I was trying to
> > minimise what I was changing in the patch, and I thought it was less
> > likely to make it if I started adding calls to Abs() around the code
> > base.
>
> I am very familiar with being conflicted about changing things beyond
> what is actually strictly necessary. It's still something I deal with
> a lot. One case where I do think I'm right is in recommending that we
> centrally document the many exceptions to the 1900-based behavior of
> struct pg_tm. As I said, we should not continue to inconsistently note
> the exceptions locally, even if your patch doesn't make it any worse.
>
>
Just to confirm, you mean this comment?

int tm_year; /* relative to 1900 */

Please let me know if you disagree, but I'm not sure it's the business of
this patch to fix that. If it's wrong now, then it was wrong before my
patch, so it should be a separate patch which fixes it.

At this stage I don't quite know what the fix should be, whether it's
doing tm->tm_year -= 1900; in timestamp2tm() after the j2date() call, or
just removing the misleading comment.

I also don't quite understand why we bother having it relative to 1900 and
not just base it on 0.

Is there anything else you see that's pending before it can be marked as
ready for committer?
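
On the INT_MIN point discussed above, here's a standalone sketch of the
negative-number approach: build digits from negative remainders so that
-value is never computed, which makes INT_MIN safe (illustrative only,
not the patch):

#include <limits.h>
#include <stdio.h>

/*
 * Zero-pad "value" into str as exactly "padding" characters plus a
 * terminating NUL.  Assumes padding is large enough for the digits and
 * any sign; in C99, value % 10 carries the sign of value.
 */
static char *
ltostr_zeropad(char *str, int value, int padding)
{
	int			i = padding;
	int			neg = (value < 0);

	str[i] = '\0';
	do
	{
		int			rem = value % 10;

		str[--i] = '0' + (neg ? -rem : rem);
		value /= 10;
	} while (value != 0);

	while (i > (neg ? 1 : 0))
		str[--i] = '0';
	if (neg)
		str[0] = '-';
	return str + padding;
}

int
main(void)
{
	char		buf[16];

	ltostr_zeropad(buf, 42, 4);
	printf("%s\n", buf);		/* 0042 */
	ltostr_zeropad(buf, INT_MIN, 11);
	printf("%s\n", buf);		/* -2147483648 */
	return 0;
}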

--
 David Rowley   http://www.2ndQuadrant.com/

 PostgreSQL Development, 24x7 Support, Training & Services