[HACKERS] spoonbill - rare buildfarm failures in test_shm_mq_pipelined()

2016-10-25 Thread Stefan Kaltenbrunner
Spoonbill very rarely (i.e. once every few months) fails like this:

[2016-08-29 18:15:35.273 CEST:57c45f88.52d4:8] LOG:  statement: SELECT
test_shm_mq_pipelined(16384, (select
string_agg(chr(32+(random()*95)::int), '') from
generate_series(1,27)), 200, 3);
[2016-08-29 18:15:35.282 CEST:57c45f88.52d4:9] ERROR:  floating-point
exception
[2016-08-29 18:15:35.282 CEST:57c45f88.52d4:10] DETAIL:  An invalid
floating-point operation was signaled. This probably means an
out-of-range result or an invalid operation, such as division by zero.
[2016-08-29 18:15:35.282 CEST:57c45f88.52d4:11] STATEMENT:  SELECT
test_shm_mq_pipelined(16384, (select
string_agg(chr(32+(random()*95)::int), '') from
generate_series(1,27)), 200, 3);


Some examples:


http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=spoonbill=2016-10-22%2023%3A00%3A06
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=spoonbill=2016-08-29%2011%3A00%3A06
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=spoonbill=2016-06-21%2023%3A00%3A06


Any ideas on what is causing this? IIRC we had issues with that specific
test on spoonbill (and other SPARC-based boxes) before, so maybe we
failed to fix the issue completely...




Stefan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: About CMake v2

2016-08-18 Thread Stefan Kaltenbrunner
On 08/18/2016 09:52 PM, Alvaro Herrera wrote:
> Stefan Kaltenbrunner wrote:
>> On 08/18/2016 09:30 PM, Christian Convey wrote:
> 
>>> Yury: Would it make sense to add a call to "cmake_minimum_required" in
>>> one or more of your CMakeLists.txt files?
>>
>> it would make sense nevertheless, but I don't think that 2.8.11 is old
>> enough - looking at the release information and the feature compatibility
>> matrix, it seems we should rather aim at something like 2.8.0 or 2.8.3...
> 
> Last year I checked versions installable in Debian:
> https://www.postgresql.org/message-id/20150829213631.GI2912@alvherre.pgsql
> From that I would say that the maximum minimum is 2.8.2.  Not sure if
> there's any platform where 2.8.0 (released in 2009) or older would be
> necessary.

well, we have for example a NetBSD 5.1 box (coypu) on the buildfarm that
has a software stack that is basically 2008/2009ish...
So 2.8.0-2.8.3 still seems like a realistic target to me.



Stefan




Re: [HACKERS] WIP: About CMake v2

2016-08-18 Thread Stefan Kaltenbrunner
On 08/18/2016 09:42 PM, Christian Convey wrote:
> Hi Tom,
> 
> Thanks for that information.
> 
> Is there some document I can read that explains which platform
> versions (e.g., OpenBSD 5.3) are considered strongly supported?

well, I'm not sure we have a very clear document on that - I would say the
buildfarm is the most authoritative answer. So I think skimming the
buildfarm for the oldest and strangest platforms would be a good start.

> 
> I ask because I'm curious if/how someone in Yury's situation could
> predict which minimum version of CMake must be supported in order for
> his patch to be accepted.  (And if he accepts my offer to pitch in,
> I'll actually need that particular detail.)

well, I personally think the bar to meet would be that all the systems
on the buildfarm that can build -HEAD at the time the patch is proposed
for commit should be able to build using the new system with whatever
CMake version is available on them by default (if one is available at all).


Stefan




Re: [HACKERS] WIP: About CMake v2

2016-08-18 Thread Stefan Kaltenbrunner
On 08/18/2016 09:30 PM, Christian Convey wrote:
> Hi Karl,
> 
> I'll need to let Yury answer your original question regarding the best
> way to report CMake-related bugs.
> 
> Regarding the errors you're getting...  I just looked at CMake's
> online documentation regarding your "target_compile_definitions"
> error.
> 
> From what I can tell, the "target_compile_definition" property was
> introduced in CMake 2.8.11.  It sounds like your version of CMake is
> just a little too old.

Well - "too old" is a relative term - CMake 2.8.10 was only released in
October 2012 and CMake 2.8.11 in May 2013, so it is not even 4 years old.
The oldest currently supported (though for not much longer) PostgreSQL
release, 9.1, was released in September 2011, and 9.2 was also released
before October 2012.
So while CMake compatibility might only make it for v10, I don't think we
can depend on a bleeding-edge version like that for our build tools...


> 
> Regarding how one can know the required CMake version: My modus
> operandi for CMake projects in general is (1) read the project's
> how-to-build docs, and if that's not helpful, (2) hope that the
> project's CMake files contain a "cmake_minimum_required" call to give
> me a clear error message.  I didn't find any such indication in Yuri's
> files, although perhaps I missed it.
> 
> 
> Yury: Would it make sense to add a call to "cmake_minimum_required" in
> one or more of your CMakeLists.txt files?

it would make sense nevertheless, but I don't think that 2.8.11 is old
enough - looking at the release information and the feature compatibility
matrix, it seems we should rather aim at something like 2.8.0 or 2.8.3...
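For illustration, the guard being discussed is a one-liner at the top of the top-level CMakeLists.txt. The version number below (2.8.0) is just a value floated in this thread, not a settled decision:

```cmake
# Declaring a floor makes an old CMake fail with a clear message such as
# "CMake 2.8.0 or higher is required" instead of an obscure parse error
# later on.  2.8.0 here mirrors the number discussed above; it is not a
# committed decision.
cmake_minimum_required(VERSION 2.8.0)

project(postgres C)
```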


Stefan




Re: [HACKERS] WIP: About CMake v2

2016-08-18 Thread Stefan Kaltenbrunner
On 08/18/2016 08:57 PM, Christian Convey wrote:
> Hi Stefan,
> 
> I think I've seen similar errors when a project's CMake files assumed
> a newer version of CMake than the one being run.
> 
> Which version of CMake gave you those errors?  (Sorry if you provided
> that detail and I'm just missing it.)


% cmake --version
cmake version 2.8.10.2

a quick look in the docs does not seem to reveal any kind of "minimum"
CMake version required - and the above is what is packaged as part of
OpenBSD 5.3 (which is outdated and unsupported upstream, but currently
builds all PostgreSQL versions including -HEAD perfectly fine and has
served as a buildfarm member for years)



Stefan




Re: [HACKERS] WIP: About CMake v2

2016-08-18 Thread Stefan Kaltenbrunner
On 06/29/2016 06:23 PM, Yury Zhuravlev wrote:
> Hello Hackers.
> 
> I decided to give an update on the current state of the project:
> 1. Merged with the 9.6 master.
> 2. plpython2, plpython3, plperl, pltcl, plsql all work correctly (all
> tests pass).
> 3. Work is done for all contrib modules.
> 4. You can use gettext; .po->.mo files will be converted by CMake.
> 5. All tests pass under some Linux, FreeBSD, Solaris 10 (on SPARC), and
> Windows MSVC 2015. MacOS X should not be big trouble either.
> 6. A prototype for PGXS (with MSVC support) is done.
> I think this is very big progress, but I came across one problem:
> I do not have access to many OSes and platforms. Each platform needs
> tests and small fixes; I can't develop and give guarantees without them.
> 
> I think this is very important work which will make further support of
> Postgres easier, but I cannot do everything myself. It's physically
> impossible.
> 
> I think without community support I can't do significantly more.
> 
> Current version you can get here:
> https://github.com/stalkerg/postgres_cmake

hmm, how do you actually want to receive reports on how it works?

I just played with it on spoonbill (OpenBSD 5.3/sparc64) and got this:

CMake Error at CMakeLists.txt:1250 (file):
  file does not recognize sub-command GENERATE


CMake Error at src/port/CMakeLists.txt:156 (target_compile_definitions):
  Unknown CMake command "target_compile_definitions".


-- Configuring incomplete, errors occurred!


there is also a ton of stuff like:


CMake Error: Internal CMake error, TryCompile generation of cmake failed
-- Looking for opterr - not found
-- Looking for optreset
CMake Error at CMakeLists.txt:10 (ADD_EXECUTABLE):
  Target "cmTryCompileExec3458204847" links to item " m" which has leading or
  trailing whitespace.  This is now an error according to policy CMP0004.


CMake Error: Internal CMake error, TryCompile generation of cmake failed
-- Looking for optreset - not found
-- Looking for fseeko
CMake Error at CMakeLists.txt:10 (ADD_EXECUTABLE):
  Target "cmTryCompileExec2628321539" links to item " m" which has leading or
  trailing whitespace.  This is now an error according to policy CMP0004.


CMake Error: Internal CMake error, TryCompile generation of cmake failed


and I have no idea whether these are an actual problem or just
"configure" noise
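For what it's worth, CMP0004 complaints about an item " m" usually mean some configure check handed a library name with stray whitespace (here apparently " m" for libm) to the generated try-compile projects. A hedged sketch of a defensive cleanup, assuming the whitespace really comes in via CMAKE_REQUIRED_LIBRARIES:

```cmake
# Strip stray whitespace from each entry in CMAKE_REQUIRED_LIBRARIES so
# the generated try-compile projects do not trip policy CMP0004.
# This assumes the offending " m" entry really originates here.
set(_cleaned "")
foreach(_lib ${CMAKE_REQUIRED_LIBRARIES})
  string(STRIP "${_lib}" _lib)
  list(APPEND _cleaned "${_lib}")
endforeach()
set(CMAKE_REQUIRED_LIBRARIES ${_cleaned})
```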





Stefan




Re: [HACKERS] HEADSUP: gitmaster.postgresql.org - upgrade NOW

2016-01-28 Thread Stefan Kaltenbrunner
On 01/28/2016 05:24 PM, Robert Haas wrote:
> On Thu, Jan 28, 2016 at 10:25 AM, Stefan Kaltenbrunner
> <ste...@kaltenbrunner.cc> wrote:
>> On 01/28/2016 04:00 PM, Stefan Kaltenbrunner wrote:
>>> Per discussion in the afternoon break at  FOSDEM/PGDay 2016 Developer
>>> Meeting we are going to upgrade gemulon.postgresql.org aka "gitmaster"
>>> today (the upgrade was originally scheduled end of last year but due to
>>> release and other constraints was never executed).
>>> The upgrade is going to start in a few minutes - will send a notice
>>> after it is done.
>>
>> upgrade completed - everything should be up again!
> 
> That was quick!

if my math is correct, that was the 49th machine (with some more to come)
upgraded from wheezy to jessie - I think we have some routine by now :)

> 
> Stefan, let me just mention how much I appreciate the work you and the
> entire infrastructure team do to keep our project running.  I am less
> aware of what all that work is than some people, I know, but I know
> that it really makes a difference and certainly I think about how nice
> it is to be able to push a commit and know that somebody else has
> taken responsibility for making sure it has someplace to which to get
> pushed.
> 
> So again: thanks.

on behalf of the entire team - thanks for the appreciation; we are doing
our best not to get noticed (because if we were, something would be
broken) :)


Stefan




Re: [HACKERS] HEADSUP: gitmaster.postgresql.org - upgrade NOW

2016-01-28 Thread Stefan Kaltenbrunner
On 01/28/2016 04:00 PM, Stefan Kaltenbrunner wrote:
> Hi all!
> 
> Per discussion in the afternoon break at  FOSDEM/PGDay 2016 Developer
> Meeting we are going to upgrade gemulon.postgresql.org aka "gitmaster"
> today (the upgrade was originally scheduled end of last year but due to
> release and other constraints was never executed).
> The upgrade is going to start in a few minutes - will send a notice
> after it is done.


upgrade completed - everything should be up again!



Stefan




[HACKERS] HEADSUP: gitmaster.postgresql.org - upgrade NOW

2016-01-28 Thread Stefan Kaltenbrunner
Hi all!

Per discussion in the afternoon break at  FOSDEM/PGDay 2016 Developer
Meeting we are going to upgrade gemulon.postgresql.org aka "gitmaster"
today (the upgrade was originally scheduled end of last year but due to
release and other constraints was never executed).
The upgrade is going to start in a few minutes - will send a notice
after it is done.



Stefan




Re: [HACKERS] New email address

2015-11-26 Thread Stefan Kaltenbrunner
On 11/26/2015 09:10 PM, Tom Lane wrote:
> Stefan Kaltenbrunner <ste...@kaltenbrunner.cc> writes:
>> that seems entirely doable with our current infrastructure (and even
>> with minimal-to-no hackery on mj2) - but it still carries the "changes
>> From:" issue :/
> 
> Yeah.  What do you think of the other approach of trying to preserve
> validity of the incoming DKIM-Signature (in particular, by not changing
> the Subject or message body)?

well, not changing the subject seems like something we could do without
fuss - not changing the body would likely mean we (again) get a number of
people asking "how do I unsubscribe", but maybe we will have to live with
that.

As for google/gmail - it seems they are indeed moving towards p=reject
based on:

https://dmarc.org/2015/10/global-mailbox-providers-deploying-dmarc-to-protect-users/
https://wordtothewise.com/2015/10/dmarc-news-gmail-preject-and-arc/


so we have to do "something" anyway (before June 2016) - I have not
actually studied the IETF drafts referenced in the second post yet, so
maybe there is something in there to help with our use case...


Stefan




Re: [HACKERS] New email address

2015-11-26 Thread Stefan Kaltenbrunner
On 11/24/2015 11:03 PM, José Luis Tallón wrote:
> On 11/24/2015 07:55 PM, Tom Lane wrote:
>> [snip]
>> The clearly critical thing, though, is that when forwarding a message
>> from
>> a person at a DMARC-using domain, we would have to replace the From: line
>> with something @postgresql.org.  This is what gets it out from under the
>> original domain's DMARC policy.
> 
> One possibility that comes to mind:
> 
> - Remove the sender's DMARC headers+signature **after thoroughly
> checking it** (to minimize the amount of UBE/UCE/junk going in)
> - Replace the sender's (i.e. 'From:' header) with
> list-sender+munched-em...@postgresql.org (VERP-ified address)
> 
> - Add the required headers, footers, change the subject line, etc
> 
> - DKIM-sign the resulting message with postgresql.org's keys before
> sending it

that seems entirely doable with our current infrastructure (and even
with minimal-to-no hackery on mj2) - but it still carries the "changes
From:" issue :/


>> [snip]
>>
>> If Rudy's right that Gmail is likely to start using p=reject DMARC
>> policy,
>> we are going to have to do something about this before that; we have too
>> many people on gmail.  I'm not exactly in love with replacing From:
>> headers but there may be little alternative.  We could do something like
>> From: Persons Real Name 
>> Reply-To: ...
>> so that at least the person's name would still be readable in MUA
>> displays.
> Yup
> 
>> We'd have to figure out whether we want the Reply-To: to be the original
>> author or the list; as I recall, neither of those are fully satisfactory.
> Or just strip it, though that trump the sender's explicit preference
> (expressed by setting the header)
> 
> 
> I might be able to help a bit with implementation if needed.

the MTA side of things is fairly easy/straightforward (including using
Exim for some of the heavy lifting, like doing the inbound DKIM checking
and "hinting" the downstream ML boxes with the results) - however, the
main mailing list infrastructure is still mj2, and that is aeons-old Perl
- still interested in helping with the implementation? ;)


Stefan




Re: [HACKERS] New email address

2015-11-24 Thread Stefan Kaltenbrunner
On 11/24/2015 07:55 PM, Tom Lane wrote:
> I wrote:
>> "Rudolph T. Maceyko"  writes:
>>> The basic changes since Yahoo implemented their p=reject DMARC policy
>>> last year (and others followed) were:
>>> * make NO CHANGES to the body of the message--no headers, footers, etc. 
>>> * make NO CHANGES to the subject header of the message--no more
>>> "[Highland Park]" 
>>> * when mail comes to the list from a domain that uses a p=reject DMARC
>>> policy, CHANGE THE FROM HEADER so that it comes from the list.
> 
> After further off-list discussion with Rudy, I'm not entirely convinced
> by his reasoning for dropping Subject-munging and footer-addition; it
> seems like that might be at least in part a limitation of his
> mailman-based infrastructure.
> 
> The clearly critical thing, though, is that when forwarding a message from
> a person at a DMARC-using domain, we would have to replace the From: line
> with something @postgresql.org.  This is what gets it out from under the
> original domain's DMARC policy.

exactly

> 
> The other stuff Rudy did, including adding the list's own DKIM-Signatures
> and publishing DMARC and SPF policy for the list domain, is not
> technically necessary (yet) but it makes the list traffic less likely to
> get tagged as spam by antispam heuristics.  And, as he noted, there are
> feedback loops that mean once some traffic gets tagged as spam it becomes
> more likely that future traffic will be.

well, the purpose of feedback loops is for the receiving ISP to feed back
information about (mostly) user-tagged "I don't want this email" messages
(which is what they work on - not actual heuristics triggering). From
historical experience that works very poorly for a mailing list like
ours, because in almost all cases subscribers simply use the "this is
spam" feature to declare an email "unwanted", as a shortcut to actually
unsubscribing (without thinking about any further impact).


> 
> If Rudy's right that Gmail is likely to start using p=reject DMARC policy,
> we are going to have to do something about this before that; we have too
> many people on gmail.  I'm not exactly in love with replacing From:
> headers but there may be little alternative.  We could do something like
>   From: Persons Real Name 
>   Reply-To: ...
> so that at least the person's name would still be readable in MUA
> displays.
> 
> We'd have to figure out whether we want the Reply-To: to be the original
> author or the list; as I recall, neither of those are fully satisfactory.

well, this is basically what it boils down to - what we will absolutely
have to do is replace "From:" (assuming the Gmail rumour is true, of
which I'm not entirely convinced) - but what are we prepared to replace
the current system with, and do we accept that the lists are going to
work differently?
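For readers following along, the From:-rewriting being discussed amounts to something like the following header transformation (the names and addresses are illustrative only, not the actual scheme adopted):

```
Original:   From: Some Person <some.person@example.com>
Rewritten:  From: Some Person via pgsql-hackers <pgsql-hackers@postgresql.org>
            Reply-To: some.person@example.com
```

The rewritten From: is what takes the message out from under the originating domain's DMARC policy, while Reply-To: preserves a path back to the author.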


Stefan




Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2015-10-01 Thread Stefan Kaltenbrunner
On 10/01/2015 05:10 PM, Andres Freund wrote:
> On 2015-10-01 11:07:12 -0400, Tom Lane wrote:
>> Andres Freund  writes:
>>> On 2015-10-01 16:48:32 +0200, Magnus Hagander wrote:
 That would require people to actually use the bug form to submit the
 initial thread as well of course - which most developers don't do
 themselves today. But there is in itself nothing that prevents them from
 doing that, of course - other than a Small Amount Of Extra Work.
>>
>>> It'd be cool if there were a newbug@ or similar mail address that
>>> automatically also posted to -bugs or so.
>>
>> I believe that's spelled pgsql-b...@postgresql.org.
> 
> The point is that newbug would automatically assign a bug id, without
> going through the form.

if we only want that, we can trivially implement it on the mail server
side by asking the backend database sequence for a bug id and rewriting
the subject...
But given that debbugs is on the radar, I'm not sure we need it...


Stefan




Re: [HACKERS] upcoming infrastructure changes/OS upgrades on *.postgresql.org

2015-09-25 Thread Stefan Kaltenbrunner
On 09/25/2015 08:53 PM, Andres Freund wrote:
> On 2015-09-25 20:47:21 +0200, Stefan Kaltenbrunner wrote:
>> yeah, the point about 9.0.x is a very good one - so I think we will
>> target mid/end of October for borka, so we get a bit of time to deal with
>> any fallout from the release (if needed).
>>
>> We could target the same timeframe for gemulon, unless people think that
>> doing it this(!) weekend would work as well (as in would be far enough
>> out from the upcoming releases).
> 
> Seems better to do both after the release. It's not like wheezy support
> is running out tomorrow.

yeah - thinking about it, I agree on doing both afterwards. While wheezy
still has a lot of life left, we also have almost 20 boxes left to
upgrade, which takes time (~40 are already done, but the ones left are
more complex and have more dependencies), so we need to keep up the
pace ;)




Stefan




[HACKERS] upcoming infrastructure changes/OS upgrades on *.postgresql.org

2015-09-25 Thread Stefan Kaltenbrunner

Hi all!

As part of our regular maintenance schedule, and in our continuous effort
to run our systems on current and fully supported operating systems, the
PostgreSQL sysadmin team has started upgrading the OS on our
infrastructure hosts from Debian Wheezy (aka Debian 7.x) to Debian Jessie
(Debian 8.x).

This mail is mostly to announce our intent to upgrade the following two
systems in the foreseeable future (no date set yet, though). These might
directly affect -hackers/-committers or a subset of those (the release
team), so let us know whether you want us to take special measures ahead
of time to ensure minimal disruption to the workflow:


* gemulon.postgresql.org - gitmaster.postgresql.org, the master git
repository (we will likely coordinate this internally so that somebody on
the team with a commit bit can test after the upgrade)

* borka.postgresql.org - the official tarball and docs build server; the
upgrade will bring the toolchain on that box to the respective versions
of what is in Jessie. If people think it is necessary to (double-)check
the effects beforehand, we would like to know so we can coordinate some
testing.




Stefan





Re: [HACKERS] upcoming infrastructure changes/OS upgrades on *.postgresql.org

2015-09-25 Thread Stefan Kaltenbrunner
On 09/25/2015 08:30 PM, Tom Lane wrote:
> Stefan Kaltenbrunner <ste...@kaltenbrunner.cc> writes:
>> * borka.postgresql.org - the official tarball and docs build server; the
>> upgrade will bring the toolchain on that box to the respective
>> versions of what is in Jessie. If people think it is necessary to
>> (double-)check the effects beforehand, we would like to know so we can
>> coordinate some testing.
> 
> I think it's probably sufficient to run a test tarball build after
> the upgrade; I can do so if you notify me when the upgrade's done.
> 
> As Andres says nearby, it would be good if this did *not* happen
> during the first week of October, since we have releases scheduled
> then.  Actually, since that will be the final 9.0.x release, I'd
> vote for postponing the borka upgrade till after that.  That's
> one less old branch to worry about new-toolchain compatibility for.
> And if there is some subtle problem in the tarballs that we find later,
> not having to reissue the last 9.0.x release would be a good thing.

yeah, the point about 9.0.x is a very good one - so I think we will
target mid/end of October for borka, so we get a bit of time to deal with
any fallout from the release (if needed).

We could target the same timeframe for gemulon, unless people think that
doing it this(!) weekend would work as well (as in would be far enough
out from the upcoming releases).


Stefan




Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2015-09-24 Thread Stefan Kaltenbrunner
On 09/24/2015 07:03 PM, Josh Berkus wrote:
> On 09/23/2015 10:25 PM, Thomas Munro wrote:
>> On Thu, Sep 24, 2015 at 1:31 PM, Joe Conway  wrote:
>>> On 09/23/2015 05:21 PM, Thomas Munro wrote:
 Do you think it would make any sense to consider evolving what we have
 already?  At the moment, we have a bug form, and when you submit it it
 does this (if I'm looking at the right thing, please correct me if I'm
 not):
> 
> I know we're big on reinventing the wheel here, but it would really be a
> better idea to use an established product than starting over from
> scratch. Writing a bug tracker is a lot of work and maintenance.
> 
>> The two most common interactions could go something like this:
>>
>> 1.  User enters bug report via form, creating an issue in NEW state
>> and creating a pgsql-bugs thread.  Someone responds by email that this
>> is expected behaviour, not a bug, not worth fixing or not a Postgres
>> issue etc using special trigger words.  The state is automatically
>> switched to WORKS_AS_DESIGNED or WONT_FIX.  No need to touch the web
>> interface: the only change from today's workflow is awareness of the
>> right wording to trigger the state change.
>>
>> 2.  User enters bug report via form, creating issue #1234 in NEW
>> state.   Someone responds by email to acknowledge that that may indeed
>> be an issue, and any response to an issue in NEW state that doesn't
>> reject it switches it to UNDER_DISCUSSION.  Maybe if a commitfest item
>> references the same thread (or somehow references the issue number?)
>> its state is changed to IN_COMMITFEST, or maybe as you say there could
>> be a way to generate the commitfest item from the issue, not sure
>> about that.  Eventually a commit log message says "Fixes bug #1234"
>> and the state automatically goes to FIXED.
> 
> I don't know debbugs, but I know that it would be possible to program RT
> to do all of the above, except add the item to the commitfest.

well, minus the fact that the commitfest process looked
different/non-existent back in 2007/2008, this is basically how the
BZ-based PoC I did back then worked: it hooked into the bug tracking form
to let BZ "learn" about an issue/bug/thread and keep tracking it
automatically afterwards. You could simply send it commands as part of
the mail, or click in the GUI (or do nothing at all - though it would
close the bug if it found the bug id in a commit).

Even adding something to the commitfest should be fairly easy to do in
most tools, because they all have hooks to send stuff via email, Twitter,
HipChat, IRC or whatnot.


Stefan




Re: [HACKERS] Proposing COPY .. WITH PERMISSIVE

2015-09-02 Thread Stefan Kaltenbrunner
On 09/02/2015 10:10 PM, dinesh kumar wrote:
> On Tue, Sep 1, 2015 at 10:58 PM, Stefan Kaltenbrunner
> <ste...@kaltenbrunner.cc> wrote:
> 
> On 07/25/2015 03:38 AM, dinesh kumar wrote:
> >
> >
> > On Fri, Jul 24, 2015 at 10:22 AM, Robert Haas
> > <robertmh...@gmail.com> wrote:
> >
> > On Thu, Jul 23, 2015 at 8:15 PM, dinesh kumar
> > <dineshkuma...@gmail.com> wrote:
> > > On Thu, Jul 23, 2015 at 9:21 AM, Robert Haas
> > <robertmh...@gmail.com> wrote:
> > >>
> > >> On Thu, Jul 23, 2015 at 12:19 PM, dinesh kumar
> > <dineshkuma...@gmail.com>
> > >> wrote:
> > >> > Sorry for my  unclear description about the proposal.
> > >> >
> > >> > "WITH PERMISSIVE" is equal to our existing behavior. That is,
> > >> > chmod=644 on the created files.
> > >> >
> > >> > If User don't specify "PERMISSIVE" as an option, then the chmod=600
> > >> > on created files. In this way, we can restrict the other users from
> > >> > reading these files.
> > >>
> > >> There might be some benefit in allowing the user to choose the
> > >> permissions, but (1) I doubt we want to change the default behavior
> > >> and (2) providing only two options doesn't seem flexible enough.
> > >>
> > >
> > > Thanks for your inputs Robert.
> > >
> > > 1) IMO, we will keep the exiting behavior as it is.
> > >
> > > 2) As the actual proposal talks about the permissions of group/others.
> > > So, we can add few options as below to the WITH clause
> > >
> > > COPY
> > > ..
> > > ..
> > > WITH
> > > [
> > > NO
> > > (READ,WRITE)
> > > PERMISSION TO
> > > (GROUP,OTHERS)
> > > ]
> >
> > If we're going to do anything here, it should use COPY's
> > extensible-options syntax, I think.
> >
> >
> > Thanks Robert. Let me send a patch for this.
> 
> 
> how are you going to handle Windows or Unix ACLs here?
> The Windows permission model is quite different from and more powerful
> than (non-ACL-based) Unix in general; handling this in a flexible way
> might soon get very complicated for limited gain...
> 
> 
> Hi Stefan,
> 
> I had the same questions too. But I believe our initdb handles these
> cases after creating the data cluster, doesn't it?

maybe - but having a fixed "default" is very different from baking a
classic Unix permission concept of user/group/others into actual DDL or
into a COPY option. The proposed syntax might make some sense to an admin
used to a Unix-style system, but it is likely utterly incomprehensible to
somebody who is used to (Windows-style) ACLs.

I don't have a good answer on what to do instead atm, but I don't think
we should embed traditional/historical Unix permission models in our
grammar unless really, really needed...
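To make the objection concrete, here is what a Unix-centric option could look like in COPY's extensible-options syntax. This is purely hypothetical - no such permissions option exists in PostgreSQL's COPY - and the chmod-style value shows exactly the user/group/others assumption that has no clean Windows ACL equivalent:

```sql
-- Hypothetical syntax only: the 'permissions' option does not exist.
-- A chmod-style value bakes the Unix user/group/others model into the
-- grammar, which maps poorly onto Windows-style ACLs.
COPY mytable TO '/tmp/out.csv' WITH (FORMAT csv, PERMISSIONS 'u=rw,go=');
```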


Stefan




Re: [HACKERS] Proposing COPY .. WITH PERMISSIVE

2015-09-01 Thread Stefan Kaltenbrunner
On 07/25/2015 03:38 AM, dinesh kumar wrote:
> 
> 
> On Fri, Jul 24, 2015 at 10:22 AM, Robert Haas wrote:
> 
> On Thu, Jul 23, 2015 at 8:15 PM, dinesh kumar
> wrote:
> > On Thu, Jul 23, 2015 at 9:21 AM, Robert Haas
> > wrote:
> >>
> >> On Thu, Jul 23, 2015 at 12:19 PM, dinesh kumar wrote:
> >> > Sorry for my unclear description of the proposal.
> >> >
> >> > "WITH PERMISSIVE" is equal to our existing behavior. That is, 
> chmod=644
> >> > on
> >> > the created files.
> >> >
> >> > If the user doesn't specify "PERMISSIVE" as an option, then chmod=600
> >> > is applied to the created files. In this way, we can restrict other
> >> > users from reading these files.
> >>
> >> There might be some benefit in allowing the user to choose the
> >> permissions, but (1) I doubt we want to change the default behavior
> >> and (2) providing only two options doesn't seem flexible enough.
> >>
> >
> > Thanks for your inputs Robert.
> >
> > 1) IMO, we will keep the existing behavior as it is.
> >
> > 2) As the actual proposal talks about the permissions of group/others,
> > we can add a few options as below to the WITH clause
> >
> > COPY
> > ..
> > ..
> > WITH
> > [
> > NO
> > (READ,WRITE)
> > PERMISSION TO
> > (GROUP,OTHERS)
> > ]
> 
> If we're going to do anything here, it should use COPY's
> extensible-options syntax, I think.
> 
> 
> Thanks Robert. Let me send a patch for this.


How are you going to handle Windows or Unix ACLs here?
The Windows permission model is quite different from, and more powerful
than, (non-ACL-based) Unix permissions in general; handling this in a
flexible way might soon get very complicated for limited gain...
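The default-vs-PERMISSIVE behaviour under discussion can be sketched in a few lines. This is a hypothetical illustration in Python, not PostgreSQL code; the option name and the function are assumptions taken from the thread:

```python
import os

def copy_to_file(path, data, permissive=False):
    # Hypothetical sketch of the proposed COPY behaviour: by default create
    # the output file mode 0600 (owner-only), and only with the PERMISSIVE
    # option keep the historical world-readable 0644.
    mode = 0o644 if permissive else 0o600
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, mode)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
```

As noted above, this user/group/others model maps poorly onto Windows-style ACLs, which is part of the objection to baking it into the grammar.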



Stefan




Re: [HACKERS] (full) Memory context dump considered harmful

2015-08-24 Thread Stefan Kaltenbrunner

On 08/22/2015 06:25 AM, Tomas Vondra wrote:

On 08/21/2015 08:37 PM, Tom Lane wrote:

Tomas Vondra tomas.von...@2ndquadrant.com writes:


I also don't think logging just a subset of the stats is a lost cause.
Sure, we can't know which of the lines are important, but for example
logging just the top-level contexts with a summary of the child contexts
would be OK.


I thought a bit more about this.  Generally, what you want to know about
a given situation is which contexts have a whole lot of allocations
and/or a whole lot of child contexts.  What you suggest above won't work
very well if the problem is buried more than about two levels down in
the context tree.  But suppose we add a parameter to memory context stats
collection that is the maximum number of child contexts to print *per
parent context*.  If there are more than that, summarize the rest as per
your suggestion.  So any given recursion level might look like

  FooContext: m total in n blocks ...
ChildContext1: m total in n blocks ...
  possible grandchildren...
ChildContext2: m total in n blocks ...
  possible grandchildren...
ChildContext3: m total in n blocks ...
  possible grandchildren...
k more child contexts containing m total in n blocks ...

This would require a fixed amount of extra state per recursion level,
so it could be done with a few more parameters/local variables in
MemoryContextStats and no need to risk a malloc().

The case where you would lose important data is where the serious
bloat is in some specific child context that is after the first N
children of its direct parent. I don't believe I've ever seen a case
where that was critical information as long as N isn't too tiny.


Couldn't we make it a bit smarter to handle even cases like this? For
example we might first count/sum the child contexts, and then print
either all child contexts (if there are only a few of them) or just
those that exceed, say, 5% of the total or 2x the average.


While having that kind of logic would be nice, I don't think it is
required. For the case I had, the proposed patch from Tom seems
perfectly fine to me - not sure we would want a GUC. From a DBA
perspective I don't think anybody needs millions of lines of almost
duplicated memory context dumps, and I'm not sure we need them from a
developer perspective either (other than the information that there were
more than those printed).
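Tom's capped-dump proposal quoted above can be sketched as follows. This is Python pseudocode for illustration only; the real change would live in C in MemoryContextStats, and the dictionary shape is an assumption:

```python
def context_stats_lines(ctx, max_children=3, depth=0):
    # Sketch of the capped dump: show at most max_children child contexts
    # per parent and summarize the remainder on a single line, recursing
    # the same way at every level. Only a fixed amount of extra state per
    # recursion level is needed, so no extra malloc() would be required.
    indent = "  " * depth
    lines = [f"{indent}{ctx['name']}: {ctx['total']} total"]
    children = ctx.get("children", [])
    for child in children[:max_children]:
        lines.extend(context_stats_lines(child, max_children, depth + 1))
    rest = children[max_children:]
    if rest:
        skipped = sum(c["total"] for c in rest)
        lines.append(f"{indent}  {len(rest)} more child contexts containing {skipped} total")
    return lines

# Example: a parent with five children, capped at three.
tree = {"name": "CacheMemoryContext", "total": 100,
        "children": [{"name": f"CachedPlan{i}", "total": 10, "children": []}
                     for i in range(5)]}
for line in context_stats_lines(tree):
    print(line)
```

For the 2M-line dump described in this thread, this would collapse the millions of near-identical CachedPlan/CachedPlanSource lines into a handful of lines plus one summary per parent.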




regards


Stefan




Re: [HACKERS] (full) Memory context dump considered harmful

2015-08-20 Thread Stefan Kaltenbrunner
On 08/20/2015 06:09 PM, Tom Lane wrote:
 Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
 I wonder if we should have a default of capping the dump to say 1k lines 
 or such and only optionally do a full one.
 
 -1.  It's worked like this for the last fifteen years or thereabouts,
 and you're the first one to complain.  I suspect some weirdness in
 your logging setup, rather than any systemic problem that we
 need to lobotomize our diagnostic output in order to prevent.

Not sure what you consider weird in the logging setup here - the context
dump is imho borderline internal diagnostic output at a debug level
(rather than something that makes sense to an average sysadmin) already,
and there is no way to control it. But having (like in our case) the
backend dump 2 million basically identical lines into a general logfile
per event seems excessive and rather abusive towards the rest of the
system (from an IO perspective, for example, or from a logfile
post-processing tool perspective).

 
 (The reason I say lobotomize is that there's no particularly good
 reason to assume that the first N lines will tell you what you need
 to know.  And the filter rule would have to be *very* stupid, because
 we can't risk trying to allocate any additional memory to track what
 we're doing here.)

I do understand that there might be challenges there, but in the last 15
years machines have gotten much faster and pg much more capable, and
some of those capabilities might need to be revisited in that regard -
and while it is very nice that pg survives multiple OOM cases pretty
nicely, I don't think it is entitled to put an (imho) unreasonable
burden on the rest of the system by writing insane amounts of data.

Just from a sysadmin perspective it also means that it is trivial for a
misbehaving app to fill up the logfile on a system, because unlike
almost all other actual logging output there seems to be no way to
control/disable it on a per-role/database level.


Stefan




[HACKERS] (full) Memory context dump considered harmful

2015-08-20 Thread Stefan Kaltenbrunner

Hi all!


We just had a case of a very long running process of ours that does a
lot of prepared statements through Perl's DBD::Pg, running into:


https://rt.cpan.org/Public/Bug/Display.html?id=88827

This resulted in millions of prepared statements created, but not
removed, in the affected backends over the course of 1-2 hours, until
the backends in question ran out of memory.
The out of memory condition resulted in one memory context dump
generated per occurrence, each consisting of 2M lines(!) (basically a
line of CachedPlan/CachedPlanSource per statement/function).
In the 20 minutes or so it took monitoring to alert and operations to
react, this caused a follow-up incident, because repeated out of memory
conditions produced over 400M(!!) loglines amounting to some 15GB of
data, running the log partition dangerously close to full.


An example memory context dump looks like this:


TopMemoryContext: 582728880 total in 71126 blocks; 6168 free (52 
chunks); 582722712 used
  TopTransactionContext: 8192 total in 1 blocks; 6096 free (1 chunks); 
2096 used

ExecutorState: 8192 total in 1 blocks; 5392 free (0 chunks); 2800 used
  ExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
SPI Exec: 0 total in 0 blocks; 0 free (0 chunks); 0 used
SPI Proc: 8192 total in 1 blocks; 5416 free (0 chunks); 2776 used
  PL/pgSQL function context: 8192 total in 1 blocks; 1152 free (1 
chunks); 7040 used
  PL/pgSQL function context: 24576 total in 2 blocks; 11400 free (1 
chunks); 13176 used
  Type information cache: 24576 total in 2 blocks; 11888 free (5 
chunks); 12688 used
  PL/pgSQL function context: 8192 total in 1 blocks; 1120 free (1 
chunks); 7072 used
  PL/pgSQL function context: 24576 total in 2 blocks; 10984 free (1 
chunks); 13592 used
  PL/pgSQL function context: 57344 total in 3 blocks; 29928 free (2 
chunks); 27416 used
  PL/pgSQL function context: 57344 total in 3 blocks; 28808 free (2 
chunks); 28536 used
  PL/pgSQL function context: 24576 total in 2 blocks; 5944 free (3 
chunks); 18632 used
  RI compare cache: 24576 total in 2 blocks; 15984 free (5 chunks); 
8592 used
  RI query cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 
used
  PL/pgSQL function context: 57344 total in 3 blocks; 31832 free (2 
chunks); 25512 used
  PL/pgSQL function context: 57344 total in 3 blocks; 29600 free (2 
chunks); 27744 used
  PL/pgSQL function context: 57344 total in 3 blocks; 39688 free (5 
chunks); 17656 used

  CFuncHash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  Rendezvous variable hash: 8192 total in 1 blocks; 1680 free (0 
chunks); 6512 used
  PLpgSQL function cache: 24520 total in 2 blocks; 3744 free (0 
chunks); 20776 used
  Prepared Queries: 125886512 total in 25 blocks; 4764208 free (91 
chunks); 121122304 used

  TableSpace cache: 8192 total in 1 blocks; 3216 free (0 chunks); 4976 used
  Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 
chunks); 12688 used

  MessageContext: 8192 total in 1 blocks; 6976 free (0 chunks); 1216 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 
6512 used
  smgr relation table: 24576 total in 2 blocks; 5696 free (4 chunks); 
18880 used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 
chunks); 32 used

  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used
PortalHeapMemory: 1024 total in 1 blocks; 64 free (0 chunks); 960 used
  ExecutorState: 57344 total in 3 blocks; 21856 free (2 chunks); 
35488 used

ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 8192 total in 1 blocks; 8160 free (1 chunks); 32 used
  Relcache by OID: 24576 total in 2 blocks; 12832 free (3 chunks); 
11744 used
  CacheMemoryContext: 42236592 total in 28 blocks; 7160904 free (298 
chunks); 35075688 used

CachedPlan: 7168 total in 3 blocks; 1544 free (0 chunks); 5624 used
CachedPlanSource: 7168 total in 3 blocks; 3904 free (1 chunks); 
3264 used

CachedPlan: 15360 total in 4 blocks; 6440 free (1 chunks); 8920 used
CachedPlanSource: 7168 total in 3 blocks; 1352 free (0 chunks); 
5816 used

CachedPlan: 15360 total in 4 blocks; 6440 free (1 chunks); 8920 used
CachedPlanSource: 7168 total in 3 blocks; 1352 free (0 chunks); 
5816 used

CachedPlan: 7168 total in 3 blocks; 1544 free (0 chunks); 5624 used
CachedPlanSource: 7168 total in 3 blocks; 3904 free (1 chunks); 
3264 used

CachedPlan: 15360 total in 4 blocks; 6440 free (1 chunks); 8920 used
CachedPlanSource: 7168 total in 3 blocks; 1352 free (0 chunks); 
5816 used

CachedPlan: 15360 total in 4 blocks; 6440 free (1 chunks); 8920 used
CachedPlanSource: 7168 total in 3 blocks; 1352 free (0 chunks); 
5816 used

CachedPlan: 7168 total in 3 blocks; 1544 free (0 chunks); 5624 used
CachedPlanSource: 7168 total in 3 blocks; 3904 free (1 chunks); 
3264 used

CachedPlan: 15360 

Re: [CORE] [HACKERS] postpone next week's release

2015-06-03 Thread Stefan Kaltenbrunner
On 05/31/2015 03:51 AM, David Steele wrote:
 On 5/30/15 8:38 PM, Joshua D. Drake wrote:

 On 05/30/2015 03:48 PM, David Steele wrote:
 On 5/30/15 2:10 PM, Robert Haas wrote:
 What, in this release, could break things badly?  RLS? Grouping sets?
 Heikki's WAL format changes?  That last one sounds really scary to me;
 it's painful if not impossible to fix the WAL format in a minor
 release.

 I would argue Heikki's WAL stuff is a perfect case for releasing a
 public alpha/beta soon.  I'd love to test PgBackRest with an official
 9.5dev build.  The PgBackRest test suite has lots of tests that run on
 versions 8.3+ and might well shake out any bugs that are lying around.

 You are right. Clone git, run it nightly automated and please, please
 report anything you find. There is no reason for a tagged release for
 that. Consider it a custom, purpose built, build-test farm.
 
 Sure - I can write code to do that.  But then why release a beta at all?

FWIW: we also carry official snapshots on the download site (
https://ftp.postgresql.org/pub/snapshot/dev/) that you could use if you
don't want git directly - those even receive some form of QA (for a
snapshot to be posted it is required to pass a full buildfarm run on the
buildbox).



Stefan




Re: [HACKERS] MD5 authentication needs help

2015-03-04 Thread Stefan Kaltenbrunner
On 03/04/2015 04:52 PM, Stephen Frost wrote:
 Bruce, all,
 
 I've been discussing this with a few folks outside of the PG community
 (Debian and Openwall people specifically) and a few interesting ideas
 have come out of that which might be useful to discuss.
 
 The first is a don't break anything approach which would move the
 needle between network data sensitivity and on-disk data sensitivity
 a bit back in the direction of making the network data more sensitive.
 
 this approach looks like this: pre-determine and store the values (on a
 per-user basis, so a new field in pg_authid or some hack on the existing
 field) which will be sent to the client in the AuthenticationMD5Password
 message.  Further, calculate a random salt to be used when storing data
 in pg_authid.  Then, for however many variations we feel are necessary,
 calculate and store, for each AuthenticationMD5Password value:
 
 md5_challenge, hash(salt || response)
 
 We wouldn't store 4 billion of these, of course, which means that the
 challenge / response system becomes less effective on a per-user basis.
 We could, however, store X number of these and provide a lock-out
 mechanism (something users have asked after for a long time..) which
 would make it likely that the account would be locked before the
 attacker was able to gain access.  Further, an attacker with access to
 the backend still wouldn't see the user's cleartext password, nor would
 we store the cleartext password or a token in pg_authid which could be
 directly used for authentication, and we don't break the wireline
 protocol or existing installations (since we could detect that the
 pg_authid entry has the old-style and simply 'upgrade' it).
 
 That's probably the extent of what we could do to improve the current
 'md5' approach without breaking the wireline protocol or existing stored
 data.
 
 A lot of discussion has been going on with SCRAM and SASL, which is all
 great, but that means we end up with a dependency on SASL or we have to
 reimplement SCRAM (which I've been thinking might not be a bad idea-
 it's actually not that hard), but another suggestion was made which may
 be worthwhile to consider- OpenSSL and GnuTLS both support TLS-SRP, the
 RFC for which is here: http://www.ietf.org/rfc/rfc5054.txt.  We already
 have OpenSSL and therefore this wouldn't create any new dependencies and
 might be slightly simpler to implement.

Not sure we should depend on TLS-SRP - the libressl people removed the
support for SRP pretty early in the development process:
https://github.com/libressl/libressl/commit/f089354ca79035afce9ec649f54c18711a950ecd
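For illustration, the precomputed challenge/response idea Stephen describes above might look roughly like this. This is a hedged Python sketch only: the number of stored entries, the storage hash (sha256), and all function names are assumptions, not PostgreSQL code; only the `md5(md5(password || user) || salt)` response shape follows the existing wire protocol:

```python
import hashlib, os

def md5_response(password, user, challenge_salt):
    # Client side of the existing md5 protocol:
    # "md5" + md5(md5(password || user) || salt)
    inner = hashlib.md5((password + user).encode()).hexdigest()
    return "md5" + hashlib.md5(inner.encode() + challenge_salt).hexdigest()

def precompute_entries(password, user, n=4):
    # Hypothetical server-side storage: for n fixed challenges, store
    # (challenge, sha256(storage_salt || expected_response)), so neither the
    # cleartext password nor a directly reusable token lives in pg_authid.
    storage_salt = os.urandom(16)
    entries = []
    for _ in range(n):
        challenge = os.urandom(4)
        resp = md5_response(password, user, challenge)
        entries.append((challenge,
                        hashlib.sha256(storage_salt + resp.encode()).hexdigest()))
    return storage_salt, entries

def verify(storage_salt, entries, user_response, challenge):
    # Server side: find the stored entry for the challenge that was sent
    # and compare the salted hash of the client's response.
    for ch, stored in entries:
        if ch == challenge:
            digest = hashlib.sha256(storage_salt + user_response.encode()).hexdigest()
            return digest == stored
    return False
```

Because only n challenges are stored rather than 4 billion, the scheme is weaker per user, which is why the quoted proposal pairs it with an account lock-out mechanism.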



Stefan




Re: [HACKERS] New CF app deployment

2015-02-26 Thread Stefan Kaltenbrunner
On 02/26/2015 01:59 PM, Michael Paquier wrote:
 On Thu, Feb 26, 2015 at 9:15 PM, Asif Naeem anaeem...@gmail.com wrote:
 This thread seems relevant. Please guide me on how I can access older CF pages.
 The MSVC portion of this fix got completely lost in the void:
 https://commitfest.postgresql.org/action/patch_view?id=1330

 The above link results in the following error:

 Not found
 The specified URL was not found.

 Please do let me know if I missed something. Thanks.
 
 Try commitfest-old instead, that is where the past CF app stores its
 data, like that:
 https://commitfest-old.postgresql.org/action/patch_view?id=1330

Hmm, maybe we should have some sort of handler that redirects or
reverse-proxies to the old commitfest app for this.
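Such a handler could be as simple as the following sketch. The prefix list and the function are assumptions for illustration, not the actual commitfest app's URL map:

```python
def maybe_redirect(host, path):
    # Hypothetical redirect rule: requests for old-app URLs hitting the new
    # commitfest host get bounced to the commitfest-old archive, so stale
    # links in the mailing list archives keep working.
    old_app_prefixes = ("/action/patch_view", "/action/commitfest_view")
    if host == "commitfest.postgresql.org" and path.startswith(old_app_prefixes):
        return "https://commitfest-old.postgresql.org" + path
    return None
```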



Stefan




Re: [HACKERS] multiple backends attempting to wait for pincount 1

2015-02-22 Thread Stefan Kaltenbrunner
On 02/13/2015 06:27 AM, Tom Lane wrote:
 Two different CLOBBER_CACHE_ALWAYS critters recently reported exactly
 the same failure pattern on HEAD:
 
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=markhor&dt=2015-02-06%2011%3A59%3A59
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tick&dt=2015-02-12%2010%3A22%3A57
 
 I'd say we have a problem.  I'd even go so far as to say that somebody has
 completely broken locking, because this looks like autovacuum and manual
 vacuuming are hitting the same table at the same time.

fwiw - looks like spoonbill (not doing CLOBBER_CACHE_ALWAYS) managed to
trigger that one as well:

http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=spoonbill&dt=2015-02-23%2000%3A00%3A06

there are also some failures from the BETWEEN changes in that
regression.diff, but those might be fallout from the above problem.


Stefan




Re: [HACKERS] Parallel Seq Scan

2015-01-11 Thread Stefan Kaltenbrunner
On 01/11/2015 11:27 AM, Stephen Frost wrote:
 * Robert Haas (robertmh...@gmail.com) wrote:
 On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost sfr...@snowman.net wrote:
 Yeah, if we come up with a plan for X workers and end up not being able
 to spawn that many then I could see that being worth a warning or notice
 or something.  Not sure what EXPLAIN has to do anything with it..

 That seems mighty odd to me.  If there are 8 background worker
 processes available, and you allow each session to use at most 4, then
 when there are 2 sessions trying to do parallelism at the same time,
 they might not all get their workers.  Emitting a notice for that
 seems like it would be awfully chatty.
 
 Yeah, agreed, it could get quite noisy.  Did you have another thought
 for how to address the concern raised?  Specifically, that you might not
 get as many workers as you thought you would?

Wild idea: what about dealing with it as some sort of statistic - i.e.
track some global counts in the stats collector, or on a per-query basis
in pg_stat_activity and/or through pg_stat_statements?

Not sure why it is that important to get it on a per-query basis; imho
it is simply a configuration limit we have set (similar to work_mem or
switching to geqo) - we don't report per query through notice/warning
there either (though the effect is kind of visible in explain).


Stefan




Re: [HACKERS] Parallel Seq Scan

2015-01-09 Thread Stefan Kaltenbrunner
On 01/09/2015 08:01 PM, Stephen Frost wrote:
 Amit,
 
 * Amit Kapila (amit.kapil...@gmail.com) wrote:
 On Fri, Jan 9, 2015 at 1:02 AM, Jim Nasby jim.na...@bluetreble.com wrote:
 I agree, but we should try and warn the user if they set
 parallel_seqscan_degree close to max_worker_processes, or at least give
 some indication of what's going on. This is something you could end up
 beating your head on wondering why it's not working.

 Yet another way to handle the case when enough workers are not
 available is to let the user specify the desired minimum percentage of
 requested parallel workers with a parameter like
 PARALLEL_QUERY_MIN_PERCENT. For example, if you specify
 50 for this parameter, then at least 50% of the parallel workers
 requested for any parallel operation must be available in order for
 the operation to succeed, else it will give an error. If the value is
 set to null, then all parallel operations will proceed as long as at
 least two parallel workers are available for processing.
 
 Ugh.  I'm not a fan of this..  Based on how we're talking about modeling
 this, if we decide to parallelize at all, then we expect it to be a win.
 I don't like the idea of throwing an error if, at execution time, we end
 up not being able to actually get the number of workers we want-
 instead, we should degrade gracefully all the way back to serial, if
 necessary.  Perhaps we should send a NOTICE or something along those
 lines to let the user know we weren't able to get the level of
 parallelization that the plan originally asked for, but I really don't
 like just throwing an error.

Yeah, this seems like the behaviour I would expect: if we can't get
enough parallel workers we should just use as many as we can get.
Anything else, and especially erroring out, will just cause random
application failures and easy DoS vectors.
I think all we need initially is the ability to specify a maximum number
of workers per query as well as a maximum number of workers in total
for parallel operations.


 
 Now, for debugging purposes, I could see such a parameter being
 available but it should default to 'off/never-fail'.

Not sure what it really would be useful for - if I execute a query I
would truly expect it to get answered; if it can be answered faster by
running in parallel that's nice, but why would I want it to fail?
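The degrade-gracefully behaviour argued for in this thread boils down to a simple clamp. This is an illustrative sketch, not the actual planner/executor logic; the function and parameter names are assumptions:

```python
def workers_to_launch(requested, per_query_max, pool_available):
    # Never raise an error when the worker pool runs short: clamp the
    # planned worker count to the per-query limit and to whatever the
    # global pool can still supply, falling back to a serial scan
    # (0 extra workers) when nothing is available.
    return max(0, min(requested, per_query_max, pool_available))
```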


Stefan




Re: [HACKERS] Updating copyright notices to 2015 for PGDG

2015-01-06 Thread Stefan Kaltenbrunner
On 01/06/2015 04:19 PM, Bruce Momjian wrote:
 On Sat, Jan  3, 2015 at 01:45:37PM -0800, David Fetter wrote:
 On Sat, Jan 03, 2015 at 09:54:16PM +0900, Michael Paquier wrote:
 Hi all,

 Shouldn't we update the copyright notices to 2015 for PGDG like in
 7e04792? I mean those things mainly:
 Portions Copyright (c) 1996-2014, PostgreSQL Global Development Group
 Regards,

 I just ran this:

 ./src/tools/copyright.pl 
 Using current year:  2015
 Manually update doc/src/sgml/legal.sgml and 
 src/interfaces/libpq/libpq.rc.in too.
 Also update ./COPYRIGHT and doc/src/sgml/legal.sgml in all back branches.

 and did what it said on the current branch.

 Please find patch attached.
 
 I will run the script today.  I didn't do it earlier because I want to
 be current on reading community email before doing it.

Hmm, is it intentional that the commit also changed other files?

It looks like the committed patch added newlines to various files that
had none before, for example:

src/test/isolation/specs/nowait-2.spec
src/test/isolation/specs/nowait-3.spec
src/test/isolation/specs/skip-locked-4.spec
src/test/modules/commit_ts/commit_ts.conf


http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=4baaf863eca5412e07a8441b3b7e7482b7a8b21a#patch1352


While I do think that the files should have newlines, I don't think they
should be added in a copyright bump commit, and the script might
actually break files where we specifically don't want a newline (afaik
we don't have any at the moment, but still).


Stefan




Re: [HACKERS] Updating copyright notices to 2015 for PGDG

2015-01-06 Thread Stefan Kaltenbrunner
On 01/06/2015 10:12 PM, Alvaro Herrera wrote:
 Bruce Momjian wrote:
 On Tue, Jan  6, 2015 at 08:46:19PM +0100, Stefan Kaltenbrunner wrote:
 I will run the script today.  I didn't do it earlier because I want to
 be current on reading community email before doing it.

 hmm is it intentional that the commit also changed other files?

 looks like the commited patch added newlines to various files that had
 none before for example:

 Specifically, these files had no newline after the last line in the
 file.
 
 I don't think we have any files that require not to have a trailing
 newline.  Do we need an explicit check against it?  Seems doubtful, but
 then if the need arises, we will break it each year and who knows if
 anybody will be vigilant enough to notice.  Stefan caught it this time,
 but who would normally skim 18000 lines of supposedly mechanical diff
 looking for issues?  (How did you catch this in the first place?)

Yeah, while the trailing newline thingy does not seem to be a real
issue, it still caught my eye when I was glancing at the diff (I was
basically scrolling through it when I noticed it).

 
 This makes me wonder however how wise it is to update the copyright
 notices in every single file in the repo.  Why do we need this?  Why not
 abolish the practice and live forever with most files having copyright
 2015?  (Only new files would have newer years in their copyright
 notices, I guess.)  Does this provide us with any kind of protection,
 and if so against what, and how does it protect us?  Since we have a
 very clean git history which shows us the exact provenance of every
 single line of source code, and we have excellent mail archives that
 show where each line came from for all development in the last decade,
 this single line of (C) boilerplate in each file seems completely
 pointless.

I don't know why it is really needed, but maybe for the files that have
identical copyrights one could simply reference the COPYRIGHT file we
already have in the tree?
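The explicit trailing-newline check discussed earlier in this thread could be as simple as the following sketch (illustrative only; the real check would presumably live in src/tools alongside copyright.pl):

```python
def missing_trailing_newline(paths):
    # Report every non-empty file whose last byte is not a newline, so a
    # mechanical change such as the yearly copyright bump cannot silently
    # add or drop one unnoticed in an 18000-line diff.
    bad = []
    for p in paths:
        with open(p, "rb") as f:
            data = f.read()
        if data and not data.endswith(b"\n"):
            bad.append(p)
    return bad
```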


Stefan




Re: [HACKERS] SSL information view

2014-07-14 Thread Stefan Kaltenbrunner
On 07/13/2014 10:35 PM, Magnus Hagander wrote:
 On Sun, Jul 13, 2014 at 10:32 PM, Stefan Kaltenbrunner
 ste...@kaltenbrunner.cc wrote:
 On 07/12/2014 03:08 PM, Magnus Hagander wrote:
 As an administrator, I find that you fairly often want to know what
 your current connections are actually using as SSL parameters, and
 there is currently no other way than gdb to find that out - something
 we definitely should fix.

 Yeah, that would be handy - however, I often wish to be able to figure
 that out based on the logfile as well. Any chance of getting these into
 connection-logging/log_line_prefix as well?
 
 We do already log some of it if you have enabled log_connections -
 protocol and cipher. Anything else in particular you'd be looking for
 - compression info?

DN mostly, not sure I care about compression info...


Stefan




Re: [HACKERS] SSL information view

2014-07-13 Thread Stefan Kaltenbrunner
On 07/12/2014 03:08 PM, Magnus Hagander wrote:
 As an administrator, I find that you fairly often want to know what
 your current connections are actually using as SSL parameters, and
 there is currently no other way than gdb to find that out - something
 we definitely should fix.

Yeah, that would be handy - however, I often wish to be able to figure
that out based on the logfile as well. Any chance of getting these into
connection-logging/log_line_prefix as well?



Stefan




Re: [HACKERS] Atomics hardware support table supported architectures

2014-06-27 Thread Stefan Kaltenbrunner
On 06/27/2014 08:26 PM, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
 On 2014-06-27 13:12:31 -0400, Robert Haas wrote:
 I don't personally object to dropping Alpha, but when this was
 discussed back in October, Stefan did:

 http://www.postgresql.org/message-id/52616373.10...@kaltenbrunner.cc
 
 As an ex-packager I do not believe the argument that it will matter
 to packagers if we desupport one of their secondary architectures.
 There are many, many packages that have never claimed to work on
 oddball architectures at all.  Packagers would be better served
 by honesty about what we can support.

Yeah, I guess so - I was mostly pointing out that alpha looked to be a
much more active platform than most of what was discussed in that
thread. I personally don't think that continuing to support alpha will
buy us anything...

 
 Ah, right. I still am in favor of dropping it because I don't it is
 likely to work, but, as a compromise, we could remove only the Tru64
 variant? Openbsd + gcc is much less of a hassle.
 
 But I think he's rather in the minority anyway.
 
 Looks like it.
 
 There would be value in continuing to support Alpha if we had one
 in the buildfarm.  We don't, and have not had in any recent memory,
 and I haven't noticed anyone offering to provide one in future.
 
 The actual situation is that we're shipping a port that most
 likely doesn't work, and we have no way to fix it.  That's of
 no benefit to anyone.

Yeah, I don't have access to any alpha hardware to provide a buildfarm
box, so I can't help there :(



Stefan




Re: [HACKERS] Sigh, we need an initdb

2014-06-04 Thread Stefan Kaltenbrunner
On 06/04/2014 08:56 PM, Joshua D. Drake wrote:
 
 On 06/04/2014 11:52 AM, Tom Lane wrote:
 
 I think we could possibly ship 9.4 without fixing this, but it would be
 imprudent.  Anyone think differently?

 Of course, if we do fix this then the door opens for pushing other
 initdb-forcing fixes into 9.4beta2, such as the LOBLKSIZE addition
 that I was looking at when I noticed this, or the pg_lsn catalog
 additions that were being discussed a couple weeks ago.
 
 It certainly seems that if we are going to initdb anyway, let's do it
 with approved features that got kicked (assuming) only because they
 would cause an initdb.

Agreed there - I don't think the "no initdb" rule during BETA really
buys us that much these days. If people test our betas at all, they do
so on scratch boxes in development/staging; I really doubt that
(especially given the .0 history we had in the last years) people really
move -BETA installs to production or expect to do so.


Stefan




Re: [HACKERS] buildfarm animals and 'snapshot too old'

2014-05-15 Thread Stefan Kaltenbrunner
On 05/15/2014 07:46 PM, Andrew Dunstan wrote:
 
 On 05/15/2014 12:43 PM, Tomas Vondra wrote:
 Hi all,

 today I got a few of errors like these (this one is from last week,
 though):

 Status Line: 493 snapshot too old: Wed May  7 04:36:57 2014 GMT
 Content:
 snapshot to old: Wed May  7 04:36:57 2014 GMT

 on the new buildfarm animals. I believe it was my mistake (incorrectly
 configured local git mirror), but it got me thinking about how this will
 behave with the animals running CLOBBER_CACHE_RECURSIVELY.

 If I understand the Perl code correctly, it does this:

 (1) update the repository
 (2) run the tests
 (3) check that the snapshot is not older than 24 hours (pgstatus.pl:188)
 (4) fail if older

 Now, imagine that the test runs for days/weeks. This pretty much means
 it's wasted, because the results will be thrown away anyway, no?

 
 
 The 24 hours runs from the time of the latest commit on the branch in
 question, not the current time, but basically yes.
 
 We've never had machines with runs that long. The longest in recent
 times has been friarbird, which runs CLOBBER_CACHE_ALWAYS and takes
 around 4.5 hours. But we have had misconfigured machines reporting
 unbelievable snapshot times.  I'll take a look and see if we can tighten
 up the sanity check. It's worth noting that one thing friarbird does is
 skip the install-check stage - it's almost certainly not going to have
 terribly much interesting to tell us from that, given it has already run
 a plain make check.

well I'm not sure about misconfigured - both my personal buildfarm
members and pginfra-run ones (like gaibasaurus) got errors complaining
about snapshot too old in the past for long-running tests, so I'm not
sure it really is a case of we never had machines with runs that long.
So maybe we should not reject those submissions at submission time but
rather mark them clearly on the dashboard and leave the final
interpretation to a human...
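The sanity check sketched in the quoted steps could look roughly like
this (an illustrative Python sketch, not the actual buildfarm Perl code;
the function and parameter names here are made up):

```python
from datetime import datetime, timedelta, timezone

def snapshot_acceptable(snapshot_time, latest_branch_commit,
                        max_age=timedelta(hours=24)):
    # A submission is rejected when its snapshot is more than max_age
    # older than the latest commit on the branch (not the current time),
    # which is why very long runs can fail only at submission time.
    return latest_branch_commit - snapshot_time <= max_age

latest = datetime(2014, 5, 15, 12, 0, tzinfo=timezone.utc)
print(snapshot_acceptable(latest - timedelta(hours=5), latest))  # True
print(snapshot_acceptable(latest - timedelta(days=8), latest))   # False
```

A run that takes longer than the window relative to the branch tip is
wasted work under this rule, which is why flagging instead of rejecting
was suggested.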




Stefan




[HACKERS] Re: [DOCS] Re: Viability of text HISTORY/INSTALL/regression README files (was Re: [COMMITTERS] pgsql: Document a few more regression test hazards.)

2014-02-06 Thread Stefan Kaltenbrunner
On 02/05/2014 07:27 PM, Robert Haas wrote:
 On Tue, Feb 4, 2014 at 11:43 PM, Noah Misch n...@leadboat.com wrote:
 Right.  I mean, a lot of the links say things like Section 26.2
 which obviously makes no sense in a standalone text file.

 For xrefs normally displayed that way, text output could emit a URL, either
 inline or in the form of a footnote.  For link targets (e.g. SQL commands)
 having a friendly text fragment for xref sites, use the normal fragment.
 
 True.  If we're going to keep these things around, something like that
 would avoid some annoyances for documentation authors.  But I still
 think we ought to just nuke them, because who cares?

I don't care about HISTORY and even less so about regress_README, but I
would prefer to keep INSTALL because I know that people do look at that
one...


Stefan




[HACKERS] Re: [COMMITTERS] pgsql: Compress GIN posting lists, for smaller index size.

2014-01-23 Thread Stefan Kaltenbrunner
On 01/22/2014 06:28 PM, Heikki Linnakangas wrote:
 Compress GIN posting lists, for smaller index size.
 
 GIN posting lists are now encoded using varbyte-encoding, which allows them
 to fit in much smaller space than the straight ItemPointer array format used
 before. The new encoding is used for both the lists stored in-line in entry
 tree items, and in posting tree leaf pages.
 
 To maintain backwards-compatibility and keep pg_upgrade working, the code
 can still read old-style pages and tuples. Posting tree leaf pages in the
 new format are flagged with GIN_COMPRESSED flag, to distinguish old and new
 format pages. Likewise, entry tree tuples in the new format have a
 GIN_ITUP_COMPRESSED flag set in a bit that was previously unused.
 
 This patch bumps GIN_CURRENT_VERSION from 1 to 2. New indexes created with
 version 9.4 will therefore have version number 2 in the metapage, while old
 pg_upgraded indexes will have version 1. The code treats them the same, but
 it might be come handy in the future, if we want to drop support for the
 uncompressed format.
 
 Alexander Korotkov and me. Reviewed by Tomas Vondra and Amit Langote.
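Varbyte encoding of delta-compressed item pointers, as described in the
commit message, works roughly like this (a simplified Python sketch of
the general technique, not the actual C implementation):

```python
def varbyte_encode(values):
    """Delta-compress an ascending list of integers, encoding each gap
    as 7-bit groups, least significant first, with the high bit set on
    every byte except the last one of a group."""
    out = bytearray()
    prev = 0
    for v in values:
        delta = v - prev
        prev = v
        while True:
            byte = delta & 0x7F
            delta >>= 7
            if delta:
                out.append(byte | 0x80)
            else:
                out.append(byte)
                break
    return bytes(out)

def varbyte_decode(data):
    values = []
    prev = acc = shift = 0
    for byte in data:
        acc |= (byte & 0x7F) << shift
        if byte & 0x80:
            shift += 7
        else:
            prev += acc
            values.append(prev)
            acc = shift = 0
    return values

nums = [3, 70, 400, 100000]
enc = varbyte_encode(nums)
assert varbyte_decode(enc) == nums
print(len(enc))  # 7 bytes, vs. 24 bytes for four 6-byte ItemPointers
```

Small, mostly-ascending gaps compress to one or two bytes each, which is
where the index-size win comes from.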


it seems that this commit made spoonbill an unhappy animal:

http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=spoonbill&dt=2014-01-23%2000%3A00%3A04



Stefan




Re: [HACKERS] Changeset Extraction v7.0 (was logical changeset generation)

2014-01-19 Thread Stefan Kaltenbrunner
On 01/18/2014 02:31 PM, Robert Haas wrote:
 On Thu, Jan 16, 2014 at 10:15 PM, Craig Ringer cr...@2ndquadrant.com wrote:
 Anybody who actually uses SHIFT_JIS as an operational encoding, rather
 than as an input/output encoding, is into pain and suffering. Personally
 I'd be quite happy to see it supported as client_encoding, but forbidden
 as a server-side encoding. That's not the case right now - so since we
 support it, we'd better guard against its quirks.
 
 I think that *is* the case right now.  pg_wchar.h sayeth:
 
 /* followings are for client encoding only */
 PG_SJIS,/* Shift JIS
 (Winindows-932) */

while you have that file open: s/Winindows-932/Windows-932 maybe?



Stefan




Re: [HACKERS] Happy new year from viva64

2013-12-27 Thread Stefan Kaltenbrunner
On 12/27/2013 01:27 PM, Alexander Korotkov wrote:
 Hackers,
 
 I believe many of us have seen report of checking PostgreSQL with
 PVS-Studio.
 http://www.viva64.com/en/b/0227/
 Me and Oleg Bartunov got license key for PVS-Studio. Thanks Viva64 for it.
 I've just run PVS-Studio against PostgreSQL head and it gives me very
 many warnings. CSV-file with them is attached. I believe most of them
 are just noise. But there could be useful warning which aren't mentioned
 in the blog post.
 Probably somebody have ideas about what to do this that list?

hmm, reading the generated warnings I wonder if that list was generated
with the default/recommended settings of the static analyzer - for
example there is a _ton_ of V122 - Memsize type is used in the
struct/class. warnings, but the docs on http://www.viva64.com/en/d/0070/
state:

By default it is disabled since it generates false warnings in more
than 99% of cases.

So maybe you need to apply the default settings to get a more sensible
report to start from?


Stefan




Re: [HACKERS] Why we are going to have to go DirectIO

2013-12-04 Thread Stefan Kaltenbrunner

On 12/04/2013 04:30 PM, Peter Eisentraut wrote:

On 12/4/13, 2:14 AM, Stefan Kaltenbrunner wrote:

running a
few kvm instances that get bootstrapped automatically is something that
is a solved problem.


Is it sound to run performance tests on kvm?


as sound as on any other platform, imho; the performance characteristics 
will differ between bare metal and other virtualisation platforms, but 
the future is virtual and that is what a lot of stuff runs on...



Stefan




Re: [HACKERS] Why we are going to have to go DirectIO

2013-12-04 Thread Stefan Kaltenbrunner
On 12/04/2013 04:33 PM, Jonathan Corbet wrote:
 On Tue, 03 Dec 2013 10:44:15 -0800
 Josh Berkus j...@agliodbs.com wrote:
 
 It seems clear that Kernel.org, since 2.6, has been in the business of
 pushing major, hackish, changes to the IO stack without testing them or
 even thinking too hard about what the side-effects might be.  This is
 perhaps unsurprising given that two of the largest sponsors of the
 Kernel -- who, incidentally, do 100% of the performance testing -- don't
 use the IO stack.

 This says to me that Linux will clearly be an undependable platform in
 the future with the potential to destroy PostgreSQL performance without
 warning, leaving us scrambling for workarounds.  Too bad the
 alternatives are so unpopular.
 
 Wow, Josh, I'm surprised to hear this from you.
 
 The active/inactive list mechanism works great for the vast majority of
 users.  The second-use algorithm prevents a lot of pathological behavior,
 like wiping out your entire cache by copying a big file or running a
 backup.  We *need* that kind of logic in the kernel.
 
 Now, back in 2012, Johannes (working for one of those big contributors)
 hit upon an issue where second-use falls down.  So he set out to fix it:
 
   https://lwn.net/Articles/495543/
 
 This code has been a bit slow getting into the mainline for a few reasons,
 but one of the chief ones is this: nobody is saying from the sidelines
 that they need it!  If somebody were saying Postgres would work a lot
 better with this code in place and had some numbers to demonstrate that,
 we'd be far more likely to see it get into an upcoming release.
 
 In the end, Linux is quite responsive to the people who participate in its
 development, even as testers and bug reporters.  It responds rather less
 well to people who find problems in enterprise kernels years later,
 granted.  
 
 The amount of automated testing, including performance testing, has
 increased markedly in the last couple of years.  I bet that it would not
 be hard at all to get somebody like Fengguang Wu to add some
 Postgres-oriented I/O tests to his automatic suite:
 
   https://lwn.net/Articles/571991/
 
 Then we would all have a much better idea of how kernel releases are
 affecting one of our most important applications; developers would pay
 attention to that information.
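The second-use (active/inactive list) behaviour described above can be
illustrated with a toy model (purely illustrative Python; the kernel's
actual data structures and policies are far more involved):

```python
from collections import OrderedDict

def simulate_second_use(refs, inactive_cap, active_cap):
    """Toy model: a page enters the inactive list on first use and is
    promoted to the active list only when referenced again, so a single
    big sequential scan cannot evict the hot working set."""
    inactive, active = OrderedDict(), OrderedDict()
    for page in refs:
        if page in active:
            active.move_to_end(page)        # refresh recency
        elif page in inactive:
            del inactive[page]              # second use: promote
            active[page] = True
            if len(active) > active_cap:
                active.popitem(last=False)  # evict coldest active page
        else:
            inactive[page] = True           # first use: probation only
            if len(inactive) > inactive_cap:
                inactive.popitem(last=False)
    return set(active), set(inactive)

# hot pages referenced repeatedly, then a large one-pass scan
refs = [1, 2, 1, 2] + list(range(100, 200)) + [1, 2]
act, inact = simulate_second_use(refs, inactive_cap=8, active_cap=8)
assert {1, 2} <= act     # hot pages survive in the active list
assert 100 not in act    # scanned-once pages were never promoted
```

The scan churns only the probationary inactive list; the hot pages stay
cached, which is the pathological-behaviour protection Jon describes.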

hmm, interesting tool - I can see how that would be very useful for
early-warning style detection on the kernel development side, using a
small set of postgresql benchmarks. That would basically address part of
what Josh complained about - that it takes ages for regressions to be
detected.
From postgresql's pov we would also need additional long-term and more
complex testing spanning different postgresql versions on various
distribution platforms (because that is what people deploy in
production; hand-built, git-fetched kernels are rare), using tests that
might have extended runtimes and/or require external infrastructure.

 
 Or you could go off and do your own thing, but I believe that would leave
 us all poorer.

fully agreed


Stefan




Re: [HACKERS] Why we are going to have to go DirectIO

2013-12-04 Thread Stefan Kaltenbrunner
On 12/04/2013 07:30 PM, Joshua D. Drake wrote:
 
 On 12/04/2013 07:32 AM, Stefan Kaltenbrunner wrote:

 On 12/04/2013 04:30 PM, Peter Eisentraut wrote:
 On 12/4/13, 2:14 AM, Stefan Kaltenbrunner wrote:
 running a
 few kvm instances that get bootstrapped automatically is something that
 is a solved problem.

 Is it sound to run performance tests on kvm?

 as sounds as on any other platform imho, the performance characteristics
 will differ between bare metal or other virtualisation platforms but the
 future is virtual and that is what a lot of stuff runs on...
 
 In actuality you need both. We need to know what the kernel is going to
 do on bare metal. For example, 3.2 to 3.8 are total crap for random IO
 access. We will only catch that properly from bare metal tests or at
 least, we will only catch it easily on bare metal tests.
 
 If we know the standard bare metal tests are working then the next step
 up would be to test virtual.
 
 BTW: Virtualization is only one future and it is still a long way off
 from serving the needs that bare metal serves at the same level
 (speaking PostgreSQL specifically).

we need to get that off the ground - and whatever makes it easier to get
off the ground will help. And if we solve the automation for
virtualisation, bare metal is just a small step away (or the other way
round). Getting comparable performance numbers between different
postgresql versions (or patches) or different operating systems under
various workloads is probably more valuable right now than getting
absolute peak performance numbers under specific tests long term.



Stefan




Re: [HACKERS] Why we are going to have to go DirectIO

2013-12-03 Thread Stefan Kaltenbrunner
On 12/03/2013 08:23 PM, Josh Berkus wrote:
 On 12/03/2013 10:59 AM, Joshua D. Drake wrote:
 This seems rather half cocked. I read the article. They found a problem,
 that really will only affect a reasonably small percentage of users,
 created a test case, reported it, and a patch was produced.
 
 Users with at least one file bigger than 50% of RAM is unlikely to be
 a small percentage.
 

 Kind of like how we do it.
 
 I like to think we'd have at least researched the existing literature on
 2Q algorithms (which is extensive) before implementing our own.  Oh,
 wait, we *did*.  Nor is this the first ill-considered performance hack
 pushed into mainline kernels without any real testing.  It's not even
 the first *that year*.
 
 While I am angry over this -- no matter what Kernel.org fixes now, we're
 going to have to live with their mistake for 3 years -- the DirectIO
 thing isn't just me; when I've gone to Linux Kernel events to talk about
 IO, that's the response I've gotten from most Linux hackers: you
 shouldn't be using the filesystem, use DirectIO and implement your own
 storage.
 
 That's why they don't feel that it's a problem to break the IO stack;
 they really don't believe that anyone who cares about performance should
 be using it.

reading that article I think this is an overreaction; it is not
kernel.org's fault that distributions exist, and bugs and regressions
happen in all pieces of software.

We are in no way different, and I would like to note that we do not have
any form of sensible performance-related regression testing either.
I would even argue that there is a ton more regression testing (be it
performance or otherwise) going into the linux kernel (even on a
relative scale) than we do - and we are pointing the finger at an issue
they started dealing with as soon as it was noticed.
If we care about our performance on various operating systems it is
_OUR_ responsibility to track that closely, in an automated fashion, and
report back; only if that feedback loop fails to work are we actually in
a position to consider something as drastic as declaring a platform
undependable or looking into alternatives (like directIO).


Stefan




Re: [HACKERS] Why we are going to have to go DirectIO

2013-12-03 Thread Stefan Kaltenbrunner
On 12/04/2013 05:40 AM, Peter Eisentraut wrote:
 On Tue, 2013-12-03 at 14:44 -0800, Josh Berkus wrote:
 Would certainly be nice.  Realistically, getting good automated
 performace tests will require paying someone like Greg S., Mark or me
 for 6 solid months to develop them, since worthwhile open source
 performance test platforms currently don't exist.  That money has
 never been available; maybe I should do a kickstarter.
 
 I think the problem is, it's not even clear what the deliverable might
 be.  Benchmarking tools exist, and running them on a regular schedule
 shouldn't be difficult.  But that doesn't find regressions between
 kernel versions, for example, or regressions in particular queries
 (unless they happen to be included in the benchmark).

I agree on the problem of specifying an exact deliverable - however,
simply using some of the existing benchmark tools, perhaps augmented by
the myriad of simple micro-level regressions we have in the form of sql
queries in the archives, would be a sensible start. It might not help
for all cases, but it can help for some, and we would learn something
that might help us build the next iteration of it. Adding, say, some
operating systems to the mix once we have the above would be fairly
easy - running a few kvm instances that get bootstrapped automatically
is something that is a solved problem.

 
 The first step here should be to work out the minimum viable product,
 and then see what it would take to get that done.

yeah we need to start somewhere and see what we can learn.


Stefan




Re: [HACKERS] Record comparison compiler warning

2013-11-02 Thread Stefan Kaltenbrunner
On 10/31/2013 07:51 PM, Kevin Grittner wrote:
 Bruce Momjian br...@momjian.us wrote:
 On Wed, Oct 16, 2013 at 11:49:13AM -0700, Kevin Grittner wrote:
 
 Bruce Momjian br...@momjian.us wrote:

 I am seeing this compiler warning in git head:

  rowtypes.c: In function 'record_image_cmp':
  rowtypes.c:1433: warning: 'cmpresult' may be used
  uninitialized in this function rowtypes.c:1433: note: 'cmpresult' was 
 declared here

 I had not gotten a warning under either gcc or clang, but that was
 probably because I was doing assert-enabled builds, and the
 Assert(false) saved me.  That seemed a little marginal anyway, so
 how about this?:

 Would you please send the file as ASCII, e.g. not:

 A0A0A0A0A0A0A0A0A0A0A0A0A0A0A0A0A0A0A0 
 default:
 
 Huh, I did not see anything remotely like that in my email or in
 the archives:
 
 http://www.postgresql.org/message-id/1381949353.78943.yahoomail...@web162902.mail.bf1.yahoo.com

http://www.postgresql.org/message-id/raw/1381949353.78943.yahoomail...@web162902.mail.bf1.yahoo.com


Stefan




Re: [HACKERS] removing old ports and architectures

2013-10-18 Thread Stefan Kaltenbrunner
On 10/18/2013 02:41 PM, Robert Haas wrote:
 On Thu, Oct 17, 2013 at 5:41 PM, Peter Eisentraut pete...@gmx.net wrote:
 On 10/17/13 12:45 PM, Robert Haas wrote:
 The attached patch, which I propose to apply relatively soon if nobody
 objects, removes the IRIX port.

 +1
 
 Done.  And here's a patch for removing the alpha architecture and
 Tru64 UNIX (aka OSF/1) which runs on that architecture, per discussion
 upthread.  Barring objections, I'll apply this next week.

hmm, there are still some operating systems that officially support the
alpha architecture, and removing it will likely cause problems for their
ports. One example is OpenBSD: both the current version (5.3) and the
upcoming release fully support alpha and have binary packages and source
ports for postgresql, and afaik they have no intention of dropping
support for that platform.


 
 On a related note, I think we should update the paragaraph in
 installation.sgml that begins In general, PostgreSQL can be expected
 to work on these CPU architectures.  Any architecture that doesn't
 have a buildfarm animal should be relegated to the second sentence,
 which reads Code support exists for ... but these architectures are
 not known to have been tested recently.  Similarly, I think the
 following paragraph should be revised so that only operating systems
 for which we have current buildfarm support are considered fully
 supported.  Others should be relegated to a sentence later in the
 paragraph that says something like code support exists but not tested
 recently or expected to work but not tested regularly.

seems like an improvement to me.


Stefan




Re: [HACKERS] removing old ports and architectures

2013-10-18 Thread Stefan Kaltenbrunner
On 10/18/2013 06:29 PM, Andres Freund wrote:
 On 2013-10-18 18:24:58 +0200, Stefan Kaltenbrunner wrote:
 On 10/18/2013 02:41 PM, Robert Haas wrote:
 On Thu, Oct 17, 2013 at 5:41 PM, Peter Eisentraut pete...@gmx.net wrote:
 On 10/17/13 12:45 PM, Robert Haas wrote:
 The attached patch, which I propose to apply relatively soon if nobody
 objects, removes the IRIX port.

 +1

 Done.  And here's a patch for removing the alpha architecture and
 Tru64 UNIX (aka OSF/1) which runs on that architecture, per discussion
 upthread.  Barring objections, I'll apply this next week.

 hmm there are still some operating systems that officially support the
 alpha architecture which will likely result in problems for their ports.
 One example is OpenBSD both the current version (5.3) as well as the
 upcoming release do fully support alpha and have binary packages and
 source ports for postgresql and afaik they have no intention to stop
 supporting that plattform.
 
 Hm. If you read their status page (which I think you linked to before):
 http://openbsd.org/alpha.html you can find stuff like X11 not
 working. So I don't see that forcing us to much.

not sure that page is accurate - and not sure how relevant X11 support
is for postgresql :)

Anyway, they do currently have packages (9.2 in -stable and 9.3 in
-current) available, and I think we should consider packagers here as
well - I personally don't have any particular need for alpha, but it is
clearly not as dead as some of the other platforms we are discussing.


 Note also that we already don't support all openbsd platforms.

sure - but does that also mean we should desupport it without at least
considering it?


Stefan




Re: [HACKERS] removing old ports and architectures

2013-10-16 Thread Stefan Kaltenbrunner
On 10/16/2013 07:04 PM, Robert Haas wrote:
 On Sat, Oct 12, 2013 at 8:46 PM, Andres Freund and...@2ndquadrant.com wrote:
 I think we should remove support the following ports:
 - IRIX
 - UnixWare
 - Tru64
 
 According to http://en.wikipedia.org/wiki/IRIX, IRIX has been
 officially retired.  The last release of IRIX was in 2006 and support
 will end in December of 2013.  Therefore, it will be unsupported by
 the time PostgreSQL 9.4 is released.
 
 According to http://en.wikipedia.org/wiki/UnixWare, UnixWare is not
 dead, although there have been no new releases in 5 years.
 
 According to http://en.wikipedia.org/wiki/Tru64_UNIX, Tru64 has been
 officially retired.  Support ended in December, 2012.  This seems safe
 to remove.
 
 So I vote for removing IRIX and Tru64 immediately, but I'm a little
 more hesitant about shooting UnixWare, since it's technically still
 supported.
 
 Neither of those are relevant.

agreed


 I think we should remove support for the following architectures:
 - VAX
 
 According to http://en.wikipedia.org/wiki/VAX#History, all
 manufacturing of VAX computers ceased in 2005, but according to
 http://en.wikipedia.org/wiki/OpenVMS#Major_release_timeline, OpenVMS
 is still releasing new versions.  I'm not sure what to make of that.

VAX is also an officially supported OpenBSD port (see
http://www.openbsd.org/vax.html)





Stefan




Re: [HACKERS] CREATE FUNCTION .. SET vs. pg_dump

2013-09-03 Thread Stefan Kaltenbrunner
On 09/03/2013 06:15 PM, Robert Haas wrote:
 On Sun, Sep 1, 2013 at 10:36 AM, Stefan Kaltenbrunner
 ste...@kaltenbrunner.cc wrote:
 It would seem that a simple solution would be to add an elevel argument
 to ProcessGUCArray and then call it with NOTICE in the case that
 check_function_bodies is true.  None of the contrib modules call
 ProcessGUCArray, but should we worry that some external module does?

 attached is a rough patch that does exactly that, if we are worried
 about an api change we could simple do a ProcessGUCArrayNotice() in the
 backbranches if that approach is actually sane.
 
 This patch has some definite coding-style issues, but those should be
 easy to fix.  The bigger thing I worry about is whether distributing
 the decision as to what elevel ought to be used here all over the code
 base is indeed sane.  Perhaps that ship has already sailed, though.

I can certainly fix up the coding style - but the patch was declared
rough mostly because I'm not entirely sure the direction this is going
is actually the right way to attack this...

This whole area seems a bit messy and bolted-on in some ways.
There is ProcessGUCArray(), but also set_config_option() and its
external wrapper SetConfigOption() - the division of labour between
the caller deciding what it wants vs. what the function does internally
with some combination of elevel and source is inconsistent at best.

I also note that a lot of places call set_config_option() directly, so
maybe there is an opportunity to unify here as well.


Stefan




Re: [HACKERS] CREATE FUNCTION .. SET vs. pg_dump

2013-09-01 Thread Stefan Kaltenbrunner
On 09/01/2013 12:53 AM, Stephen Frost wrote:
 * Stefan Kaltenbrunner (ste...@kaltenbrunner.cc) wrote:
 On 08/18/2013 05:40 PM, Tom Lane wrote:
 Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
 While working on upgrading the database of the search system on
 postgresql.org to 9.2 I noticed that the dumps that pg_dump generates on
 that system are actually invalid and cannot be reloaded without being
 hacked on manually...

 CREATE TEXT SEARCH CONFIGURATION pg (
 PARSER = pg_catalog.default );

 CREATE FUNCTION test() RETURNS INTEGER
 LANGUAGE sql SET default_text_search_config TO 'public.pg' AS $$
 SELECT 1;
 $$;

 once you dump that you will end up with an invalid dump because the
 function will be dumped before the actual text search configuration is
 (re)created.

 I don't think it will work to try to fix this by reordering the dump;
 it's too easy to imagine scenarios where that would lead to circular
 ordering constraints.  What seems like a more workable answer is for
 CREATE FUNCTION to not attempt to validate SET clauses when
 check_function_bodies is off, or at least not throw a hard error when
 the validation fails.  (I see for instance that if you try
 ALTER ROLE joe SET default_text_search_config TO nonesuch;
 you just get a notice and not an error.)

 However, I don't recall if there are any places where we assume the
 SET info was pre-validated by CREATE/ALTER FUNCTION.

 any further insights into that issue? - seems a bit silly to have an
 open bug that actually prevents us from taking (restorable) backups of
 the search system on our own website...
 
 It would seem that a simple solution would be to add an elevel argument
 to ProcessGUCArray and then call it with NOTICE in the case that
 check_function_bodies is true.  None of the contrib modules call
 ProcessGUCArray, but should we worry that some external module does?

attached is a rough patch that does exactly that, if we are worried
about an api change we could simple do a ProcessGUCArrayNotice() in the
backbranches if that approach is actually sane.

 
 This doesn't address Tom's concern that we may trust in the SET to
 ensure that the value stored is valid.  That seems like it'd be pretty
 odd given how we typically handle GUCs, but I've not done a
 comprehensive review to be sure.

well, the whole per-database/per-user GUC handling is already pretty
weird/inconsistent. If you, for example, alter a database with an
invalid default_text_search_config you get a NOTICE about it, but every
time you connect to that database later on you get a WARNING.


mastermind=# alter database mastermind set default_text_search_config to
'foo';
NOTICE:  text search configuration foo does not exist
ALTER DATABASE
mastermind=# \q
mastermind@powerbrain:~$ psql
WARNING:  invalid value for parameter default_text_search_config: foo
psql (9.1.9)
Type "help" for help.

 
 Like Stefan, I'd really like to see this fixed, and sooner rather than
 later, so we can continue the process of upgrading our systems to 9.2..

well - we can certainly work around it but others might not...


Stefan
diff --git a/src/backend/catalog/pg_db_role_setting.c b/src/backend/catalog/pg_db_role_setting.c
index 6e19736..7fda64c
*** a/src/backend/catalog/pg_db_role_setting.c
--- b/src/backend/catalog/pg_db_role_setting.c
*** ApplySetting(Snapshot snapshot, Oid data
*** 262,268 
  			 * right to insert an option into pg_db_role_setting was checked
  			 * when it was inserted.
  			 */
! 			ProcessGUCArray(a, PGC_SUSET, source, GUC_ACTION_SET);
  		}
  	}
  
--- 262,268 
  			 * right to insert an option into pg_db_role_setting was checked
  			 * when it was inserted.
  			 */
! 			ProcessGUCArray(a, PGC_SUSET, source, GUC_ACTION_SET,0);
  		}
  	}
  
diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c
index 2a98ca9..5ecc630
*** a/src/backend/catalog/pg_proc.c
--- b/src/backend/catalog/pg_proc.c
*** ProcedureCreate(const char *procedureNam
*** 679,688 
  		if (set_items)			/* Need a new GUC nesting level */
  		{
  			save_nestlevel = NewGUCNestLevel();
  			ProcessGUCArray(set_items,
  			(superuser() ? PGC_SUSET : PGC_USERSET),
  			PGC_S_SESSION,
! 			GUC_ACTION_SAVE);
  		}
  		else
  			save_nestlevel = 0; /* keep compiler quiet */
--- 679,699 
  		if (set_items)			/* Need a new GUC nesting level */
  		{
  			save_nestlevel = NewGUCNestLevel();
+ 			/* reduce elevel to NOTICE if check_function_bodies is disabled */
+ 			if (check_function_bodies) {
  			ProcessGUCArray(set_items,
  			(superuser() ? PGC_SUSET : PGC_USERSET),
  			PGC_S_SESSION,
! 			GUC_ACTION_SAVE,
! 			0);
! 			}
! 			else {
! 			ProcessGUCArray(set_items,
! 			(superuser() ? PGC_SUSET : PGC_USERSET),
! 			PGC_S_SESSION,
! 			GUC_ACTION_SAVE,
! 			NOTICE);
! 			}
  		}
  		else
  			save_nestlevel = 0; /* keep compiler quiet */
diff --git a/src/backend/utils/fmgr/fmgr.c b/src/backend/utils/fmgr/fmgr.c

Re: [HACKERS] CREATE FUNCTION .. SET vs. pg_dump

2013-08-31 Thread Stefan Kaltenbrunner
On 08/18/2013 05:40 PM, Tom Lane wrote:
 Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
 While working on upgrading the database of the search system on
 postgresql.org to 9.2 I noticed that the dumps that pg_dump generates on
 that system are actually invalid and cannot be reloaded without being
 hacked on manually...
 
 CREATE TEXT SEARCH CONFIGURATION pg (
 PARSER = pg_catalog.default );
 
 CREATE FUNCTION test() RETURNS INTEGER
 LANGUAGE sql SET default_text_search_config TO 'public.pg' AS $$
 SELECT 1;
 $$;
 
 once you dump that you will end up with an invalid dump because the
 function will be dumped before the actual text search configuration is
 (re)created.
 
 I don't think it will work to try to fix this by reordering the dump;
 it's too easy to imagine scenarios where that would lead to circular
 ordering constraints.  What seems like a more workable answer is for
 CREATE FUNCTION to not attempt to validate SET clauses when
 check_function_bodies is off, or at least not throw a hard error when
 the validation fails.  (I see for instance that if you try
 ALTER ROLE joe SET default_text_search_config TO nonesuch;
 you just get a notice and not an error.)
 
 However, I don't recall if there are any places where we assume the
 SET info was pre-validated by CREATE/ALTER FUNCTION.

any further insights into that issue? - seems a bit silly to have an
open bug that actually prevents us from taking (restorable) backups of
the search system on our own website...



Stefan




[HACKERS] CREATE FUNCTION .. SET vs. pg_dump

2013-08-18 Thread Stefan Kaltenbrunner
Hi all!


While working on upgrading the database of the search system on
postgresql.org to 9.2 I noticed that the dumps that pg_dump generates on
that system are actually invalid and cannot be reloaded without being
hacked on manually...

Simple way to reproduce is using the following:



CREATE TEXT SEARCH CONFIGURATION pg (
PARSER = pg_catalog.default );

CREATE FUNCTION test() RETURNS INTEGER
LANGUAGE sql SET default_text_search_config TO 'public.pg' AS $$
SELECT 1;
$$;


once you dump that you will end up with an invalid dump because the
function will be dumped before the actual text search configuration is
(re)created. I have not checked in any more detail but I suspect that
this problem is not only affecting default_text_search_config.
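In terms of restore order, what pg_dump produces boils down to something
like this (a rough sketch of the ordering only, not verbatim pg_dump
output):

```sql
-- Rough sketch of the (invalid) ordering pg_dump emits for the
-- example above; not actual pg_dump output.
SET check_function_bodies = false;

-- The function is dumped first, but its SET clause is still validated
-- even with check_function_bodies off, so the restore fails here ...
CREATE FUNCTION test() RETURNS INTEGER
LANGUAGE sql SET default_text_search_config TO 'public.pg' AS $$
SELECT 1;
$$;

-- ... because the configuration it references is only (re)created later.
CREATE TEXT SEARCH CONFIGURATION pg (
PARSER = pg_catalog.default );
```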


Stefan




Re: [HACKERS] CREATE FUNCTION .. SET vs. pg_dump

2013-08-18 Thread Stefan Kaltenbrunner
On 08/18/2013 05:40 PM, Tom Lane wrote:
 Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
 While working on upgrading the database of the search system on
 postgresql.org to 9.2 I noticed that the dumps that pg_dump generates on
 that system are actually invalid and cannot be reloaded without being
 hacked on manually...
 
 CREATE TEXT SEARCH CONFIGURATION pg (
 PARSER = pg_catalog.default );
 
 CREATE FUNCTION test() RETURNS INTEGER
 LANGUAGE sql SET default_text_search_config TO 'public.pg' AS $$
 SELECT 1;
 $$;
 
 once you dump that you will end up with an invalid dump because the
 function will be dumped before the actual text search configuration is
 (re)created.
 
 I don't think it will work to try to fix this by reordering the dump;
 it's too easy to imagine scenarios where that would lead to circular
 ordering constraints.  What seems like a more workable answer is for
 CREATE FUNCTION to not attempt to validate SET clauses when
 check_function_bodies is off, or at least not throw a hard error when
 the validation fails.  (I see for instance that if you try
 ALTER ROLE joe SET default_text_search_config TO nonesuch;
 you just get a notice and not an error.)

hmm yeah - just throwing a NOTICE with check_function_bodies=off seems
like a reasonable workaround for this problem area.
Not sure it would be necessary to turn it into a NOTICE in general,
though ALTER ROLE / ALTER DATABASE seem like established precedent
for this.
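The ALTER ROLE precedent mentioned upthread behaves roughly like this (a
sketch; the exact notice wording is from memory and may differ):

```sql
-- Precedent cited upthread: for ALTER ROLE an unknown value only
-- produces a notice, since the setting is validated again when used.
ALTER ROLE joe SET default_text_search_config TO 'nonesuch';
-- NOTICE (roughly): text search configuration "nonesuch" does not exist

-- The idea is to be similarly lenient for CREATE FUNCTION ... SET
-- when check_function_bodies is off, so that dumps restore cleanly.
```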



Stefan




Re: [HACKERS] [PATCH] Statistics collection for CLUSTER command

2013-08-09 Thread Stefan Kaltenbrunner
On 08/09/2013 12:02 AM, Vik Fearing wrote:
 On 08/08/2013 07:57 PM, Stefan Kaltenbrunner wrote:
 
 On 08/08/2013 01:52 PM, Vik Fearing wrote:
 I would add this to the next commitfest but I seem to be unable to log
 in with my community account (I can log in to the wiki).  Help appreciated.
 would be a bit easier to diagnose if we knew your community account name 
 
 Sorry, it's glaucous.

hmm, looks like your account may be affected by one of the buglets
introduced (and fixed shortly afterwards) by the upgrade of the main
infrastructure to Debian wheezy - please try logging in to the main
website and changing your password at least once. That should make it
work again for the commitfest app...


Stefan




Re: [HACKERS] [PATCH] Statistics collection for CLUSTER command

2013-08-08 Thread Stefan Kaltenbrunner
On 08/08/2013 01:52 PM, Vik Fearing wrote:
 As part of routine maintenance monitoring, it is interesting for us to
 have statistics on the CLUSTER command (timestamp of last run, and
 number of runs since stat reset) like we have for (auto)ANALYZE and
 (auto)VACUUM.  Patch against today's HEAD attached.
 
 I would add this to the next commitfest but I seem to be unable to log
 in with my community account (I can log in to the wiki).  Help appreciated.

would be a bit easier to diagnose if we knew your community account name ;)


Stefan




Re: [HACKERS] Unsafe GUCs and ALTER SYSTEM WAS: Re: ALTER SYSTEM SET

2013-08-05 Thread Stefan Kaltenbrunner
On 08/05/2013 08:02 PM, Josh Berkus wrote:
 On 08/05/2013 10:49 AM, Stephen Frost wrote:
 Josh, I really have to ask- are these people who are implementing puppet
 to control these configs really clamoring to have an 'ALTER SYSTEM' PG
 command to have to code against instead of dealing with text files?  I
 feel like you're arguing for these parameters to be modifiable through
 ALTER SYSTEM on the grounds that these parameters need to be set at some
 point and in some way and not because having them set through ALTER
 SYSTEM actually makes any *sense*.
 
 Nope.  ALTER SYSTEM, from my POV, is mainly for folks who *don't* use
 Puppet/Chef/whatever.  Here's where I see ALTER SYSTEM being useful:
 
 * individually managed servers without centralized management (i.e. one
 DBA, one server).
 * developer machines (i.e. laptops and vms)
 * automated testing of tweaking performance parameters
 * setting logging parameters temporarily on systems under centralized
 management

Overriding the configuration system will just lead to very confused
sysadmins wondering why something that was configured now behaves
differently - and cause operational hazards, because people _WILL_
forget to change those temporary-only settings back?

 
 For that reason, the only way in which I think it makes sense to try to
 make ALTER SYSTEM set work together with Puppet/Chef is in the rather
 limited context of modifying the logging settings for limited-time data
 collection.  Mostly, ALTER SYSTEM SET is for systems were people
 *aren't* using Puppet/Chef.

I tend to disagree; the current approach of ALTER SYSTEM requiring
superuser basically means:

* in a few years from now people will just use superuser over the
network for almost all stuff because it's easy and they can click around
in $gui; having potentially unsafe operations available over the
network will in turn cause a lot of actual downtime (in a lot of cases
the reason why people want remote management is that they don't have
physical/shell access - so if they break stuff they cannot fix it)

* for classic IaaS/SaaS/DBaaS, ALTER SYSTEM seems to be mostly
useless in its current form - because most providers will not or cannot
hand out flat-out superuser (e.g. if you run a managed service you might
want customers to be able to tweak some stuff, but not, say,
archive/PITR/replication settings, because the responsibility for
backups is with the hosting company)




Stefan




Re: [HACKERS] Unsafe GUCs and ALTER SYSTEM WAS: Re: ALTER SYSTEM SET

2013-08-05 Thread Stefan Kaltenbrunner
On 08/05/2013 07:01 PM, Josh Berkus wrote:
 Stephen, all:
 
 (forked from main ALTER SYSTEM discussion.  this thread is meant to
 discuss only this question:
 
 E) whether unsafe settings or restart settings should be allowed in
 ALTER SYSTEM SET.)
 
 On 08/02/2013 01:48 PM, Stephen Frost wrote:
 Reflecting on this a bit more, I'm curious what your list-of-15 is.
 
 Based on serving literally hundreds of clients, the below are the
 settings we change on client servers 50% or more of the time.  Other
 settings I touch maybe 10% of the time.  These are also, in general, the
 settings which I modify when we create Puppet/Chef/Salt scripts for clients.
 
 listen_addresses*@
 shared_buffers*@
 work_mem
 maintenance_work_mem
 effective_cache_size
 synchronous_commit (because of amazon)
 wal_buffers*@
 checkpoint_segments*@
 checkpoint_completion_target* (because of ext3)
 autovacuum* (turn off for data warehouses,
  turn back on for very mistaken users)
 stats_file_directory*@
 
 replication/archiving settings as a set*@
 wal_level, max_wal_senders, wal_keep_segments, hot_standby, archive_mode
 and archive_command
 
 logging settings as a set
 logging_collector*
 everything else
 
 * = requires a cold start to change
 @ potentially can cause failure to restart
 
 Note that two of the settings, shared_buffers and wal_buffers, become
 much less of an issue for restarting the system in 9.3.  Also, it's
 possible that Heikki's automated WAL log management might deal with
 out-of-disk-space better than currently, which makes that less of a risk.
 
 However, you'll see that among the 11 core settings, 7 require a full
 restart, and 5 could potentially cause failure to restart.  That means
 that from my perspective, ALTER SYSTEM SET is at least 45% useless if it
 can't touch unsafe settings, and 63% useless if it can't touch any
 setting which requires a restart.  Adding the replication settings into
 things makes stuff significantly worse that way, although ALTER SYSTEM
 SET would be very useful for logging options provided that
 logging_collector was turned on.

Not sure at all I agree with your "% useless" measure, but we need to
consider that having all of those available remotely means they will
suddenly become action-at-a-distance thingies; people will play with
them more and randomly change stuff, and a lot of those can break the
entire system by, say, overrunning system resources. The same thing
can happen now just as well, but having them available remotely will
also result in tools doing this, and in people who have less
information about the hardware and system or whatever else is going on
on that box. Also we have to keep in mind that in most scenarios the
logfile, and the errors/warnings potentially reported there, will be
useless because people will never see them...


Stefan




Re: [HACKERS] Unsafe GUCs and ALTER SYSTEM WAS: Re: ALTER SYSTEM SET

2013-08-05 Thread Stefan Kaltenbrunner
On 08/05/2013 08:21 PM, Josh Berkus wrote:
 On 08/05/2013 11:14 AM, Stefan Kaltenbrunner wrote:
 * in a few years from now people will just use superuser over the
 network for almost all stuff because its easy and I can click around in
 $gui, having potential unsafe operations available over the network
 will in turn cause a lot of actual downtime (in a lot of cases the
 reason why people want remote management is because the don't have
 physical/shell access - so if they break stuff they cannot fix)
 
 See thread Disabling ALTER SYSTEM SET.
 
 * for classic IaaS/SaaS/DBaaS the ALTER SYSTEM seems to be mostly
 useless in the current form - because most of them will not or cannot
 hand out flat out superuser (like if you run a managed service you might
 want customers to be able to tweak some stuff but say not
 archive/pitr/replication stuff because the responsibility for backups is
 with the hosting company)
 
 100% in agreement.  If someone thought we were serving DBAAS with this,
 they haven't paid attention to the patch.
 
 However, there are other places where ALTER SYSTEM SET will be valuable.
  For example, for anyone who wants to implement an autotuning utility.
 For example, I'm writing a network utility which checks bgwriter stats
 and tries adjusting settings over the network to improve checkpoint
 issues.  Not having to SSH configuration files into place (and make sure
 they're not overridden by other configuration files) would make writing
 that script a *lot* easier.  Same thing with automated performance testing.

seems like an excessively narrow use case to me - people doing that kind
of specific testing can easily do automation over ssh, and those are
very few vs. having to maintain a fairly complex piece of code in
postgresql core.
Nevertheless my main point is that people _WILL_ use this as a simple
convenience tool without fully understanding all the complex
implications, and a few years from now people will be running with
superuser by default (because people will create cool little tools -
say, to change stuff from my tray or via some $IOS app - that carry a
little comment "make sure to create the user WITH SUPERUSER", and
people will follow like lemmings).


Stefan




Re: [HACKERS] Disabling ALTER SYSTEM SET WAS: Re: ALTER SYSTEM SET command to change postgresql.conf parameters

2013-08-05 Thread Stefan Kaltenbrunner
On 08/05/2013 10:18 PM, Dimitri Fontaine wrote:
 Hi,
 
 I'm now completely lost in the current threads. Is there a single valid
 use case for the feature we're trying to design? Who is the target
 audience of this patch?

wonder about that myself...


 
 Josh Berkus j...@agliodbs.com writes:
 I don't see this as a solution at all.  Mr. Sysadmin, we've given the
 DBAs a new tool which allows them to override your version-controlled
 database parameter settings.  You can turn it off, though, by using this
 incredibly complicated, brand-new Event Trigger tool which requires
 writing lots of SQL code to make work.
 
 Well, given what has been said already and very recently again by Tom,
 it's superuser only and installing a precedent wherein superuser is not
 allowed to use a feature looks like a dead-end. You would have to make a
 case that it's comparable to allow_system_table_mods.
 
 If you believe that revoking ALTER SYSTEM SET privileges to superusers
 isn't going to be accepted, I know of only two other paths to allow you
 to implementing your own policy, including per-GUC policy and
 non-superuser granting of ALTER SYSTEM SET in a controled fashion:
 
   - add per GUC GRANT/REVOKE capabilities to SETTINGs,

realistically I think this is what we want (fsvo) for this feature as a
prerequisite; however, that will also make it fairly complex to use for
both humans and tools, so I'm not sure we would really gain anything...


   - implement the same thing within an Event Trigger.
 
 The former has been talked about lots of time already in the past and
 I'm yet to see any kind of progress made about it despite plenty of user
 support for the feature, the latter requires a shared catalog for global
 object Event Triggers and maybe a flexible Extension that you can manage
 by just dropping a configuration file into the PostgreSQL conf.d.
 
 So when trying to be realistic the answer is incredibly complicated
 because it involves a stored procedure to implement the local policy and
 a command to enable the policy, really, I wonder who you're addressing
 there. Certainly not DBA, so that must be sysadmins, who would be better
 off without the feature in the first place if I'm understanding you.
 
 Again, what are we trying to achieve?!

no idea - wondering about that myself...


Stefan




Re: [HACKERS] Disabling ALTER SYSTEM SET WAS: Re: ALTER SYSTEM SET command to change postgresql.conf parameters

2013-08-05 Thread Stefan Kaltenbrunner
On 08/05/2013 09:53 PM, Alvaro Herrera wrote:
 Tom Lane escribió:
 
 What Josh seems to be concerned with in this thread is the question of
 whether we should support an installation *policy decision* not to allow
 ALTER SYSTEM SET.  Not because a particular set of parameters is broken,
 but just because somebody is afraid the DBA might break things.  TBH
 I'm not sure I buy that, at least not as long as ALTER SYSTEM is a
 superuser feature.  There is nothing in Postgres that denies permissions
 to superusers, and this doesn't seem like a very good place to start.
 
 Someone made an argument about this on IRC: GUI tool users are going to
 want to use ALTER SYSTEM through point-and-click, and if all we offer is
 superuser-level access to the feature, we're going to end up with a lot
 of people running with superuser privileges just so that they are able
 to tweak inconsequential settings.  This seems dangerous.

indeed it is

 
 The other issue is that currently you can only edit a server's config if
 you are logged in to it.  If we permit SQL-level access to that, and
 somebody who doesn't have access to edit the files blocks themselves
 out, there is no way for them to get a working system *at all*.

thinking more about that - is there _ANY_ precedent of an application
that can be completely reconfigured over a remote access protocol and
has solved the reliability and security challenges of that to a
reasonable degree?


Stefan




Re: [HACKERS] PostgreSQL 9.3 latest dev snapshot

2013-06-28 Thread Stefan Kaltenbrunner
On 06/27/2013 12:22 PM, Magnus Hagander wrote:
 On Tue, Jun 25, 2013 at 3:31 PM, Michael Paquier
 michael.paqu...@gmail.com wrote:

 On 2013/06/25, at 22:23, Fujii Masao masao.fu...@gmail.com wrote:

 On Tue, Jun 25, 2013 at 6:33 PM, Michael Paquier
 michael.paqu...@gmail.com wrote:
 On Tue, Jun 25, 2013 at 5:33 PM, Misa Simic misa.si...@gmail.com wrote:
 Hi,

 Where we can find latest snapshot for 9.3 version?

 We have taken latest snapshot from
 http://ftp.postgresql.org/pub/snapshot/dev/

 But it seems it is for 9.4 version...
 9.3 has moved to branch REL9_3_STABLE a couple of days ago.

 Yes. We can find the snapshot from REL9_3_STABLE git branch.
 http://git.postgresql.org/gitweb/?p=postgresql.git;a=shortlog;h=refs/heads/REL9_3_STABLE
 Indeed, I completely forgot that you can download snapshots from 
 postgresql.org's git. Simply use that instead of the FTP server now as long 
 as 9.3 snapshots are not generated there.
 
 In case somebody is still looking, snapshots are properly building for 9.3 
 now.
 
 Those snapshots aren't identical to a download from git, as they've
 gone through a make dist-prep or whatever it's called. But they're
 pretty close.

there is more to that - those snapshots will also only get published if
the source has passed a full buildfarm run, as a basic form of
validation.


 
 However, if oyu're looking for a snapshot, please use the one on the
 ftpsite. Generating those snapshots on the git server is slow and
 expensive...

definitely


Stefan




Re: [HACKERS] PostgreSQL 9.3 latest dev snapshot

2013-06-28 Thread Stefan Kaltenbrunner
On 06/28/2013 06:51 PM, Alvaro Herrera wrote:
 Magnus Hagander escribió:
 
 However, if oyu're looking for a snapshot, please use the one on the
 ftpsite. Generating those snapshots on the git server is slow and
 expensive...
 
 Maybe we should redirect those gitweb snapshot URLs to the FTP site?

maybe - but I can actually see the (rare) use case of being able to
create a snapshot on a per-commit basis, so redirecting to something
that is more of a basic once-a-day verified snapshot tarball seems wrong
to me, despite the fact that I think using those is a better idea in
general.


Stefan




Re: [HACKERS] Patch for fast gin cache performance improvement

2013-06-23 Thread Stefan Kaltenbrunner
On 06/23/2013 04:03 AM, ian link wrote:
 Looks like my community login is still not working. No rush, just wanted
 to let you know. Thanks!

have you tried to log in once to the main website per:

http://www.postgresql.org/message-id/CABUevEyt9tQfcF7T2Uzcr8WeF9M=s8qSACuCmN5L2Et26=r...@mail.gmail.com

?


Stefan




Re: [HACKERS] PostgreSQL 9.3 beta breaks some extensions make install

2013-05-30 Thread Stefan Kaltenbrunner
On 05/29/2013 06:08 PM, Cédric Villemain wrote:
 I just took time to inspect our contribs, USE_PGXS is not supported by all
 of them atm because of SHLIB_PREREQS (it used submake) I have a patch
 pending here to fix that. Once all our contribs can build with USE_PGXS I
 fix the VPATH.
 
 I've added 'most' of the patches to the commitfest... (I am not sure it is 
 required, as this is more bugfix than anything else IMHO)
 See 
 https://commitfest.postgresql.org/action/patch_view?id=1122
 https://commitfest.postgresql.org/action/patch_view?id=1123
 https://commitfest.postgresql.org/action/patch_view?id=1124
 
 
 I stopped trying to add new item after too many failures from 
 https://commitfest.postgresql.org/action/patch_form 
 So one patch is not in the commitfest yet (fix_install_ext_vpath.patch)

failures? what kind of issues did you experience?



Stefan




[HACKERS] gemulon.postgresql.org/gitmaster.postgresql.org

2013-05-23 Thread Stefan Kaltenbrunner

Hi All!


We will be upgrading gemulon.postgresql.org during the next few hours
to the current release of Debian (wheezy/7.0), as discussed with
various people.
To prevent any kind of issues we will be locking out committers for a
brief amount of time, so don't be surprised if you get an error message.

Stefan




Re: [HACKERS] gemulon.postgresql.org/gitmaster.postgresql.org

2013-05-23 Thread Stefan Kaltenbrunner

On 05/23/2013 12:20 PM, Stefan Kaltenbrunner wrote:
 Hi All!
 
 
 We will be upgrading gemulon.postgresql.org during the next few
 hours to the current release of debian (wheezy/7.0) as discussed
 with various people. To prevent any kind of issues we will be
 locking out commiters for a brief amount of time so don't be
 surprised if you get an error message.

all done - happy committing



Stefan




Re: [HACKERS] spoonbill vs. -HEAD

2013-05-07 Thread Stefan Kaltenbrunner

On 04/04/2013 02:18 AM, Tom Lane wrote:

Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:

On 04/03/2013 12:59 AM, Tom Lane wrote:

BTW, on further thought it seems like maybe this is an OpenBSD bug,
at least in part: what is evidently happening is that the temporary
blockage of SIGINT during the handler persists even after we've
longjmp'd back to the main loop.  But we're using sigsetjmp(..., 1)
to establish that longjmp handler --- so why isn't the original signal
mask reinstalled when we return to the main loop?

If (your version of) OpenBSD is getting this wrong, it'd explain why
we've not seen similar behavior elsewhere.



hmm trolling the openbsd cvs history brings up this:
http://www.openbsd.org/cgi-bin/cvsweb/src/sys/arch/sparc64/sparc64/machdep.c?r1=1.143;sortby=date#rev1.143


That's about alternate signal stacks, which we're not using.

I put together a simple test program (attached) and tried it on
spoonbill, and it says that the signal *does* get unblocked when control
returns to the sigsetjmp(...,1).  So now I'm really confused.  Somehow
the results we're getting in a full-fledged backend do not match up with
the results gotten by this test program ... but why?


as a followup to this - I spent some time upgrading spoonbill to the
latest OpenBSD release (5.3) the other day, and it is now able to pass
a full regression test run when run manually. I will add it back to the
regular reporting schedule, but it seems at least part of the problem
is/was an operating-system-level issue that got fixed in either 5.2 or
5.3 (spoonbill was on 5.1 before).



Stefan




Re: [HACKERS] Re: [BUGS] BUG #8128: pg_dump (= 9.1) failed while dumping a scheme named old from PostgreSQL 8.4

2013-05-01 Thread Stefan Kaltenbrunner
On 05/01/2013 04:26 PM, David Fetter wrote:
 On Tue, Apr 30, 2013 at 07:53:27PM -0400, Tom Lane wrote:
 adrian.vondendrie...@credativ.de writes:
 [ recent pg_dump fails against an 8.4 server if old is used as a
 name ]

 Yeah.  The reason for this is that old was considered a reserved
 word in 8.4 and before, but since 9.0 it is not reserved (indeed it
 isn't a keyword at all anymore), so 9.0 and later pg_dump don't
 think they need to quote it in commands.
 
 According to SQL:2003 and SQL:2008 (and the draft standard, if that
 matters) in section 5.2 of Foundation, both NEW and OLD are reserved
 words, so we're going to need to re-reserve them to comply.

erm? I don't really see why we have any need to reserve something _on
purpose_ when there is no technical reason to do so...

 
 Sadly, this will cause problems for people who have tables with those
 names, but we've introduced incompatibilities (in 8.3, e.g.) that hit
 a much bigger part of our user base much harder than this.  When we do
 re-reserve, we'll need to come up with a migration path.

so why again do we want to create an(other) incompatibility hazard?


Stefan




Re: [HACKERS] Re: [BUGS] BUG #8128: pg_dump (= 9.1) failed while dumping a scheme named old from PostgreSQL 8.4

2013-05-01 Thread Stefan Kaltenbrunner
On 05/01/2013 06:14 PM, David Fetter wrote:
 On Wed, May 01, 2013 at 11:12:28AM -0400, Tom Lane wrote:
 David Fetter da...@fetter.org writes:
 According to SQL:2003 and SQL:2008 (and the draft standard, if
 that matters) in section 5.2 of Foundation, both NEW and OLD are
 reserved words, so we're going to need to re-reserve them to
 comply.

 We don't and won't.
 
 Not so fast or so definite, if you please.
 
 I've got a GSoC project in that implements things with both of these
 keywords, and doubtless others will use other keywords either this
 coming (9.4) cycle or in a later one.

past history has shown that this is relatively rare, and almost always
it was possible to find a way around it - not sure why we need to panic
in advance?

 
 If you want to have a discussion about the timing, that is a perfectly
 reasonable discussion to have.  Peremptorily saying, don't and won't
 is not a great way to operate, however tempting it may be for you.
 
 There is a case to be made, and I'm making it here, for pre-reserving
 all the keywords and erroring out with Feature not implemented for
 those not yet implemented.  This would keep us, and more importantly
 our user base, from wondering when the next random change to the SQL
 language would affect them.

as per the discussion on IRC - this would break applications left and
right for no real reason and no good, and I don't think hypothetical
features that have not even been fully discussed warrant anything like
that. Also this would be an uphill battle for no good (i.e. every few
years, when a new spec comes out, we break apps for a feature we might
get 10 years later?)


 
 I'd suggest doing this over about 3 releases in the sense of warning
 people at the appropriate juncture--I'm guessing at least CREATE,
 ALTER, pg_dump(all) and pg_upgrade would be involved.  Three releases
 is just a suggestion intended to start a discussion.
 
 There are very many other keywords that are less reserved in
 Postgres than in the spec; this is a good thing.
 
 How is it a good thing?  Help me understand.

why is breaking random applications or making it harder for people to
migrate from other databases without any reason a good thing?



Stefan




Re: [HACKERS] Allowing parallel pg_restore from pipe

2013-04-24 Thread Stefan Kaltenbrunner
On 04/24/2013 09:51 PM, Andrew Dunstan wrote:
 
 On 04/24/2013 03:49 PM, Andrew Dunstan wrote:

 On 04/24/2013 03:40 PM, Dimitri Fontaine wrote:
 Andrew Dunstan and...@dunslane.net writes:
 On 04/23/2013 07:53 PM, Timothy Garnett wrote:
 Anyways, the question is if people think this is generally useful. 
 If so
 I can clean up the preferred choice a bit and rebase it off of master,
 etc.
 I find this idea very useful yes.

 Another idea would be to allow for parallel pg_dump output to somehow be
 piped into a parallel pg_restore. I don't know how to solve that at all,
 it just sound something worthy of doing too.



 That's not going to work, the output from parallel pg_dump is
 inherently multiple streams. That's why it ONLY supports directory
 format, and not even custom format on disk, let alone a pipe.

 
 
 What might make sense is something like pg_dump_restore which would have
 no intermediate storage at all, just pump the data etc from one source
 to another in parallel. But I pity the poor guy who has to write it :-)

hmm, pretty sure that Joachim's initial patch for parallel dump actually
had a PoC for something very similar to that...


Stefan




[HACKERS] Re: [COMMITTERS] pgsql: Get rid of USE_WIDE_UPPER_LOWER dependency in trigram constructi

2013-04-10 Thread Stefan Kaltenbrunner
On 04/08/2013 10:11 AM, Dimitri Fontaine wrote:
 Tom Lane t...@sss.pgh.pa.us writes:
 If there is anybody still using Postgres on machines without wcstombs() or
 towlower(), and they have non-ASCII data indexed by pg_trgm, they'll need
 to REINDEX those indexes after pg_upgrade to 9.3, else searches may fail
 incorrectly. It seems likely that there are no such installations, though.
 
 Those conditions seem just complex enough to require a test script that
 will check that for you. What if we created a new binary responsible for
 auto checking all those release-note items that are possible to machine
 check, then issue a WARNING containing the URL to the release notes you
 should be reading, and a SQL script (ala pg_upgrade) to run after
 upgrade?
 
   $ pg_checkupgrade -d connection=string  upgrade.sql
   NOTICE: checking 9.3 upgrade release notes
   WARNING: RN-93-0001 index idx_trgm_abc is not on-disk compatible with 9.3
   WARNING: TN-93-0012 …
   WARNING: This script is NOT comprehensive, read release notes at …
 
 The target version would be hard coded on the binary itself for easier
 maintaining of it, and that proposal includes a unique identifier for
 any release note worthy warning that we know about, that would be
 included in the output of the program.
 
 I think most of the checks would only have to be SQL code, and some of
 them should include running some binary code the server side. When
 that's possible, we could maybe expose that binary code in a server side
 extension so as to make the client side binary life's easier. That would
 also be an excuse for the project to install some upgrade material on
 the old server, which has been discussed in the past for preparing
 pg_upgrade when we have a page format change.

given that something like this will also have to be dealt with by
pg_upgrade, why not fold it into that (e.g. into -c) completely and
recommend running that? On the flip side, if people don't read the
release notes they will also not run any kind of binary/script
mentioned there...



Stefan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] spoonbill vs. -HEAD

2013-04-03 Thread Stefan Kaltenbrunner
On 04/03/2013 12:59 AM, Tom Lane wrote:
 I wrote:
 I think the simplest fix is to insert PG_SETMASK(UnBlockSig) into
 StatementCancelHandler() and any other handlers that might exit via
 longjmp.  I'm a bit inclined to only do this on platforms where a
 problem is demonstrable, which so far is only OpenBSD.  (You'd
 think that all BSDen would have the same issue, but the buildfarm
 shows otherwise.)
 
 BTW, on further thought it seems like maybe this is an OpenBSD bug,
 at least in part: what is evidently happening is that the temporary
 blockage of SIGINT during the handler persists even after we've
 longjmp'd back to the main loop.  But we're using sigsetjmp(..., 1)
 to establish that longjmp handler --- so why isn't the original signal
 mask reinstalled when we return to the main loop?
 
 If (your version of) OpenBSD is getting this wrong, it'd explain why
 we've not seen similar behavior elsewhere.

Hmm, trawling the OpenBSD CVS history brings up this:

http://www.openbsd.org/cgi-bin/cvsweb/src/sys/arch/sparc64/sparc64/machdep.c?r1=1.143;sortby=date#rev1.143


Stefan




Re: [HACKERS] spoonbill vs. -HEAD

2013-03-27 Thread Stefan Kaltenbrunner
On 03/26/2013 11:30 PM, Tom Lane wrote:
 Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
 hmm - will look into that in a bit - but I also just noticed that on the
 same day spoonbill broke there was also a commit to that file
 immediately before that code adding the fflush() calls.
 
 It's hard to see how those would be related to this symptom.  My bet
 is that the new fk-deadlock test exposed some pre-existing issue in
 isolationtester.  Not quite clear what yet, though.

Yeah, removing them does not seem to change the behaviour at all.


 
 A different line of thought is that the cancel was received by the
 backend but didn't succeed in cancelling the query for some reason.

I added the "PQcancel failed" code path you suggested, but it does not
seem to get triggered at all, so the above might actually be what is
happening...


Stefan




Re: [HACKERS] spoonbill vs. -HEAD

2013-03-27 Thread Stefan Kaltenbrunner
On 03/26/2013 10:18 PM, Tom Lane wrote:
 Andrew Dunstan and...@dunslane.net writes:
 There is some timeout code already in the buildfarm client. It was 
 originally put there to help us when we got CVS hangs, a not infrequent 
 occurrence in the early days, so it's currently only used if configured 
 for the checkout phase, but it could easily be used to create a build 
 timeout which would kill the whole process group if the timeout expired. 
 It wouldn't work on Windows, and of course it won't solve whatever 
 problem caused the hang in the first place, but it still might be worth 
 doing.
 
 +1 --- at least then we'd get reports of failures, rather than the
 current behavior where the animal just stops reporting.

yeah I have had multiple buildfarm members running into similar issues
(like the still-unexplained issues on spoonbill from a year back:
http://www.postgresql.org/message-id/4fe4b674.3020...@kaltenbrunner.cc)
so I would really like to see an option for a global timeout.



Stefan




[HACKERS] spoonbill vs. -HEAD

2013-03-26 Thread Stefan Kaltenbrunner
Hi all!


I finally started to investigate why spoonbill stopped reporting to the
buildfarm about 2 months ago.
It seems that the foreign-key locking patch (or some commit very close
to January 23rd) broke it in a fairly annoying way: running the
buildfarm script consistently stalls during the isolationtester part of
the regression testing, leaving the PostgreSQL instance running and
causing all future buildfarm runs to fail...


The process listing at that time looks like:

https://www.kaltenbrunner.cc/files/process_listing.txt


pg_stats_activity of the running instance:


https://www.kaltenbrunner.cc/files/pg_stat_activity.txt


pg_locks:

https://www.kaltenbrunner.cc/files/pg_locks.txt


backtraces of the three backends:

https://www.kaltenbrunner.cc/files/bt_20467.txt
https://www.kaltenbrunner.cc/files/bt_20897.txt
https://www.kaltenbrunner.cc/files/bt_24285.txt




Stefan




Re: [HACKERS] spoonbill vs. -HEAD

2013-03-26 Thread Stefan Kaltenbrunner
On 03/26/2013 08:45 PM, Tom Lane wrote:
 Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
 I finally started to investigate why spoonbill stopped reporting to the
 buildfarm feedback about 2 months ago.
 It seems that the foreign-keys locking patch (or something commity very
 close to January 23th) broke it in a fairly annoying way - running the
 buildfarm script seems to
 consistently stall during the isolationtester part of the regression
 testing leaving the postgresql instance running causing all future
 buildfarm runs to fail...
 
 It looks from here like the isolationtester client is what's dropping
 the ball --- the backend states are unsurprising, and two of them are
 waiting for a new client command.  Can you get a stack trace from the
 isolationtester process?


https://www.kaltenbrunner.cc/files/isolationtester.txt


Stefan




Re: [HACKERS] spoonbill vs. -HEAD

2013-03-26 Thread Stefan Kaltenbrunner
On 03/26/2013 09:33 PM, Tom Lane wrote:
 Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
 On 03/26/2013 08:45 PM, Tom Lane wrote:
 It looks from here like the isolationtester client is what's dropping
 the ball --- the backend states are unsurprising, and two of them are
 waiting for a new client command.  Can you get a stack trace from the
 isolationtester process?
 
 https://www.kaltenbrunner.cc/files/isolationtester.txt
 
 Hmm ... isolationtester.c:584 is in the code that tries to cancel the
 current permutation (test case) after realizing that it's constructed
 an invalid permutation.  It looks like the preceding PQcancel() failed
 for some reason, since the waiting backend is still waiting.  The
 isolationtester code doesn't bother to check for an error result there,
 which is kinda bad, not that it's clear what it could do about it.
 Could you look at the contents of the local variable buf in the
 run_permutation stack frame?  Or else try modifying the code along the
 lines of
 
 -PQcancel(cancel, buf, sizeof(buf));
 +if (!PQcancel(cancel, buf, sizeof(buf)))
 +  fprintf(stderr, "PQcancel failed: %s\n", buf);
 
 and see if it prints anything interesting before hanging up.

Hmm, will look into that in a bit - but I also just noticed that on the
same day spoonbill broke there was also a commit to that file,
immediately before that code, adding the fflush() calls.


Stefan




Re: [HACKERS] Interesting post-mortem on a near disaster with git

2013-03-25 Thread Stefan Kaltenbrunner
On 03/24/2013 11:22 PM, Andrew Dunstan wrote:
 
 On 03/24/2013 06:06 PM, Michael Paquier wrote:
 On Mon, Mar 25, 2013 at 12:52 AM, Tom Lane t...@sss.pgh.pa.us
 mailto:t...@sss.pgh.pa.us wrote:

 Over the weekend, KDE came within a gnat's eyelash of losing *all*
 their authoritative git repos, despite having seemingly-extensive
 redundancy.  Read about it here:
 http://jefferai.org/2013/03/24/too-perfect-a-mirror/

 It is really great that KDE people are actually sharing this
 experience. This is really profitable for other projects as well as
 individuals.
 And thanks for sharing it here.


 We should think about protecting our own repo a bit better,
 especially
 after the recent unpleasantness with a bogus forced update.  The idea
 of having clones that are deliberately a day or two behind seems
 attractive ...

 Just an idea here: why not adding a new subdomain in postgresql.org
 http://postgresql.org for mirrors of the official GIT repository
 similar to the buildfarm?
 People registered in this service could publish themselves mirrors and
 decide by themselves the delay their
 clone keeps with the parent repo. The scripts used by each mirror
 maintainer (for backup, sync repo with
 a given delay) could be centralized in a way similar to buildfarm code
 so as everybody in the community could
 use it and publish it if they want.

 Also, the mirrors published should be maintained by people that are
 well-known inside the community,
 and who would not add extra commits which would make the mirror
 out-of-sync with the parent repo.

 Such an idea is perhaps too much if the point is to maintain 2-3
 mirrors of the parent repo, but gives
 enough transparency to actually know where the mirrors are and what is
 the sync delay maintained.
 
 
 
 This strikes me as being overkill. The sysadmins seem to have it covered.
 
 Back when we used CVS for quite a few years I kept 7 day rolling
 snapshots of the CVS repo, against just such a difficulty as this. But
 we seem to be much better organized with infrastructure these days so I
 haven't done that for a long time.

Well, there is always room for improvement (and for learning from
others), but I agree that this proposal seems way overkill. If people
think we should keep online delayed mirrors, we certainly have the
resources to do that on our own if we want...


Stefan




Re: [HACKERS] Interesting post-mortem on a near disaster with git

2013-03-24 Thread Stefan Kaltenbrunner
On 03/24/2013 05:08 PM, Martijn van Oosterhout wrote:
 On Sun, Mar 24, 2013 at 11:52:17AM -0400, Tom Lane wrote:
 Over the weekend, KDE came within a gnat's eyelash of losing *all*
 their authoritative git repos, despite having seemingly-extensive
 redundancy.  Read about it here:
 http://jefferai.org/2013/03/24/too-perfect-a-mirror/

 We should think about protecting our own repo a bit better, especially
 after the recent unpleasantness with a bogus forced update.  The idea
 of having clones that are deliberately a day or two behind seems
 attractive ...
 
 I think the lesson here is that a mirror is not a backup. RAID, ZFS,
 and version control are all not backups.
 
 Taking a tarball of the entire repository and storing it on a different
 machine would solve just about any problem you can think of in this
 area.

FWIW, the sysadmin team has file-level backups of all pginfra hosts
(two backups per day, one kept per day for a week, plus a weekly full
kept for 4 weeks of history).


Stefan




Re: [HACKERS] Commitfest progress

2013-03-03 Thread Stefan Kaltenbrunner
On 03/03/2013 08:37 PM, Josh Berkus wrote:
 
 Works for me, since I haven't been able to find time for it during the
 week.
 
 Set aside a couple hours to deal with it this AM, foiled because my
 community account is broken.  Grrr.

we might be able to fix this if you could tell us what exactly the
problem is?


Stefan




Re: [HACKERS] pg_dump transaction's read-only mode

2013-01-22 Thread Stefan Kaltenbrunner

On 01/21/2013 08:45 PM, Tom Lane wrote:

Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:

On 01/21/2013 02:00 PM, Tom Lane wrote:

(It's entirely likely that the 7.0 server I keep around for testing this
is the last one in captivity anywhere.  But IIRC, we've heard fairly
recent reports of people still using 7.2.  We'd have to desupport before
7.3 to save any meaningful amount of pg_dump code, so I'm not convinced
it's time to pull the trigger on this quite yet.)



old versions are still alive - just yesterday we had someone on IRC
trying to build 7.1.3 on a modern Ubuntu installation because he has an
old app depending on it, and 7.2 especially shows up regularly as well.



If the effort to keep pg_dump support for those alive is not too bad, we
should try to support them as long as we can.


Of course, the counter-argument is that people who are still using those
versions are probably not going to be interested in upgrading to a
modern version anyway.  Or if they are, dumping with their existing
version of pg_dump is likely to not be much worse than dumping with the
target version's pg_dump --- as Robert mentioned upthread, if you're
migrating from such an old version you're in for a lot of compatibility
issues anyhow, most of which pg_dump can't save you from.


sure - but having at least an easy way to properly get your schema and 
data over from such an old version helps. That only leaves you with 
the compatibility issues in the app - having to do both would be worse.





Having said that, I'm not in a hurry to pull pg_dump support for old
versions.  But we can't keep it up forever.  In particular, once that
old HPUX box dies, I'm not likely to want to try to get 7.0 to compile
on a newer box just so I can keep checking pg_dump compatibility.


Yeah, I'm not saying we need to keep this forever. If we drop 
everything up to and including 7.3 with 9.4, that would mean we 
supported direct upgrades using pg_dump for over 10 years (7.3's last 
supported release was in January 2008, and assuming we get 9.3 out the 
door this year); for indirect upgrades (i.e. using one intermediate 
step) we would be talking more like 15-20 years if we keep the current 
level of backwards compatibility - that seems plenty to me...




Stefan




Re: [HACKERS] pg_dump transaction's read-only mode

2013-01-21 Thread Stefan Kaltenbrunner
On 01/21/2013 02:00 PM, Tom Lane wrote:
 Pavan Deolasee pavan.deola...@gmail.com writes:
 On Sun, Jan 20, 2013 at 4:29 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 As submitted, this broke pg_dump for dumping from pre-8.0 servers.
 (7.4 didn't accept commas in SET TRANSACTION syntax, and versions
 before that didn't have the READ ONLY option at all.)
 
 My bad. I did not realize that pg_dump is still supported for pre-8.0 
 releases.
 
 It supports servers back to 7.0.  At some point we ought to consciously
 desupport old servers and pull out the corresponding code, but that
 needs to be an act of intention not omission ;-)
 
 (It's entirely likely that the 7.0 server I keep around for testing this
 is the last one in captivity anywhere.  But IIRC, we've heard fairly
 recent reports of people still using 7.2.  We'd have to desupport before
 7.3 to save any meaningful amount of pg_dump code, so I'm not convinced
 it's time to pull the trigger on this quite yet.)

Old versions are still alive - just yesterday we had someone on IRC
trying to build 7.1.3 on a modern Ubuntu installation because he has an
old app depending on it, and 7.2 especially shows up regularly as well.

If the effort to keep pg_dump support for those alive is not too bad, we
should try to support them as long as we can.


Stefan




Re: [HACKERS] Porting to Haiku

2013-01-12 Thread Stefan Kaltenbrunner
On 01/12/2013 10:07 PM, Tom Lane wrote:
 Mark Hellegers mhell...@xs4all.nl writes:
 I have only one server available running Haiku. Can I also run a normal 
 Postgresql installation on that same machine? If so, I will be able to 
 run the build multiple times a day.
 
 I believe that works at the moment, although there were discussions just
 yesterday about whether we really wanted to support it.  (The point
 being that the buildfarm script would have to be careful not to kill the
 live postmaster when cleaning up after a test failure.  I would
 definitely advise that you not run the buildfarm under the same userid
 as any live server, so that no accidental damage is possible.)

IIRC Haiku is very strange in that regard - it is basically a
single-user operating system, which I think makes it largely useless
as a server and a horror from a security POV.


Stefan




[HACKERS] strange isolation test buildfarm failure on guaibasaurus

2012-12-05 Thread Stefan Kaltenbrunner
Hi all!

http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=guaibasaurus&dt=2012-12-05%2016%3A17%3A01


seems like a rather odd failure in the isolation test (client)


Stefan





Re: [HACKERS] [pgsql-www] Maintenance announcement for trill.postgresql.org

2012-11-20 Thread Stefan Kaltenbrunner
On 11/19/2012 06:24 PM, Stefan Kaltenbrunner wrote:
 Hi all!
 
 
 There will be planned hardware/software maintenance tomorrow,
 Tuesday (20th November 2012), starting at 16:30 CET - affecting some
 redundant services (ftp and www mirrors) as well as the following public
 hosts:
 
  * yridian.postgresql.org (www.postgresql.eu)
  * antos.postgresql.org (anoncvs.postgresql.org)
  * malur.postgresql.org (mailinglists)
 
 During this maintenance window we will be doing some software and
 hardware changes involving a number of reboots and we expect a maximum
 outage of an hour.

this was completed at around 17:00 CET without any incidents, and all
systems should be back online again.




Stefan




[HACKERS] Maintenance announcement for trill.postgresql.org

2012-11-19 Thread Stefan Kaltenbrunner
Hi all!


There will be planned hardware/software maintenance tomorrow,
Tuesday (20th November 2012), starting at 16:30 CET - affecting some
redundant services (ftp and www mirrors) as well as the following public
hosts:

 * yridian.postgresql.org (www.postgresql.eu)
 * antos.postgresql.org (anoncvs.postgresql.org)
 * malur.postgresql.org (mailinglists)

During this maintenance window we will be doing some software and
hardware changes involving a number of reboots and we expect a maximum
outage of an hour.



Stefan




Re: [HACKERS] Draft release notes complete

2012-09-11 Thread Stefan Kaltenbrunner
On 09/10/2012 05:19 PM, Bruce Momjian wrote:
 On Mon, Sep 10, 2012 at 12:06:18PM -0300, Alvaro Herrera wrote:
 It is this kind of run-around that caused me to generate my own doc
 build in the past;  maybe I need to return to doing my own doc build.

 You keep threatening with that.  You are free, of course, to do anything
 you want, and no one will break sweat about it.  I already said I will
 work on getting this up and running, but I can't give you a deadline for
 when it'll be working.
 
 My point is that this frequent doc build feature was removed with no
 discussion, and adding it seems to be some herculean job that requires
 red tape only a government worker would love.

Not sure how you got that impression - but understanding all the
requirements for something is usually key to implementing a solution, so
discussing those requirements seems like a sensible thing to do.
Sysadmin is a volunteer effort; we do our best both to keep the existing
infrastructure up and to improve it as we can, but resources are limited
and we need to consider the time/effort ratio of things.
Anyway, Alvaro clearly stated he would deal with it, but obviously that
is not enough for your urgent demands, so there is really not much we
can do about it...

 
 I have already started working on updating my script for git  --- should
 be done shortly, so you can remove my request.

ok


Stefan




Re: [HACKERS] Draft release notes complete

2012-09-09 Thread Stefan Kaltenbrunner
On 09/06/2012 12:13 AM, Peter Eisentraut wrote:
 On 8/29/12 11:52 PM, Andrew Dunstan wrote:
 Why does this need to be tied into the build farm?  Someone can surely
 set up a script that just runs the docs build at every check-in, like it
 used to work.  What's being proposed now just sounds like a lot of
 complication for little or no actual gain -- net loss in fact.

 It doesn't just build the docs. It makes the dist snapshots too.
 
 Thus making the turnaround time on a docs build even slower ... ?
 
 And the old script often broke badly, IIRC.
 
 The script broke on occasion, but the main problem was that it wasn't
 monitored.  Which is something that could have been fixed.
 
 The current setup doesn't install
 anything if the build fails, which is a distinct improvement.
 
 You mean it doesn't build the docs if the code build fails?  Would that
 really be an improvement?

Why would we want to publish docs for something that fails to build
and/or fails regression testing? To me, code and the docs for it are a
combined thing, and there is no point in pushing docs for something that
fails even basic testing...


Stefan




Re: [HACKERS] Draft release notes complete

2012-09-09 Thread Stefan Kaltenbrunner
On 09/06/2012 03:43 AM, Bruce Momjian wrote:
 On Wed, Sep  5, 2012 at 09:33:35PM -0400, Andrew Dunstan wrote:

 On 09/05/2012 09:25 PM, Bruce Momjian wrote:
 On Wed, Sep  5, 2012 at 09:56:32PM -0300, Alvaro Herrera wrote:
 Excerpts from Tom Lane's message of mié sep 05 20:24:08 -0300 2012:
 Andrew Dunstan and...@dunslane.net writes:
 The only reason there is a significant delay is that the administrators
 have chosen not to run the process more than once every 4 hours. That's
 a choice not dictated by the process they are using, but by other
 considerations concerning the machine it's being run on. Since I am not
 one of the admins and don't really want to take responsibility for it I
 am not going to second guess them. On the very rare occasions when I
 absolutely have to have the totally up to date docs I build them myself
 - it takes about 60 seconds on my modest hardware.
 I think the argument for having a quick docs build service is not about
 the time needed, but the need to have all the appropriate tools
 installed.  While I can understand that argument for J Random Hacker,
 I'm mystified why Bruce doesn't seem to have bothered to get a working
 SGML toolset installed.  It's not like editing the docs is a one-shot
 task for him.
 As far as I understand, Bruce's concern is not about seeing the docs
 built himself, but having an HTML copy published somewhere that he can
 point people to, after applying some patch.  To me, that's a perfectly
 legitimate reason to want to have them quickly.
 Correct.  I have always had a working SGML toolset.  If we are not going
 to have the developer site run more often, I will just go back to
 setting up my own public doc build, like I used to do.  I removed mine
 when the official one was more current/reliable --- if that has changed,
 I will return to my old setup, and publish my own URL for users to
 verify doc changes.

 How often do you want? After all,
 http://developer.postgresql.org/docs/postgres/index.html is
 presumably going to keep pointing to where it now points.
 
 Well, the old code checked every five minutes, and it rebuilt in 4
 minutes, so there was a max of 10 minutes delay.

The new code gives you a lot more, though - it makes sure that the code
the docs refer to actually builds and passes testing, it uses the exact
same toolchain and setup/infrastructure with which we build the official
snapshots/tarballs and the official PDFs, and it reuses an important
piece of our environment - the buildfarm client.
I'm having a hard time understanding why a bit more frequency for the
occasional "docs-only change, need to show somebody the HTML rather
than the patch" case is really something we need.


Stefan




Re: [HACKERS] Draft release notes complete

2012-09-09 Thread Stefan Kaltenbrunner
On 09/07/2012 06:50 PM, Andrew Dunstan wrote:
 
 On 09/07/2012 09:57 AM, Magnus Hagander wrote:
 On Thu, Sep 6, 2012 at 1:06 AM, Andrew Dunstan and...@dunslane.net
 wrote:

 A complete run of this process takes less than 15 minutes. And as I have
 pointed out elsewhere that could be reduced substantially by skipping
 certain steps. It's as simple as changing the command line in the
 crontab
 entry.
 Is it possible to run it only when the *docs* have changed, and not
 when it's just a code-commit? meaning, is the detection smart enough
 for that?


 
 
 There is a filter mechanism used in detecting if a run is needed, and in
 modern versions of the client (Release 4.7, one version later than
 guaibasaurus is currently using) it lets you have both include and
 exclude filters. For example, you could have this config setting:
 
 trigger_include = qr(/doc/src/),
 
 and it would then only match changed files in the docs tree.
 
 It's a global mechanism, not per step. So it will run all the steps
 (other than those you have told it to skip) if it finds any files
 changed that match the filter conditions.
 
 If you do that you would probably want to have two animals, one doing
 docs builds only and running frequently, one doing the dist stuff much
 less frequently.

Hmm, that might work, but it will only be a band-aid for what people
really seem to be advocating for, i.e. commit-triggered docs builds.


Stefan




[HACKERS] Re: [COMMITTERS] pgsql: Cross-link to doc build requirements from install requirements.

2012-09-01 Thread Stefan Kaltenbrunner
On 09/01/2012 12:28 PM, Robert Haas wrote:
 Cross-link to doc build requirements from install requirements.
 
 Jeff Janes
 
 Branch
 --
 master
 
 Details
 ---
 http://git.postgresql.org/pg/commitdiff/e8d6c98c2f082bead1202b23e9d70e0fbde49129
 
 Modified Files
 --
 doc/src/sgml/installation.sgml |8 
 1 files changed, 8 insertions(+), 0 deletions(-)
 

this seems to have broken the docs build:


http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=guaibasaurus&dt=2012-09-01%2012%3A17%3A01



Stefan




[HACKERS] Mailsystem maintenance/migration announcement

2012-08-06 Thread Stefan Kaltenbrunner
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all!

We are currently planning to finalize the ongoing work on the mailsystem
migration we started earlier this year by migrating the
two remaining components of the postgresql.org mailsystem infrastructure
to new systems.
Those parts (listserver and mailbox hosting) will be moved to new
systems in an maintenance window on:

Friday, 10th of august starting 15:00 GMT

The migration is expected to take about 2 hours; in that time period all
mails will be held queued on our inbound systems (which are already on
the new infrastructure), and no outbound mails will be sent (or can be
sent using the webmail system).
We expect no loss of in-transit emails at all, and for mailbox users
with local storage we are going to completely migrate all the content of
their mailboxes as of that date.

People using mailboxes (as in, having an @postgresql.org address) who do
NOT have a forward will have to make modifications to their
configuration and will get a separate email with appropriate details on
what (and whether) they have to change anything.
Apart from that we do not expect any user-visible behaviour changes with
regard to the list service itself.




Stefan
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)

iEYEARECAAYFAlAf/SYACgkQr1aG+WhhYQGgxACfVDQ+l4K52zoZYUlrD4jRQozK
/0YAn1V5QU99KWEqDl1f2zFAcN2dzkxZ
=frEs
-END PGP SIGNATURE-



[HACKERS] compiler warnings on the buildfarm

2012-07-01 Thread Stefan Kaltenbrunner
Seeing some of the latest commits about fixing compiler warnings, I took
a look at the buildfarm to see if there are any interesting ones there
(in total we have thousands of warnings on the buildfarm, but most of
those are from very noisy compilers).

So, in case anybody is interested, here is a selection of the ones that
at least look somewhat interesting (duplicates mostly removed, Windows
ignored):

animal: grebe
Snapshot: 2012-07-01 150224
scan.c: In function 'yy_try_NUL_trans':
scan.c:16243: warning: unused variable 'yyg'
auth.c: In function 'auth_peer':
auth.c:1775: warning: implicit declaration of function 'getpeereid'
ip.c: In function 'getaddrinfo_unix':
ip.c:228: warning: large integer implicitly truncated to unsigned type
Extra instructions are being generated for each reference to a TOC
symbol if the symbol is in the TOC overflow area.
fe-connect.c: In function 'PQconnectPoll':
fe-connect.c:1913: warning: implicit declaration of function 'getpeereid'
ip.c: In function 'getaddrinfo_unix':
ip.c:228: warning: large integer implicitly truncated to unsigned type


animal: spoonbill
Snapshot: 2012-07-01 110005
tuptoaster.c: In function 'heap_tuple_untoast_attr_slice':
tuptoaster.c:198: warning: array size (1) smaller than bound length (16)
tuptoaster.c:198: warning: array size (1) smaller than bound length (16)
tuptoaster.c: In function 'toast_raw_datum_size':
tuptoaster.c:275: warning: array size (1) smaller than bound length (16)
tuptoaster.c:275: warning: array size (1) smaller than bound length (16)
tuptoaster.c: In function 'toast_datum_size':
tuptoaster.c:320: warning: array size (1) smaller than bound length (16)
tuptoaster.c:320: warning: array size (1) smaller than bound length (16)
tuptoaster.c: In function 'toast_save_datum':
tuptoaster.c:1344: warning: array size (1) smaller than bound length (16)
tuptoaster.c:1344: warning: array size (1) smaller than bound length (16)
tuptoaster.c:1458: warning: array size (1) smaller than bound length (16)
tuptoaster.c:1458: warning: array size (1) smaller than bound length (16)
tuptoaster.c: In function 'toast_delete_datum':
tuptoaster.c:1485: warning: array size (1) smaller than bound length (16)
tuptoaster.c:1485: warning: array size (1) smaller than bound length (16)
tuptoaster.c: In function 'toast_fetch_datum':
tuptoaster.c:1610: warning: array size (1) smaller than bound length (16)
tuptoaster.c:1610: warning: array size (1) smaller than bound length (16)
tuptoaster.c: In function 'toast_fetch_datum_slice':
tuptoaster.c:1779: warning: array size (1) smaller than bound length (16)
tuptoaster.c:1779: warning: array size (1) smaller than bound length (16)
relmapper.c: In function 'relmap_redo':
relmapper.c:876: warning: array size (1) smaller than bound length (512)
relmapper.c:876: warning: array size (1) smaller than bound length (512)
elog.c: In function 'write_pipe_chunks':
elog.c:2541: warning: array size (1) smaller than bound length (503)
elog.c:2541: warning: array size (1) smaller than bound length (503)


animal: jaguarundi
Snapshot: 2012-07-01 031500
plpy_exec.c: In function 'PLy_procedure_call':
plpy_exec.c:818: warning: null format string

animal: rover_firefly
Snapshot: 2012-07-01 030400
float.c: In function 'is_infinite':
float.c:167:2: warning: implicit declaration of function 'isinf'
[-Wimplicit-function-declaration]
geo_ops.c: In function 'pg_hypot':
geo_ops.c:5455:2: warning: implicit declaration of function 'isinf'
[-Wimplicit-function-declaration]
execute.c: In function 'sprintf_double_value':
execute.c:473:2: warning: implicit declaration of function 'isinf'
[-Wimplicit-function-declaration]


animal: nightjar
Snapshot: 2012-07-01 023700
In file included from
/usr/home/andrew/bf/root/HEAD/pgsql.66715/../pgsql/src/backend/parser/gram.y:13338:
scan.c: In function 'yy_try_NUL_trans':
scan.c:16243: warning: unused variable 'yyg'
/usr/home/andrew/bf/root/HEAD/pgsql.66715/../pgsql/src/pl/plpython/plpy_exec.c:
In function 'PLy_procedure_call':
/usr/home/andrew/bf/root/HEAD/pgsql.66715/../pgsql/src/pl/plpython/plpy_exec.c:818:
warning: null format string
/usr/home/andrew/bf/root/HEAD/pgsql.66715/../pgsql/src/pl/tcl/pltcl.c:
In function '_PG_init':
/usr/home/andrew/bf/root/HEAD/pgsql.66715/../pgsql/src/pl/tcl/pltcl.c:343:
warning: assignment from incompatible pointer type
/usr/home/andrew/bf/root/HEAD/pgsql.66715/../pgsql/src/pl/tcl/pltcl.c:344:
warning: assignment from incompatible pointer type


animal: locust
Snapshot: 2012-07-01 023122
xlog.c: In function 'StartupXLOG':
xlog.c:5988: warning: 'checkPointLoc' may be used uninitialized in this
function
pgstat.c: In function 'pgstat_report_activity':
pgstat.c:2538: warning: passing argument 1 of
'__dtrace_probe$postgresql$statement__status$v1$63686172202a' discards
qualifiers from pointer target type
In file included from repl_gram.y:172:
postgres.c: In function 'pg_parse_query':
postgres.c:559: warning: passing argument 1 of

Re: [HACKERS] random failing builds on spoonbill - backends not exiting...

2012-06-23 Thread Stefan Kaltenbrunner
On 06/22/2012 11:53 PM, Tom Lane wrote:
> oh, and just for comparison's sake, what do the postmaster's signal
> masks look like?


#  ps -o pid,sig,sigcatch,sigignore,sigmask,command -p 18020

  PID  PENDING   CAUGHT  IGNORED  BLOCKED COMMAND
18020        0 74084007 8972b000        0
/home/pgbuild/pgbuildfarm/HEAD/pgsql.5709/src/interfaces/ecpg/test/./t...


Stefan

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] random failing builds on spoonbill - backends not exiting...

2012-06-23 Thread Stefan Kaltenbrunner
On 06/22/2012 11:47 PM, Tom Lane wrote:
> Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
>> PID  PENDING   CAUGHT  IGNORED  BLOCKED COMMAND
>> 12480 20004004 34084005 c942b002 fffefeff postgres: writer process
>> 9841 20004004 34084007 c942b000 fffefeff postgres: wal writer process
>
>> this seems to be SIGUSR1,SIGTERM and SIGQUIT
>
> OK, I looked up OpenBSD's signal numbers on the web.  It looks to me
> like these two processes have everything blocked except KILL and STOP
> (which are unblockable of course).  I do not see any place in the PG
> code that could possibly set such a mask (note that BlockSig should
> have more holes in it than that).  So I'm thinking these must be
> blocked inside some system function that's installed a restrictive
> signal mask, or some such function forgot to restore the mask on exit.
> Could you gdb each of these processes and get a stack trace?


background writer (12480):

(gdb) bt
#0  0x000208eb5928 in poll () from /usr/lib/libc.so.62.0
#1  0x00020a972b88 in _thread_kern_poll (wait_reqd=Variable
wait_reqd is not available.
) at /usr/src/lib/libpthread/uthread/uthread_kern.c:784
#2  0x00020a973d04 in _thread_kern_sched (scp=0x0) at
/usr/src/lib/libpthread/uthread/uthread_kern.c:384
#3  0x00020a96b35c in poll (fds=0xfffefa80, nfds=Variable
nfds is not available.
) at /usr/src/lib/libpthread/uthread/uthread_poll.c:94
#4  0x00395538 in WaitLatchOrSocket (latch=0x212bdc97c,
wakeEvents=25, sock=-1, timeout=1) at pg_latch.c:286
#5  0x00399800 in BackgroundWriterMain () at bgwriter.c:325
#6  0x00201850 in AuxiliaryProcessMain (argc=2,
argv=0xfffefd98) at bootstrap.c:419
#7  0x003a1534 in StartChildProcess (type=BgWriterProcess) at
postmaster.c:4518
#8  0x003a7574 in reaper (postgres_signal_arg=Variable
postgres_signal_arg is not available.
) at postmaster.c:2385
#9  0x00020a974528 in _dispatch_signal (sig=20,
scp=0x03e0) at /usr/src/lib/libpthread/uthread/uthread_sig.c:408
#10 0x00020a97462c in _dispatch_signals (scp=0x03e0) at
/usr/src/lib/libpthread/uthread/uthread_sig.c:437
#11 0x00020a974e28 in _thread_sig_handler (sig=20,
info=0x0420, scp=0x03e0) at
/usr/src/lib/libpthread/uthread/uthread_sig.c:139
#12 signal handler called
#13 _thread_kern_set_timeout (timeout=0x0630) at
/usr/src/lib/libpthread/uthread/uthread_kern.c:989
#14 0x00020a96bc8c in select (numfds=9, readfds=0x0730,
writefds=0x0, exceptfds=0x0, timeout=Variable timeout is not available.
) at /usr/src/lib/libpthread/uthread/uthread_select.c:85
#15 0x003a2894 in ServerLoop () at postmaster.c:1321
#16 0x003a45ac in PostmasterMain (argc=Variable argc is not
available.
) at postmaster.c:1121
#17 0x00326df8 in main (argc=6, argv=0x14f8) at
main.c:199


wal writer(9841):

#0  0x000208eb5928 in poll () from /usr/lib/libc.so.62.0
#1  0x00020a972b88 in _thread_kern_poll (wait_reqd=Variable
wait_reqd is not available.
) at /usr/src/lib/libpthread/uthread/uthread_kern.c:784
#2  0x00020a973d04 in _thread_kern_sched (scp=0x0) at
/usr/src/lib/libpthread/uthread/uthread_kern.c:384
#3  0x00020a96b35c in poll (fds=0xfffefa80, nfds=Variable
nfds is not available.
) at /usr/src/lib/libpthread/uthread/uthread_poll.c:94
#4  0x00395538 in WaitLatchOrSocket (latch=0x212bdc69c,
wakeEvents=25, sock=-1, timeout=5000) at pg_latch.c:286
#5  0x003aa794 in WalWriterMain () at walwriter.c:301
#6  0x00201878 in AuxiliaryProcessMain (argc=2,
argv=0xfffefd98) at bootstrap.c:430
#7  0x003a1534 in StartChildProcess (type=WalWriterProcess) at
postmaster.c:4518
#8  0x003a7564 in reaper (postgres_signal_arg=Variable
postgres_signal_arg is not available.
) at postmaster.c:2387
#9  0x00020a974528 in _dispatch_signal (sig=20,
scp=0x03e0) at /usr/src/lib/libpthread/uthread/uthread_sig.c:408
#10 0x00020a97462c in _dispatch_signals (scp=0x03e0) at
/usr/src/lib/libpthread/uthread/uthread_sig.c:437
#11 0x00020a974e28 in _thread_sig_handler (sig=20,
info=0x0420, scp=0x03e0) at
/usr/src/lib/libpthread/uthread/uthread_sig.c:139
#12 signal handler called
#13 _thread_kern_set_timeout (timeout=0x0630) at
/usr/src/lib/libpthread/uthread/uthread_kern.c:989
#14 0x00020a96bc8c in select (numfds=9, readfds=0x0730,
writefds=0x0, exceptfds=0x0, timeout=Variable timeout is not available.
) at /usr/src/lib/libpthread/uthread/uthread_select.c:85
#15 0x003a2894 in ServerLoop () at postmaster.c:1321
#16 0x003a45ac in PostmasterMain (argc=Variable argc is not
available.
) at postmaster.c:1121
#17 0x00326df8 in main (argc=6, argv=0x14f8) at
main.c:199




Stefan


[HACKERS] random failing builds on spoonbill - backends not exiting...

2012-06-22 Thread Stefan Kaltenbrunner
It has now happened at least twice that builds on spoonbill started to
fail after it failed during ECPGcheck:

http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=spoonbill&dt=2012-06-19%2023%3A00%3A04

the first failure was:

http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=spoonbill&dt=2012-05-24%2023%3A00%3A05


so in both cases the postmaster was not shutting down properly and it was
in fact still running - I have attached gdb to the still-running postmaster:


(gdb) bt
#0  0x000208eb5928 in poll () from /usr/lib/libc.so.62.0
#1  0x00020a972b88 in _thread_kern_poll (wait_reqd=Variable
wait_reqd is not available.
) at /usr/src/lib/libpthread/uthread/uthread_kern.c:784
#2  0x00020a973d04 in _thread_kern_sched (scp=0x0) at
/usr/src/lib/libpthread/uthread/uthread_kern.c:384
#3  0x00020a96c080 in select (numfds=Variable numfds is not available.
) at /usr/src/lib/libpthread/uthread/uthread_select.c:170
#4  0x003a2894 in ServerLoop () at postmaster.c:1321
#5  0x003a45ac in PostmasterMain (argc=Variable argc is not
available.
) at postmaster.c:1121
#6  0x00326df8 in main (argc=6, argv=0x14f8) at
main.c:199
(gdb) print Shutdown
$2 = 2
(gdb) print pmState
$3 = PM_WAIT_BACKENDS
(gdb) p *(Backend *) (BackendList->dll_head)
Cannot access memory at address 0x0
(gdb) p *BackendList
$9 = {dll_head = 0x0, dll_tail = 0x0}

all processes are still running:

pgbuild  18020  0.0  1.2  5952 12408 ??  I Wed04AM0:03.98
/home/pgbuild/pgbuildfarm/HEAD/pgsql.5709/src/interfaces/ecpg/test/./tmp_check/install//home/pgbuild/pgbuildfarm/HEAD/inst/bin/postgres
-D /
pgbuild  21483  0.0  0.7  6088  7296 ??  IsWed04AM0:00.68
postgres: checkpointer process(postgres)
pgbuild  12480  0.0  0.4  5952  4464 ??  SsWed04AM0:06.88
postgres: writer process(postgres)
pgbuild   9841  0.0  0.5  5952  4936 ??  SsWed04AM0:06.92
postgres: wal writer process(postgres)
pgbuild623  0.1  0.6  7424  6288 ??  SsWed04AM4:16.76
postgres: autovacuum launcher process(postgres)
pgbuild  30949  0.0  0.4  6280  3896 ??  SsWed04AM0:40.94
postgres: stats collector process(postgres)


sending a manual kill -15 to any of them does not seem to make them
exit either...

I did some further investigations with Robert on IM but I don't think
he has any further ideas other than that I have a weird OS :)
It seems worth noting that this is OpenBSD 5.1 on Sparc64 which has a
new threading implementation compared to older OpenBSD versions.


Stefan



Re: [HACKERS] random failing builds on spoonbill - backends not exiting...

2012-06-22 Thread Stefan Kaltenbrunner
On 06/22/2012 08:34 PM, Tom Lane wrote:
> Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
>> It has now happened at least twice that builds on spoonbill started to
>> fail after it failed during ECPGcheck:
>> http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=spoonbill&dt=2012-06-19%2023%3A00%3A04
>> the first failure was:
>> http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=spoonbill&dt=2012-05-24%2023%3A00%3A05
>> so in both cases the postmaster was not shutting down properly
>
> panther has been showing similar postmaster-does-not-shut-down failures
> every so often, though IIRC always in the IsolationCheck step not ECPG.

hmm

 
>> I did some further investigations with Robert on IM but I don't think
>> he has any further ideas other than that I have a weird OS :)
>> It seems worth noting that this is OpenBSD 5.1 on Sparc64 which has a
>> new threading implementation compared to older OpenBSD versions.
>
> But we don't use threading ...
>
> Still, panther is NetBSD so there may be some general BSD flavor to
> whatever's going on here.

yeah, the threading reference was mostly because all backtraces contain
references to threading libs and because the threading tests are the
last ones run by the ECPG checks...


Stefan



Re: [HACKERS] random failing builds on spoonbill - backends not exiting...

2012-06-22 Thread Stefan Kaltenbrunner
On 06/22/2012 09:39 PM, Tom Lane wrote:
> Andres Freund and...@2ndquadrant.com writes:
>> On Friday, June 22, 2012 08:51:55 PM Robert Haas wrote:
>>> I remarked to Stefan that the symptoms seem consistent with the idea
>>> that the children have signals blocked.  But I don't know how that
>>> could happen.
>
>> You cannot block sigkill.
>
> sigterm is at issue, not sigkill.  But I don't care for the
> signals-blocked theory either, at least not in three different children
> at the same time.
>
> (Hey Stefan, is there a way on BSD to check a process's signals-blocked
> state from outside?  If so, next time this happens you should try to
> determine the children's signal state.)

with help from RhodiumToad on IRC:

#  ps -o pid,sig,sigcatch,sigignore,sigmask,command -p 12480

  PID  PENDING   CAUGHT  IGNORED  BLOCKED COMMAND
12480 20004004 34084005 c942b002 fffefeff postgres: writer process
(postgres)

#  ps -o pid,sig,sigcatch,sigignore,sigmask,command -p 9841
  PID  PENDING   CAUGHT  IGNORED  BLOCKED COMMAND
 9841 20004004 34084007 c942b000 fffefeff postgres: wal writer process
  (postgres)


Stefan



Re: [HACKERS] random failing builds on spoonbill - backends not exiting...

2012-06-22 Thread Stefan Kaltenbrunner
On 06/22/2012 11:02 PM, Tom Lane wrote:
> Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
>> On 06/22/2012 09:39 PM, Tom Lane wrote:
>>> (Hey Stefan, is there a way on BSD to check a process's signals-blocked
>>> state from outside?  If so, next time this happens you should try to
>>> determine the children's signal state.)
>
>> with help from RhodiumToad on IRC:
>
>> #  ps -o pid,sig,sigcatch,sigignore,sigmask,command -p 12480
>
>>   PID  PENDING   CAUGHT  IGNORED  BLOCKED COMMAND
>> 12480 20004004 34084005 c942b002 fffefeff postgres: writer process
>> (postgres)
>
>> #  ps -o pid,sig,sigcatch,sigignore,sigmask,command -p 9841
>>   PID  PENDING   CAUGHT  IGNORED  BLOCKED COMMAND
>>  9841 20004004 34084007 c942b000 fffefeff postgres: wal writer process
>>   (postgres)
>
> Well, the nonzero PENDING masks sure look like a smoking gun, but why
> are there multiple pending signals?  And I'm not sure I know OpenBSD's
> signal numbers by heart.  Could you convert those masks into text signal
> name lists for us?

this seems to be SIGUSR1,SIGTERM and SIGQUIT



Stefan



Re: [HACKERS] Could we replace SysV semaphores with latches?

2012-06-07 Thread Stefan Kaltenbrunner
On 06/07/2012 06:09 AM, Tom Lane wrote:
> There has been regular griping in this list about our dependence on SysV
> shared memory, but not so much about SysV semaphores, even though the
> latter cause their fair share of issues; as seen for example in
> buildfarm member spoonbill's recent string of failures:
>
> creating template1 database in 
> /home/pgbuild/pgbuildfarm/HEAD/pgsql.25563/src/test/regress/./tmp_check/data/base/1
>  ... FATAL:  could not create semaphores: No space left on device
> DETAIL:  Failed system call was semget(1, 17, 03600).
> HINT:  This error does *not* mean that you have run out of disk space.  It 
> occurs when either the system limit for the maximum number of semaphore sets 
> (SEMMNI), or the system wide maximum number of semaphores (SEMMNS), would be 
> exceeded.  You need to raise the respective kernel parameter.  Alternatively, 
> reduce PostgreSQL's consumption of semaphores by reducing its max_connections 
> parameter.
>   The PostgreSQL documentation contains more information about 
> configuring your system for PostgreSQL.
> child process exited with exit code 1

hmm, now that you mention it, I had missed the issue completely - the
problem here is that spoonbill only has resources for one running
postmaster and the failure on 24.5.2012 left behind a stale postmaster
instance - should be fixed now


Stefan



