[HACKERS] Re: [HACKERS] Re: [HACKERS] Re: [HACKERS] Re: [HACKERS] Windows service is not starting so there's message in log: FATAL: could not create shared memory segment "Global/PostgreSQL.851401618

2016-05-15 Thread Michael Paquier
On Sun, May 15, 2016 at 3:34 PM, Amit Kapila  wrote:
> On Sat, May 14, 2016 at 7:33 PM, Robert Haas  wrote:
>>
>> On Tue, Mar 22, 2016 at 12:56 AM, Amit Kapila 
>> wrote:
>> >> >> Yes, the same random number generation is not the problem. On
>> >> >> Windows, apart from the EEXIST error, EACCES also needs to be
>> >> >> validated and returned for new random number generation, instead
>> >> >> of throwing an error.
>> >> >
>> >> > Doing the same handling for EACCES doesn't seem to be sane, because
>> >> > if EACCES came for a reason other than a duplicate dsm name, then we
>> >> > want to report the error instead of trying to regenerate the name.
>> >> > I think the fix here should be to append the data_dir path as we do
>> >> > for main shared memory.
>> >>
>> >> Yes, EACCES may be possible for reasons other than a duplicate dsm
>> >> name.
>> >
>> > So as far as I can see, there are two ways to resolve this issue: one
>> > is to retry generation of the dsm name if CreateFileMapping returns
>> > EACCES, and the second is to append the data_dir name to the dsm name,
>> > as is done for main shared memory, which will avoid the error
>> > occurring.  The first approach has a minor flaw: if CreateFileMapping
>> > returns EACCES for a reason other than a duplicate dsm name, which I
>> > am not sure is possible to identify, then we should report the error
>> > instead of trying to regenerate the name.
>> >
>> > Robert and/or others, can you share your opinion on what is the best
>> > way to proceed for this issue?
>>
>> I think we should retry on EACCES.  Possibly we should do other things
>> too, but at least that.  It completely misses the point of retrying on
>> EEXIST if we don't retry on other error codes that can also be
>> generated when the segment already exists.
>>

Well, if we don't care about segment uniqueness more than that... I
guess I will just wave the white flag. By retrying with a new segment
name on each loop iteration, there is no risk of retrying infinitely and
remaining stuck, so let's just use something like
1444921511.3661.13.ca...@postgrespro.ru as a fix and call that a deal
(with a fatter comment). Per the docs, when the segment already exists
CreateFileMapping still returns a valid handle, with GetLastError()
reporting ERROR_ALREADY_EXISTS.

> Sounds sensible, but if we want to go that route, shall we have some
> mechanism such that if retrying 10 times (10 is somewhat arbitrary, but
> we retry 10 times in PGSharedMemoryCreate, so maybe there is some
> consistency) doesn't give us a unique name and we are still getting an
> EACCES error, we just throw the error instead of retrying further?
> This is to ensure that if the API is returning EACCES for a reason
> other than a duplicate handle, we won't retry indefinitely.

The logic in win32_shmem.c relies on the fact that a segment will be
recycled, and the retry is there because recycling may take time at the
OS level. On top of that, it relies on the segment names being unique
across systems. So it seems to me that it is not worth the complication
of duplicating that logic in the dsm implementation.
-- 
Michael




Re: [HACKERS] 10.0

2016-05-15 Thread Jim Nasby

On 5/13/16 5:01 PM, Tom Lane wrote:

If we do decide to change the numbering strategy, there are quite a
few small details that probably ought to be fixed while we're at it.
I think it'd be a good idea to start separating "devel" or "betaN"
with a dot, for instance, like "10.devel" not "10devel".  But it's
likely premature to get into those sorts of details, since it's not
clear to me that we have a consensus to change at all.


It would be interesting to actually release snapshots after commitfests,
i.e. 10.dev.0, 10.dev.1, etc.


And +1 for ditching major.major.minor in favor of major.minor.0. It's
high time we stopped this silliness.


IMHO the beginning of parallelism that we have now is more than enough 
to justify 10.0, but that consideration pales compared to fixing the 
version numbering system itself.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-05-15 Thread Jim Nasby

On 4/29/16 10:37 AM, Joshua D. Drake wrote:

5. Transparent upgrade-in-place (i.e. allowing 10.2 to use 10.1's tables
without pg_upgrade or other modification).


Technically, this is exactly what pg_upgrade does.  I think what you
really mean is for the backend binary to be able to read the system
tables and WAL files of the old clusters --- something I can't see us
implementing anytime soon.



For the most part, pg_upgrade is good enough. There are exceptions, and
it does need a more thorough test suite, but as a whole it works. As
nice as it would be to install 9.6 right on top of 9.5 and have it
magically work, that is certainly not a *requirement* anymore.


My 2 issues with pg_upgrade are:

1) It's prone to bugs, because it operates at the binary level. This is 
similar to how it's MUCH easier to mess up PITR than pg_dump. Perhaps 
there's no way to avoid this.


2) There's no ability at all to revert, other than restoring a backup.
That means that if you pull the trigger and discover some major
performance problem, you have no choice but to deal with it (you can't
switch back to the old version without losing data).


For many users those issues just don't matter; but in my work with
financial data, they're why I've never actually used it. The ability to
revert (#2) was especially good to have (in our case, via londiste).
That approach also made it a lot easier to find performance issues
beforehand, by switching reporting replicas over to the new version
first.


One other consideration is cut-over time. Swapping a logical master and
replica can happen nearly instantly, while pg_upgrade needs some kind of
outage window.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [HACKERS] Just-in-time compiling things (was: asynchronous and vectorized execution)

2016-05-15 Thread Oleg Bartunov
On Sat, May 14, 2016 at 12:10 PM, Andreas Seltenreich
 wrote:
> Konstantin Knizhnik writes:
>
>> Latest information from the ISP RAS guys: they have made good progress
>> since February: they have rewritten most of the methods of Scan,
>> Aggregate and Join to the LLVM API.
>
> Is their work available somewhere?  I'm experimenting in that area as
> well, although I'm using libFirm instead of LLVM.  I wonder what their
> motivation to rewrite backend code in LLVM IR was, since I am following
> the approach of keeping the IR around when compiling the vanilla
> postgres C code, possibly inlining it during JIT and then doing
> optimizations on this IR.  That way the logic doesn't have to be
> duplicated.

I have discussed the availability of their work, and the consensus was
that their code will eventually be open source, but not right now, since
it is not ready to be published. I'll meet their management staff (after
PGCon) and discuss how we can work together.

>
> regards
> Andreas




Re: [HACKERS] parallel.c is not marked as test covered

2016-05-15 Thread Clément Prévost
On Mon, May 9, 2016 at 4:50 PM Andres Freund  wrote:

> I think it's a good idea to run a force-parallel run on some buildfarm
> members. But I'm rather convinced that the core tests run by all animals
> need some minimal coverage of parallel queries. Both because otherwise
> it'll be hard to get some coverage of unusual platforms, and because
> it's imo something rather relevant to test during development.
>
Good point.

After some experiments, I found out that, for my setup (9b7bfc3a88ef7b), a
parallel seq scan is used provided that both parallel_setup_cost and
parallel_tuple_cost are set to 0 and that the table is at least 3 times
as large as the biggest test table, tenk1.

The attached patch is a regression test using this method that is
reasonably small and fast to run. I also hid the worker count in the
explain output when costs are disabled, as suggested by Tom Lane and
Robert Haas on this same thread (
http://www.postgresql.org/message-id/CA+TgmobBQS4ss3+CwoZOKgbsBqKfRndwc=hlialaep5axqc...@mail.gmail.com
).

Testing under these conditions does not exercise the planner at all, but
at least some parallel code can be run on the buildfarm, and the test
suite covers 2643 more lines and 188 more functions.

I don't know, however, whether this test will be reliable on other
platforms; some more feedback is needed here.


select_parallel_regress.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 10.0

2016-05-15 Thread Álvaro Hernández Tortosa



On 15/05/16 14:42, Magnus Hagander wrote:



On Sun, May 15, 2016 at 2:29 PM, Álvaro Hernández Tortosa wrote:

On 14/05/16 20:02, Petr Jelinek wrote:

+1 for going with 10.0 after 9.6 and 11.0 afterwards, etc.

It will hopefully both end these discussions and remove the
confusion the current versioning scheme has (I too have heard way
too many times about people using postgres8 or postgres9).


Even worse: I've been told that a company was using
"PostgreSQL 8.5" ^_^


That's not necessarily the version number's fault. That's them using an
alpha version... (Yes, I've run into a customer just a couple of years
ago that was still on 8.5 alpha.)

It was their fault, obviously. They were not using the alpha
version; they were using 8.3 but thought it was 8.5 (and yes, it's
terrible that they gave out that information without checking it).
Anyway, even though it's not the version number's fault, having one less
number may have helped here, and probably in other cases too.


Álvaro

--
Álvaro Hernández Tortosa


---
8Kdata



Re: [HACKERS] 10.0

2016-05-15 Thread Magnus Hagander
On Sun, May 15, 2016 at 2:29 PM, Álvaro Hernández Tortosa 
wrote:

>
>
> On 14/05/16 20:02, Petr Jelinek wrote:
>
>> +1 for going with 10.0 after 9.6 and 11.0 afterwards, etc.
>>
>> It will hopefully both end these discussions and remove the confusion the
>> current versioning scheme has (I too have heard way too many times about
>> people using postgres8 or postgres9).
>>
>
> Even worse: I've been told that a company was using "PostgreSQL 8.5"
> ^_^


That's not necessarily the version number's fault. That's them using an
alpha version... (Yes, I've run into a customer just a couple of years ago
that was still on 8.5 alpha.)

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] 10.0

2016-05-15 Thread Álvaro Hernández Tortosa



On 14/05/16 20:02, Petr Jelinek wrote:

+1 for going with 10.0 after 9.6 and 11.0 afterwards, etc.

It will hopefully both end these discussions and remove the confusion
the current versioning scheme has (I too have heard way too many times
about people using postgres8 or postgres9).


Even worse: I've been told that a company was using "PostgreSQL 
8.5" ^_^



Álvaro

--
Álvaro Hernández Tortosa


---
8Kdata





Re: [HACKERS] 10.0

2016-05-15 Thread Michael Paquier
On Sun, May 15, 2016 at 11:59 AM, Tom Lane  wrote:
> "Greg Sabino Mullane"  writes:
>> I think moving to a two-number format is a mistake: what exactly will
>> PQserverVersion() return in that case?
>
> For, say, 10.2 it would be 100002, equivalent to 10.0.2 under the old style.
>
> We could redefine it as being major plus four-digit minor, really.
> Under the current maintenance scheme we never get anywhere near minor
> release 99 before a branch dies ... but having some more breathing room
> there would not be a bad thing.

Perhaps that would be a good topic for the developer meeting in
Ottawa? That's just two days away, so the timing looks good.
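
For reference, here is a small sketch of how both numbering styles would
map onto the integer PQserverVersion() returns, assuming the
major * 10000 + minor encoding Tom describes above. Note that the
two-part decoding reflects only the proposal under discussion, not
anything settled:

#include <stdio.h>

/* Print a server version number under both numbering schemes. */
static void
print_version(int v)
{
    if (v >= 100000)
        /* proposed two-part scheme: 10.2 -> 100002 */
        printf("%d => %d.%d\n", v, v / 10000, v % 10000);
    else
        /* historical three-part scheme: 9.6.3 -> 90603 */
        printf("%d => %d.%d.%d\n", v, v / 10000, (v / 100) % 100, v % 100);
}

int
main(void)
{
    print_version(90603);       /* 9.6.3 */
    print_version(100002);      /* 10.2, i.e. old-style 10.0.2 */
    return 0;
}
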
-- 
Michael




[HACKERS] Re: [HACKERS] Re: [HACKERS] Re: [HACKERS] Windows service is not starting so there's message in log: FATAL: could not create shared memory segment "Global/PostgreSQL.851401618": Permission

2016-05-15 Thread Amit Kapila
On Sat, May 14, 2016 at 7:33 PM, Robert Haas  wrote:
>
> On Tue, Mar 22, 2016 at 12:56 AM, Amit Kapila  wrote:
> >> >> Yes, the same random number generation is not the problem. On
> >> >> Windows, apart from the EEXIST error, EACCES also needs to be
> >> >> validated and returned for new random number generation, instead
> >> >> of throwing an error.
> >> >
> >> > Doing the same handling for EACCES doesn't seem to be sane, because
> >> > if EACCES came for a reason other than a duplicate dsm name, then we
> >> > want to report the error instead of trying to regenerate the name.
> >> > I think the fix here should be to append the data_dir path as we do
> >> > for main shared memory.
> >>
> >> Yes, EACCES may be possible for reasons other than a duplicate dsm
> >> name.
> >
> > So as far as I can see, there are two ways to resolve this issue: one
> > is to retry generation of the dsm name if CreateFileMapping returns
> > EACCES, and the second is to append the data_dir name to the dsm name,
> > as is done for main shared memory, which will avoid the error
> > occurring.  The first approach has a minor flaw: if CreateFileMapping
> > returns EACCES for a reason other than a duplicate dsm name, which I
> > am not sure is possible to identify, then we should report the error
> > instead of trying to regenerate the name.
> >
> > Robert and/or others, can you share your opinion on what is the best
> > way to proceed for this issue?
>
> I think we should retry on EACCES.  Possibly we should do other things
> too, but at least that.  It completely misses the point of retrying on
> EEXIST if we don't retry on other error codes that can also be
> generated when the segment already exists.
>

Sounds sensible, but if we want to go that route, shall we have some
mechanism such that if retrying 10 times (10 is somewhat arbitrary, but
we retry 10 times in PGSharedMemoryCreate, so maybe there is some
consistency) doesn't give us a unique name and we are still getting an
EACCES error, we just throw the error instead of retrying further?
This is to ensure that if the API is returning EACCES for a reason
other than a duplicate handle, we won't retry indefinitely.
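
To make that concrete, here is a minimal sketch of such a bounded retry
loop. This is a hypothetical helper, not the actual dsm_impl.c code: it
assumes the Global/PostgreSQL.<number> naming visible in the subject
line, and rand() stands in for the real name generator:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Arbitrary cap, mirroring the 10 retries in PGSharedMemoryCreate. */
#define SEGMENT_CREATE_RETRIES 10

static HANDLE
create_unique_segment(DWORD size)
{
    int         attempt;

    for (attempt = 0; attempt < SEGMENT_CREATE_RETRIES; attempt++)
    {
        char        name[64];
        HANDLE      hmap;
        DWORD       errcode;

        snprintf(name, sizeof(name), "Global/PostgreSQL.%u",
                 (unsigned) rand());
        hmap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                  PAGE_READWRITE, 0, size, name);
        errcode = GetLastError();

        if (hmap != NULL && errcode != ERROR_ALREADY_EXISTS)
            return hmap;        /* fresh segment: success */

        if (hmap != NULL)
            CloseHandle(hmap);  /* we opened an existing segment */
        else if (errcode != ERROR_ACCESS_DENIED)
            return NULL;        /* unrelated failure: do not retry */

        /* otherwise, likely a name collision: loop with a new name */
    }
    return NULL;                /* repeated EACCES: give up and report */
}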


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Losing memory references - SRF + SPI

2016-05-15 Thread Michael Paquier
On Sun, May 15, 2016 at 10:22 AM, Anderson Carniel  wrote:
> Thank you very much Joe.
>
> I have followed the crosstab() implementation and understood the idea
> of the per-query memory context. Now I am using a single SPI connection
> (in which I perform several SQL queries), process the result, transform
> it into a tuplestore, close the SPI connection, and I'm done. It works
> perfectly.
>
> I am curious about one thing regarding the tuplestore: is there a
> performance problem if my tuplestore forms a big table with millions of
> tuples? Another question concerns SPI: is there a problem with using
> only one instance of SPI (for instance, if multiple users call the same
> function)?

When using a tuplestore, one performance concern is the moment data
spills to disk, a threshold controlled by maxKBytes in
tuplestore_begin_heap(). Passing work_mem is the recommendation, though
you could tune it depending on your needs.
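
For illustration, the usual materialize-mode pattern looks roughly like
this (a sketch in the style of crosstab(); the function name is
hypothetical, and error checking, tuple descriptor setup, and the SPI
calls are elided):

#include "postgres.h"
#include "funcapi.h"
#include "miscadmin.h"          /* for work_mem */
#include "utils/tuplestore.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_srf);

Datum
my_srf(PG_FUNCTION_ARGS)
{
    ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
    MemoryContext per_query_ctx;
    MemoryContext oldcontext;
    Tuplestorestate *tupstore;

    /* The tuplestore must live in the per-query context, not per-call. */
    per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
    oldcontext = MemoryContextSwitchTo(per_query_ctx);

    /*
     * work_mem caps the in-memory footprint; rows beyond that are
     * spilled to a temporary file transparently.
     */
    tupstore = tuplestore_begin_heap(true,  /* allow random access */
                                     false, /* no cross-transaction use */
                                     work_mem);

    rsinfo->returnMode = SFRM_Materialize;
    rsinfo->setResult = tupstore;

    MemoryContextSwitchTo(oldcontext);

    /* ... connect SPI, run the queries, tuplestore_putvalues() rows ... */

    return (Datum) 0;           /* result is in rsinfo->setResult */
}
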
-- 
Michael

