Re: [GENERAL] Let's Do the CoC Right

2016-01-21 Thread Brar Piening

Am 22.01.2016 um 08:00 schrieb Rajeev Bhatta:


I did not read the remainder of the email as classifying someone by 
anything is inappropriate.



Wow!
#3 of current CoC
"When interpreting the words and actions of others, participants should 
always assume good intentions."


I can see those intentions and I read the whole text.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] long transfer time for binary data

2016-01-21 Thread Albe Laurenz
Andy Colson wrote:
> On 01/21/2016 03:59 PM, Johannes wrote:
>> Here are some transferring measurements (from server to client) with the
>> same file.
>>
>> scp
>> +ssl -compression 1.3 sec
>> +ssl +compression 4.6 sec
>>
>> pgadmin
>> select lo_get(12345);
>> -ssl  3.4 sec
>> +ssl +compression 5.5 sec
>> +ssl -compression 4.5 sec
>>
>> psql
>> select lo_get(12345);
>> +ssl -compression 6.0 sec
>> -ssl  4.4 sec
>>
>> java/jdbc
>> only while(in.read(buf,0,len))
>> +ssl -compression 6.0 sec
>> -ssl  3.0 sec (+ 1.8 sec for new Image())
>>
>> Here is a link for insecure ssl compression:
>> https://en.wikipedia.org/wiki/Comparison_of_TLS_implementations#Compression

> Thanks for the link on ssl compression, I'd not seen that before.  I'm going 
> to have to read up.
> 
> Your numbers ... look ...  odd.  scp +compression is slower?  pgadmin -ssl 
> and psql -ssl and java -ssl
> are all different speeds?  ssl always adds extra time?  Maybe a high latency 
> thing?  If you ping the
> other box what sort of times do you get?  Maybe the extra ssl handshakes up 
> front + high latency is
> causing it.  You could try a shared/cached ssh connection to avoid the 
> overhead.

Johannes' measurements make sense to me.

In situations where network bandwidth is not the bottleneck, you will be slower
with SSL compression than without.  The time you lose is the time the CPU needs
to compress and decompress the data.  I observed that behaviour in one of our
systems that transferred byteas with images over SSL, which led me to introduce
the "sslcompression" connection parameter into PostgreSQL.

The variation of times between different clients could be random (are these
differences consistent across repeated runs?), or could be caused by different
SSL implementations in Java and OpenSSL, or by additional processing in pgAdmin III.
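
For anyone reproducing the measurements, compression can be toggled from the
client side via the libpq connection string. A sketch (host and dbname are
placeholders; sslcompression=1 only has an effect if OpenSSL on both ends still
permits compression):

```
psql "host=db.example.com dbname=images sslmode=require sslcompression=0"
psql "host=db.example.com dbname=images sslmode=require sslcompression=1"
```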

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Building PostgreSQL 9.6devel sources with Microsoft Visual C++ 2015?

2016-01-21 Thread Yury Zhuravlev

Michael Paquier wrote:

> How long do you think it would take for MS 1820 to be fixed and out?

Maybe never.

> I wouldn't personally mind telling people trying to compile with 1800
> that we cannot support it because it is buggy. That's one less wart to
> have forever in the code.

For the user, this is a bad response. Besides, MSVC 2015 brings many new 
features, which may be exactly why a user must use it. 
Also, I think the current code looks like a hack. It's okay if we add one 
more hack for some time. But, as I wrote above, I will try to find a 
better solution.

Meanwhile, I am not going to back down: CMake+MSVC2015

Thanks.

--
Yury Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Let's Do the CoC Right

2016-01-21 Thread Rajeev Bhatta

On Friday 22 January 2016 10:55 AM, David E. Wheeler wrote:

Fellow PostgreSQLers,

I can’t help that there are a whole lot of white guys working on this document, 
with very little feedback from the people who it’s likely to benefit (only 
exception I spotted in a quick scan was Regina; sorry if I missed you). I 
suspect that most of you, like me, have never been the target of the kinds of 
behaviors we want to forbid. Certainly not to the level of many women, 
transgendered, and people of color I know of personally, in this community and 
others, who have. If those people are not speaking up here, I suspect it’s 
because they don’t expect to be heard. A bunch of white guys who run the 
project have decided what it’s gonna be, and mostly cut things out since these 
threads started.

But a *whole* lot of thought has gone into the creation of CoCs by the people 
who need them, and those who care about them. They have considered what sorts 
of things should be covered, what topics specifically addressed, and how to 
word them so as to enable the most people possible to feel safe, and to 
appropriately address issues when they inevitably arise, so that people 
continue to feel safe.

So I’d like to propose that we not try to do this ourselves. Instead, I propose 
that we take advantage of the ton of thought others have already put into this, 
and simply:

* Follow the example of many other successful communities (Swift, Mono, Rails, 
and 10,000 others) and adopt the open-source Contributor Covenant, unmodified.

   http://contributor-covenant.org

* Put this document in the root directory of the project as CODE_OF_CONDUCT.md, 
so that anyone who wants to contribute can. It should also be listed on the 
main web site and referenced from appropriate places (such as the mailing list 
pages).

* Spell out a policy and procedure for enforcement and include it as a separate 
document, again in the Git repo and on the site. The reporting address should be 
included in the Covenant. The Covenant web site has links to a number of 
existing guides we ought to crib from.

Best,

David

Hi David, whatever the race of the select few who built the CoC, the 
categorization of them as white is inappropriate. The CoC is meant to 
allow free communication across all members of the community, 
irrespective of their color, race, sexuality, gender, nationality, or, for 
that matter, whatever their personal viewpoint is.


Additionally, the CoC emails were sent to the entire group, so it was open 
to all. I did not read the remainder of the email, as classifying 
someone by anything is inappropriate.


Thanks

Regards
Rajeev


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Variable not found

2016-01-21 Thread Sachin Srivastava
Dear Adrian,

So what would be an effective way to search for this, given that I have around 1300 tables?

Regards,
SS

On Thu, Jan 21, 2016 at 8:48 PM, Adrian Klaver 
wrote:

> On 01/20/2016 07:35 PM, Sachin Srivastava wrote:
>
>> Dear Folks,
>>
>> I have a question about global variables in Oracle pl/sql package. Where
>> are these variables when package is converted to schema from Oracle to
>> Postgres through Ora2PG Tool?
>>
>>
>> For example, package > g_compliance_id. This package is converted to schema
>> ssj4_compliance_pkg, but I cannot find where is the variable
>> g_compliance_id. It must be somewhere because there is no error in
>> functions which reference this variable. Please suggest where I will
>> find it; any ideas?
>>
>
> Best guess it is stored in a table in the schema.
>
>
>>
>> Regards,
>>
>> SS
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


[GENERAL] Let's Do the CoC Right

2016-01-21 Thread David E. Wheeler
Fellow PostgreSQLers,

I can’t help that there are a whole lot of white guys working on this document, 
with very little feedback from the people who it’s likely to benefit (only 
exception I spotted in a quick scan was Regina; sorry if I missed you). I 
suspect that most of you, like me, have never been the target of the kinds of 
behaviors we want to forbid. Certainly not to the level of many women, 
transgendered, and people of color I know of personally, in this community and 
others, who have. If those people are not speaking up here, I suspect it’s 
because they don’t expect to be heard. A bunch of white guys who run the 
project have decided what it’s gonna be, and mostly cut things out since these 
threads started.

But a *whole* lot of thought has gone into the creation of CoCs by the people 
who need them, and those who care about them. They have considered what sorts 
of things should be covered, what topics specifically addressed, and how to 
word them so as to enable the most people possible to feel safe, and to 
appropriately address issues when they inevitably arise, so that people 
continue to feel safe.

So I’d like to propose that we not try to do this ourselves. Instead, I propose 
that we take advantage of the ton of thought others have already put into this, 
and simply:

* Follow the example of many other successful communities (Swift, Mono, Rails, 
and 10,000 others) and adopt the open-source Contributor Covenant, unmodified.

 http://contributor-covenant.org

* Put this document in the root directory of the project as CODE_OF_CONDUCT.md, 
so that anyone who wants to contribute can. It should also be listed on the 
main web site and referenced from appropriate places (such as the mailing list 
pages).

* Spell out a policy and procedure for enforcement and include it as a separate 
document, again in the Git repo and on the site. The reporting address should be 
included in the Covenant. The Covenant web site has links to a number of 
existing guides we ought to crib from.

Best,

David



smime.p7s
Description: S/MIME cryptographic signature



Re: [GENERAL] How can i add a postgresql connection using inno setup?

2016-01-21 Thread yang
Hi Andreas

I installed my project using an Inno Setup installer.
(Tomcat, PostgreSQL, etc.)

Oddly, my DB was not registered, because the localhost server does not
exist on the newly installed PostgreSQL.

If I add the localhost server and then install the project, my DB is added
to the localhost server.

So, in my opinion, I need to know how to add a localhost server when
installing.

(I'm using this code in Inno Setup: 

[Run]
Filename: "C:\something\PostgreSQL\bin\createdb.exe"; Parameters: " -U
postgres -E UTF8 -O postgres -w warFolderName"; Components: ProjectName;
Flags: runhidden)



--
View this message in context: 
http://postgresql.nabble.com/How-can-i-add-a-postgresql-connection-using-inno-setup-tp5882672p5883395.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] long transfer time for binary data

2016-01-21 Thread Andy Colson

Am 21.01.2016 um 03:33 schrieb Andy Colson:

On 01/20/2016 03:29 PM, Johannes wrote:

I noticed transferring a large object or bytea data between client and
server takes a long time.
For example: An image with a real size of 11 MB could be read on server
side (explain analyze) in 81ms. Fine.

But on client side the result was completed after 6.7 seconds without
ssl compression and 4.5 seconds with ssl compression (both via 100MBit
ethernet).

SSL compression seems to be not a good idea anymore, since it has
become a security risk. It's still possible with pgAdmin, but AFAIK not
with Java/JDBC.

Are there any other solutions available to display my images in my
client application more quickly? Or are there planned improvements to
postgresql (transferring the real binary data)?

Best regards
Johannes



Yep, that's slow.  The ssl compression is very odd if the image is
jpeg'ish and already compressed.  If it's a bitmap or uncompressed TIFF
then it's not so surprising.

A few tests you could try:

1) copy the same 11 meg file from server to client via regular file copy
and time it.  At 100 Mbit/s it should take about a second.  If it takes
6 you have network problems, not PG problems.

2) try it via psql command line (or at least something other than Java),
to see if it's Java that's the problem.

3) watch wireshark/tcpdump, maybe you'll see something glaring that'll
point you in the right direction.

-Andy

PS: I've never heard that ssl compression was a security risk, got
links/proof?






On 01/21/2016 03:59 PM, Johannes wrote:

Here are some transferring measurements (from server to client) with the
same file.

scp
+ssl -compression 1.3 sec
+ssl +compression 4.6 sec

pgadmin
select lo_get(12345);
-ssl  3.4 sec
+ssl +compression 5.5 sec
+ssl -compression 4.5 sec

psql
select lo_get(12345);
+ssl -compression 6.0 sec
-ssl  4.4 sec

java/jdbc
only while(in.read(buf,0,len))
+ssl -compression 6.0 sec
-ssl  3.0 sec (+ 1.8 sec for new Image())

Here is a link for insecure ssl compression:
https://en.wikipedia.org/wiki/Comparison_of_TLS_implementations#Compression

Best Regards
Johannes



Please don't top post.

Thanks for the link on ssl compression, I'd not seen that before.  I'm going to 
have to read up.

Your numbers ... look ...  odd.  scp +compression is slower?  pgadmin -ssl and 
psql -ssl and java -ssl are all different speeds?  ssl always adds extra time?  
Maybe a high latency thing?  If you ping the other box what sort of times do 
you get?  Maybe the extra ssl handshakes up front + high latency is causing it. 
 You could try a shared/cached ssh connection to avoid the overhead.

Best case though, your file copy was 1.3 seconds and with PG it was 3 seconds.  
Even getting ssl fixed, you probably won't get faster than 3 seconds.  Is that 
enough?

-Andy




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Building PostgreSQL 9.6devel sources with Microsoft Visual C++ 2015?

2016-01-21 Thread Michael Paquier
On Fri, Jan 22, 2016 at 7:16 AM, Yury Zhuravlev
 wrote:
> Tom Lane wrote:
>> Ick.  Even if that works today, it seems absolutely guaranteed to fail
>> in future, as soon as Microsoft either puts back the visible declaration
>> or changes the struct contents.  If they've made a conscious decision
>> to not export the struct anymore, it's likely because they intend to
>> change it ... so I'd put the half-life of this "fix" at no more than one
>> Visual Studio release.
>
> Yes, you're right. But at the moment, it's better than nothing. In addition, we
> can then do something like this:
> #if _MSC_VER >= 1800 && _MSC_VER < 1820
>
> after MS push fix.

Urg. That's just ugly.

>> Hopefully, if they removed the visible declaration intentionally, they
>> provided some other way to get at those locale names.  That's what we
>> need to be looking for, not hoping that direct access to undocumented
>> structures will continue to work.
>
> It's more like a simple bug after refactoring. But I will try to find another
> way. (I don't like undocumented structures)

How long do you think it would take for MS 1820 to be fixed and out? I
wouldn't personally mind telling people trying to compile with 1800
that we cannot support it because it is buggy. That's one less wart to
have forever in the code.
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] HeapTuple to JSON -- composite_to_json()?

2016-01-21 Thread Jason Dusek
heap_copy_tuple_as_datum looks promising…

http://doxygen.postgresql.org/heaptuple_8c.html#abfa9096cd7909cb17a6acfdc7b31b7ad
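
Following up on that pointer, here is a hedged sketch of how it might combine
with row_to_json inside a decoding callback. This is server-side code, not
standalone-runnable; tuple and tupdesc are assumed to come from the
ReorderBufferChange and its relation, as in test_decoding:

```c
/* Sketch only -- compiles only against PostgreSQL server headers.
 * heap_copy_tuple_as_datum packs the HeapTuple into a self-describing
 * composite Datum; row_to_json can then serialize that Datum. */
Datum row  = heap_copy_tuple_as_datum(tuple, tupdesc);
Datum json = DirectFunctionCall1(row_to_json, row);
char *cstr = TextDatumGetCString(json);   /* JSON text as a C string */
```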


[GENERAL] HeapTuple to JSON -- composite_to_json()?

2016-01-21 Thread Jason Dusek
Hi All,

I am working on a logical decoder and would like to convert a heap tuple to
JSON (a HeapTuple obtained from ReorderBufferChange). It is the moral
equivalent of tuple_to_stringinfo in test_decoding.c:

http://doxygen.postgresql.org/test__decoding_8c.html#a3986a57a0308de0150ebd45f7734d464

It seems like the easiest and best thing to do is to rely on Postgres’s
native JSON serialization. A HeapTuple is not a Datum; but reading
heaptuple.c they seem to have a close relationship. Do arrays of Datum
become HeapTuple? Is a HeapTuple basically Datum sans type information?
What is the best way to convert a HeapTuple to a Datum?

Best Regards,

  Jason Dusek


Re: [GENERAL] Building PostgreSQL 9.6devel sources with Microsoft Visual C++ 2015?

2016-01-21 Thread Yury Zhuravlev

Tom Lane wrote:

Ick.  Even if that works today, it seems absolutely guaranteed to fail
in future, as soon as Microsoft either puts back the visible declaration
or changes the struct contents.  If they've made a conscious decision
to not export the struct anymore, it's likely because they intend to
change it ... so I'd put the half-life of this "fix" at no more than one
Visual Studio release.


Yes, you're right. But at the moment, it's better than nothing. In addition, 
we can then do something like this:

#if _MSC_VER >= 1800 && _MSC_VER < 1820

after MS push fix. 


Hopefully, if they removed the visible declaration intentionally, they
provided some other way to get at those locale names.  That's what we
need to be looking for, not hoping that direct access to undocumented
structures will continue to work.


It's more like a simple bug after refactoring. But I will try to find another 
way. (I don't like undocumented structures)


Thanks.
--
Yury Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Automated postgres failover

2016-01-21 Thread Andrew Sullivan
On Thu, Jan 21, 2016 at 11:34:18AM -0800, John R Pierce wrote:
> the most difficult part is reliably determining that A) the master has
> crashed, and B) fencing the failed old master so it doesn't wake up and
> think its still in charge.
> 

And, depending on your workload, C) that you actually want to fail over.

I've seen an awful lot of people want automatic failover who also
can't afford for the already-committed transactions on the master to
be lost.  Unless you're running synchronous, be sure you have the
workload that can actually accept lost writes.
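
For completeness, the synchronous setup alluded to above is enabled on the
master with two settings. A sketch, where 'standby1' is a placeholder for the
standby's application_name:

```
# postgresql.conf on the master
synchronous_standby_names = 'standby1'  # placeholder; must match the standby's application_name
synchronous_commit = on                 # commits wait for the synchronous standby to confirm
```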

A

-- 
Andrew Sullivan
a...@crankycanuck.ca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] long transfer time for binary data

2016-01-21 Thread Johannes
Here are some transferring measurements (from server to client) with the
same file.

scp
+ssl -compression 1.3 sec
+ssl +compression 4.6 sec

pgadmin
select lo_get(12345);
-ssl  3.4 sec
+ssl +compression 5.5 sec
+ssl -compression 4.5 sec

psql
select lo_get(12345);
+ssl -compression 6.0 sec
-ssl  4.4 sec

java/jdbc
only while(in.read(buf,0,len))
+ssl -compression 6.0 sec
-ssl  3.0 sec (+ 1.8 sec for new Image())

Here is a link for insecure ssl compression:
https://en.wikipedia.org/wiki/Comparison_of_TLS_implementations#Compression

Best Regards
Johannes

Am 21.01.2016 um 03:33 schrieb Andy Colson:
> On 01/20/2016 03:29 PM, Johannes wrote:
>> I noticed transferring a large object or bytea data between client and
>> server takes a long time.
>> For example: An image with a real size of 11 MB could be read on server
>> side (explain analyze) in 81ms. Fine.
>>
>> But on client side the result was completed after 6.7 seconds without
>> ssl compression and 4.5 seconds with ssl compression (both via 100MBit
>> ethernet).
>>
>> SSL compression seems to be not a good idea anymore, since it has
>> become a security risk. It's still possible with pgAdmin, but AFAIK not
>> with Java/JDBC.
>>
>> Are there any other solutions available to display my images in my
>> client application more quickly? Or are there planned improvements to
>> postgresql (transferring the real binary data)?
>>
>> Best regards
>> Johannes
>>
> 
> Yep, that's slow.  The ssl compression is very odd if the image is
> jpeg'ish and already compressed.  If it's a bitmap or uncompressed TIFF
> then it's not so surprising.
> 
> A few tests you could try:
> 
> 1) copy the same 11 meg file from server to client via regular file copy
> and time it.  At 100 Mbit/s it should take about a second.  If it takes
> 6 you have network problems, not PG problems.
> 
> 2) try it via psql command line (or at least something other than Java),
> to see if it's Java that's the problem.
> 
> 3) watch wireshark/tcpdump, maybe you'll see something glaring that'll
> point you in the right direction.
> 
> -Andy
> 
> PS: I've never heard that ssl compression was a security risk, got
> links/proof?
> 
> 



signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] Building PostgreSQL 9.6devel sources with Microsoft Visual C++ 2015?

2016-01-21 Thread Tom Lane
Yury Zhuravlev  writes:
> +#if _MSC_VER >= 1800
> + //From VS2012.
> + typedef struct localerefcount
> + {
> + char *locale;
> + wchar_t *wlocale;
> + ... etc etc ...

Ick.  Even if that works today, it seems absolutely guaranteed to fail
in future, as soon as Microsoft either puts back the visible declaration
or changes the struct contents.  If they've made a conscious decision
to not export the struct anymore, it's likely because they intend to
change it ... so I'd put the half-life of this "fix" at no more than one
Visual Studio release.

Hopefully, if they removed the visible declaration intentionally, they
provided some other way to get at those locale names.  That's what we
need to be looking for, not hoping that direct access to undocumented
structures will continue to work.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] CoC [Final]

2016-01-21 Thread Steve Litt
On Thu, 21 Jan 2016 10:43:26 +
Geoff Winkless  wrote:

> On 21 January 2016 at 10:37, Chris Travers 
> wrote:
> > At the end of the day this will require human judgment rather than
> > formulation.  
> 
> Then make it explicit.
> 
> * Disruption of the collaborative space, or patterns of behaviour
> which the majority of the core team consider to be harassment, will
> not be tolerated.

"Disruption of the collaborative space" is almost meaningless, and
almost guarantees selective enforcement.

On the other hand, "patterns of behaviour which the majority of the
core team consider to be harassment" is crystal clear. What would
happen if you just dropped "Disruption of the collaborative space"? If
that phrase must stay, I'd suggest a much more definitive substitute for it.

SteveT

Steve Litt 
January 2016 featured book: Twenty Eight Tales of Troubleshooting
http://www.troubleshooters.com/28




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] BDR with postgres 9.5

2016-01-21 Thread Merlin Moncure
On Wed, Jan 20, 2016 at 12:52 PM, Vik Fearing  wrote:
> On 01/20/2016 11:41 AM, Nikhil wrote:
>> Hello All,
>>
>>
>> What is the timeline for BDR with postgres 9.5 released version.
>
> Currently there are no plans for BDR with 9.5.
> https://github.com/2ndQuadrant/bdr/issues/157#issuecomment-172402366

9.6 looks like a possibility though.  I have big plans for BDR
personally, but for various reasons need to lay it on top of a stock
postgres.

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] yum packages for 9.5 vs <=9.4

2016-01-21 Thread Adrian Klaver

On 01/21/2016 12:00 PM, John R Pierce wrote:

On 1/21/2016 11:46 AM, Devrim GÜNDÜZ wrote:

Hi,

On Thu, 2016-01-21 at 10:09 -0800, John R Pierce wrote:

>the yum packages for 9.5 apparently changed the path of the socket
>from  /tmp to /var/run/postgresql

I think we use both now. If not, that must be a packaging bug. On my
boxes, I can see that the unix_socket_directory points to both.




indeed, 9.5 is putting it in both places, but 9.5's libpq is looking for
it in /var/run/postgresql ...  the problem is the older versions are only
putting it in /tmp, so the newer libpq doesn't work.


Alternate method:

http://www.postgresql.org/docs/9.5/static/libpq-pgservice.html

If you are following what I assume is the pattern of 'naked' binaries 
being the most recent version, you could create a service file for that 
version that overrides the defaults. That being said using one version 
of a binary to connect to multiple Postgres versions seems fragile.




does unix_socket_directory allow dual locations in 9.1+  ?  I guess I
should just try that...






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] yum packages for 9.5 vs <=9.4

2016-01-21 Thread Adrian Klaver

On 01/21/2016 12:00 PM, John R Pierce wrote:

On 1/21/2016 11:46 AM, Devrim GÜNDÜZ wrote:

Hi,

On Thu, 2016-01-21 at 10:09 -0800, John R Pierce wrote:

>the yum packages for 9.5 apparently changed the path of the socket
>from  /tmp to /var/run/postgresql

I think we use both now. If not, that must be a packaging bug. On my
boxes, I can see that the unix_socket_directory points to both.




indeed, 9.5 is putting it in both places, but 9.5's libpq is looking for
it in /var/run/postgresql ...  the problem is the older versions are only
putting it in /tmp, so the newer libpq doesn't work.

does unix_socket_directory allow dual locations in 9.1+  ?  I guess I
should just try that...


No:

http://www.postgresql.org/docs/9.1/interactive/runtime-config-connection.html







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Recurring bookings

2016-01-21 Thread cpolish
On 2016-01-21 02:15, John R Pierce wrote:
> >How can I have recurring bookings for a call?
> >Eg: if the call_frequency is weekly, how can I see a list of dates which
> >this account will be called upon?
I recommend "Developing time-oriented database applications in SQL", 
Richard T. Snodgrass, ISBN 1-55860-426-7; it might save you a lot of
time and trouble.
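
In the meantime, the expansion asked about (a weekly call_frequency turned into
a list of call dates) can be done application-side. A minimal sketch, with all
names hypothetical:

```python
from datetime import date, timedelta

def recurring_dates(start, freq_days, until):
    """Yield occurrence dates: start, then every freq_days, up to and including until."""
    current = start
    while current <= until:
        yield current
        current += timedelta(days=freq_days)

# Weekly calls for an account whose first call is 2016-02-01, through end of February
dates = list(recurring_dates(date(2016, 2, 1), 7, date(2016, 2, 29)))
```

Inside PostgreSQL itself, the equivalent expansion is available via
generate_series with an interval of '7 days'.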

Best regards,
-- 
Charles Polisher



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] yum packages for 9.5 vs <=9.4

2016-01-21 Thread John R Pierce

On 1/21/2016 12:00 PM, John R Pierce wrote:

On 1/21/2016 11:46 AM, Devrim GÜNDÜZ wrote:

Hi,

On Thu, 2016-01-21 at 10:09 -0800, John R Pierce wrote:

>the yum packages for 9.5 apparently changed the path of the socket
>from  /tmp to /var/run/postgresql

I think we use both now. If not, that must be a packaging bug. On my
boxes, I can see that the unix_socket_directory points to both.




indeed, 9.5 is putting it in both places, but 9.5's libpq is looking 
for it in /var/run/postgresql ...  the problem is the older versions are 
only putting it in /tmp, so the newer libpq doesn't work.


does unix_socket_directory allow dual locations in 9.1+  ?  I guess I 
should just try that...


ugh, it's unix_socket_directories in 9.3-9.5, and _directory in 9.2, 
which presumably doesn't allow multiple choices?
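
For reference, on 9.3 and later the plural parameter does accept a
comma-separated list, so a server can create its socket in both places. A sketch:

```
# postgresql.conf (9.3+ only; 9.2's unix_socket_directory takes a single path)
unix_socket_directories = '/var/run/postgresql, /tmp'
```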


--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] yum packages for 9.5 vs <=9.4

2016-01-21 Thread John R Pierce

On 1/21/2016 11:46 AM, Devrim GÜNDÜZ wrote:

Hi,

On Thu, 2016-01-21 at 10:09 -0800, John R Pierce wrote:

>the yum packages for 9.5 apparently changed the path of the socket
>from  /tmp to /var/run/postgresql

I think we use both now. If not, that must be a packaging bug. On my
boxes, I can see that the unix_socket_directory points to both.




indeed, 9.5 is putting it in both places, but 9.5's libpq is looking for 
it in /var/run/postgresql ...  the problem is the older versions are only 
putting it in /tmp, so the newer libpq doesn't work.


does unix_socket_directory allow dual locations in 9.1+  ?  I guess I 
should just try that...




--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] yum packages for 9.5 vs <=9.4

2016-01-21 Thread Devrim GÜNDÜZ

Hi again,

On Thu, 2016-01-21 at 21:46 +0200, Devrim GÜNDÜZ wrote:
> Sorry about that, but we had to do it.

Here is the reason why:

https://bugzilla.redhat.com/show_bug.cgi?id=825448

Regards,
-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR




signature.asc
Description: This is a digitally signed message part


Re: [GENERAL] yum packages for 9.5 vs <=9.4

2016-01-21 Thread Devrim GÜNDÜZ

Hi,

On Thu, 2016-01-21 at 10:09 -0800, John R Pierce wrote:
> the yum packages for 9.5 apparently changed the path of the socket
> from  /tmp to /var/run/postgresql

I think we use both now. If not, that must be a packaging bug. On my
boxes, I can see that the unix_socket_directory points to both.

> I have several versions installed on the same dev system running on 
> different ports  for 9.1 through 9.4 this was no problem, but
> since  I installed 9.5, I now have to specify -h /tmp -p  to
> connect to an 
> earlier version, this has raised havoc with my scripts and stuff,
> for 
> instance this script invoked from crontab no longer works on the
> earlier 
> versions...

Sorry about that, but we had to do it.

Regards,
-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR






Re: [GENERAL] Auotmated postgres failover

2016-01-21 Thread John R Pierce

On 1/21/2016 11:07 AM, jwienc...@comcast.net wrote:



I'm looking for a tool to automate PostgreSQL cluster management 
failover in the event the master database were to become unavailable. 
  Currently we are manually issuing a "pg_ctl promote" once we become 
aware that the master database has crashed.



Is repmgr a viable solution?  Please pass along experiences with repmgr.

Are there any other  tools available to automatically issue the 
"promote" in the event of a master database crash?



repmgr is a tool you could use in conjunction with a generic cluster 
management system like linuxha/heartbeat, vcs, etc.


the most difficult part is reliably determining that A) the master has 
crashed, and B) fencing the failed old master so it doesn't wake up and 
think its still in charge.




--
john r pierce, recycling bits in santa cruz



[GENERAL] Auotmated postgres failover

2016-01-21 Thread jwiencek3
Hello 


I'm looking for a tool to automate PostgreSQL cluster management failover in 
the event the master database were to become unavailable. Currently we are 
manually issuing a "pg_ctl promote" once we become aware that the master 
database has crashed. 


Is repmgr a viable solution? Please pass along experiences with repmgr. 

Are there any other tools available to automatically issue the "promote" in the 
event of a master database crash? 



Regards 

John Wiencek 



Re: [GENERAL] yum packages for 9.5 vs <=9.4

2016-01-21 Thread Adrian Klaver

On 01/21/2016 10:09 AM, John R Pierce wrote:

the yum packages for 9.5 apparently changed the path of the socket from
/tmp to /var/run/postgresql

I have several versions installed on the same dev system running on
different ports  for 9.1 through 9.4 this was no problem, but since
I installed 9.5, I now have to specify -h /tmp -p <port> to connect to an
earlier version, this has raised havoc with my scripts and stuff, for
instance this script invoked from crontab no longer works on the earlier
versions...

#!/bin/bash
## postgres backup script
DD=$(date +%a)
PP=/backups/pgsql/svfis-kvm3
for p in 5432 5433 5434 5435; do
pg_dumpall -p $p -g -f $PP/pg_dumpall.$p.globals-$DD.sql
for db in $(psql -p $p -tc "select datname from pg_database where
not datistemplate"); do
pg_dump -Fc -p $p --schema-only -f $PP/pgdump.$p.$db.$DD.dump -d
$db
done
done




Curiosity got the better of me, so now I have tried it:

unix_socket_directories = '/tmp, /var/run/postgresql'


aklaver@panda:~> /usr/local/pgsql95/bin/psql -d postgres -p 5462 -h /tmp 
-U postgres

psql (9.5.0)
Type "help" for help.

postgres=# \q
aklaver@panda:~> /usr/local/pgsql95/bin/psql -d postgres -p 5462 -h 
/var/run/postgresql -U postgres

psql (9.5.0)
Type "help" for help.

postgres=#



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] yum packages for 9.5 vs <=9.4

2016-01-21 Thread Adrian Klaver

On 01/21/2016 10:09 AM, John R Pierce wrote:

the yum packages for 9.5 apparently changed the path of the socket from
/tmp to /var/run/postgresql

I have several versions installed on the same dev system running on
different ports  for 9.1 through 9.4 this was no problem, but since
I installed 9.5, I now have to specify -h /tmp -p <port> to connect to an
earlier version, this has raised havoc with my scripts and stuff, for
instance this script invoked from crontab no longer works on the earlier
versions...

#!/bin/bash
## postgres backup script
DD=$(date +%a)
PP=/backups/pgsql/svfis-kvm3
for p in 5432 5433 5434 5435; do
pg_dumpall -p $p -g -f $PP/pg_dumpall.$p.globals-$DD.sql
for db in $(psql -p $p -tc "select datname from pg_database where
not datistemplate"); do
pg_dump -Fc -p $p --schema-only -f $PP/pgdump.$p.$db.$DD.dump -d
$db
done
done


I have not actually done it but from here:

http://www.postgresql.org/docs/9.5/interactive/runtime-config-connection.html

unix_socket_directories (string)

Specifies the directory of the Unix-domain socket(s) on which the 
server is to listen for connections from client applications. Multiple 
sockets can be created by listing multiple directories separated by 
commas. Whitespace between entries is ignored; surround a directory name 
with double quotes if you need to include whitespace or commas in the 
name. An empty value specifies not listening on any Unix-domain sockets, 
in which case only TCP/IP sockets can be used to connect to the server. 
The default value is normally /tmp, but that can be changed at build 
time. This parameter can only be set at server start.


In addition to the socket file itself, which is named .s.PGSQL.nnnn 
where nnnn is the server's port number, an ordinary file named 
.s.PGSQL.nnnn.lock will be created in each of the 
unix_socket_directories directories. Neither file should ever be removed 
manually.


This parameter is irrelevant on Windows, which does not have 
Unix-domain sockets.



Seems it would be possible to change the above to point at both /tmp and 
/var/run/postgresql and keep everybody happy.
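
Concretely, that would be something like the following postgresql.conf line (a sketch; listing multiple directories requires the unix_socket_directories parameter introduced in 9.3, and the server must be restarted for it to take effect):

```
# postgresql.conf (9.3 or later) -- create sockets in both directories
unix_socket_directories = '/tmp, /var/run/postgresql'
```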









--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] yum packages for 9.5 vs <=9.4

2016-01-21 Thread John R Pierce
the yum packages for 9.5 apparently changed the path of the socket from 
/tmp to /var/run/postgresql


I have several versions installed on the same dev system running on 
different ports  for 9.1 through 9.4 this was no problem, but since 
I installed 9.5, I now have to specify -h /tmp -p  to connect to an 
earlier version, this has raised havoc with my scripts and stuff, for 
instance this script invoked from crontab no longer works on the earlier 
versions...


#!/bin/bash
## postgres backup script
DD=$(date +%a)
PP=/backups/pgsql/svfis-kvm3
for p in 5432 5433 5434 5435; do
   pg_dumpall -p $p -g -f $PP/pg_dumpall.$p.globals-$DD.sql
   for db in $(psql -p $p -tc "select datname from pg_database where 
not datistemplate"); do

   pg_dump -Fc -p $p --schema-only -f $PP/pgdump.$p.$db.$DD.dump -d $db
   done
done
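

A possible workaround (an untested sketch, resting on the assumption stated above that the 9.5 server still creates a socket in /tmp as well): export PGHOST once at the top of the script, so the 9.5 client tools find every version's socket without per-call -h flags.

```shell
#!/bin/bash
## Sketch of a workaround: force libpq back to the old socket directory.
## Assumption: the 9.5 server also listens in /tmp, so one setting covers
## every version on this box.
export PGHOST=/tmp

DD=$(date +%a)
PP=/backups/pgsql/svfis-kvm3
for p in 5432 5433 5434 5435; do
    # pg_dumpall/pg_dump inherit PGHOST, so the original script body works
    # unchanged; the actual dump commands are shown as comments here to keep
    # the sketch self-contained:
    # pg_dumpall -p "$p" -g -f "$PP/pg_dumpall.$p.globals-$DD.sql"
    echo "would dump port $p via socket dir $PGHOST"
done
```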



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Building PostgreSQL 9.6devel sources with Microsoft Visual C++ 2015?

2016-01-21 Thread Yury Zhuravlev

Tom Lane wrote:

Somebody will have to research how one is
supposed to get the appropriate locale name now.
It's just a bug: 
https://connect.microsoft.com/VisualStudio/feedback/details/1882835/locale-t-compile-issues-with-vs2015

I spotted a solution there and made a patch (attached).
I came across this error when building Postgres with CMake for 
MSVC 2015.


Thanks.

PS: I do not know whether this patch needs to be added to the commitfest?
--
Yury Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/src/include/port.h b/src/include/port.h
index 9fc79f4..98846ea 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -468,4 +468,38 @@ extern char *escape_single_quotes_ascii(const char *src);
 /* port/wait_error.c */
 extern char *wait_result_to_str(int exit_status);
 
+#if _MSC_VER >= 1800
+	//From VS2012.
+	typedef struct localerefcount
+	{
+		char *locale;
+		wchar_t *wlocale;
+		int *refcount;
+		int *wrefcount;
+	} locrefcount;
+
+	//From VS2012.
+	typedef struct __crt_locale_data
+	{
+		int refcount;
+		unsigned int lc_codepage;
+		unsigned int lc_collate_cp;
+		unsigned int lc_time_cp;
+		locrefcount lc_category[6];
+		int lc_clike;
+		int mb_cur_max;
+		int * lconv_intl_refcount;
+		int * lconv_num_refcount;
+		int * lconv_mon_refcount;
+		struct lconv * lconv;
+		int * ctype1_refcount;
+		unsigned short * ctype1;
+		const unsigned short * pctype;
+		const unsigned char * pclmap;
+		const unsigned char * pcumap;
+		struct __lc_time_data * lc_time_curr;
+		wchar_t * locale_name[6];
+	} threadlocinfo;
+#endif
+
 #endif   /* PG_PORT_H */



Re: [GENERAL] Variable not found

2016-01-21 Thread Adrian Klaver

On 01/20/2016 07:35 PM, Sachin Srivastava wrote:

Dear Folks,

I have a question about global variables in Oracle pl/sql package. Where
are these variables when package is converted to schema from Oracle to
Postgres through Ora2PG Tool?


For example, package 

Best guess it is stored in a table in the schema.




Regards,

SS





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Postgres and timezones

2016-01-21 Thread Adrian Klaver

On 01/21/2016 02:58 AM, Steve Rogerson wrote:

On 20/01/16 19:19, Vik Fearing wrote:

On 01/20/2016 04:24 PM, Steve Rogerson wrote:

Postgres doesn't store original TZ. It does recalculation to local TZ. If you
need original TZ, you have to store it separately.



I know and that's what I'm trying to deal with. Given I know the origin TZ  -
as in Europe/Lisbon I'm trying to determine the short name so I can store it.


I would recommend against storing the abbreviation.  The abbreviations
are not globally unique and don't follow daylight savings.  If you want
to store the original time zone, I would use the full name.

Something like this might be relevant to your interests:

 INSERT INTO tbl (ts, tz)
 VALUES ('2016-01-20 00:00', current_setting('TimeZone'));

This will do the right thing regardless of where the client is (unless
it's set to "localtime" and then it's useless).



That doesn't work for two reasons.

1. In my application the data comes from several time zones, mostly European
but also Australia and the US, so the "current_setting" is often inappropriate.
2. There are two special times in the year, when the clocks change. The
awkward one is when the clocks go back. For example this year, for me,
the 2016-10-27 01:30 happens twice for my current setting, once as BST and
once as GMT.

We actually store UTC + the offset interval + the short name. The latter being
mostly for reporting purposes.

The issue is that we can't - from postgres - determine the appropriate short
name directly.


Probably just me, but I am not sure how, if it is stored, it cannot be 
retrieved.




I can by other means though, say the DateTime module in perl though anything
that talks to the Olson database on whatever system would do as well.

The only way I can think of doing it in pg is the two-step method (OK,
more than two steps if I wrap it in a transaction or reset the TZ):

# SET TIME ZONE 'Europe/Lisbon';
# SELECT to_char('2016-07-20 00:00'::timestamp with time zone , 'TZ');
 to_char
-
 WEST
(1 row)

Steve





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] CoC [Final]

2016-01-21 Thread Geoff Winkless
On 21 January 2016 at 12:36, Chris Travers  wrote:
> I still side with the Scandinavian approach of passing general laws and
> trusting judges to apply them in line with moral rather than purely legal
> principles.

I believe that it's generally accepted that people will unconsciously
apply their own prejudices in judging others unless there's a code
that stops them from doing so.

I _think_ that's where this whole CoC thing started, no? :)

G




Re: [GENERAL] adding a bdr node using bcv backup

2016-01-21 Thread (Daniel Stolf)
I'm sorry, I didn't send the correct information about node 2...

Here's what I get on all three nodes after I take a snapshot on node 1 and
bring it up on node 3...

*=== On Node 1: ===*
bdrdemo=# select bdr.bdr_get_local_nodeid();
 bdr_get_local_nodeid
---
 (*6239328434665526195,1,16385*)
(1 row)

bdrdemo=# select bdr.bdr_get_local_node_name();
 bdr_get_local_node_name
-
 node1
(1 row)

bdrdemo=# SELECT * FROM pg_replication_slots;
slot_name| plugin | slot_type | datoid |
database | active | xmin | catalog_xmin | restart_lsn
-++---++--++--+--+-
 bdr_16385_6241964183952916534_1_16385__ | bdr| logical   |  16385 |
bdrdemo  | t  |  | 4593 | 0/123E9808


*=== On Node 2: ===*
bdrdemo=# select bdr.bdr_get_local_nodeid();
 bdr_get_local_nodeid
---
 (*6241964183952916534,1,16385*)
(1 row)

bdrdemo=# select bdr.bdr_get_local_node_name();
 bdr_get_local_node_name
-
 node2
(1 row)

bdrdemo=# SELECT * FROM pg_replication_slots;
slot_name| plugin | slot_type | datoid |
database | active | xmin | catalog_xmin | restart_lsn
-++---++--++--+--+-
 bdr_16385_6239328434665526195_1_16385__ | bdr| logical   |  16385 |
bdrdemo  | t  |  | 3039 | 0/4EB0D28
(1 row)

*=== On Node 3: ===*
bdrdemo=#  select bdr.bdr_get_local_nodeid();
 bdr_get_local_nodeid
---
 (*6239328434665526195,1,16385*)
(1 row)

bdrdemo=# select bdr.bdr_get_local_node_name();
 bdr_get_local_node_name
-
 node1
(1 row)

bdrdemo=# SELECT * FROM pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | active | xmin |
catalog_xmin | restart_lsn
---++---++--++--+--+-
(0 rows)


On Thu, Jan 21, 2016 at 10:46 AM (Daniel Stolf)  wrote:

> Ok, I'm at work now and I have access to my lab...
>
> * On Node 1: *
> bdrdemo=# select bdr.bdr_get_local_nodeid();
>  bdr_get_local_nodeid
> ---
>  (6239328434665526195,1,16385)
> (1 row)
>
> bdrdemo=# select bdr.bdr_get_local_node_name();
>  bdr_get_local_node_name
> -
>  node1
> (1 row)
>
> bdrdemo=# select bdr.bdr_get_local_nodeid();
>  bdr_get_local_nodeid
> ---
>  (6239328434665526195,1,16385)
> (1 row)
>
> bdrdemo=# select bdr.bdr_get_local_node_name();
>  bdr_get_local_node_name
> -
>  node1
> (1 row)
>
> 
>
> *=== On Node 2: ===*
> bdrdemo=# select bdr.bdr_get_local_nodeid();
>  bdr_get_local_nodeid
> ---
>  (6239328434665526195,1,16385)
> (1 row)
>
> bdrdemo=# select bdr.bdr_get_local_node_name();
>  bdr_get_local_node_name
> -
>  node1
> (1 row)
> 
>
> Now, I take a snapshot from node1 and bring up a clone on node3... Here's
> what I got on node3:
>
> *=== On Node 3: ===*
>  bdr_get_local_nodeid
> ---
>  (6239328434665526195,1,16385)
> (1 row)
>
> bdrdemo=# select bdr.bdr_get_local_node_name();
>  bdr_get_local_node_name
> -
>  node1
> (1 row)
>
> bdrdemo=# SELECT * FROM pg_replication_slots;
>  slot_name | plugin | slot_type | datoid | database | active | xmin |
> catalog_xmin | restart_lsn
>
> ---++---++--++--+--+-
> (0 rows)
>
> 
>
> As you can see, when I brought up a clone of node1 on node3, it got the
> same node name and id as node1...
>
> So here's what I don't get:
>
> 1) if I have to create new replication slots on node1 and 2 beforehand
> using "pg_create_physical_replication_slot", don't they need the id of
> node3 in their name?
> 2) If node3 has the same name and id as node1, won't that introduce a
> conflict? Don't I need to clean that up before node3 can join the
> replication group?
>
> Regards,
> Daniel Stolf
>
>
> On Thu, Jan 21, 2016 at 8:34 AM (Daniel Stolf)  wrote:
>
>> Hi Craig, how are you?
>>
>> Thanks for your answer. It doesn't seem too complex... Also, it's just a
>> test scenario; I don't intend to use it as a production setup or to recommend
>> it as such, at least not until I'm 100% sure I got it right...
>>
>> So, assuming I get the snapshot right... The steps would be...
>>
>> 1) create replication slots on prior nodes before taking the snapshot
>> (not sure how to do that, which command would it be? );
>> 2) take the snapshot;
>> 3) bring it up on another server;
>> 4) use bdr_init_copy
>>
>> I'm not at work right now, but I remember two things...
>>
>> On node 3 I brough

Re: [GENERAL] adding a bdr node using bcv backup

2016-01-21 Thread (Daniel Stolf)
Ok, I'm at work now and I have access to my lab...

* On Node 1: *
bdrdemo=# select bdr.bdr_get_local_nodeid();
 bdr_get_local_nodeid
---
 (6239328434665526195,1,16385)
(1 row)

bdrdemo=# select bdr.bdr_get_local_node_name();
 bdr_get_local_node_name
-
 node1
(1 row)

bdrdemo=# select bdr.bdr_get_local_nodeid();
 bdr_get_local_nodeid
---
 (6239328434665526195,1,16385)
(1 row)

bdrdemo=# select bdr.bdr_get_local_node_name();
 bdr_get_local_node_name
-
 node1
(1 row)



*=== On Node 2: ===*
bdrdemo=# select bdr.bdr_get_local_nodeid();
 bdr_get_local_nodeid
---
 (6239328434665526195,1,16385)
(1 row)

bdrdemo=# select bdr.bdr_get_local_node_name();
 bdr_get_local_node_name
-
 node1
(1 row)


Now, I take a snapshot from node1 and bring up a clone on node3... Here's
what I got on node3:

*=== On Node 3: ===*
 bdr_get_local_nodeid
---
 (6239328434665526195,1,16385)
(1 row)

bdrdemo=# select bdr.bdr_get_local_node_name();
 bdr_get_local_node_name
-
 node1
(1 row)

bdrdemo=# SELECT * FROM pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | active | xmin |
catalog_xmin | restart_lsn
---++---++--++--+--+-
(0 rows)



As you can see, when I brought up a clone of node1 on node3, it got the
same node name and id as node1...

So here's what I don't get:

1) if I have to create new replication slots on node1 and 2 beforehand
using "pg_create_physical_replication_slot", don't they need the id of
node3 in their name?
2) If node3 has the same name and id as node1, won't that introduce a
conflict? Don't I need to clean that up before node3 can join the
replication group?

Regards,
Daniel Stolf


On Thu, Jan 21, 2016 at 8:34 AM (Daniel Stolf)  wrote:

> Hi Craig, how are you?
>
> Thanks for your answer. It doesn't seem too complex... Also, it's just a
> test scenario; I don't intend to use it as a production setup or to recommend
> it as such, at least not until I'm 100% sure I got it right...
>
> So, assuming I get the snapshot right... The steps would be...
>
> 1) create replication slots on prior nodes before taking the snapshot (not
> sure how to do that, which command would it be? );
> 2) take the snapshot;
> 3) bring it up on another server;
> 4) use bdr_init_copy
>
> I'm not at work right now, but I remember two things...
>
> On node 3, where I brought up the copy, if I try to get the local node
> name, it says node1, which is the node I got the copy from... Wouldn't I
> also have to do something about that? Like, delete the previous information
> in the bdr database that went along?
>
> Em qui, 21 de jan de 2016 00:50, Craig Ringer 
> escreveu:
>
>> On 21 January 2016 at 08:29, (Daniel Stolf)  wrote:
>>
>>> Hello there...
>>>
>>> I'm new to postgres and I'm trying out BDR replication...
>>>
>>> I know that when I issue the bdr.bdr_group_join command, it will copy
>>> the entire database from the host I specify on parameter 'join_using_dsn'
>>> and this may take a while depending on the network and the size of the
>>> database...
>>>
>>> What I wanted to know is if I can leverage a bcv backup... Is it
>>> possible?
>>>
>>
>> BCV seems to be an EMC backup system. It looks like a snapshot. If the
>> snapshot taken is consistent and atomic, and if it includes both pg_xlog
>> and the rest of the datadir and all tablespaces in the SAME snapshot taken
>> at the SAME instant, then you can treat it much like a pg_basebackup. In
>> that case you can use bdr_init_copy to bring it up as a new BDR node. You
>> must either stop all writes to all other nodes or pre-create the
>> replication slots *before* taking the snapshot though, otherwise the new
>> node won't be able to catch up to writes done after the snapshot and before
>> it was started.
>>
>> If this sounds too complex then stick to the documented methods that
>> work. Working from separately taken snapshots is hard to get right and
>> could lead to subtle data problems if you get it wrong.
>>
>> --
>>  Craig Ringer   http://www.2ndQuadrant.com/
>>  PostgreSQL Development, 24x7 Support, Training & Services
>>
>


Re: [GENERAL] CoC [Final]

2016-01-21 Thread Chris Travers
On Thu, Jan 21, 2016 at 12:39 PM, Geoff Winkless 
wrote:

> On 21 January 2016 at 11:28, Chris Travers 
> wrote:
> > Resisting the urge to talk about how justice was actually seen in the
> Dark
> > Ages
>
> Pitchforks. Baying crowds dragging those they consider to be
> wrongdoers from their beds and tying them to four horses and pulling
> them apart in the town square, without worrying about proof or reason.
> Trial by battle, where "the winner must have been in the right because
> God would make sure that the right man won". Women being drowned
> because it's better to kill an innocent girl (who will go straight to
> Heaven anyway) rather than let an evil witch live amongst us. Stuff
> like that.
>

Decent description of early 18th century Europe.  Not so great description
of early 8th century Europe.

>
> > But seriously, I think human judgment is better than a code which those
> who
> > want to cause problems can and will use as a weapon against the rest.
>
> Well, ish. The idea of having a strong published code which is
> tempered by human reasonableness is fairly well established in most
> legal systems.
>

I still side with the Scandinavian approach of passing general laws and
trusting judges to apply them in line with moral rather than purely legal
principles.

>
> Geoff
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] CoC [Final]

2016-01-21 Thread Geoff Winkless
On 21 January 2016 at 11:28, Chris Travers  wrote:
> Resisting the urge to talk about how justice was actually seen in the Dark
> Ages

Pitchforks. Baying crowds dragging those they consider to be
wrongdoers from their beds and tying them to four horses and pulling
them apart in the town square, without worrying about proof or reason.
Trial by battle, where "the winner must have been in the right because
God would make sure that the right man won". Women being drowned
because it's better to kill an innocent girl (who will go straight to
Heaven anyway) rather than let an evil witch live amongst us. Stuff
like that.

> But seriously, I think human judgment is better than a code which those who
> want to cause problems can and will use as a weapon against the rest.

Well, ish. The idea of having a strong published code which is
tempered by human reasonableness is fairly well established in most
legal systems.

Geoff




Re: [GENERAL] CoC [Final]

2016-01-21 Thread Chris Travers
On Thu, Jan 21, 2016 at 11:43 AM, Geoff Winkless 
wrote:

> On 21 January 2016 at 10:37, Chris Travers 
> wrote:
> > At the end of the day this will require human judgment rather than
> > formulation.
>
> Then make it explicit.
>
> * Disruption of the collaborative space, or patterns of behaviour
> which the majority of the core team consider to be harassment, will
> not be tolerated.
>
> (I've depersonalised the sentence also, to make it clear that it's the
> action and not the actor that is not tolerated)
>
> > Human judgment may be flawed but in a culturally diverse group it is far
> > better than the alternative.
>
> It's better to let the baying crowd decide your fate rather than
> codifying acceptable behaviour?
>
> The Dark Ages called, they want their Justice model back :)
>

Resisting the urge to talk about how justice was actually seen in the Dark
Ages

But seriously, I think human judgment is better than a code which those who
want to cause problems can and will use as a weapon against the rest.

>
> Geoff
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] Postgres and timezones

2016-01-21 Thread Steve Rogerson
On 20/01/16 19:19, Vik Fearing wrote:
> On 01/20/2016 04:24 PM, Steve Rogerson wrote:
>>> Postgres doesn't store original TZ. It does recalculation to local TZ. If 
>>> you
>>> need original TZ, you have to store it separately.
>>>
>>
>> I know and that's what I'm trying to deal with. Given I know the origin TZ  -
>> as in Europe/Lisbon I'm trying to determine the short name so I can store it.
> 
> I would recommend against storing the abbreviation.  The abbreviations
> are not globally unique and don't follow daylight savings.  If you want
> to store the original time zone, I would use the full name.
> 
> Something like this might be relevant to your interests:
> 
> INSERT INTO tbl (ts, tz)
> VALUES ('2016-01-20 00:00', current_setting('TimeZone'));
> 
> This will do the right thing regardless of where the client is (unless
> it's set to "localtime" and then it's useless).
> 

That doesn't work for two reasons.

1. In my application the data comes from several time zones, mostly European
but also Australia and the US, so the "current_setting" is often inappropriate.
2. There are two special times in the year, when the clocks change. The
awkward one is when the clocks go back. For example this year, for me,
2016-10-27 01:30 happens twice for my current setting, once as BST and
once as GMT.

We actually store UTC + the offset interval + the short name. The latter being
mostly for reporting purposes.

The issue is that we can't - from postgres - determine the appropriate short
name directly.

I can by other means though, say the DateTime module in perl though anything
that talks to the Olson database on whatever system would do as well.

The only way I can think of doing it in pg is the two-step method (OK,
more than two steps if I wrap it in a transaction or reset the TZ):

# SET TIME ZONE 'Europe/Lisbon';
# SELECT to_char('2016-07-20 00:00'::timestamp with time zone , 'TZ');
 to_char
-
 WEST
(1 row)
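
An alternative that avoids changing the session time zone (a sketch, with the caveat that pg_timezone_names reports the abbreviation in effect *now*, so it still cannot distinguish the two occurrences of a repeated clocks-go-back timestamp):

```sql
-- Current abbreviation for a full zone name, without SET TIME ZONE.
-- pg_timezone_names is a standard system view (name, abbrev,
-- utc_offset, is_dst), but abbrev reflects today's DST state, not
-- that of an arbitrary timestamp.
SELECT abbrev, utc_offset, is_dst
FROM pg_timezone_names
WHERE name = 'Europe/Lisbon';
```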

Steve




Re: [GENERAL] CoC [Final]

2016-01-21 Thread Geoff Winkless
On 21 January 2016 at 10:37, Chris Travers  wrote:
> At the end of the day this will require human judgment rather than
> formulation.

Then make it explicit.

* Disruption of the collaborative space, or patterns of behaviour
which the majority of the core team consider to be harassment, will
not be tolerated.

(I've depersonalised the sentence also, to make it clear that it's the
action and not the actor that is not tolerated)

> Human judgment may be flawed but in a culturally diverse group it is far
> better than the alternative.

It's better to let the baying crowd decide your fate rather than
codifying acceptable behaviour?

The Dark Ages called, they want their Justice model back :)

Geoff




Re: [GENERAL] CoC [Final]

2016-01-21 Thread Chris Travers
On Thu, Jan 21, 2016 at 11:28 AM, Geoff Winkless 
wrote:

> On 20 January 2016 at 20:04, Alvaro Herrera 
> wrote:
> > "which could be considered" is too open-ended.  Since this point is
> > the one and only that can cause enforcement to occur, it should be more
> > strict as to what it is that will not be tolerated.  I'd propose
> > something like "is widely regarded as harassment" or something like
> > that, so that it needs to be clear that there is a large group of people
> > that considers the behavior unwanted rather than some minority.
>
> The problem with _that_ is that on the internet of 3 billion people "a
> large group of people" can be whipped up from a tiny minority.
>

At the end of the day this will require human judgment rather than
formulation.

Human judgment may be flawed but in a culturally diverse group it is far
better than the alternative.

>
> Geoff
>
>
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] adding a bdr node using bcv backup

2016-01-21 Thread (Daniel Stolf)
Hi Craig, how are you?

Thanks for your answer. It doesn't seem too complex... Also, it's just a
test scenario; I don't intend to use it as a production setup or to recommend
it as such, at least not until I'm 100% sure I got it right...

So, assuming I get the snapshot right... The steps would be...

1) create replication slots on prior nodes before taking the snapshot (not
sure how to do that, which command would it be? );
2) take the snapshot;
3) bring it up on another server;
4) use bdr_init_copy
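
For step 1, note that the slots BDR consumes are logical slots using the bdr output plugin, with a name encoding the joining node's identity (the format is visible in the pg_replication_slots output elsewhere in this thread). A hypothetical sketch only; the exact slot name must match what the new node will look for, so verify against the BDR documentation rather than trusting this:

```sql
-- Hypothetical sketch: on each existing node, pre-create the logical slot
-- the new node will consume. <new_node_sysid> is a placeholder for the
-- joining node's system identifier; the surrounding name components
-- (database oid, timeline, remote db oid) are illustrative, taken from
-- the slot names shown in this thread.
SELECT pg_create_logical_replication_slot(
    'bdr_16385_<new_node_sysid>_1_16385__',  -- placeholder slot name
    'bdr'                                    -- BDR output plugin
);
```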

I'm not at work right now, but I remember two things...

On node 3, where I brought up the copy, if I try to get the local node
name, it says node1, which is the node I got the copy from... Wouldn't I
also have to do something about that? Like, delete the previous information
in the bdr database that went along?

Em qui, 21 de jan de 2016 00:50, Craig Ringer 
escreveu:

> On 21 January 2016 at 08:29, (Daniel Stolf)  wrote:
>
>> Hello there...
>>
>> I'm new to postgres and I'm trying out BDR replication...
>>
>> I know that when I issue the bdr.bdr_group_join command, it will copy the
>> entire database from the host I specify on parameter 'join_using_dsn' and
>> this may take a while depending on the network and the size of the
>> database...
>>
>> What I wanted to know is if I can leverage a bcv backup... Is it possible?
>>
>
> BCV seems to be an EMC backup system. It looks like a snapshot. If the
> snapshot taken is consistent and atomic, and if it includes both pg_xlog
> and the rest of the datadir and all tablespaces in the SAME snapshot taken
> at the SAME instant, then you can treat it much like a pg_basebackup. In
> that case you can use bdr_init_copy to bring it up as a new BDR node. You
> must either stop all writes to all other nodes or pre-create the
> replication slots *before* taking the snapshot though, otherwise the new
> node won't be able to catch up to writes done after the snapshot and before
> it was started.
>
> If this sounds too complex then stick to the documented methods that work.
> Working from separately taken snapshots is hard to get right and could lead
> to subtle data problems if you get it wrong.
>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [GENERAL] CoC [Final]

2016-01-21 Thread Geoff Winkless
On 20 January 2016 at 20:04, Alvaro Herrera  wrote:
> "which could be considered" is too open-ended.  Since this point is
> the one and only that can cause enforcement to occur, it should be more
> strict as to what it is that will not be tolerated.  I'd propose
> something like "is widely regarded as harassment" or something like
> that, so that it needs to be clear that there is a large group of people
> that considers the behavior unwanted rather than some minority.

The problem with _that_ is that on the internet of 3 billion people "a
large group of people" can be whipped up from a tiny minority.

Geoff


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Recurring bookings

2016-01-21 Thread John R Pierce

On 1/21/2016 2:05 AM, Kevin Waterson wrote:

So far this is what I have.. (see below).
How can I have recurring bookings for a call?
Eg: if the call_frequency is weekly, how can I see a list of dates 
which this account will be called upon?




your call frequency table probably should have a field interval_length 
of type INTERVAL with values like INTERVAL '1 week'


then you could do something like: select start_date + interval_length * n 
from call_frequency cross join generate_series(1, num_calls) as g(n) 
where call_frequency.id = ?;  (note the CROSS JOIN -- generate_series needs 
no join condition -- and the alias g(n) so the series value can be referenced)
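The idea above can be sketched against Kevin's posted schema. This is only a 
sketch: it assumes call_frequency gains the suggested interval_length INTERVAL 
column, and takes lower(duration) from callcard as the first call time (the 
schema has no separate start_date column); the limit of 10 occurrences and the 
callcard id are placeholders.

```sql
-- Sketch: list the next 10 scheduled call times for one callcard row.
-- Assumes: ALTER TABLE call_frequency ADD COLUMN interval_length INTERVAL;
SELECT lower(cc.duration) + cf.interval_length * g.n AS call_at
FROM callcard cc
JOIN call_frequency cf ON cf.id = cc.call_frequency_id
CROSS JOIN generate_series(1, 10) AS g(n)   -- occurrence counter
WHERE cc.id = 1
ORDER BY call_at;
```

For the 'Once Only' frequency you would want to skip the series entirely (or 
store interval_length as NULL and filter it out).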



--
john r pierce, recycling bits in santa cruz





[GENERAL] Recurring bookings

2016-01-21 Thread Kevin Waterson
So far this is what I have.. (see below).
How can I have recurring bookings for a call?
Eg: if the call_frequency is weekly, how can I see a list of dates which
this account will be called upon?

Kind regards
Kevin


CREATE TABLE call_frequency (
id SERIAL PRIMARY KEY NOT NULL,
name VARCHAR(25) NOT NULL,
description VARCHAR(200) NOT NULL
);
COMMENT ON TABLE call_frequency IS 'How often a call is scheduled';
INSERT INTO call_frequency( name, description ) VALUES ('Once Only',
'Single non-recurring call' );
INSERT INTO call_frequency( name, description ) VALUES ('Hourly', 'Every
hour' );
INSERT INTO call_frequency( name, description ) VALUES ('Daily', 'Every
day' );
INSERT INTO call_frequency( name, description ) VALUES ('Weekly', 'Every
week' );
INSERT INTO call_frequency( name, description ) VALUES ('Bi-Weekly', 'Every
second week' );
INSERT INTO call_frequency( name, description ) VALUES ('Monthly', 'Every
Month' );
INSERT INTO call_frequency( name, description ) VALUES ('Yearly', 'Every
Year' );


CREATE TABLE callcard (
id SERIAL PRIMARY KEY NOT NULL,
account_id INT NOT NULL REFERENCES accounts(id),
user_id INT NOT NULL REFERENCES users(id),
call_type_id INT NOT NULL REFERENCES call_types(id),
call_frequency_id INT NOT NULL REFERENCES call_frequency(id),
duration TSRANGE NOT NULL
);
COMMENT ON TABLE callcard IS 'Table of scheduled calls';


Re: [GENERAL] Import Schema converting tinyint to Boolean?

2016-01-21 Thread Félix GERZAGUET
On Thu, Jan 21, 2016 at 9:52 AM, Guyren Howe  wrote:

> I'm converting a MySQL database to a Postgres database by doing a bunch of
> CREATE TABLE… AS SELECT * FROM, but the tinyints are coming across as
> tinyints.
>
> Seems like there ought to be an option somewhere to have them behave like
> Booleans but I can't see it.
>

For that part of your question, maybe you could adapt the method described
here:
http://adpgtech.blogspot.fr/2016/01/using-postgresql-95s-import-foreign.html

Félix


[GENERAL] Import Schema converting tinyint to Boolean?

2016-01-21 Thread Guyren Howe
Converting databases is a nightmare. IMPORT FOREIGN SCHEMA almost makes it easy.

Almost.

I'm converting a MySQL database to a Postgres database by doing a bunch of 
CREATE TABLE… AS SELECT * FROM, but the tinyints are coming across as tinyints.

Seems like there ought to be an option somewhere to have them behave like 
Booleans but I can't see it.

Also, I would like to be able to execute raw MySQL-style SQL over the fdw 
connection, but I don't see an option to do that. I'd also like to be able to 
query from the information schema on the MySQL side, but I don't see how to 
schema-qualify the scopes. I guess I could separately do IMPORT FOREIGN SCHEMA against 
the MySQL info schema.

This facility is close to making a robust database conversion script a fairly 
simple thing to write…
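One way to deal with the tinyint columns after a CREATE TABLE … AS SELECT copy 
is to retype them with a USING clause. A sketch only -- it assumes the copy 
produced a local table "t" whose tinyint-derived column "active" arrived as 
smallint; the names are hypothetical:

```sql
-- Map the MySQL tinyint convention (0 = false, nonzero = true) to boolean.
ALTER TABLE t
    ALTER COLUMN active TYPE boolean
    USING (active <> 0);
```

Alternatively, the cast can be folded into the copy itself, e.g. 
CREATE TABLE t AS SELECT id, (active <> 0) AS active FROM foreign_t; 
which avoids the table rewrite ALTER TABLE would do.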
