Re: [GENERAL] Text value within composite function result not quoted

2017-05-15 Thread Tom Lane
Guyren Howe  writes:
> Thanks. This is… inconvenient. I see nothing about an option to force quoting 
> of strings. Is there no such option? If not, I suggest that it would be a 
> useful addition.

Force-quoting the elements would not move the goalposts all that much
concerning parse-ability of composite (or array) output.  If you're
complaining about that, I strongly suspect your code also fails to cope
with embedded quotes, nested structures, and/or nulls.

You might consider expanding the query's output so that the fields are
delivered separately.
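For example, a minimal sketch using the call from the original post -- a
composite-returning function can go in FROM, so the fields arrive as
ordinary columns and the client never has to parse a composite literal:

SELECT * FROM request(
    ('{}', '{}'::jsonb, '', 8080, '{}'::jsonb, ''::bytea, '{}')::request_in
);
-- status, headers and body come back as three separate columns.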

regards, tom lane




Re: [GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Ken Tanzer
On Mon, May 15, 2017 at 4:45 PM, Adrian Klaver 
wrote:

> On 05/15/2017 01:40 PM, Ken Tanzer wrote:
>
>
>
>> But let me ask, is there a big warning about this somewhere I missed?
>> Can the 9.2 updates do something to fix this, or at least create a warning
>> or an RPMNEW file?  I'm happy this is a cloud server and that I worked on a
>> copy.  However, in different circumstances I might well have reasoned
>> "well, installing the 9.6 packages really should be safe for 9.2, since
>> they're clearly meant to exist side-by-side."  And then have a setup that
>> no longer worked as it once did.  With an RHEL clone and PGDG packages
>> straight from the horse's mouth, I'd have higher expectations than that.
>> Only because of the great work y'all do! ;)
>>
>
> Might want to file an issue here:
>
> https://redmine.postgresql.org/projects/pgrpms/
>
> You will need a Postgres community account, which you can sign up for on
> the same page.
>
>
>>
Done, and thanks for pointing me to the tracker.

https://redmine.postgresql.org/issues/2409

Cheers,
Ken


[GENERAL] Postgres intermittent connection errors: psql.bin: could not connect to server: Cannot assign requested address

2017-05-15 Thread Vamsi Patchipulusu
Hi All,

We are receiving the intermittent connection error below during performance
testing on Postgres.
Thanks in advance for any suggestions or pointers on how we can troubleshoot
this issue.

  Error: psql.bin: could not connect to server: Cannot assign requested address
      Is the server running on host "abchost.corp.xyz.com" (xxx.xxx.xxx.xxx)
      and accepting TCP/IP connections on port 5432?


Our organization is evaluating the feasibility of using Postgres for an
upcoming project.
As part of the evaluation we are performing a 50-concurrent-user read
performance test.
JMeter is the tool we are using for orchestrating the tests. It executes a
shell script on 50 concurrent threads.

The shell script does the following:

a)  Connects to the Postgres database server using psql.

b)  Issues a single SELECT statement on a table with 200 rows.

c)  Writes the results to a text file.

JMeter client and database host are sitting in the same data center.
Database OS: RedHat 6.x
DB version: Postgres 9.6
The database host is of decent config: 16 CPU, 64GB RAM, SAN storage.

Our tests run for 10 to 15 minutes.
During the middle of the run, we are noticing connection errors intermittently
(around 20% of calls are failing due to this).

While some calls fail with the error "psql.bin: could not connect to server:
Cannot assign requested address", other calls around the same timeframe,
maybe one to two seconds later, are successful.

For the failed calls we are not able to see anything printed in the database
server logs (the exception is received on the client side).
For successful calls we are able to see the details of the SQL statements,
user, hostname etc. printed in the logs.

We have verified CPU, memory, number of open files, and netstat connection
waits, but could not identify anything concrete that could be causing this
issue.
The database is set up to accept 500 concurrent open connections and we are
not using anywhere close to that.



Regards,
Vamsi


Re: [GENERAL] Text value within composite function result not quoted

2017-05-15 Thread David G. Johnston
On Monday, May 15, 2017, Tom Lane  wrote:

> Guyren Howe writes:
> > ... get this result:
> > (200,{},Works!)
> > This is the textual representation of the result I get in psql and Ruby.
> > Note that the textual final value is not quoted.
> > I imagine I can work out a way to deal with this, but this is not the
> > most felicitous way of representing a text value that I can imagine.
> > Note that if I add a single space after “Works!”, I get quotes around
> > the string.
>
> As per spec:
>
> https://www.postgresql.org/docs/current/static/arrays.html#ARRAYS-IO
>
>

Right idea (same output rules), wrong link. The output is a composite, not
an array.

https://www.postgresql.org/docs/current/static/rowtypes.html#ROWTYPES-IO-SYNTAX

David J.


Re: [GENERAL] Text value within composite function result not quoted

2017-05-15 Thread Guyren Howe

> On May 15, 2017, at 21:36 , Tom Lane  wrote:
> 
>> ... get this result:
>> (200,{},Works!)
>> This is the textual representation of the result I get in psql and Ruby. 
>> Note that the textual final value is not quoted.
>> I imagine I can work out a way to deal with this, but this is not the most 
>> felicitous way of representing a text value that I can imagine.
>> Note that if I add a single space after “Works!”, I get quotes around the 
>> string.
> 
> As per spec:
> 
> https://www.postgresql.org/docs/current/static/arrays.html#ARRAYS-IO 
> 

Thanks. This is… inconvenient. I see nothing about an option to force quoting 
of strings. Is there no such option? If not, I suggest that it would be a 
useful addition.


Re: [GENERAL] Text value within composite function result not quoted

2017-05-15 Thread Tom Lane
Guyren Howe  writes:
> ... get this result:
> (200,{},Works!)
> This is the textual representation of the result I get in psql and Ruby. Note 
> that the textual final value is not quoted.
> I imagine I can work out a way to deal with this, but this is not the most 
> felicitous way of representing a text value that I can imagine.
> Note that if I add a single space after “Works!”, I get quotes around the 
> string.

As per spec:

https://www.postgresql.org/docs/current/static/arrays.html#ARRAYS-IO

regards, tom lane




[GENERAL] Text value within composite function result not quoted

2017-05-15 Thread Guyren Howe
Define a couple of types:

CREATE TYPE request_in AS
(
path text[],
args jsonb,
server text,
port smallint,
headers jsonb,
body bytea,
type_requested text[]
);


CREATE TYPE request_out AS
(
status smallint,
headers jsonb,
body text
);

and a function:

CREATE OR REPLACE FUNCTION request(
req request_in)
RETURNS "request_out"
LANGUAGE 'plv8'
COST 100.0
VOLATILE 
AS $function$
return {'status': 200, 'headers': {}, 'body': "Works!"}
$function$;

call the function:

SELECT request(
(
'{}',
'{}'::jsonb,
'',
8080,
'{}'::jsonb,
''::bytea,
'{}')::request_in
)

get this result:

(200,{},Works!)

This is the textual representation of the result I get in psql and Ruby. Note 
that the textual final value is not quoted.

I imagine I can work out a way to deal with this, but this is not the most 
felicitous way of representing a text value that I can imagine.

Note that if I add a single space after “Works!”, I get quotes around the 
string.

This is 9.6.2 on MacOS.


Re: [GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Adrian Klaver

On 05/15/2017 01:40 PM, Ken Tanzer wrote:




But let me ask, is there a big warning about this somewhere I missed?  
Can the 9.2 updates do something to fix this, or at least create a 
warning or an RPMNEW file?  I'm happy this is a cloud server and that I 
worked on a copy.  However, in different circumstances I might well have 
reasoned "well, installing the 9.6 packages really should be safe for 
9.2, since they're clearly meant to exist side-by-side."  And then have 
a setup that no longer worked as it once did.  With an RHEL clone and 
PGDG packages straight from the horse's mouth, I'd have higher 
expectations than that.  Only because of the great work y'all do! ;)


Might want to file an issue here:

https://redmine.postgresql.org/projects/pgrpms/

You will need a Postgres community account, which you can sign up for on 
the same page.




Cheers,
Ken






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Add NAMEDATALEN to PG_CONFIG?

2017-05-15 Thread Tom Lane
Andy Johnson  writes:
> Is there a way to find NAMEDATALEN in a 
> database, other than looking at the 
> source?

select typlen from pg_type where typname = 'name';

> If not could it be added to the 
> PG_CONFIG eventually?

Seems like it would only make sense to do that if we supported
configuring NAMEDATALEN, which we don't really: you have to
manually edit pg_config_manual.h, and how well the result works
is on your own head.

regards, tom lane




Re: [GENERAL] union all taking years - PG 9.6

2017-05-15 Thread David G. Johnston
On Mon, May 15, 2017 at 4:21 PM, Patrick B  wrote:

> Hi guys.
>
> I have two tables, where 'tableA' is the old and 'tableC' is the new one.
> I say "new/old" because we are migrating the data from tableA to tableC
> soon.
>
> I created a view selecting from both tables, with a UNION ALL between
> them. When selecting from that view, it's really slow. I can't even run
> explain analyze (it's been 1h and the query has not finished yet).
>
>
Try just comparing the explain plans.

> However, when running both selects with explain analyze, the query is fast.

Do those selects contain where clauses?


> What should I do in this case? Why is that taking so long? I assume it's
> because the UNION will look for duplicates?
>
It won't, because you specified "ALL".

I'll presume there is a WHERE clause involved, because concatenating two
queries via UNION ALL without either having a WHERE clause should be pretty
much interchangeable with running them separately.  It's unclear to me where
things stand regarding pushing WHERE clauses down through the UNION ALL; if
that cannot happen for some reason, it would result in what you are
observing.  A quick way to check is sketched below.
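
A sketch of the check (the view and column names here are made up, since the
actual schema wasn't posted):

-- Compare the plan for the view against the plans for the underlying tables.
EXPLAIN SELECT * FROM combined_view WHERE customer_id = 123;
EXPLAIN SELECT * FROM tableA WHERE customer_id = 123;
EXPLAIN SELECT * FROM tableC WHERE customer_id = 123;
-- If the WHERE clause is pushed down, the view's plan should show the same
-- index conditions on tableA and tableC as the two direct queries do.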

PostgreSQL version would help too.

David J.


Re: [GENERAL] Add NAMEDATALEN to PG_CONFIG?

2017-05-15 Thread Adrian Klaver

On 05/15/2017 04:27 PM, Andy Johnson wrote:
Is there a way to find NAMEDATALEN in a database, other than looking at 
the source? If not could it be added to the PG_CONFIG eventually?


Andy Johnson



https://www.postgresql.org/docs/9.6/static/runtime-config-preset.html

"max_identifier_length (integer)

Reports the maximum identifier length. It is determined as one less 
than the value of NAMEDATALEN when building the server. The default 
value of NAMEDATALEN is 64; therefore the default max_identifier_length 
is 63 bytes, which can be less than 63 characters when using multibyte 
encodings.

"

So:

production=# show max_identifier_length ;
 max_identifier_length
-----------------------
 63


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] union all taking years - PG 9.6

2017-05-15 Thread Adrian Klaver

On 05/15/2017 04:21 PM, Patrick B wrote:

Hi guys.

I have two tables, where 'tableA' is the old and 'tableC' is the new 
one. I say "new/old" because we are migrating the data from tableA to 
tableC soon.


I created a view selecting from both tables, with a UNION ALL between 
them. When selecting from that view, it's really slow. I can't even run 
explain analyze (it's been 1h and the query has not finished yet).


However, when running both selects with explain analyze, query is fast.


What should I do in this case? Why is that taking so long? I assume it's 
because the UNION will look for duplicates?


Without actual information, the only thing that can be said is that it 
is slow.


Information needed:

Postgres version

Table schema for tableA and tableC

The view query

The output of the explain for the view query.



Thanks
Patrick.



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Add NAMEDATALEN to PG_CONFIG?

2017-05-15 Thread Andy Johnson
Is there a way to find NAMEDATALEN in a 
database, other than looking at the 
source? If not could it be added to the 
PG_CONFIG eventually?

Andy Johnson



[GENERAL] union all taking years - PG 9.6

2017-05-15 Thread Patrick B
Hi guys.

I have two tables, where 'tableA' is the old and 'tableC' is the new one. I
say "new/old" because we are migrating the data from tableA to tableC soon.

I created a view selecting from both tables, with a UNION ALL between them.
When selecting from that view, it's really slow. I can't even run explain
analyze (it's been 1h and the query has not finished yet).

However, when running both selects with explain analyze, query is fast.


What should I do in this case? Why is that taking so long? I assume it's
because the UNION will look for duplicates?

Thanks
Patrick.


Re: [GENERAL] Insert performance and disk usage in JSON vs JSONB

2017-05-15 Thread Thomas Kellerer

Ronny Abraham wrote on 15.05.2017 at 19:25:

4. Insert 10,000 rows to JSON, execution time (sec):

122.855001211

5. Insert 10,000 rows to JSONB, execution time (sec):

122.128999233


What’s interesting is that inserting to JSONB is slightly faster than inserting 
to JSON.


A difference of 0.7 seconds from a single test run is not significant enough
to warrant the statement "JSONB is slightly faster".

That could very well have been caused by other things going on in your system,
or maybe just some caching of metadata.

Unless you repeat this at least 4 or 5 times, you can't tell whether one is
really faster than the other.






Re: [GENERAL] Insert performance and disk usage in JSON vs JSONB

2017-05-15 Thread Merlin Moncure
On Mon, May 15, 2017 at 12:02 PM, Ronny Abraham  wrote:
> 4. Insert 10,000 rows to JSON, execution time (sec):
> 5. Insert 10,000 rows to JSONB, execution time (sec):
>
> What’s interesting is that inserting to JSONB is slightly faster than
> inserting to JSON.

With those times, the only explanation is that you are bottlenecked by
storage fsync time.  If storage is fast, you should be able to do 10k+
inserts per second for either type unless the documents are large.

For storing documents, you should only consider using the json type if
preserving the exact structure of the originating document is important.
For querying and manipulation, jsonb is better and faster (mostly), and
its API is more robust.

The json type has other uses, mainly serialization from non-json data,
but that's not a storage consideration.
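
A rough sketch of both points (the table name and document shape here are
made up):

-- jsonb supports containment queries and GIN indexing; plain json does not.
CREATE TABLE docs (data jsonb);
CREATE INDEX docs_data_gin ON docs USING gin (data);
SELECT * FROM docs WHERE data @> '{"status": "active"}';  -- can use the index

-- json/jsonb as a serialization target from non-json (relational) data:
SELECT row_to_json(t) FROM (SELECT 1 AS id, 'Works!'::text AS body) t;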

merlin




Re: [GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Devrim Gündüz

Hi,

On Mon, 2017-05-15 at 16:34 -0400, Tom Lane wrote:
> > bash-4.1$ /usr/pgsql-9.2/bin/psql -p 5432
> > psql: could not connect to server: Connection refused
> >    Is the server running locally and accepting
> >    connections on Unix domain socket
> > "/var/run/postgresql/.s.PGSQL.5432"?
> 
> The default is actually compiled into libpq.so, not psql itself.
> So I'm thinking what's happening here is the 9.2 psql is picking
> up a libpq.so supplied by 9.6.

Yeah, sorry, my bad. I forgot that the RPMs also put a file under
/etc/ld.so.conf.d, so that the latest libpq is picked up.

Regards,
-- 
Devrim Gündüz
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR




Re: [GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Ken Tanzer
>
>
>> Workarounds:
>>
>> * You can connect to 9.2 using /usr/pgsql-9.2/bin/psql command. It knows
>> the
>> old socket directory.
>>
>
> That was where I was going until I saw this in the OP:
>
> bash-4.1$ /usr/pgsql-9.2/bin/psql -p 5432
> psql: could not connect to server: Connection refused
> Is the server running locally and accepting
> connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
>
>
>
>> * Pass -h /tmp to 9.6's psql, so that it connects to 9.2 instance.
>>
>> -HTH
>>
>> Regards,
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Thanks everyone for the replies.  Adrian is right--I did try this with the
9.2 binaries, with the same problem.  But to address Tom's question (and if
I'm using ldd properly), the 9.2 psql binary is using the 9.6 libpq.

[root@centos-new postgresql]# ldd /usr/bin/psql | grep libpq
libpq.so.5 => /usr/pgsql-9.6/lib/libpq.so.5 (0x7f2e6c99a000)
[root@centos-new postgresql]# ldd /usr/pgsql-9.2/bin/psql | grep libpq
libpq.so.5 => /usr/pgsql-9.6/lib/libpq.so.5 (0x7f52f9c67000)

Devrim--the -h /tmp option works great.

I still wanted this to just "work" though, for scripts and such.  I
specified the socket directory in the 9.2 postgresql.conf, and it seems to
be working "normally" now.

But let me ask, is there a big warning about this somewhere I missed?  Can
the 9.2 updates do something to fix this, or at least create a warning or
an RPMNEW file?  I'm happy this is a cloud server and that I worked on a
copy.  However, in different circumstances I might well have reasoned
"well, installing the 9.6 packages really should be safe for 9.2, since
they're clearly meant to exist side-by-side."  And then have a setup that
no longer worked as it once did.  With an RHEL clone and PGDG packages
straight from the horse's mouth, I'd have higher expectations than that.
Only because of the great work y'all do! ;)

Cheers,
Ken



-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] Insert performance and disk usage in JSON vs JSONB

2017-05-15 Thread Ronny Abraham
Thank you, I updated my test to insert 10,000 rows where each row has a JSON or 
JSONB with 100 keys (I do not have a use case of 1000), here are the results:

7. Insert 10,000 rows to JSON (100 items in each row), execution time (sec):
119.411994457


8. Insert 10,000 rows to JSONB (100 items in each row), execution time (sec):
118.248999119

Thanks
Ronny

From: Dmitry Dolgov [mailto:9erthali...@gmail.com]
Sent: Monday, May 15, 2017 3:35 PM
To: Ronny Abraham
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Insert performance and disk usage in JSON vs JSONB

On 15 May 2017 at 19:25, Ronny Abraham wrote:
>
> What’s interesting is that inserting to JSONB is slightly faster than 
> inserting to JSON.
>
> Maybe that’s because my JSON has a flat structure (no nesting), or maybe I am 
> doing something else wrong?

I assume it's because your json documents (10 fields) are not big enough. If
you try the same tests with something like 1000 keys for each document, there
should be a noticeable difference (at least in my case it was like that).





Re: [GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Tom Lane
Adrian Klaver  writes:
> On 05/15/2017 01:10 PM, Devrim Gündüz wrote:
>> * You can connect to 9.2 using /usr/pgsql-9.2/bin/psql command. It knows the
>> old socket directory.

> That was where I was going until I saw this in the OP:

> bash-4.1$ /usr/pgsql-9.2/bin/psql -p 5432
> psql: could not connect to server: Connection refused
>   Is the server running locally and accepting
>   connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

The default is actually compiled into libpq.so, not psql itself.
So I'm thinking what's happening here is the 9.2 psql is picking
up a libpq.so supplied by 9.6.

regards, tom lane




Re: [GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Adrian Klaver

On 05/15/2017 01:10 PM, Devrim Gündüz wrote:


Hi,

On Mon, 2017-05-15 at 12:55 -0700, Ken Tanzer wrote:

Hi.  On a Centos 6.9 server (in the cloud with Rackspace), I'm wanting to
install PGDG 9.6 alongside the already-running 9.2.  After installing the
9.6 packages (and even before doing an initdb), I am no
longer able to make a local connection to the 9.2 server.  Instead I get
the message:

psql: could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

That socket file does not exist on the server. (And in fact, the
/var/run/postgresql directory didn't exist before installing 9.6).  When I
configure 9.6 to use port 5433 and run it, it does create that socket for
5433.  I tried creating such a socket manually for 5432, but that didn't
seem to change anything.

Any help in getting this working and/or pointing out what I'm missing would
be great.  I'm also confused conceptually about what is happening here.
What is it that the installation (but not execution) of 9.6 does that's
blocking the local 9.2 access?  I'm guessing it's gotta be something in the
RPM install scripts.


PGDG RPMs use the alternatives method to replace some binaries that can be
used across multiple PostgreSQL versions, and psql is one of them. When you
install 9.6, 9.6's psql has higher priority than 9.2's, so that one is used --
and 9.4+ are compiled with a patch that changes the default socket directory
from /tmp to /var/run/postgresql, of which 9.2 is not aware.


Workarounds:

* You can connect to 9.2 using /usr/pgsql-9.2/bin/psql command. It knows the
old socket directory.


That was where I was going until I saw this in the OP:

bash-4.1$ /usr/pgsql-9.2/bin/psql -p 5432
psql: could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?




* Pass -h /tmp to 9.6's psql, so that it connects to 9.2 instance.

-HTH

Regards,




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Devrim Gündüz

Hi,

On Mon, 2017-05-15 at 12:55 -0700, Ken Tanzer wrote:
> Hi.  On a Centos 6.9 server (in the cloud with Rackspace), I'm wanting to
> install PGDG 9.6 alongside the already-running 9.2.  After installing the
> 9.6 packages (and even before doing an initdb), I am no
> longer able to make a local connection to the 9.2 server.  Instead I get
> the message:
> 
> psql: could not connect to server: Connection refused
> Is the server running locally and accepting
> connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
> 
> That socket file does not exist on the server. (And in fact, the
> /var/run/postgresql directory didn't exist before installing 9.6).  When I
> configure 9.6 to use port 5433 and run it, it does create that socket for
> 5433.  I tried creating such a socket manually for 5432, but that didn't
> seem to change anything.
> 
> Any help in getting this working and/or pointing out what I'm missing would
> be great.  I'm also confused conceptually about what is happening here.
> What is it that the installation (but not execution) of 9.6 does that's
> blocking the local 9.2 access?  I'm guessing it's gotta be something in the
> RPM install scripts.

PGDG RPMs use the alternatives method to replace some binaries that can be
used across multiple PostgreSQL versions, and psql is one of them. When you
install 9.6, 9.6's psql has higher priority than 9.2's, so that one is used --
and 9.4+ are compiled with a patch that changes the default socket directory
from /tmp to /var/run/postgresql, of which 9.2 is not aware.
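
You can check which socket directory a running server is actually using by
asking it directly; a sketch (note that on 9.2 the parameter is the singular
unix_socket_directory -- it was renamed to unix_socket_directories in 9.3):

-- Connect over TCP (e.g. psql -h localhost -p 5432) and run:
SHOW unix_socket_directory;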


Workarounds:

* You can connect to 9.2 using /usr/pgsql-9.2/bin/psql command. It knows the
old socket directory.

* Pass -h /tmp to 9.6's psql, so that it connects to 9.2 instance.

-HTH

Regards,

-- 
Devrim Gündüz
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR




Re: [GENERAL] Coditional join of query using PostgreSQL

2017-05-15 Thread Nick Dro
This is a join in a middle of query.
How can I use dynamic SQL in the middle of query?ב מאי 15, 2017 20:26, David G. Johnston כתב:On Mon, May 15, 2017 at 10:02 AM, Nick Dro  wrote:
Hi, I'm new to postgresql and couldn't find answer to this situation anywhere. I asked this here: http://stackoverflow.com/questions/43984208/coditional-join-of-query-using-postgresqlI hope there is a better solution rather than creating two separated functions :(
​Generate the SQL itself in a string then execute the string.  Its called "Dynamic SQL"​.https://www.postgresql.org/docs/9.6/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYNDavid J.



Re: [GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Tom Lane
Ken Tanzer  writes:
> Hi.  On a Centos 6.9 server (in the cloud with Rackspace), I'm wanting to
> install PGDG 9.6 alongside the already-running 9.2.  After installing the
> 9.6 packages (and even before doing an initdb), I am no
> longer able to make a local connection to the 9.2 server.  Instead I get
> the message:

> psql: could not connect to server: Connection refused
> Is the server running locally and accepting
> connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

Where is the 9.2 server making its socket ... /tmp ?

What it looks like is that you've started to use a libpq.so that is
following the Red Hat convention of putting the socket file in
/var/run/postgresql, rather than /tmp.  I do not know exactly where
the PGDG packages stand on that theological issue, or whether they
changed between 9.2 and 9.6.  But the first step would be to use
"ldd" to see which libpq your invoked psql is pulling in.

regards, tom lane




Re: [GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Justin Pryzby
On Mon, May 15, 2017 at 12:55:48PM -0700, Ken Tanzer wrote:
> Hi.  On a Centos 6.9 server (in the cloud with Rackspace), I'm wanting to
> install PGDG 9.6 alongside the already-running 9.2.  After installing the
> 9.6 packages (and even before doing an initdb), I am no
> longer able to make a local connection to the 9.2 server.  Instead I get
> the message:

See eg.
https://www.postgresql.org/message-id/21044.1326496...@sss.pgh.pa.us
https://www.postgresql.org/message-id/0a21bc93-7b9c-476e-aaf4-0ff71708e...@elevated-dev.com

I'm guessing you upgraded the client libraries, which probably change the
(default) socket path.

Your options are to specify the path to the socket (maybe in /tmp for the
running PG92?), change to a TCP connection, or set the server option
unix_socket_directories.

Justin




[GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Ken Tanzer
Hi.  On a Centos 6.9 server (in the cloud with Rackspace), I'm wanting to
install PGDG 9.6 alongside the already-running 9.2.  After installing the
9.6 packages (and even before doing an initdb), I am no
longer able to make a local connection to the 9.2 server.  Instead I get
the message:

psql: could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

That socket file does not exist on the server. (And in fact, the
/var/run/postgresql directory didn't exist before installing 9.6).  When I
configure 9.6 to use port 5433 and run it, it does create that socket for
5433.  I tried creating such a socket manually for 5432, but that didn't
seem to change anything.

Any help in getting this working and/or pointing out what I'm missing would
be great.  I'm also confused conceptually about what is happening here.
What is it that the installation (but not execution) of 9.6 does that's
blocking the local 9.2 access?  I'm guessing it's gotta be something in the
RPM install scripts.

Thanks!

Ken

bash-4.1$ whoami
postgres
bash-4.1$ psql -p 5432
psql: could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
bash-4.1$ psql -p 5432 -h localhost
Password:
bash-4.1$ psql -p 5433
psql (9.6.3)
Type "help" for help.

postgres=# \q
bash-4.1$ /usr/pgsql-9.2/bin/psql -p 5432
psql: could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
bash-4.1$ /usr/pgsql-9.6/bin/psql -p 5432
psql: could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?


[root@centos-new postgresql]# cat /etc/issue
CentOS release 6.9 (Final)
Kernel \r on an \m

[root@centos-new postgresql]# yum list installed "postgresql*"
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.mirror.lstn.net
 * extras: centos.mirror.lstn.net
 * updates: centos.mirror.lstn.net
Installed Packages
postgresql92.x86_64             9.2.21-1PGDG.rhel6    @pgdg92
postgresql92-contrib.x86_64     9.2.21-1PGDG.rhel6    @pgdg92
postgresql92-devel.x86_64       9.2.21-1PGDG.rhel6    @pgdg92
postgresql92-docs.x86_64        9.2.21-1PGDG.rhel6    @pgdg92
postgresql92-libs.x86_64        9.2.21-1PGDG.rhel6    @pgdg92
postgresql92-plpython.x86_64    9.2.21-1PGDG.rhel6    @pgdg92
postgresql92-pltcl.x86_64       9.2.21-1PGDG.rhel6    @pgdg92
postgresql92-server.x86_64      9.2.21-1PGDG.rhel6    @pgdg92
postgresql96.x86_64             9.6.3-1PGDG.rhel6     @pgdg96
postgresql96-contrib.x86_64     9.6.3-1PGDG.rhel6     @pgdg96
postgresql96-devel.x86_64       9.6.3-1PGDG.rhel6     @pgdg96
postgresql96-docs.x86_64        9.6.3-1PGDG.rhel6     @pgdg96
postgresql96-libs.x86_64        9.6.3-1PGDG.rhel6     @pgdg96
postgresql96-pltcl.x86_64       9.6.3-1PGDG.rhel6     @pgdg96
postgresql96-server.x86_64      9.6.3-1PGDG.rhel6     @pgdg96
[root@centos-new postgresql]#



-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] Coditional join of query using PostgreSQL

2017-05-15 Thread Alban Hertroys

> On 15 May 2017, at 19:02, Nick Dro  wrote:
> 
> Hi, 
> I'm new to postgresql and couldn't find an answer to this situation anywhere. 
> I asked this here: 
> http://stackoverflow.com/questions/43984208/coditional-join-of-query-using-postgresql
> 
> I hope there is a better solution rather than creating two separate 
> functions :(

You can use your boolean parameter inside the join condition:
[…] on (tfquery.a = main.a and ((type_f and tfquery.d = main.d) or not type_f))

Beware that you don't also have a column named type_f in that join somewhere.
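
Spelled out as a sketch inside the function body (the tfquery/main names come
from the Stack Overflow question; anything else here is made up):

SELECT main.*
FROM main
JOIN tfquery
  ON tfquery.a = main.a
 AND ((type_f AND tfquery.d = main.d) OR NOT type_f);
-- When the type_f parameter is true, the join also requires d to match;
-- when it is false, the extra condition drops away.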

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





Re: [GENERAL] Insert performance and disk usage in JSON vs JSONB

2017-05-15 Thread Dmitry Dolgov
On 15 May 2017 at 19:25, Ronny Abraham  wrote:
>
> What’s interesting is that inserting to JSONB is slightly faster than
inserting to JSON.
>
> Maybe that’s because my JSON has a flat structure (no nesting), or maybe
I am doing something else wrong?

I assume it's because your json documents (10 fields) are not big enough.
If you try the same tests with something like 1000 keys for each document,
there should be a noticeable difference (at least in my case it was like
that).
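
A quick way to generate such a document for testing (a sketch; the key names
are arbitrary):

-- Build a flat 1000-key jsonb document.
SELECT jsonb_object_agg('key_' || i, i) AS data
FROM generate_series(1, 1000) AS i;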


Re: [GENERAL] Coditional join of query using PostgreSQL

2017-05-15 Thread Jack
This is a join in the middle of a query.
How can I do such a thing?



--
View this message in context: 
http://www.postgresql-archive.org/Coditional-join-of-query-using-PostgreSQL-tp5961718p5961726.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Coditional join of query using PostgreSQL

2017-05-15 Thread David G. Johnston
On Mon, May 15, 2017 at 10:02 AM, Nick Dro  wrote:

> Hi,
> I'm new to postgresql and couldn't find an answer to this situation anywhere.
> I asked this here:
> http://stackoverflow.com/questions/43984208/coditional-
> join-of-query-using-postgresql
>
> I hope there is a better solution rather than creating two separate
> functions :(
>

Generate the SQL itself in a string, then execute the string.  It's called
"Dynamic SQL".

https://www.postgresql.org/docs/9.6/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
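
A minimal PL/pgSQL sketch of the idea (the function, table, and column names
are hypothetical, loosely following the Stack Overflow example):

CREATE OR REPLACE FUNCTION fetch_rows(type_f boolean)
RETURNS SETOF main AS $$
DECLARE
    -- Base query; the optional join condition is appended below.
    q text := 'SELECT main.* FROM main JOIN tfquery ON tfquery.a = main.a';
BEGIN
    IF type_f THEN
        q := q || ' AND tfquery.d = main.d';
    END IF;
    RETURN QUERY EXECUTE q;  -- run the assembled string
END;
$$ LANGUAGE plpgsql;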

David J.


Re: [GENERAL] Insert performance and disk usage in JSON vs JSONB

2017-05-15 Thread Ronny Abraham
Here are the attachments.


From: Ronny Abraham
Sent: Monday, May 15, 2017 1:03 PM
To: 'pgsql-general@postgresql.org'
Subject: Insert performance and disk usage in JSON vs JSONB

Hello all,

I am trying to decide whether to use JSON or JSONB to store my application data.

From what I read so far about JSON vs JSONB:

Performance - JSON is faster for inserts, since it only does JSON format
verification, vs JSONB, which also converts the JSON input to a binary JSONB
representation with its metadata.
However, queries are much faster for JSONB, especially when using indexes.
Disk Usage - JSONB uses more space vs JSON; I assume this is due to the
metadata it stores in the binary representation.


But, I need to take my application use cases into consideration when making the 
decision of JSON vs JSONB:
--
My application has a data producer which sends data to a consumer, which in
turn inserts the data into the DB.
I wanted to make sure I don't run into a producer-consumer problem where my
producer generates data at a rate the consumer cannot handle.
Part of that is understanding insert times for JSON and JSONB.
In my application insert time is more critical than read time, since I do not
have many clients on the reading side, and their timing is not critical. I
also wanted to experiment with disk usage.

So I wrote two small tests in Python, one for storage and one for insert
performance.
In my test DB I created two tables, each with only one column named 'data';
in each I store a JSON/JSONB document with 10 fields.

The results of the first test (disk usage) are attached (excel sheet) - in it 
you can see a 26% overhead in JSONB over JSON.
The second test (insert performance) results are as follows (Python script
attached):

1. Inserting 1 row to JSON table, result:
QUERY PLAN
---
Insert on table_json10  (cost=0.00..0.01 rows=1 width=32) (actual 
time=0.048..0.048 rows=0 loops=1)
  ->  Result  (cost=0.00..0.01 rows=1 width=32) (actual time=0.001..0.001 
rows=1 loops=1)
Planning time: 0.035 ms
Execution time: 0.071 ms
(4 rows)


2. Inserting 1 row to JSONB table, result:
 QUERY PLAN

Insert on table_json10b  (cost=0.00..0.01 rows=1 width=32) (actual 
time=0.052..0.052 rows=0 loops=1)
  ->  Result  (cost=0.00..0.01 rows=1 width=32) (actual time=0.001..0.002 
rows=1 loops=1)
Planning time: 0.018 ms
Execution time: 0.066 ms
(4 rows)


3. Multi insert (10) to JSON table, result:
  QUERY PLAN
--
Insert on table_json10  (cost=0.00..0.12 rows=10 width=32) (actual 
time=0.045..0.045 rows=0 loops=1)
  ->  Values Scan on "*VALUES*"  (cost=0.00..0.12 rows=10 width=32) (actual 
time=0.006..0.010 rows=10 loops=1)
Planning time: 0.036 ms
Execution time: 0.072 ms
(4 rows)


4. Multi insert (10) to JSONB table, result:
  QUERY PLAN
--
Insert on table_json10b  (cost=0.00..0.12 rows=10 width=32) (actual 
time=0.029..0.029 rows=0 loops=1)
  ->  Values Scan on "*VALUES*"  (cost=0.00..0.12 rows=10 width=32) (actual 
time=0.002..0.005 rows=10 loops=1)
Planning time: 0.021 ms
Execution time: 0.043 ms
(4 rows)


4. Insert 10,000 rows to JSON, execution time (sec):
122.855001211


5. Insert 10,000 rows to JSONB, execution time (sec):
122.128999233

# END TEST

What's interesting is that inserting to JSONB is slightly faster than inserting 
to JSON.
Maybe that's because my JSON has a flat structure (no nesting), or maybe I am 
doing something else wrong?

I was just interested in some input regarding insert performance and disk usage 
in JSON vs JSONB.

Thanks!
Ronny






[Attachment: DbStorage_JsonVsJsonB.xlsx]

[Attachment: testInsertPerformance.py]

[GENERAL] Coditional join of query using PostgreSQL

2017-05-15 Thread Nick Dro

Hi,
I'm new to postgresql and couldn't find an answer to this situation anywhere.
I asked this here:
http://stackoverflow.com/questions/43984208/coditional-join-of-query-using-postgresql
I hope there is a better solution rather than creating two separate functions :(


[GENERAL] Insert performance and disk usage in JSON vs JSONB

2017-05-15 Thread Ronny Abraham
Hello all,

I am trying to decide whether to use JSON or JSONB to store my application data.

From what I read so far about JSON vs JSONB:

Performance - JSON is faster for inserts, since it only does JSON format
verification, vs JSONB, which also converts the JSON input to a binary JSONB
representation with its metadata.
However, queries are much faster for JSONB, especially when using indexes.
Disk Usage - JSONB uses more space vs JSON; I assume this is due to the
metadata it stores in the binary representation.


But, I need to take my application use cases into consideration when making the 
decision of JSON vs JSONB:
--
My application has a data producer which sends data to a consumer, which in
turn inserts the data into the DB.
I wanted to make sure I don't run into a producer-consumer problem where my
producer generates data at a rate the consumer cannot handle.
Part of that is understanding insert times for JSON and JSONB.
In my application insert time is more critical than read time, since I do not
have many clients on the reading side, and their timing is not critical. I
also wanted to experiment with disk usage.

So I wrote two small tests in Python, one for storage and one for insert
performance.
In my test DB I created two tables, each with only one column named 'data';
in each I store a JSON/JSONB document with 10 fields.
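
A sketch of the two test tables (the names are taken from the query plans
below; everything else about the real schema is assumed):

CREATE TABLE table_json10  (data json);
CREATE TABLE table_json10b (data jsonb);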

The results of the first test (disk usage) are attached (excel sheet) - in it 
you can see a 26% overhead in JSONB over JSON.
The second test (insert performance) results are as follows (Python script
attached):

1. Inserting 1 row to JSON table, result:
QUERY PLAN
---
Insert on table_json10  (cost=0.00..0.01 rows=1 width=32) (actual 
time=0.048..0.048 rows=0 loops=1)
  ->  Result  (cost=0.00..0.01 rows=1 width=32) (actual time=0.001..0.001 
rows=1 loops=1)
Planning time: 0.035 ms
Execution time: 0.071 ms
(4 rows)


2. Inserting 1 row to JSONB table, result:
 QUERY PLAN

Insert on table_json10b  (cost=0.00..0.01 rows=1 width=32) (actual 
time=0.052..0.052 rows=0 loops=1)
  ->  Result  (cost=0.00..0.01 rows=1 width=32) (actual time=0.001..0.002 
rows=1 loops=1)
Planning time: 0.018 ms
Execution time: 0.066 ms
(4 rows)


3. Multi insert (10) to JSON table, result:
  QUERY PLAN
--
Insert on table_json10  (cost=0.00..0.12 rows=10 width=32) (actual 
time=0.045..0.045 rows=0 loops=1)
  ->  Values Scan on "*VALUES*"  (cost=0.00..0.12 rows=10 width=32) (actual 
time=0.006..0.010 rows=10 loops=1)
Planning time: 0.036 ms
Execution time: 0.072 ms
(4 rows)


4. Multi insert (10) to JSONB table, result:
  QUERY PLAN
--
Insert on table_json10b  (cost=0.00..0.12 rows=10 width=32) (actual 
time=0.029..0.029 rows=0 loops=1)
  ->  Values Scan on "*VALUES*"  (cost=0.00..0.12 rows=10 width=32) (actual 
time=0.002..0.005 rows=10 loops=1)
Planning time: 0.021 ms
Execution time: 0.043 ms
(4 rows)


4. Insert 10,000 rows to JSON, execution time (sec):
122.855001211


5. Insert 10,000 rows to JSONB, execution time (sec):
122.128999233

# END TEST

What's interesting is that inserting to JSONB is slightly faster than inserting 
to JSON.
Maybe that's because my JSON has a flat structure (no nesting), or maybe I am 
doing something else wrong?

I was just interested in some input regarding insert performance and disk usage 
in JSON vs JSONB.

Thanks!
Ronny






[GENERAL] Conditional join in function

2017-05-15 Thread Jack
Hi,
I'm new to postgresql and couldn't find an answer to this situation anywhere.
I asked this here:
http://stackoverflow.com/questions/43984208/coditional-join-of-query-using-postgresql

I hope there is a better solution rather than creating two separate
functions :(



--
View this message in context: 
http://www.postgresql-archive.org/Conditional-join-in-function-tp5961715.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] How to include BIGINT-column in RUM-index sorted by timestamp

2017-05-15 Thread Andreas Joseph Krogh
Hi PostgresPro (RUM-creators).
 
With the latest version of RUM (https://github.com/postgrespro/rum), one is
able to store BIGINT as part of the index, it seems. I'm however not able to
make a query use the index without the Filter step.
 
The table is roughly like this:

CREATE TABLE email_delivery (
    received_timestamp timestamp NOT NULL,
    folder_id bigint NOT NULL REFERENCES origo_email_folder(entity_id),
    fts_all tsvector
);

I have created the index like this:

CREATE INDEX rum_idx ON origo_email_delivery USING rum
    (fts_all rum_tsvector_addon_ops, folder_id, received_timestamp)
    WITH (attach = 'received_timestamp', to = 'fts_all');


The query is:

EXPLAIN (COSTS OFF)
SELECT del.entity_id, del.received_timestamp,
       del.received_timestamp <=> '3000-01-01'::TIMESTAMP, del.folder_id
FROM email_delivery del
WHERE del.fts_all @@ to_tsquery('simple', 'andre:* & jose:*')
  AND del.folder_id IN (44965, 2470520)
ORDER BY del.received_timestamp <=> '3000-01-01'::TIMESTAMP
LIMIT 10;
 
which gives the following plan:

                                       QUERY PLAN
------------------------------------------------------------------------------------------
 Limit
   ->  Index Scan using rum_idx on email_delivery del
         Index Cond: (fts_all @@ '''andre'':* & ''jose'':*'::tsquery)
         Order By: (received_timestamp <=> '3000-01-01 00:00:00'::timestamp without time zone)
         Filter: (folder_id = ANY ('{44965,2470520}'::bigint[]))

This indicates that the folder_id column, which is part of the index, isn't
actually being used as an index condition; it is applied as a filter instead,
for some reason.
 
Can you tell me how to craft an index so that folder_id = ANY 
('{44965,2470520}'::bigint[]) is part of the Index Cond?
 
Thanks.
 
-- 
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com




Re: [GENERAL] Pattern Matching question - PG 9.6

2017-05-15 Thread matshyeq
^/testfile/client/[0-9]+/attachment/([0-9]{1,14}/master/$|unassigned/)
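
Applied as a sketch (the table and column names are guesses, since only the
dbfiddle link was posted):

SELECT *
FROM files
WHERE path ~ '^/testfile/client/[0-9]+/attachment/([0-9]{1,14}/master/$|unassigned/)';
-- Keeps the /master/ variation and the unassigned-file variations,
-- and rejects the other variants.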

Kind Regards
~Maciek
On 15 May 2017 at 06:21, Patrick B  wrote:

>
>
> 2017-05-15 16:10 GMT+12:00 David G. Johnston :
>
>> On Sunday, May 14, 2017, Patrick B  wrote:
>>
>>>
>>> Demo: http://dbfiddle.uk/?rdbms=postgres_9.6=3c3a3f870eb4d002c5b4200042b25669
>>>
>>> The rows that I should be getting are:
>>>
>>> 5   /testfile/client/10/attachment/1000/master/   10
>>>
>>> 7   /testfile/client/10/attachment/unassigned/file/1001/master   10
>>>
>>> 8   /testfile/client/10/attachment/unassigned/file/1002/master   10
>>>
>>> What am I doing wrong?
>>>
>>
>> Without you explaining why 6 and 9 are invalid it's impossible to say how
>> you should modify your regex to exclude them.  You may find positive and
>> negative look-ahead useful though.
>>
>> David J.
>>
>
>
> I thought I have already explained it. Here it goes again. Demo page is:
> http://dbfiddle.uk/?rdbms=postgres_9.6=ea61e7e1859bdb7f297f853a9dc0e3d0
>
>
> As you can see, there is a lot of variations for the same file_id (1000).
> File_id (1001/1002) is a new unassigned file, different from the others.
>
> I wanna be able to get ONLY the 'master' variation
> (/testfile/client/10/attachment/1000/master/) and the unassigned files
> variations [if any] (/testfile/client/10/attachment/unassigned/file/1001/master
> | /testfile/client/10/attachment/unassigned/file/1002/master).
>
> So on the demo above, only id IN (5,9,10) are valid for me. The SELECT
> that I used as an example is not returning ONLY the data I need;
> instead, it is returning (almost) everything.
>
>
> To summarize: I wanna use a pattern matching the only returns these rows:
>
> /testfile/client/10/attachment/1000/master/
> /testfile/client/10/attachment/unassigned/file/1001/master
> /testfile/client/10/attachment/unassigned/file/1002/master
>
>
>
> What can I do to fix it?
> Thanks
> P.
>
>
>
>