Re: [GENERAL] Fwd: I could not see any row in audit table

2017-01-25 Thread Adrian Klaver

On 01/25/2017 09:57 AM, Shailesh Singh wrote:

Dear Adrian Klaver,

I have used pg_dump to take backup
eg: pg_dump my_db > /dbbackup/audit_table.sql


To be clear, you dumped the entire database, not just the audit table, 
correct?


Meant to ask before: are you dumping/restoring between the same Postgres 
version? If so, what version would that be?



If not, what versions are we talking about, and which version of pg_dump 
did you use?




And

I have restored it using psql command

eg: psql my_test_db < /dbbackup/audit_table.sql

Before that, I had also created the necessary hstore extension for my new db
"*my_test_db*".


Again, just to be clear, you created the hstore extension in the new 
database and then restored the data, correct?





There is no error while restoring data.


The data is in the audit_table.sql file?



After restoring, when I issued select count(*) from audit.logged_actions;
it shows a zero row count.

I found something related to *toastable*, as my table uses an extended data
type size, but I am still not able to view the data after restoring.


Looking at the table schema, there are non-toastable fields in it also, 
so I would expect you would see something or get an error when you did 
the select. I don't think this is a TOAST problem.
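
One quick way to narrow this down (a sketch, using the database names and dump path from the example above) is to check whether the dump file actually contains a COPY block for the audit table, and to compare row counts on both sides:

```shell
# Does the dump file contain the COPY block (and data) for the audit table?
grep -n "COPY audit.logged_actions" /dbbackup/audit_table.sql

# Compare row counts in the source and target databases.
psql my_db      -c "SELECT count(*) FROM audit.logged_actions;"
psql my_test_db -c "SELECT count(*) FROM audit.logged_actions;"
```

If the COPY block is missing from the file, the problem is on the dump side; if it is present but the target count is zero, the restore is the place to look.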




On Wed, Jan 25, 2017 at 7:56 PM, Adrian Klaver wrote:

On 01/25/2017 03:03 AM, Shailesh Singh wrote:




Dear Group Member ,


I had configured the audit trigger for my database following the below
document url:

https://wiki.postgresql.org/wiki/Audit_trigger_91plus

Now my audit table:

CREATE TABLE audit.logged_actions (
event_id bigserial PRIMARY KEY,
schema_name text NOT NULL,
TABLE_NAME text NOT NULL,
relid oid NOT NULL,
session_user_name text,
action_tstamp_tx TIMESTAMP WITH TIME ZONE NOT NULL,
action_tstamp_stm TIMESTAMP WITH TIME ZONE NOT NULL,
action_tstamp_clk TIMESTAMP WITH TIME ZONE NOT NULL,
transaction_id BIGINT,
application_name text,
client_addr inet,
client_port INTEGER,
client_query text NOT NULL,
action CHAR(1) NOT NULL CHECK (action IN ('I','D','U', 'T')),
row_data hstore,
changed_fields hstore,
statement_only BOOLEAN NOT NULL
);


Now this table contains 50 GB of data, but when taking its backup using
pg_dump and after restoring, it shows that it has zero rows.


What was the dump command you used?

Were there any errors during the restore?



How to see the restored data?

--
With Regards!
Shailesh Singh






--
Adrian Klaver
adrian.kla...@aklaver.com 




--
With Regards!
Shailesh Singh



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tips on maintaining several pg_hba files

2017-01-25 Thread Michael Paquier
On Thu, Jan 26, 2017 at 2:36 AM, Alfredo Palhares  wrote:
> I have a PostgreSQL cluster with several ROLES that access the node
> according to its state.
>
> On the master node, I only want the roles that need to write and the admins.
> On the synchronous node I would want only roles with read access that
> require the most up-to-date data, and the admins.
> On the asynchronous node, allow roles with read access, and users to debug the
> system.
>
>
> So I will probably be templating the pg_hba.conf file since there are
> common roles between the systems.
> Do you guys have any suggestions on this? Links?
>
> Does pg_hba support to include files?

You cannot include an entire file, but it is possible to list users
and/or databases via files specified by @:
https://www.postgresql.org/docs/9.6/static/auth-pg-hba-conf.html

Particularly this bit:
Files included by @ constructs are read as lists of names, which can
be separated by either whitespace or commas. Comments are introduced
by #, just as in pg_hba.conf, and nested @ constructs are allowed.
Unless the file name following @ is an absolute path, it is taken to
be relative to the directory containing the referencing file.

So you could take advantage of that to handle your configurations on
different nodes with the same pg_hba.conf, but different users and
databases.
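
As a sketch of that approach (the file names and role names here are hypothetical), each node can carry an identical pg_hba.conf and differ only in the @-included list files:

```
# pg_hba.conf -- identical on every node
# TYPE  DATABASE  USER            ADDRESS       METHOD
host    all       @admins.txt     10.0.0.0/24   md5
host    all       @app_roles.txt  10.0.0.0/24   md5

# app_roles.txt on the master (write roles):
#   app_writer, app_batch
# app_roles.txt on the synchronous standby (readers needing fresh data):
#   app_reader_rt
# app_roles.txt on the asynchronous standby (readers + debug users):
#   app_reader, app_debug
```

That keeps the templating surface down to one small list file per node instead of the whole pg_hba.conf.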
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How does Postgres estimate the memory needed for sorting/aggregating

2017-01-25 Thread Tomas Vondra

On 01/25/2017 10:47 PM, John R Pierce wrote:

On 1/25/2017 12:59 PM, Thomas Kellerer wrote:

So here is my question: how does Postgres estimate/know the memory
needed for the aggregation? Or does it dynamically resize the memory
if the initial assumption was wrong?


my understanding is it fits as much as it can into a work_mem-sized
allocation, and if that's not enough uses temporary files and multiple
passes.



That only works for hash joins, not for hash aggregates. Hash aggregate 
is about the only operation in PostgreSQL that can cause OOM because of 
under-estimation.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: How does Postgres estimate the memory needed for sorting/aggregating

2017-01-25 Thread Thomas Kellerer

Tomas Vondra schrieb am 25.01.2017 um 22:46:

I guess this is based on the column statistics stored in pg_stats, but I
am not sure:



It is based on the average length of values in that column, yes.


Thanks for confirming that.

I assume this is taken from pg_stats.avg_width ?


I'm not sure what you mean by 'dynamically resize'. The above
decision is pretty much how planner decides whether to use hash
aggregate or group aggregate. If we estimate that the hash aggregate
will fit into work_mem, the planner will consider both possibilities.
If the estimate says hash aggregate would not fit into work_mem,
we'll only consider group aggregate, because that can work with very
little memory.

At execution time we'll only use as much memory as actually needed.
The trouble is that if we under-estimated the amount of memory,
there's no way back.


The "under-estimation" is what I am referring to with "dynamically resize".

What happens if the planner assumes 100 kB but in reality it needs 100 MB?

Thomas








--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How does Postgres estimate the memory needed for sorting/aggregating

2017-01-25 Thread John R Pierce

On 1/25/2017 12:59 PM, Thomas Kellerer wrote:
So here is my question: how does Postgres estimate/know the memory 
needed for the aggregation? Or does it dynamically resize the memory 
if the initial assumption was wrong?


my understanding is it fits as much as it can into a work_mem sized 
allocation, and if thats not enough uses temporary files and multiple 
passes.



--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How does Postgres estimate the memory needed for sorting/aggregating

2017-01-25 Thread Tomas Vondra

On 01/25/2017 09:59 PM, Thomas Kellerer wrote:

There was a question on dba.stackexchange recently:

   http://dba.stackexchange.com/a/162117/1822

That question (and the answer) deals with performance difference of a
query caused by the _declared_ length of a VARCHAR column in SQL Server
(everything else being equal - especially the actual data length)

For the curious: it does make a (big) difference in performance if you
declare varchar(100) or varchar(2000) in SQL Server - something that
really surprised me.

The difference in performance in SQL Server seems to be caused by SQL
Server's optimizer that uses the _declared_ length of a column to
estimate the memory needed for the aggregation (or sorting).

Now, we all know that there is no performance difference whatsoever for
varchar columns regardless of the declared length.

In one of the comments to that answer, the question was asked how
Postgres knows how much memory it needs to allocate to do the aggregation.

I guess this is based on the column statistics stored in pg_stats, but I
am not sure:



It is based on the average length of values in that column, yes.

We estimate the number of distinct groups produced by the aggregation, 
and multiply it by average length of the key(s). The declared maximum 
length of a column does not matter.


So if the grouping is expected to produce 1000 groups, and each key 
column is 100B on average, 100kB should be enough - but only for the 
keys. The estimate also has to include the aggregate states, which is a 
different thing.
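
The per-column numbers feeding that estimate are visible in pg_stats (a sketch; the table and column names are hypothetical):

```sql
-- avg_width is the average stored width in bytes; n_distinct > 0 is an
-- absolute count of distinct values, n_distinct < 0 a fraction of rows.
SELECT attname, avg_width, n_distinct
FROM pg_stats
WHERE schemaname = 'public'
  AND tablename  = 'orders'
  AND attname    = 'customer_id';
```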



So here is my question: how does Postgres estimate/know the memory
needed for the aggregation? Or does it dynamically resize the memory if
the initial assumption was wrong?



I'm not sure what you mean by 'dynamically resize'. The above decision 
is pretty much how planner decides whether to use hash aggregate or 
group aggregate. If we estimate that the hash aggregate will fit into 
work_mem, the planner will consider both possibilities. If the estimate 
says hash aggregate would not fit into work_mem, we'll only consider 
group aggregate, because that can work with very little memory.


At execution time we'll only use as much memory as actually needed. The 
trouble is that if we under-estimated the amount of memory, there's no 
way back.
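
The planner's choice can be observed directly with EXPLAIN (a sketch; the table is hypothetical and the exact plans depend on the statistics):

```sql
-- With generous work_mem the planner may pick HashAggregate;
-- with a tiny work_mem the same query falls back to GroupAggregate
-- (Sort + group), which can work with very little memory.
SET work_mem = '256MB';
EXPLAIN SELECT customer_id, count(*) FROM orders GROUP BY customer_id;

SET work_mem = '64kB';
EXPLAIN SELECT customer_id, count(*) FROM orders GROUP BY customer_id;
```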


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How does Postgres estimate the memory needed for sorting/aggregating

2017-01-25 Thread Thomas Kellerer

There was a question on dba.stackexchange recently:

   http://dba.stackexchange.com/a/162117/1822

That question (and the answer) deals with performance difference of a query 
caused by the _declared_ length of a VARCHAR column in SQL Server (everything 
else being equal - especially the actual data length)

For the curious: it does make a (big) difference in performance if you declare 
varchar(100) or varchar(2000) in SQL Server - something that really surprised 
me.

The difference in performance in SQL Server seems to be caused by SQL Server's 
optimizer that uses the _declared_ length of a column to estimate the memory 
needed for the aggregation (or sorting).

Now, we all know that there is no performance difference whatsoever for varchar 
columns regardless of the declared length.

In one of the comments to that answer, the question was asked how Postgres 
knows how much memory it needs to allocate to do the aggregation.

I guess this is based on the column statistics stored in pg_stats, but I am not 
sure:

So here is my question: how does Postgres estimate/know the memory needed for 
the aggregation? Or does it dynamically resize the memory if the initial 
assumption was wrong?

Thomas



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql only works with -h (even localhost)

2017-01-25 Thread ProPAAS DBA



On 01/25/2017 09:35 AM, Tom Lane wrote:

Steve Crawford  writes:

Adrian asks the correct questions. Lacking the answers to those I'm going
to venture a guess that a Unix-domain socket exists but access via
Unix-domain sockets is somehow blocked, probably by pg_hba.conf.

Actually, the more common case I've seen is that the server put the
socket file in directory X but psql (or more specifically libpq)
is looking in directory Y.  Try "show unix_socket_directories"
(or on older server versions "show unix_socket_directory") in a
successful connection, and compare to the path that psql shows when
bleating that it can't connect.

Usually this isn't operator error per se, but inconsistent defaults
between builds obtained from different sources.

regards, tom lane



Thanks Tom;

This was the issue...




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fwd: I could not see any row in audit table

2017-01-25 Thread Shailesh Singh
Dear Adrian Klaver,

I have used pg_dump to take backup
eg: pg_dump my_db > /dbbackup/audit_table.sql

And

I have restored it using psql command

eg: psql my_test_db < /dbbackup/audit_table.sql

Before that, I had also created the necessary hstore extension for my new db
"*my_test_db*".

There is no error while restoring data.

After restoring, when I issued select count(*) from audit.logged_actions;
it shows a zero row count.

I found something related to *toastable*, as my table uses an extended data
type size, but I am still not able to view the data after restoring.

On Wed, Jan 25, 2017 at 7:56 PM, Adrian Klaver 
wrote:

> On 01/25/2017 03:03 AM, Shailesh Singh wrote:
>
>>
>>
>>
>> Dear Group Member ,
>>
>>
>> I had configured the audit trigger for my database following the below
>> document url:
>>
>> https://wiki.postgresql.org/wiki/Audit_trigger_91plus
>>
>> Now my audit table:
>>
>> CREATE TABLE audit.logged_actions (
>> event_id bigserial PRIMARY KEY,
>> schema_name text NOT NULL,
>> TABLE_NAME text NOT NULL,
>> relid oid NOT NULL,
>> session_user_name text,
>> action_tstamp_tx TIMESTAMP WITH TIME ZONE NOT NULL,
>> action_tstamp_stm TIMESTAMP WITH TIME ZONE NOT NULL,
>> action_tstamp_clk TIMESTAMP WITH TIME ZONE NOT NULL,
>> transaction_id BIGINT,
>> application_name text,
>> client_addr inet,
>> client_port INTEGER,
>> client_query text NOT NULL,
>> action CHAR(1) NOT NULL CHECK (action IN ('I','D','U', 'T')),
>> row_data hstore,
>> changed_fields hstore,
>> statement_only BOOLEAN NOT NULL
>> );
>>
>>
>> Now this table contains 50 GB of data, but when taking its backup using
>> pg_dump and after restoring, it shows that it has zero rows.
>>
>
> What was the dump command you used?
>
> Were there any errors during the restore?
>
>
>>
>> How to see the restored data?
>>
>> --
>> With Regards!
>> Shailesh Singh
>>
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
With Regards!
Shailesh Singh


[GENERAL] Tips on maintaining several pg_hba files

2017-01-25 Thread Alfredo Palhares
Hello everyone,

I have a PostgreSQL cluster with several ROLES that access the node
according to its state.

On the master node, I only want the roles that need to write and the admins.
On the synchronous node I would want only roles with read access that
require the most up-to-date data, and the admins.
On the asynchronous node, allow roles with read access, and users to debug
the system.


So I will probably be templating the pg_hba.conf file since there are
common roles between the systems.
Do you guys have any suggestions on this? Links?

Does pg_hba support to include files?


Regards,
Alfredo Palhares


Re: [GENERAL] psql only works with -h (even localhost)

2017-01-25 Thread Tom Lane
Steve Crawford  writes:
> Adrian asks the correct questions. Lacking the answers to those I'm going
> to venture a guess that a Unix-domain socket exists but access via
> Unix-domain sockets is somehow blocked, probably by pg_hba.conf.

Actually, the more common case I've seen is that the server put the
socket file in directory X but psql (or more specifically libpq)
is looking in directory Y.  Try "show unix_socket_directories"
(or on older server versions "show unix_socket_directory") in a
successful connection, and compare to the path that psql shows when
bleating that it can't connect.

Usually this isn't operator error per se, but inconsistent defaults
between builds obtained from different sources.

regards, tom lane
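
That comparison looks roughly like this (a sketch; the paths are only examples — Debian-packaged builds commonly default to /var/run/postgresql while source builds default to /tmp):

```shell
# In a working connection, ask where the server puts its socket:
psql -h localhost -c "SHOW unix_socket_directories;"

# Compare with the path in the failing client's error message, e.g.:
#   could not connect to server: No such file or directory
#       Is the server running locally and accepting
#       connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

# If they differ, point libpq at the server's actual socket directory:
psql -h /var/run/postgresql
```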


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql only works with -h (even localhost)

2017-01-25 Thread Melvin Davidson
On Wed, Jan 25, 2017 at 11:07 AM, Adrian Klaver 
wrote:

> On 01/25/2017 08:02 AM, ProPAAS DBA wrote:
>
>> Hi all;
>>
>>
>> we have a client server where 'psql -h localhost' is the only way psql
>> will connect when ssh'd onto the db server. Would like to be able to
>> just run psql but not sure what the issue/fix is. Anyone have any
>> insight how to fix this?
>>
>
>
> What is the error you get if you use something other than -h localhost?
>
> What is in pg_hba.conf?
>
>
>>
>> Thanks in advance
>>
>>
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

What O/S does -h localhost point to?
What is the version of PostgreSQL?




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] psql only works with -h (even localhost)

2017-01-25 Thread Steve Crawford
Adrian asks the correct questions. Lacking the answers to those I'm going
to venture a guess that a Unix-domain socket exists but access via
Unix-domain sockets is somehow blocked, probably by pg_hba.conf.

From the psql man page: "...Not all of these options are required; there
are useful defaults. If you omit the host name, psql will connect via a
Unix-domain socket to a server on the local host, or via TCP/IP to
localhost on machines that don't have Unix-domain sockets"

Cheers,
Steve




On Wed, Jan 25, 2017 at 8:07 AM, Adrian Klaver 
wrote:

> On 01/25/2017 08:02 AM, ProPAAS DBA wrote:
>
>> Hi all;
>>
>>
>> we have a client server where 'psql -h localhost' is the only way psql
>> will connect when ssh'd onto the db server. Would like to be able to
>> just run psql but not sure what the issue/fix is. Anyone have any
>> insight how to fix this?
>>
>
>
> What is the error you get if you use something other than -h localhost?
>
> What is in pg_hba.conf?
>
>
>>
>> Thanks in advance
>>
>>
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] psql only works with -h (even localhost)

2017-01-25 Thread Adrian Klaver

On 01/25/2017 08:02 AM, ProPAAS DBA wrote:

Hi all;


we have a client server where 'psql -h localhost' is the only way psql
will connect when ssh'd onto the db server. Would like to be able to
just run psql but not sure what the issue/fix is. Anyone have any
insight how to fix this?



What is the error you get if you use something other than -h localhost?

What is in pg_hba.conf?




Thanks in advance








--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] psql only works with -h (even localhost)

2017-01-25 Thread ProPAAS DBA

Hi all;


we have a client server where 'psql -h localhost' is the only way psql 
will connect when ssh'd onto the db server. Would like to be able to 
just run psql but not sure what the issue/fix is. Anyone have any 
insight how to fix this?



Thanks in advance





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fwd: I could not see any row in audit table

2017-01-25 Thread Adrian Klaver

On 01/25/2017 03:03 AM, Shailesh Singh wrote:




Dear Group Member ,


I had configured the audit trigger for my database following the below
document url:

https://wiki.postgresql.org/wiki/Audit_trigger_91plus

Now my audit table:

CREATE TABLE audit.logged_actions (
event_id bigserial PRIMARY KEY,
schema_name text NOT NULL,
TABLE_NAME text NOT NULL,
relid oid NOT NULL,
session_user_name text,
action_tstamp_tx TIMESTAMP WITH TIME ZONE NOT NULL,
action_tstamp_stm TIMESTAMP WITH TIME ZONE NOT NULL,
action_tstamp_clk TIMESTAMP WITH TIME ZONE NOT NULL,
transaction_id BIGINT,
application_name text,
client_addr inet,
client_port INTEGER,
client_query text NOT NULL,
action CHAR(1) NOT NULL CHECK (action IN ('I','D','U', 'T')),
row_data hstore,
changed_fields hstore,
statement_only BOOLEAN NOT NULL
);


Now this table contains 50 GB of data, but when taking its backup using
pg_dump and after restoring, it shows that it has zero rows.


What was the dump command you used?

Were there any errors during the restore?




How to see the restored data?

--
With Regards!
Shailesh Singh







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fwd: I could not see any row in audit table

2017-01-25 Thread Raymond O'Donnell

On 25/01/17 11:03, Shailesh Singh wrote:




Dear Group Member ,


I had configured the audit trigger for my database following the below
document url:

https://wiki.postgresql.org/wiki/Audit_trigger_91plus

Now my audit table:

CREATE TABLE audit.logged_actions (
event_id bigserial PRIMARY KEY,
schema_name text NOT NULL,
TABLE_NAME text NOT NULL,
relid oid NOT NULL,
session_user_name text,
action_tstamp_tx TIMESTAMP WITH TIME ZONE NOT NULL,
action_tstamp_stm TIMESTAMP WITH TIME ZONE NOT NULL,
action_tstamp_clk TIMESTAMP WITH TIME ZONE NOT NULL,
transaction_id BIGINT,
application_name text,
client_addr inet,
client_port INTEGER,
client_query text NOT NULL,
action CHAR(1) NOT NULL CHECK (action IN ('I','D','U', 'T')),
row_data hstore,
changed_fields hstore,
statement_only BOOLEAN NOT NULL
);


Now this table contains 50 GB of data, but when taking its backup using
pg_dump and after restoring, it shows that it has zero rows.


How did you restore it?

What is showing that there are zero rows? Did you do "select count(*) 
from ...", or something else?


Ray.

--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Fwd: I could not see any row in audit table

2017-01-25 Thread Shailesh Singh
Dear Group Member ,


I had configured the audit trigger for my database following the below
document url:




https://wiki.postgresql.org/wiki/Audit_trigger_91plus

Now my audit table:

CREATE TABLE audit.logged_actions (
event_id bigserial PRIMARY KEY,
schema_name text NOT NULL,
TABLE_NAME text NOT NULL,
relid oid NOT NULL,
session_user_name text,
action_tstamp_tx TIMESTAMP WITH TIME ZONE NOT NULL,
action_tstamp_stm TIMESTAMP WITH TIME ZONE NOT NULL,
action_tstamp_clk TIMESTAMP WITH TIME ZONE NOT NULL,
transaction_id BIGINT,
application_name text,
client_addr inet,
client_port INTEGER,
client_query text NOT NULL,
action CHAR(1) NOT NULL CHECK (action IN ('I','D','U', 'T')),
row_data hstore,
changed_fields hstore,
statement_only BOOLEAN NOT NULL);


Now this table contains 50 GB of data, but when taking its backup using
pg_dump and after restoring, it shows that it has zero rows.


How to see the restored data?

-- 
With Regards!
Shailesh Singh


Re: [GENERAL] What is the purpose of PostGIS on PostgreSQL?

2017-01-25 Thread Gavin Flower

On 25/01/17 20:14, Johann Spies wrote:



On 25 January 2017 at 08:32, Gavin Flower wrote:





What is 'shapefile'?

I don't recall ever coming across that term!


See https://en.wikipedia.org/wiki/Shapefile

Regards
Johann

--
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)


Thanks!

This might prove very useful for a project of mine that has almost 
nothing to do with GIS!!!



Cheers,
Gavin



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Searching array for multiple items

2017-01-25 Thread Andreas Joseph Krogh
På onsdag 25. januar 2017 kl. 09:47:56, skrev Thomas Kellerer <
spam_ea...@gmx.net >:
Alex Magnum schrieb am 25.01.2017 um 09:29:
 > I can search an array with 1 = ANY('{1,3,4,7}'::int[])
 >
 > I need to check for one or multiple items in the array.
 >
 > e.g. '1,7,3' = ANY('{1,3,4,7}'::int[])
 >
 > I do need to check if
 > a) all items exist in the array

 You can use the contains (or is contained) operator for that:

    array[1,7,3] <@ array[1,3,4,7] is true

    array[1,7,10] <@ array[1,3,4,7] is false

 > b) at least one item exists in the array

 You can use the "overlaps" operator:

    array[1,7,3] && array[1,3,4,7] returns true

    array[10,11] && array[1,3,4,7] returns false

 > Does the order of left and right side matter?

 For the contains or (is contained) operator the order matters, for the 
overlaps operator it does not.

 For more details see 
https://www.postgresql.org/docs/current/static/functions-array.html

 Thomas
 
Can you elaborate on index usage? I.e. will the suggested queries above utilize 
indexes (gist?). If so, can you give an example with the definition of the index 
and the explain-plan?
 
It would be interesting to see how this performs vs. contrib/intarray.
 
Thanks.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] Searching array for multiple items

2017-01-25 Thread Thomas Kellerer
Alex Magnum schrieb am 25.01.2017 um 09:29:
> I can search an array with 1 = ANY('{1,3,4,7}'::int[])
> 
> I need to check for one or multiple items in the array.
> 
> e.g. '1,7,3' = ANY('{1,3,4,7}'::int[])
> 
> I do need to check if
> a) all items exist in the array

You can use the contains (or is contained) operator for that:

   array[1,7,3] <@ array[1,3,4,7] is true

   array[1,7,10] <@ array[1,3,4,7] is false

> b) at least one item exists in the array

You can use the "overlaps" operator: 

   array[1,7,3] && array[1,3,4,7] returns true

   array[10,11] && array[1,3,4,7] returns false

> Does the order of left and right side matter?

For the contains or (is contained) operator the order matters, for the overlaps 
operator it does not.

For more details see 
https://www.postgresql.org/docs/current/static/functions-array.html
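
For index support: the default GIN operator class for arrays serves both operators (a sketch, with a hypothetical table t holding an int[] column a):

```sql
CREATE TABLE t (id int, a int[]);
CREATE INDEX t_a_gin ON t USING gin (a);

-- "contains": all listed items are present in the row's array
SELECT * FROM t WHERE a @> ARRAY[1,7,3];
-- "overlaps": at least one listed item is present
SELECT * FROM t WHERE a && ARRAY[1,7,3];
```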

Thomas



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Searching array for multiple items

2017-01-25 Thread Oleg Bartunov
On Wed, Jan 25, 2017 at 11:29 AM, Alex Magnum  wrote:

> Hi,
> I can search an array with 1 = ANY('{1,3,4,7}'::int[])
>
> I need to check for one or multiple items in the array.
>
> e.g. '1,7,3' = ANY('{1,3,4,7}'::int[])
>
> I do need to check if
> a) all items exist in the array
> b) at least one item exists in the array
>
> Is there an operator that allows me to do these two?
> Does the order of left and right side matter?
> Right now I have a small function but I guess there is a more efficient
> way.
>

Look on our contrib/intarray
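
A sketch of what that looks like (table t and column a are hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS intarray;
CREATE TABLE t (id int, a int[]);
-- gin__int_ops comes from intarray and indexes @>, <@, &&, and @@ queries
CREATE INDEX t_a_intarray ON t USING gin (a gin__int_ops);

-- all of 1, 7, 3 present:
SELECT * FROM t WHERE a @> '{1,7,3}'::int[];
-- at least one of 1, 7, 3 present, via intarray's query_int syntax:
SELECT * FROM t WHERE a @@ '1|7|3'::query_int;
```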


>
> Thanks for any help.
>
> A
>
>
>


[GENERAL] Searching array for multiple items

2017-01-25 Thread Alex Magnum
Hi,
I can search an array with 1 = ANY('{1,3,4,7}'::int[])

I need to check for one or multiple items in the array.

e.g. '1,7,3' = ANY('{1,3,4,7}'::int[])

I do need to check if
a) all items exist in the array
b) at least one item exists in the array

Is there an operator that allows me to do these two?
Does the order of left and right side matter?
Right now I have a small function but I guess there is a more efficient way.

Thanks for any help.

A