Re: [GENERAL] Row level security policy - calling function for right hand side value of 'in' in using_expression

2017-02-02 Thread Tom Lane
Jong-won Choi  writes:
> I have a RLS policy definition like:
>  CREATE POLICY promoter_policy ON Agency
>  USING (promoter in build_valid_promoter_list())
>  WITH CHECK (promoter in build_valid_promoter_list());

That's failing basic SQL expression syntax: the RHS of "IN" has
to be a parenthesized sub-select or array value.  You'd have better
luck with (promoter in (select * from build_valid_promoter_list()))
... syntax-wise, at least.  I'm not sure if we allow sub-selects
in RLS conditions.

Personally I'd write that more like
USING (check_valid_promoter(promoter))
with that function being defined in the obvious way.  There's little
reason to enumerate the entire set of valid promoters if you only
need to find out whether one specific value is one.
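Roughly, the two shapes would be something like this (untested sketch):

CREATE POLICY promoter_policy ON Agency
    USING (promoter IN (SELECT * FROM build_valid_promoter_list()))
    WITH CHECK (promoter IN (SELECT * FROM build_valid_promoter_list()));

-- or, with a boolean helper:
CREATE FUNCTION check_valid_promoter(p BIGINT) RETURNS boolean
    LANGUAGE sql STABLE
    AS $$ SELECT p IN (SELECT * FROM build_valid_promoter_list()) $$;

CREATE POLICY promoter_policy ON Agency
    USING (check_valid_promoter(promoter))
    WITH CHECK (check_valid_promoter(promoter));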

regards, tom lane




[GENERAL] Row level security policy - calling function for right hand side value of 'in' in using_expression

2017-02-02 Thread Jong-won Choi

I have an RLS policy definition like:

CREATE POLICY promoter_policy ON Agency
USING (promoter in build_valid_promoter_list())
WITH CHECK (promoter in build_valid_promoter_list());


The build_valid_promoter_list function definition is:

CREATE OR REPLACE FUNCTION build_valid_promoter_list() RETURNS TABLE(id BIGINT)
    LANGUAGE plpgsql
AS $$
DECLARE
    promoter_id BIGINT;
    acct_role TEXT;
BEGIN
    SELECT promoter, role INTO promoter_id, acct_role
    FROM PromoterAccount
    WHERE oid = substring(current_setting('ts.promoter',true), 26)::BIGINT;

    IF acct_role = 'agency' THEN
        RETURN QUERY SELECT UNNEST(customers) FROM Agency WHERE promoter = promoter_id;
    ELSE
        RETURN QUERY SELECT promoter_id;
    END IF;
END
$$;



And I have one problem and one concern:

- When I try to create the policy using the first code fragment, I got
'ERROR:  syntax error at or near "build_valid_promoter_list"'. I am
wondering what the reason is and how to fix it.


- Ideally, it would be great if the function build_valid_promoter_list()
got called only once and RLS internally used its result as a constant value.
Is this possible?




I am very new to RLS; any hints, opinions, and fixes will be greatly
appreciated.




Thanks

- Jong-won




Re: [GENERAL] Average - Pg 9.2

2017-02-02 Thread David G. Johnston
On Thursday, February 2, 2017, Patrick B  wrote:

> Hi guys,
>
> I've got a table which has id and created date columns.
>
> I want to get the average of inserted rows monthly.
> How can I get this data?
>
> This query is not working as it is showing me same data in both columns.
>
> select created_date,
> AVG(id)
> OVER(ORDER BY created_date) AS avr from test
>
> http://sqlfiddle.com/#!15/3289b/1
>
>
Maybe...see GROUP BY...
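A rough sketch of that approach, against the test table from the fiddle
(columns id and created_date):

select date_trunc('month', created_date) as month, count(*) as inserted
from test
group by 1
order by 1;

-- and the overall average per month:
select avg(inserted)
from (select date_trunc('month', created_date) as month, count(*) as inserted
      from test
      group by 1) m;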

David J.


[GENERAL] Average - Pg 9.2

2017-02-02 Thread Patrick B
Hi guys,

I've got a table which has id and created_date columns.

I want to get the average number of rows inserted per month.
How can I get this data?

This query is not working, as it is showing me the same data in both columns.

select created_date,
AVG(id)
OVER(ORDER BY created_date) AS avr from test

http://sqlfiddle.com/#!15/3289b/1

Thanks!
Patrick


Re: [GENERAL] Synchronous Commit, WAL archiving and statement_timeout

2017-02-02 Thread Michael Paquier
On Fri, Feb 3, 2017 at 2:15 AM, JP Jacoupy  wrote:
> I expected the statement to timeout because the synchronous_commit wouldn't
> work since the Slave is down while rollbacking on the Master.

Queries stuck because of synchronous replication are already
committed locally on the primary; they are waiting for the standby
to confirm that the transaction has been committed there as well,
before sending confirmation back to the client. So statement_timeout
has no effect in such situations.
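(As an aside, and only a sketch: if the application must never block when no
standby is reachable, the wait itself can be switched off per session with

SET synchronous_commit = local;

which makes the commit wait only for the local WAL flush.)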
-- 
Michael




Re: [GENERAL] Synchronous Commit, WAL archiving and statement_timeout

2017-02-02 Thread Adrian Klaver

On 02/02/2017 09:15 AM, JP Jacoupy wrote:


Hello,

I noticed something strange and I would like to understand what's
happening.

I have the following environment:
  - 2 PostgreSQL instance running in hot-standby with synchronous commit
activated.


There have been many changes in replication over the years/versions, so
it would be helpful to know what Postgres version you are using.



(further called Master & Slave)
  - The archiving of the WAL files is activated on the master running
every 5 minutes
  - Slave is down

I set the statement_timeout inside my ~/.psqlrc:
$ cat ~/.psqlrc
set statement_timeout = 1;
commit;

When running an UPDATE statement (via psql) on the master, it hangs
(psql seems to
wait a response from the slave) ignoring any value I set in my .psqlrc
and the update is
done and written on the Master. Furthermore if I try (doing the same
things in a small
script with a timeout on the call to PQexec) to make a call to PQcancel
it does nothing
on the Master.

I expected the statement to timeout because the synchronous_commit
wouldn't work


https://www.postgresql.org/docs/9.4/static/warm-standby.html#SYNCHRONOUS-REPLICATION

"25.2.8.3. Planning for High Availability

Commits made when synchronous_commit is set to on or remote_write will 
wait until the synchronous standby responds. The response may never 
occur if the last, or only, standby should crash."



since the Slave is down while rollbacking on the Master.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pgbouncer increase pool_size, reload does not work

2017-02-02 Thread Adrian Klaver

On 02/02/2017 03:27 AM, alexanderfelipewo wrote:

hello!

i have a question for pgbouncer in case someone has faced this and there is
a solution available.

i have a db with pgbouncer where the pool_size is set to 50 (i ve noticed
this using both session/transaction modes). Sometimes there are waiting
sessions b/c more than 50 want to execute sth at once and usually each query
is not so fast. So let's say i want to increase the pool_size from 50 to 70.
I change the config file and then login to pgbouncer where i run a
'reload;'. Now when i check the 'show config' i see the new value in the
'default_pool_size' parameter.

BUT

the problem is that the amount of servers is still 50 and does not go up to
70. At the same time there are clients in a 'waiting' state. If i do a
restart then it will work and the servers will be able to go up to 70.

I ve tried these in different databases and it doesnt happen all the time
but it does happen pretty often.
Has anyone else seen this? if so, is there a solution (except restarting) or
explanation?


Would this work?:

https://pgbouncer.github.io/faq.html#how-to-upgrade-pgbouncer-without-dropping-connections




thank you in advance








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pgbouncer increase pool_size, reload does not work

2017-02-02 Thread Jehan-Guillaume de Rorthais
On Thu, 2 Feb 2017 04:27:45 -0700 (MST)
alexanderfelipewo  wrote:

> hello!
> 
> i have a question for pgbouncer in case someone has faced this and there is
> a solution available.
> 
> i have a db with pgbouncer where the pool_size is set to 50 (i ve noticed
> this using both session/transaction modes). Sometimes there are waiting
> sessions b/c more than 50 want to execute sth at once and usually each query
> is not so fast. So let's say i want to increase the pool_size from 50 to 70.
> I change the config file and then login to pgbouncer where i run a
> 'reload;'. Now when i check the 'show config' i see the new value in the
> 'default_pool_size' parameter. 
> 
> BUT
> 
> the problem is that the amount of servers is still 50 and does not go up to
> 70. At the same time there are clients in a 'waiting' state. If i do a
> restart then it will work and the servers will be able to go up to 70.
> 
> I ve tried these in different databases and it doesnt happen all the time
> but it does happen pretty often. 
> Has anyone else seen this? if so, is there a solution (except restarting) or
> explanation?

That's because internally, it seems each pool (pair of role/db) is set up when
it is created, e.g. during the very first connection.

Even if the default pool_size can be changed with a simple reload,
you will have to wait for your pool to be fully deallocated before it
can be recreated with the new pool_size value.

I didn't dig too much into the code; maybe a core hacker of pgbouncer can
provide some better details and tips about this.
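For what it's worth, the behaviour can be watched from the admin console
(a sketch, assuming it is reachable, e.g. psql -p 6432 -U pgbouncer pgbouncer):

SHOW CONFIG;    -- default_pool_size reflects the reloaded value
SHOW POOLS;     -- cl_active / cl_waiting / sv_active per pool
SHOW SERVERS;   -- the server connections actually open behind each pool
RELOAD;         -- re-read the configuration file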




[GENERAL] Synchronous Commit, WAL archiving and statement_timeout

2017-02-02 Thread JP Jacoupy
Hello,

I noticed something strange and I would like to understand what's happening.

I have the following environment:
- 2 PostgreSQL instances running in hot-standby with synchronous commit
activated (further called Master & Slave)
- The archiving of the WAL files is activated on the master, running every
5 minutes
- Slave is down

I set the statement_timeout inside my ~/.psqlrc:
$ cat ~/.psqlrc
set statement_timeout = 1;
commit;

When running an UPDATE statement (via psql) on the master, it hangs (psql
seems to wait for a response from the slave), ignoring any value I set in my
.psqlrc, and the update is done and written on the Master. Furthermore, if I
try to make a call to PQcancel (doing the same thing in a small script with a
timeout on the call to PQexec), it does nothing on the Master.

I expected the statement to time out, because synchronous_commit wouldn't work
since the Slave is down, and to be rolled back on the Master.

[GENERAL] pgbouncer increase pool_size, reload does not work

2017-02-02 Thread alexanderfelipewo
hello!

I have a question about pgbouncer, in case someone has faced this and there is
a solution available.

I have a db with pgbouncer where the pool_size is set to 50 (I've noticed
this using both session and transaction modes). Sometimes there are waiting
sessions because more than 50 want to execute something at once, and usually
each query is not so fast. So let's say I want to increase the pool_size from
50 to 70. I change the config file and then log in to pgbouncer, where I run a
'reload;'. Now when I check 'show config' I see the new value in the
'default_pool_size' parameter.

BUT

the problem is that the amount of servers is still 50 and does not go up to
70. At the same time there are clients in a 'waiting' state. If I do a
restart then it will work and the servers will be able to go up to 70.

I've tried this on different databases and it doesn't happen all the time,
but it does happen pretty often.
Has anyone else seen this? If so, is there a solution (except restarting) or
an explanation?


thank you in advance







Re: [SPAM] [GENERAL] AD(Active Directory) groups concepts in postgres

2017-02-02 Thread John R Pierce

On 2/2/2017 12:13 PM, PAWAN SHARMA wrote:
> For testing i have created
>
> 1: create user user1;
> 2: create user user2;
> 3: create group dba_group;
> 4: grant dba_group to user1;
> 5: ALTER group dba_group CREATEDB CREATEROLE;
>
> but still user1 don't have createdb and createrole privilege, even he
> is assigned to dba_group which having both the privileges..?


those sorts of attributes don't inherit with role membership; role
membership affects table access rights and such. You have read the
documentation on this, yes?  See the note on
https://www.postgresql.org/docs/current/static/sql-createrole.html where
it says...


   The INHERIT attribute governs inheritance of grantable privileges
   (that is, access privileges for database objects and role
   memberships). It does not apply to the special role attributes set
   by CREATE ROLE and ALTER ROLE. For example, being a member of a role
   with CREATEDB privilege does not immediately grant the ability to
   create databases, even if INHERIT is set; it would be necessary to
   become that role via SET ROLE before creating a database.
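So, roughly (a sketch using the roles from the earlier message; somedb is
made up):

ALTER ROLE dba_group CREATEDB CREATEROLE;  -- attributes live on the role itself
GRANT dba_group TO user1;
-- then, connected as user1:
SET ROLE dba_group;
CREATE DATABASE somedb;
RESET ROLE;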



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Fastest simple key-value store, multiple writers, like Redis?

2017-02-02 Thread Nicolas Paris
On 2 February 2017 at 20:00, Rob Nikander wrote:
> Hi,
> 
> I'm working on a project with multiple different data storage backends. I'd
> like to consolidate and use Postgres for more things. In one new situation
> we're starting to use Redis, thinking it will perform better than Postgres for
> a table that is updated a lot by concurrent background jobs. 
> 
> I'm skeptical of no-sql stuff for various reasons. I'm wondering what PG
> experts think -- is there is a way to configure Postgres to handle a table
> differently, so that it could compete with Redis? Or are there some workloads
> where it is definitely better to use an alternative data store?
> 
> This table will have a few million rows, five small columns. Rows will be
> updated, read, or inserted 5-10 million times a day, by concurrent processes.
> It operates like a key-value store in that most selects will be getting one
> row, and maybe updating that row. Ideally these processes could work without
> stepping on each other's toes and competing for locks. 
> 
> Rob
> 
> 
Hi!

While I have found existing tools for benchmarking both postgres and
redis (e.g. HammerDB), AFAIK they are not compatible with custom
queries.
I wonder if such a procedure would give some precise insight:

For each database kind:
1. ask the community to optimise your design
2. create your optimised table
3. populate the database
4. write a scripting program with both postgres/redis connectors, able
to run concurrent queries (python?)
5. write methods to run queries (insert/update) for both database kinds
6. get the timing
7. return to 1. or next
8. give your feedback to both communities




Re: [GENERAL] Fastest simple key-value store, multiple writers, like Redis?

2017-02-02 Thread Khalil Khamlichi
You can keep using redis and use an FDW to access it as if it were a postgres
table.
It reads from and writes to redis, and can present redis data to you as
regular tables.
I never really had to choose between redis and postgres. Postgres was
always for safety and persistence, and redis for sharing data and for
providing realtime services with fast access to data.
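A rough sketch of what that setup usually looks like (assuming the redis_fdw
extension is installed; option names may differ between versions, so treat
this as illustrative only):

CREATE EXTENSION redis_fdw;

CREATE SERVER redis_server
    FOREIGN DATA WRAPPER redis_fdw
    OPTIONS (address '127.0.0.1', port '6379');

CREATE USER MAPPING FOR PUBLIC
    SERVER redis_server
    OPTIONS (password 'secret');

CREATE FOREIGN TABLE redis_kv (key text, val text)
    SERVER redis_server
    OPTIONS (database '0');

SELECT val FROM redis_kv WHERE key = 'some_key';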


On Feb 2, 2017 7:02 PM, "Rob Nikander"  wrote:

> Hi,
>
> I'm working on a project with multiple different data storage backends.
> I'd like to consolidate and use Postgres for more things. In one new
> situation we're starting to use Redis, thinking it will perform better than
> Postgres for a table that is updated a lot by concurrent background jobs.
>
> I'm skeptical of no-sql stuff for various reasons. I'm wondering what PG
> experts think -- is there is a way to configure Postgres to handle a table
> differently, so that it could compete with Redis? Or are there some
> workloads where it is definitely better to use an alternative data store?
>
> This table will have a few million rows, five small columns. Rows will be
> updated, read, or inserted 5-10 million times a day, by concurrent
> processes. It operates like a key-value store in that most selects will be
> getting one row, and maybe updating that row. Ideally these processes could
> work without stepping on each other's toes and competing for locks.
>
> Rob
>
>
>


[GENERAL] Re: PSQL 9.5 select for update locks too many rows when using numeric instead of int

2017-02-02 Thread Sfiligoi, Igor
Uhm... maybe I misinterpreted the results.

Looking more closely, the root cause seems to be that the query planner is not
using the index, resorting to a seq scan instead.

OK... that makes more sense.
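(If the parameter has to stay NUMERIC for some reason, an explicit cast in the
statement should let the planner use the index again.) Untested sketch:

PREPARE fooplan6 (NUMERIC) AS
    SELECT DI.object_status, DR.replica_status, DR.replication_id
    FROM MCAT_DATA_INFO DI, MCAT_DATA_REPLICA DR
    WHERE DI.true_data_id = DR.data_id AND DI.true_data_id = $1::bigint
    FOR UPDATE;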

Sorry for the bogus email.

Igor

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Sfiligoi, Igor
Sent: Thursday, February 02, 2017 1:22 PM
To: pgsql-general@postgresql.org
Subject: -EXT-[GENERAL] PSQL 9.5 select for update locks too many rows when 
using numeric instead of int

Dear PSQL team.

I just found a weird problem.

When I pass a numeric type to a select for update statement, it locks loads of 
rows, instead of a single one!
See explains below.

Is this a known bug (in 9.5)?
Any chance it was fixed in a more recent release?

Thanks,
  Igor

Note: My table has about 200M rows.
true_data_id is of type bigint.

mcatdb=> PREPARE fooplan3 (NUMERIC) AS
    SELECT DI.object_status, DR.replica_status, DR.replication_id
    FROM MCAT_DATA_INFO DI, MCAT_DATA_REPLICA DR
    WHERE DI.true_data_id = DR.data_id AND DI.true_data_id = $1 FOR UPDATE;

mcatdb=> explain analyze EXECUTE fooplan3(209390104);
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 LockRows  (cost=0.57..16852579.49 rows=1036721 width=32) (actual time=233942.206..254040.547 rows=1 loops=1)
   ->  Nested Loop  (cost=0.57..16842212.28 rows=1036721 width=32) (actual time=233942.171..254040.505 rows=1 loops=1)
         ->  Seq Scan on mcat_data_info di  (cost=0.00..9867006.22 rows=1037098 width=22) (actual time=233942.109..254040.419 rows=1 loops=1)
               Filter: ((true_data_id)::numeric = '209390104'::numeric)
               Rows Removed by Filter: 207368796
         ->  Index Scan using pkey_data_replica on mcat_data_replica dr  (cost=0.57..6.72 rows=1 width=26) (actual time=0.047..0.052 rows=1 loops=1)
               Index Cond: (data_id = di.true_data_id)
 Execution time: 254040.632 ms

mcatdb=> PREPARE fooplan4 (INT) AS
    SELECT DI.object_status, DR.replica_status, DR.replication_id
    FROM MCAT_DATA_INFO DI, MCAT_DATA_REPLICA DR
    WHERE DI.true_data_id = DR.data_id AND DI.true_data_id = $1 FOR UPDATE;

mcatdb=> explain analyze EXECUTE fooplan4(209390104);
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 LockRows  (cost=1.14..17.20 rows=1 width=32) (actual time=0.307..0.318 rows=1 loops=1)
   ->  Nested Loop  (cost=1.14..17.19 rows=1 width=32) (actual time=0.232..0.243 rows=1 loops=1)
         ->  Index Scan using idx_0_data_info on mcat_data_info di  (cost=0.57..8.59 rows=1 width=22) (actual time=0.193..0.197 rows=1 loops=1)
               Index Cond: (true_data_id = 209390104)
         ->  Index Scan using pkey_data_replica on mcat_data_replica dr  (cost=0.57..8.59 rows=1 width=26) (actual time=0.032..0.039 rows=1 loops=1)
               Index Cond: (data_id = 209390104)
 Execution time: 0.420 ms

mcatdb=> PREPARE fooplan5 (BIGINT) AS
    SELECT DI.object_status, DR.replica_status, DR.replication_id
    FROM MCAT_DATA_INFO DI, MCAT_DATA_REPLICA DR
    WHERE DI.true_data_id = DR.data_id AND DI.true_data_id = $1 FOR UPDATE;

mcatdb=> explain analyze EXECUTE fooplan5(209390104);
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 LockRows  (cost=1.14..17.20 rows=1 width=32) (actual time=0.316..0.347 rows=1 loops=1)
   ->  Nested Loop  (cost=1.14..17.19 rows=1 width=32) (actual time=0.252..0.283 rows=1 loops=1)
         ->  Index Scan using idx_0_data_info on mcat_data_info di  (cost=0.57..8.59 rows=1 width=22) (actual time=0.042..0.059 rows=1 loops=1)
               Index Cond: (true_data_id = '209390104'::bigint)
         ->  Index Scan using pkey_data_replica on mcat_data_replica dr  (cost=0.57..8.59 rows=1 width=26) (actual time=0.199..0.212 rows=1 loops=1)
               Index Cond: (data_id = '209390104'::bigint)
 Execution time: 0.443 ms
(7 rows)




[GENERAL] PSQL 9.5 select for update locks too many rows when using numeric instead of int

2017-02-02 Thread Sfiligoi, Igor
Dear PSQL team.

I just found a weird problem.

When I pass a numeric type to a select for update statement, it locks loads of 
rows, instead of a single one!
See explains below.

Is this a known bug (in 9.5)?
Any chance it was fixed in a more recent release?

Thanks,
  Igor

Note: My table has about 200M rows.
true_data_id is of type bigint.

mcatdb=> PREPARE fooplan3 (NUMERIC) AS
    SELECT DI.object_status, DR.replica_status, DR.replication_id
    FROM MCAT_DATA_INFO DI, MCAT_DATA_REPLICA DR
    WHERE DI.true_data_id = DR.data_id AND DI.true_data_id = $1 FOR UPDATE;

mcatdb=> explain analyze EXECUTE fooplan3(209390104);
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 LockRows  (cost=0.57..16852579.49 rows=1036721 width=32) (actual time=233942.206..254040.547 rows=1 loops=1)
   ->  Nested Loop  (cost=0.57..16842212.28 rows=1036721 width=32) (actual time=233942.171..254040.505 rows=1 loops=1)
         ->  Seq Scan on mcat_data_info di  (cost=0.00..9867006.22 rows=1037098 width=22) (actual time=233942.109..254040.419 rows=1 loops=1)
               Filter: ((true_data_id)::numeric = '209390104'::numeric)
               Rows Removed by Filter: 207368796
         ->  Index Scan using pkey_data_replica on mcat_data_replica dr  (cost=0.57..6.72 rows=1 width=26) (actual time=0.047..0.052 rows=1 loops=1)
               Index Cond: (data_id = di.true_data_id)
 Execution time: 254040.632 ms

mcatdb=> PREPARE fooplan4 (INT) AS
    SELECT DI.object_status, DR.replica_status, DR.replication_id
    FROM MCAT_DATA_INFO DI, MCAT_DATA_REPLICA DR
    WHERE DI.true_data_id = DR.data_id AND DI.true_data_id = $1 FOR UPDATE;

mcatdb=> explain analyze EXECUTE fooplan4(209390104);
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 LockRows  (cost=1.14..17.20 rows=1 width=32) (actual time=0.307..0.318 rows=1 loops=1)
   ->  Nested Loop  (cost=1.14..17.19 rows=1 width=32) (actual time=0.232..0.243 rows=1 loops=1)
         ->  Index Scan using idx_0_data_info on mcat_data_info di  (cost=0.57..8.59 rows=1 width=22) (actual time=0.193..0.197 rows=1 loops=1)
               Index Cond: (true_data_id = 209390104)
         ->  Index Scan using pkey_data_replica on mcat_data_replica dr  (cost=0.57..8.59 rows=1 width=26) (actual time=0.032..0.039 rows=1 loops=1)
               Index Cond: (data_id = 209390104)
 Execution time: 0.420 ms

mcatdb=> PREPARE fooplan5 (BIGINT) AS
    SELECT DI.object_status, DR.replica_status, DR.replication_id
    FROM MCAT_DATA_INFO DI, MCAT_DATA_REPLICA DR
    WHERE DI.true_data_id = DR.data_id AND DI.true_data_id = $1 FOR UPDATE;

mcatdb=> explain analyze EXECUTE fooplan5(209390104);
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 LockRows  (cost=1.14..17.20 rows=1 width=32) (actual time=0.316..0.347 rows=1 loops=1)
   ->  Nested Loop  (cost=1.14..17.19 rows=1 width=32) (actual time=0.252..0.283 rows=1 loops=1)
         ->  Index Scan using idx_0_data_info on mcat_data_info di  (cost=0.57..8.59 rows=1 width=22) (actual time=0.042..0.059 rows=1 loops=1)
               Index Cond: (true_data_id = '209390104'::bigint)
         ->  Index Scan using pkey_data_replica on mcat_data_replica dr  (cost=0.57..8.59 rows=1 width=26) (actual time=0.199..0.212 rows=1 loops=1)
               Index Cond: (data_id = '209390104'::bigint)
 Execution time: 0.443 ms
(7 rows)




Re: [SPAM] [GENERAL] AD(Active Directory) groups concepts in postgres

2017-02-02 Thread PAWAN SHARMA
On Fri, Feb 3, 2017 at 12:47 AM, John R Pierce  wrote:

> On 2/2/2017 11:09 AM, PAWAN SHARMA wrote:
>
>
> 1: Create User
> 2:Create group
> 3:Alter Group Add/Remove User's..
>
>
> create user user1;
> create user user2;
> create group group1;
> grant group1 to user1, user2;
>
>
> in fact in postgres, both users and groups are roles, the only distinction
> is in how they are used.   CREATE USER  is exactly the same as CREATE
> ROLE  WITH LOGIN;
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>

Hi John,

For testing I have created:

1: create user user1;
2: create user user2;
3: create group dba_group;
4: grant dba_group to user1;
5: ALTER group dba_group CREATEDB CREATEROLE;


but user1 still doesn't have the CREATEDB and CREATEROLE privileges, even
though he is assigned to dba_group, which has both privileges?


Consider pgmodeler - Re: [GENERAL] Data Modeling Tools - Version specific to Postgres

2017-02-02 Thread James Bullock
On 02/01/2017 01:35 PM, Greg Slawek wrote:
> 
> Can anyone recommend a data modeling tool (preferably Mac OSX compatible)?
> 
> I would like to be sure it can export version specific SQL code (ie 9.1 vs 
> 9.4)
> 


I'm very happy with pgmodeler, which I picked up about 6-8 weeks ago. It's an
open source / paid bundle Qt-based app (so in C++) which I'm running on OpenSuSE
42.2 / KDE. See the web site of the same name, and the associated GitHub project.

As I recall, "version specific" DDL generation is a backlog item or partially
implemented. Overall, there are some gaps in function compared to what I'd
like, and occasional fit-and-finish issues, BUT it's on par with any of the
modeling tools, including commercial ones, I've seen over the years, and better
than most. (I've never seen a modeling tool that did everything I'd like, or
even robustly did everything it claimed.)


YMMV - I'm using my design / build effort to get hands-on with PostgreSQL's
distinctive features, so I haven't exercised pgmodeler over the whole scope of
what it could model and maintain.


-- 
Jim Bullock, Rare Bird Enterprises, "Conscious Development"
LinkedIn: http://www.linkedin.com/in/rarebirdenterprises

Listen to a round table of experts in these books from Dorset House:
Roundtable on Project Management, http://www.dorsethouse.com/books/rpm.html
Roundtable on Technical Leadership, http://www.dorsethouse.com/books/rtl.html




Re: [SPAM] [GENERAL] AD(Active Directory) groups concepts in postgres

2017-02-02 Thread John R Pierce

On 2/2/2017 11:09 AM, PAWAN SHARMA wrote:


1: Create User
2:Create group
3:Alter Group Add/Remove User's..


create user user1;
create user user2;
create group group1;
grant group1 to user1, user2;


in fact in postgres, both users and groups are roles, the only
distinction is in how they are used.   CREATE USER name is exactly the
same as CREATE ROLE name WITH LOGIN;



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Fastest simple key-value store, multiple writers, like Redis?

2017-02-02 Thread Simon Riggs
On 2 February 2017 at 19:00, Rob Nikander  wrote:

> I'm working on a project with multiple different data storage backends. I'd
> like to consolidate and use Postgres for more things. In one new situation
> we're starting to use Redis, thinking it will perform better than Postgres
> for a table that is updated a lot by concurrent background jobs.
>
> I'm skeptical of no-sql stuff for various reasons. I'm wondering what PG
> experts think -- is there is a way to configure Postgres to handle a table
> differently, so that it could compete with Redis? Or are there some
> workloads where it is definitely better to use an alternative data store?
>
> This table will have a few million rows, five small columns. Rows will be
> updated, read, or inserted 5-10 million times a day, by concurrent
> processes. It operates like a key-value store in that most selects will be
> getting one row, and maybe updating that row. Ideally these processes could
> work without stepping on each other's toes and competing for locks.

Difficult to know precisely.

What I encourage you to do is write a benchmark that is representative
of what you are trying to achieve, using the pgbench utility's ability
to handle custom scripts.
https://www.postgresql.org/docs/devel/static/pgbench.html
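For example, a custom script along these lines could be fed to pgbench -f
(pure sketch: the kv table and its columns are made up, and \set ... random()
needs a reasonably recent pgbench):

\set k random(1, 5000000)
INSERT INTO kv (key, payload, updated_at)
VALUES (:k, 'x', now())
ON CONFLICT (key) DO UPDATE
    SET payload = EXCLUDED.payload, updated_at = EXCLUDED.updated_at;

and then run with something like
pgbench -n -c 32 -j 8 -T 300 -f kv_upsert.sql yourdb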

Please post it to the project as a submission that we can then use for
performance discussions. No need to use precise business terms when
you name tables and columns, keep it generic.

You'll get a precise measurement of whether it works for you. And the
project will get a representative test case that we can understand and
tune for. And if everyone does that we'll get a set of use cases that
will help demonstrate our performance and to allow people to discuss
improvements.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [SPAM] [GENERAL] AD(Active Directory) groups concepts in postgres

2017-02-02 Thread PAWAN SHARMA
On Fri, Feb 3, 2017 at 12:08 AM, John R Pierce  wrote:

> On 2/2/2017 8:24 AM, Moreno Andreo wrote:
>
>
> Can we use AD(Active Directory) groups concepts in postgres as we are
> using this concepts in SQL server.
>
> So, is there any possible to work with AD groups such as (IT-DBA,
> IT-APPS..etc ) in postgresql.
>
>
> Like this?
> https://wiki.postgresql.org/wiki/LDAP_Authentication_against_AD
>
>
> that handles authentication, but AFAIK, postgresql has no concept of
> Active Directory groups, only group roles defined within the postgres
> server. even if you're using AD for authentication, you still have to
> create roles in postgresql, and put these roles in groups.
>
> --
> john r pierce, recycling bits in santa cruz
>
>
Thanks John,

So, we need to follow the steps below in such a case:

1: Create user
2: Create group
3: Alter group to add/remove users


[GENERAL] Fastest simple key-value store, multiple writers, like Redis?

2017-02-02 Thread Rob Nikander
Hi,

I'm working on a project with multiple different data storage backends. I'd
like to consolidate and use Postgres for more things. In one new situation
we're starting to use Redis, thinking it will perform better than Postgres
for a table that is updated a lot by concurrent background jobs.

I'm skeptical of no-sql stuff for various reasons. I'm wondering what PG
experts think -- is there is a way to configure Postgres to handle a table
differently, so that it could compete with Redis? Or are there some
workloads where it is definitely better to use an alternative data store?

This table will have a few million rows, five small columns. Rows will be
updated, read, or inserted 5-10 million times a day, by concurrent
processes. It operates like a key-value store in that most selects will be
getting one row, and maybe updating that row. Ideally these processes could
work without stepping on each other's toes and competing for locks.

Rob


Re: [SPAM] [GENERAL] AD(Active Directory) groups concepts in postgres

2017-02-02 Thread John R Pierce

On 2/2/2017 8:24 AM, Moreno Andreo wrote:


Can we use AD(Active Directory) groups concepts in postgres as we are 
using this concepts in SQL server.


So, is there any possible to work with AD groups such as (IT-DBA, 
IT-APPS..etc ) in postgresql.


Like this?
https://wiki.postgresql.org/wiki/LDAP_Authentication_against_AD


that handles authentication, but AFAIK postgresql has no concept of
Active Directory groups, only group roles defined within the postgres
server. Even if you're using AD for authentication, you still have
to create roles in postgresql and put these roles in groups.
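In other words, something along these lines (names invented for the example):

CREATE ROLE "IT-DBA" NOLOGIN;        -- the "group"
CREATE ROLE some_user LOGIN;         -- authenticated via AD/LDAP
GRANT "IT-DBA" TO some_user;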


--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] (solved) R: Very strange problem on index

2017-02-02 Thread Alban Hertroys
On 2 February 2017 at 14:57, Job  wrote:
> Hi Raymond,
>
> Thank nyou for your appreciated feedback.

But what's your answer to his question? You still didn't tell.

> Here is the original message:
>
> i really strange problem, quite near to paranormal, is occurring during a 
> server migration.
>
> We have a table with some millions of record, perfectly working on other 
> Postgresql 9.6.1 machines:
>
>  Table "public.webrecord"
>   Column   | Type |   Modifiers
> ---+--+-
> ---+--+---
>  id| integer  | not null default 
> nextval('webrecord_id_seq'::regclass)
>  categoria | character varying|
>  dominio   | character varying|
>  country   | character varying(5) |
> Indexes:
> "keywebrecord" PRIMARY KEY, btree (id)
> "dominio_idx" btree (dominio)
>
> By entering a simple query:
>
> Select * from webrecord where dominio='sito.com' it wait some seconds and cpu 
> raise up.
>
> I tried to:
> - delete the tabel
> - recreate the schema again
> - re-populating it
> - reindexing
> - vacuum
>
> But nothing happened.
> The query planne says it scans lots of rows:
>
> explain analyze select * from webrecord where dominio='crl.microsoft.com';
> QUERY PLAN
> --
>  Gather  (cost=1000.00..144045.21 rows=62073 width=92) (actual 
> time=1096.202..1096.202 rows=0 loops=1)
>Workers Planned: 4
>Workers Launched: 4
>->  Parallel Seq Scan on webrecord  (cost=0.00..136837.91 rows=15518 
> width=92) (actual time=1063.277..1063.277 rows=0 loops=5)
>  Filter: ((dominio)::text = 'crl.microsoft.com'::text)
>  Rows Removed by Filter: 2482938  Planning time: 0.119 ms  Execution 
> time: 1107.846 ms
>
> On other machine the results are immediate!
>
> I have finished my tries...
>



-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.




Re: [SPAM] [GENERAL] AD(Active Directory) groups concepts in postgres

2017-02-02 Thread Moreno Andreo

  
  
Hi Pawan,

On 02/02/2017 16:49, PAWAN SHARMA wrote:

> Hi All,
>
> Can we use AD(Active Directory) groups concepts in postgres as we are
> using this concepts in SQL server.
>
> So, is there any possible to work with AD groups such as (IT-DBA,
> IT-APPS..etc ) in postgresql.

Like this?
https://wiki.postgresql.org/wiki/LDAP_Authentication_against_AD

Cheers
Moreno

> -Pawan

[GENERAL] AD(Active Directory) groups concepts in postgres

2017-02-02 Thread PAWAN SHARMA
Hi All,

Can we use AD (Active Directory) group concepts in Postgres, as we use
them in SQL Server?

So, is it possible to work with AD groups such as IT-DBA, IT-APPS,
etc. in PostgreSQL?

-Pawan


Re: [GENERAL] Avoiding repeating simple field definitions

2017-02-02 Thread John R Pierce

On 2/2/2017 7:20 AM, Guyren Howe wrote:
I saw a thing somewhere about avoiding repeating the same field 
definitions. So an app I’m working on uses an exactly 6-character 
sting as an identifier, which appears in many places.


are you talking about data normalization?


https://en.wikipedia.org/wiki/Database_normalization


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Avoiding repeating simple field definitions

2017-02-02 Thread Adrian Klaver

On 02/02/2017 07:20 AM, Guyren Howe wrote:

I saw a thing somewhere about avoiding repeating the same field
definitions. So an app I’m working on uses an exactly 6-character sting
as an identifier, which appears in many places.


The thing would be?

Can you show an example of the 6 character string and how it is used in 
multiple places?


What is your concern?



IIRC, the thing I read proposed defining a type AS IMPLICIT, but I’m not
sure. Mainly because the docs urge caution with using AS IMPLICIT.

Thoughts?



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Avoiding repeating simple field definitions

2017-02-02 Thread Guyren Howe
I saw a thing somewhere about avoiding repeating the same field definitions. So
an app I’m working on uses an exactly 6-character string as an identifier, which
appears in many places.

IIRC, the thing I read proposed defining a type AS IMPLICIT, but I’m not sure. 
Mainly because the docs urge caution with using AS IMPLICIT.
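One common way to write such a definition only once (which may or may not be
what that article meant) is a domain:

CREATE DOMAIN ident6 AS text
    CHECK (char_length(VALUE) = 6);

CREATE TABLE widget (
    code ident6 PRIMARY KEY,
    name text
);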

Thoughts?


R: [GENERAL] (solved) R: Very strange problem on index

2017-02-02 Thread Job
Hi Raymond,

Thank you for your appreciated feedback.

Here is the original message:

A really strange problem, quite near to paranormal, is occurring during a
server migration.

We have a table with some millions of records, perfectly working on other
Postgresql 9.6.1 machines:
 
                           Table "public.webrecord"
  Column   |         Type         |                       Modifiers
-----------+----------------------+---------------------------------------------------------
 id        | integer              | not null default nextval('webrecord_id_seq'::regclass)
 categoria | character varying    |
 dominio   | character varying    |
 country   | character varying(5) |
Indexes:
    "keywebrecord" PRIMARY KEY, btree (id)
    "dominio_idx" btree (dominio)

By entering a simple query:

Select * from webrecord where dominio='sito.com'

it waits for some seconds and CPU usage goes up.

I tried to:
- delete the table
- recreate the schema again
- re-populate it
- reindex
- vacuum

But nothing happened.
The query planner says it scans lots of rows:

explain analyze select * from webrecord where dominio='crl.microsoft.com';
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..144045.21 rows=62073 width=92) (actual time=1096.202..1096.202 rows=0 loops=1)
   Workers Planned: 4
   Workers Launched: 4
   ->  Parallel Seq Scan on webrecord  (cost=0.00..136837.91 rows=15518 width=92) (actual time=1063.277..1063.277 rows=0 loops=5)
         Filter: ((dominio)::text = 'crl.microsoft.com'::text)
         Rows Removed by Filter: 2482938
 Planning time: 0.119 ms
 Execution time: 1107.846 ms

On the other machines the results are immediate!

I have run out of things to try...



Re: [GENERAL] (solved) R: Very strange problem on index

2017-02-02 Thread Raymond O'Donnell

On 02/02/17 12:53, Job wrote:


I create a table copy, with indexes, and it worked perfecly.
Then i dropped the original table and recreated it back by a copy of the newest.

Same problems, lack during searches and indexes not used.


I didn't see your original message so may have missed this, but did you 
run ANALYZE on the table after creating it? - Depending on how you 
created the table, the statistics needed by the planner may or may not 
have been up to date.
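For instance, right after populating the copy, something as simple as

ANALYZE webrecord;

(or a database-wide VACUUM ANALYZE) would refresh those statistics.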


Ray.


--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie




[GENERAL] (solved) R: Very strange problem on index

2017-02-02 Thread Job

I created a table copy, with indexes, and it worked perfectly.
Then I dropped the original table and recreated it as a copy of the newest one.

Same problems: lag during searches and indexes not used.

I restarted the Postgresql 9.6.1 engine and now the index is working perfectly!

Maybe a cache issue or some other problem?

I am really curious to understand why!

Thank you!
/F



[GENERAL] Very strange problem on index

2017-02-02 Thread Job
Hi guys,
 
A really strange problem, quite near to paranormal, is occurring during a
server migration.

We have a table with some millions of records, perfectly working on other
Postgresql 9.6.1 machines:
 
                           Table "public.webrecord"
  Column   |         Type         |                       Modifiers
-----------+----------------------+---------------------------------------------------------
 id        | integer              | not null default nextval('webrecord_id_seq'::regclass)
 categoria | character varying    |
 dominio   | character varying    |
 country   | character varying(5) |
Indexes:
    "keywebrecord" PRIMARY KEY, btree (id)
    "dominio_idx" btree (dominio)

By entering a simple query:

Select * from webrecord where dominio='sito.com'

it waits for some seconds and CPU usage goes up.

I tried to:
- delete the table
- recreate the schema again
- re-populate it
- reindex
- vacuum

But nothing happened.
The query planner says it scans lots of rows:

explain analyze select * from webrecord where dominio='crl.microsoft.com';
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..144045.21 rows=62073 width=92) (actual time=1096.202..1096.202 rows=0 loops=1)
   Workers Planned: 4
   Workers Launched: 4
   ->  Parallel Seq Scan on webrecord  (cost=0.00..136837.91 rows=15518 width=92) (actual time=1063.277..1063.277 rows=0 loops=5)
         Filter: ((dominio)::text = 'crl.microsoft.com'::text)
         Rows Removed by Filter: 2482938
 Planning time: 0.119 ms
 Execution time: 1107.846 ms

On the other machines the results are immediate!

I have run out of things to try...

Thank you!
/F
