Re: [GENERAL] pgadmin4 rc1 query tool performance

2016-09-06 Thread Martijn Tonies (Upscene Productions)
I tested the latest release of pgAdmin 4 (RC1) and noticed that the
query tool is significantly slower than the query tool in pgAdmin 3.
I am not sure whether this occurs only on my computer, only under OS X
(10.10.5), or whether it is known behavior.


I don't know. You might get an answer quicker at the link below though:

https://www.postgresql.org/list/pgadmin-hackers/


Sorry for the noise; I accidentally posted this here instead of
pgadmin-support. I already got an answer there.


What was the answer?


With regards,

Martijn Tonies
Upscene Productions



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread John R Pierce

On 9/6/2016 8:21 PM, Venkata B Nagothi wrote:


it's a full copy from the production, so it's 2.3TB


That's quite large. pg_upgrade would be a better choice, and yes,
downtime is needed: the database must be shut down throughout the
upgrade process.


How long it takes depends on the type of disks you have. It is
definitely a lot faster than pg_dump/pg_restore.


Just moving a copy of the 2.3TB data between service providers is going 
to take quite a long time.


OK, here's a rough outline:

1) Set up a new streaming slave at the new provider using 9.2, wait for
replication to catch up, and satisfy yourself that the database is
complete and intact.
2) Shut down the master at the old provider (and any other slaves), and
promote the new provider's 9.2 instance to master.
3) Run pg_upgrade with the --link option to 9.4 on the new provider. If
this fails, restart the old master and start over at step 1.
4) Bring up the applications on the new database. If they fail, restart
the old master and start over at step 1.

5) Bring up new slave(s) on the new database.
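Step 3 as a command sketch. The binary and data directory paths below are
assumptions that vary by packaging, and the script only prints the command
so it can be reviewed before anything is run:

```shell
OLD_BIN=/usr/pgsql-9.2/bin         # assumed install layout; adjust
NEW_BIN=/usr/pgsql-9.4/bin
OLD_DATA=/var/lib/pgsql/9.2/data
NEW_DATA=/var/lib/pgsql/9.4/data

# --link hard-links data files instead of copying, which is why it is
# fast -- and why the old cluster must stay down once the new one starts.
echo "$NEW_BIN/pg_upgrade --link \
  --old-bindir=$OLD_BIN --new-bindir=$NEW_BIN \
  --old-datadir=$OLD_DATA --new-datadir=$NEW_DATA"
```

Remove the echo to execute it for real; both clusters must be shut down first.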



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread Venkata B Nagothi
On Wed, Sep 7, 2016 at 10:43 AM, Patrick B  wrote:

>
>> how large is the full database cluster?
>>
>> if its only a few GB or whatever, and not grossly complex, pg_dumpall -h
>> oldhost | psql -h newhost, is the simplest way to move a complete set of
>> databases from an old server to a new. if there are large tables with
>> millions of rows indexed, make sure you set maintenance_work_mem on the new
>> server to 2GB before running this.
>>
>>
>> it's a full copy from the production, so it's 2.3TB
>

That's quite large. pg_upgrade would be a better choice, and yes, downtime
is needed: the database must be shut down throughout the upgrade process.

How long it takes depends on the type of disks you have. It is definitely
a lot faster than pg_dump/pg_restore.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Materialized view auto refresh

2016-09-06 Thread Nguyễn Trần Quốc Vinh
Dear Harry.

I'm sorry. Please check it again at
http://it.ued.udn.vn/myprojects/pgTriggerGen/:
http://it.ued.udn.vn/myprojects/pgTriggerGen/PgSQLMVTriggersGenerator_v2_src.rar

Best regards,



On Tue, Sep 6, 2016 at 2:42 PM, hari.prasath 
wrote:

> Dear Nguyen Tran Quoc Vinh
>   Source link is broken. Please check this http://it.ued.udn.vn/
> myprojects/pgTriggerGen/PgSQLMVTriggersGenerator_src.rar
>
> cheers
> *- *Harry
>
>
>  On Mon, 05 Sep 2016 11:50:49 +0530 *Nguyễn Trần Quốc Vinh
> >*wrote 
>
> Dear Harry.
>
> You can try this while you are waiting for a better solution from Prof.
> Kevin Grittner. We have not improved it because we do not know if we are
> on the right track...
>
> https://www.postgresql.org/message-id/CAAU9oxsB5gy8LZ12kqaa3r1iv19C7
> famnefixdac1fhrfyt...@mail.gmail.com
>
> You can download the source and the binary from here:
> http://it.ued.udn.vn/myprojects/pgTriggerGen/.
>
> Best regards,
>
> TS. Nguyễn Trần Quốc Vinh
> ---
> Chủ nhiệm khoa Tin học
> Trường ĐH Sư phạm - ĐH Đà Nẵng
> Website: http://it.ued.udn.vn; http://www.ued.udn.vn; http://
> www.ued.udn.vn
> LLKH: http://scv.ued.udn.vn/~ntquocvinh
> ĐT: 0511.6-512-586
> DĐ: 0914.78-08-98
> 
> Nguyen Tran Quoc Vinh, PhD
> Dean
> Faculty of Information Technology
> Danang University of Education
> Website: http://it.ued.udn.vn; http://www.ued.udn.vn/; http:
> //www.ued.udn.vn
> SCV: http://scv.ued.udn.vn/~ntquocvinh
> Phone: (+84) 511.6-512-586
> Mobile: (+84) 914.78-08-98
>
> On Tue, Aug 9, 2016 at 8:49 PM, Kevin Grittner  wrote:
>
> On Tue, Aug 9, 2016 at 4:50 AM, hari.prasath 
> wrote:
>
> >   Is there any tentative schedule for real-time or incremental (only
> > applying delta changes) refresh of materialized views?
>
> There is work in progress, but no hard schedule.  Unfortunately, it
> has often been set aside to address more immediate issues for
> particular end users; but I expect to get back to it Real Soon Now.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
>
>
>


Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread Patrick B
>
>
> how large is the full database cluster?
>
> if its only a few GB or whatever, and not grossly complex, pg_dumpall -h
> oldhost | psql -h newhost, is the simplest way to move a complete set of
> databases from an old server to a new. if there are large tables with
> millions of rows indexed, make sure you set maintenance_work_mem on the new
> server to 2GB before running this.
>
>
> it's a full copy from the production, so it's 2.3TB


Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread John R Pierce

On 9/6/2016 4:47 PM, Patrick B wrote:

1 - upgrade master server
2 - by performing #1 - the slaves would become unused (streaming 
replication wouldn't work)

3 - Setup new slave running Postgres 9.4
4 - dump the db to the new server



how large is the full database cluster?

If it's only a few GB or whatever, and not grossly complex, pg_dumpall -h
oldhost | psql -h newhost is the simplest way to move a complete set of
databases from an old server to a new one. If there are large tables with
millions of rows indexed, make sure you set maintenance_work_mem on the
new server to 2GB before running this.
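A sketch of that pipeline. The hostnames are placeholders, and the script
only echoes the command so nothing runs against a real server; PGOPTIONS
is one way to bump maintenance_work_mem for just that session:

```shell
OLD=oldhost   # placeholder
NEW=newhost   # placeholder

# Echoed rather than executed -- drop the echo to run it for real.
echo "pg_dumpall -h $OLD | PGOPTIONS='-c maintenance_work_mem=2GB' psql -h $NEW"
```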



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread Patrick B
>
>
> John,
> Yes, Rackspace supports slony. I used to work there and they use slony on
> their core system.
>
> Patrick,
> You cannot stream directly from 9.2 to 9.4. However, you can just create a
> separate 9.4 cluster (on a separate port).
> Then pg_dump from 9.2 in plain format and it will load on 9.4
>


Damn, really? =( That's not good.

I'll have to think about how to perform this.

I think what I would need to do is:


1 - upgrade master server
2 - after performing #1, the slaves would become unusable (streaming
replication would stop working)
3 - Setup new slave running Postgres 9.4
4 - dump the db to the new server


Do you guys agree?


Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread Melvin Davidson
On Tue, Sep 6, 2016 at 7:28 PM, Patrick B  wrote:

>
>
> 2016-09-07 11:25 GMT+12:00 John R Pierce :
>
>> On 9/6/2016 4:20 PM, Melvin Davidson wrote:
>>
>>> If you use slony to replicate, you CAN have 9.2 on the master and 9.4 on
>>> the slave.
>>>
>>
>> does rackspace support slony?  how about amazon dms ?
>>
>> slony requires configuring replication on each table.  if the database
>> has a large complex schema this could take considerable setup effort.
>> 
>>
>
> I'm not using slony I'm using the default streaming replication
> postgres 9.2 tool.
>
>
> I'm setting up a postgres 9.2 master and a 9.4 streaming replication slave
> atm...
>
> will dump the DB on the 9.2 and restore it into the 9.4 to see if that can
> be done.
> I don't intend to upgrade both master and slaves on my current production
> scenario just to be able to use DMS
>
> Patrick
>
>
John,
Yes, Rackspace supports slony. I used to work there and they use slony on
their core system.

Patrick,
You cannot stream directly from 9.2 to 9.4. However, you can just create a
separate 9.4 cluster (on a separate port).
Then pg_dump from 9.2 in plain format and it will load on 9.4.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread Patrick B
2016-09-07 11:25 GMT+12:00 John R Pierce :

> On 9/6/2016 4:20 PM, Melvin Davidson wrote:
>
>> If you use slony to replicate, you CAN have 9.2 on the master and 9.4 on
>> the slave.
>>
>
> does rackspace support slony?  how about amazon dms ?
>
> slony requires configuring replication on each table.  if the database has
> a large complex schema this could take considerable setup effort.
> 
>

I'm not using Slony; I'm using the default streaming replication in
Postgres 9.2.


I'm setting up a Postgres 9.2 master and a 9.4 streaming-replication slave
at the moment...

I will dump the DB on the 9.2 and restore it into the 9.4 to see if that
can be done.
I don't intend to upgrade both master and slaves in my current production
scenario just to be able to use DMS.

Patrick


Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread John R Pierce

On 9/6/2016 4:20 PM, Melvin Davidson wrote:
If you use slony to replicate, you CAN have 9.2 on the master and 9.4 
on the slave.


Does Rackspace support Slony? How about Amazon DMS?

Slony requires configuring replication for each table. If the database
has a large, complex schema, this could take considerable setup effort.


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread Melvin Davidson
If you use slony to replicate, you CAN have 9.2 on the master and 9.4 on
the slave.

On Tue, Sep 6, 2016 at 6:43 PM, John R Pierce  wrote:

> On 9/6/2016 3:27 PM, Rich Shepard wrote:
>
> On Wed, 7 Sep 2016, Patrick B wrote:
>
> 2 - I've never done a Postgres upgrade before; can you give some guidance
> here? Is it usually an easy thing? How long can it take? Is downtime needed?
>
>
>
>   I suggest the place to start is 'man pg_upgrade.' It is helpful.
>
>
> pg_upgrade would require both 9.2 (with your existing database directory)
> and 9.4 be installed on the same server.
>
> the easiest way to do this upgrade depends heavily on how large the
> database is.   'pg_dumpall -h oldserver | psql -h newserver'  might be
> simplest for modest data sizes, but way too slow for larger data sets.
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
If you use slony to replicate, you CAN have 9.2 on the master and 9.4 on
the slave.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread John R Pierce

On 9/6/2016 3:27 PM, Rich Shepard wrote:

On Wed, 7 Sep 2016, Patrick B wrote:


2 - I've never done a Postgres upgrade before; can you give some guidance
here? Is it usually an easy thing? How long can it take? Is downtime
needed?



  I suggest the place to start is 'man pg_upgrade.' It is helpful. 


pg_upgrade would require both 9.2 (with your existing database
directory) and 9.4 to be installed on the same server.


The easiest way to do this upgrade depends heavily on how large the
database is. 'pg_dumpall -h oldserver | psql -h newserver' might be the
simplest for modest data sizes, but way too slow for larger data sets.



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread Rich Shepard

On Wed, 7 Sep 2016, Patrick B wrote:


2 - I've never done a Postgres upgrade before; can you give some guidance
here? Is it usually an easy thing? How long can it take? Is downtime needed?


Patrick,

  I suggest the place to start is 'man pg_upgrade.' It is helpful.

Rich




[GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread Patrick B
Hi guys,

I'll be migrating my Postgres 9.2 database from Rackspace to Amazon.

To do that work, I'll be using DMS at Amazon. Unfortunately, DMS requires
at least Postgres 9.4 (
http://docs.aws.amazon.com/pt_br/dms/latest/userguide/CHAP_Introduction.Sources.html),
so that means I'll have to upgrade my Postgres version.

Questions:

1 - I'll have to upgrade the version on my master server, right? I can't
upgrade only a streaming-replication slave server, right?

2 - I've never done a Postgres upgrade before; can you give some guidance
here? Is it usually an easy thing? How long can it take? Is downtime needed?

Thanks guys,
Patrick


Re: [GENERAL] Privileges on public schema can't be revoked?

2016-09-06 Thread Tom Lane
Stephen Frost  writes:
> \dn+ in psql will give you the access privileges for all schemas.

> I'd have to look at the "other solutions" you're referring to, but, in
> general, we do not exclude the public role in any way from the access
> privilege system.

Possibly Greg was thinking of the information_schema views.  Grants to
PUBLIC do show up in those.  The problem is that there's no view there
that covers schema privileges, probably because the SQL standard doesn't
really have any such thing as grantable schema privileges.  (Or at least
it didn't last I heard, I might be out of date.)

regards, tom lane




Re: [GENERAL] Privileges on public schema can't be revoked?

2016-09-06 Thread Stephen Frost
Greg,

* Greg Fodor (gfo...@gmail.com) wrote:
> A, I wasn't aware of the PUBLIC meta-role. Not sure if it's useful
> feedback, I spent a lot of time digging around the web for solutions
> that would basically let me query the database to see all of the
> effective privileges for a user, and none of the solutions I found
> were able to get me to a point where this was clear, since they all
> queried against the various information schema tables that I think
> neglect to take into account the PUBLIC meta-role.

\dn+ in psql will give you the access privileges for all schemas.

I'd have to look at the "other solutions" you're referring to, but, in
general, we do not exclude the public role in any way from the access
privilege system.  The \dn+ above should return something like:

postgres=UC/postgres+
=UC/postgres

for the Access Privileges column for the public schema, which shows that
the 'postgres' role and the '' role (aka, 'public') have been granted
both USAGE and CREATE on that schema. 

Thanks!

Stephen




Re: [GENERAL] Privileges on public schema can't be revoked?

2016-09-06 Thread Greg Fodor
Ah, I wasn't aware of the PUBLIC meta-role. Not sure if it's useful
feedback, but I spent a lot of time digging around the web for solutions
that would let me query the database to see all of the effective
privileges for a user, and none of them got me to a point where this was
clear, since they all queried the various information_schema tables,
which I think neglect to take the PUBLIC meta-role into account.

It seems that functionality that lets a superuser quickly audit the
privileges for a user (including those granted via PUBLIC) would be
really helpful for diagnosing cases where that user can do something
they shouldn't be allowed to. The converse, where they cannot do
something they should, is quickly remedied by granting the privilege.
I could imagine two possible interfaces, one where you can get a list
of all privileges granted to user on an object and why, or another
where you simply submit a statement to the DB and it gives you an
audit trail of why that statement is permitted (EXPLAIN PRIVILEGES?
:))

Thanks for the info!

On Tue, Sep 6, 2016 at 11:07 AM, Tom Lane  wrote:
> Greg Fodor  writes:
>> Apologies in advance about this since it is likely something obvious,
>> but I am seeing some very basic behavior that does not make sense.
>> I've tested this on a fresh build of 9.6rc1 and also 9.1.24 (just to
>> see if it was a regression). After creating a test database and a test
>> user and revoking all privileges on the public schema from that user,
>> the user is still able to create tables in the public schema.
>
> You would need to revoke the default grant of privileges to PUBLIC;
> revokes against any particular user have no effect on his being
> a member of PUBLIC.
>
> IOW, revoke only revokes a previous matching grant, and there was
> no such grant in this case.  What there was was a grant to PUBLIC;
> see the relevant bit in initdb.c:
>
> "GRANT CREATE, USAGE ON SCHEMA public TO PUBLIC;\n\n",
>
> regards, tom lane




Re: [GENERAL] pgAdmin 4 records limit of 2000

2016-09-06 Thread Melvin Davidson
On Tue, Sep 6, 2016 at 3:26 PM, Adrian Klaver 
wrote:

> On 09/06/2016 10:05 AM, dudedoe01 wrote:
>
>> Hi,
>>
>> I have pgAdmin 9.4. I can only view 2000 rows of data at a time in
>> pgAdmin 4, while in the previous version 9.3 there was no limit. Is
>> there any way to remove the limit so that I can see all the data at
>> once?
>>
>> Please provide insights.
>>
>> Thanks in Advance,
>>
>> 
>>
>
> https://www.pgadmin.org/docs4/dev/query_tool.html?highlight=limit
>
> The Query tool
>
> The Query tool Toolbar
>
> No limit
>
> Use the No limit drop-down listbox to specify how many rows to display in
> the output panel.
> Select from No limit (the default), 1000 rows, 500 rows, or 100 rows.
>
>
>>
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>

Actually, in pgAdmin 4 1.0 beta 4 the default is No limit, and it cannot
currently be changed, at least not on Windows 10.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] Thanks to the Pg community (Louisiana Flood)

2016-09-06 Thread Joshua D. Drake

Hello fellow community members,

I just wanted to put out a public thanks to all the community members 
that sent donations to the CMD office in Baton Rouge. We received a ton 
of needed supplies. You can see the reference here:


http://www.linuxhiker.org/2016/08/helping-lousiana.html

Thanks again!

Sincerely,

JD

--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.




Re: [GENERAL] PostgreSQL Database performance

2016-09-06 Thread Scott Marlowe
On Tue, Sep 6, 2016 at 1:18 PM, Steve Atkins  wrote:
>
>> On Sep 6, 2016, at 12:08 PM, Scott Marlowe  wrote:
>>
>> On Fri, Sep 2, 2016 at 9:38 PM, Pradeep  wrote:
>>>
>>> max_connections = 100
>>> shared_buffers = 512MB
>>> effective_cache_size = 24GB
>>> work_mem = 110100kB
>>
>> This is WAY too high for work_mem. Work_mem is how much memory a
>> single sort can grab at once. Each query may run > 1 sort, and you
>> could have 100 queries running at once.
>>
>> This setting is 110GB. That's about 109.9GB too high for safety. When
>> things go wrong with this too big, they go very wrong, sending the
>> machine into a swap storm from which it may not return.
>
> It's an oddly spelled 110MB, which doesn't seem unreasonable.

Oh yeah, still kind of biggish, but not as big as I had thought.




Re: [GENERAL] PostgreSQL Database performance

2016-09-06 Thread Jeff Janes
On Fri, Sep 2, 2016 at 8:38 PM, Pradeep  wrote:

> Dear Team,
>
>
>
> Could you please help me, after changing the below parameters in
> PostgreSQL configuration file it was not reflecting in OS level and also
> Database performance is degrading.
>


What were they before you changed them?

Do you mean that the performance is degrading after the change, or that the
performance is degrading as the database grows, and your changes have not
helped?


>
> *Example*: I am using Windows 2008 R2 server .For PostgreSQL I have
> allocated 24GB RAM out of 32GB.
>
> However after changing the below parameters, In task bar it is showing
> 2.7GB Utilization even though my utilization is more.
>
> So kindly suggest us, whether it will impact or not in Open source
> PostgreSQL database
>
>
>
> max_connections = 100
>
> shared_buffers = 512MB
>
> effective_cache_size = 24GB
>
> work_mem = 110100kB
>
> maintenance_work_mem = 2GB
>
> checkpoint_segments = 64
>
> checkpoint_completion_target = 0.9
>
> wal_buffers = 16MB
>
> default_statistics_target = 100
>

None of these seem unreasonable.  In order to recommend changes, we would
have to know more about your database workload.  Or better yet, specific
queries which seem too slow.

Cheers,

Jeff


Re: [GENERAL] pgAdmin 4 records limit of 2000

2016-09-06 Thread Adrian Klaver

On 09/06/2016 10:05 AM, dudedoe01 wrote:

Hi,

I have pgAdmin 9.4. I can only view 2000 rows of data at a time in pgAdmin 4,
while in the previous version 9.3 there was no limit. Is there any way to
remove the limit so that I can see all the data at once?

Please provide insights.

Thanks in Advance,




https://www.pgadmin.org/docs4/dev/query_tool.html?highlight=limit

The Query tool

The Query tool Toolbar

No limit

Use the No limit drop-down listbox to specify how many rows to display 
in the output panel.

Select from No limit (the default), 1000 rows, 500 rows, or 100 rows.










--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] PostgreSQL Database performance

2016-09-06 Thread Steve Atkins

> On Sep 6, 2016, at 12:08 PM, Scott Marlowe  wrote:
> 
> On Fri, Sep 2, 2016 at 9:38 PM, Pradeep  wrote:
>> 
>> max_connections = 100
>> shared_buffers = 512MB
>> effective_cache_size = 24GB
>> work_mem = 110100kB
> 
> This is WAY too high for work_mem. Work_mem is how much memory a
> single sort can grab at once. Each query may run > 1 sort, and you
> could have 100 queries running at once.
> 
> This setting is 110GB. That's about 109.9GB too high for safety. When
> things go wrong with this too big, they go very wrong, sending the
> machine into a swap storm from which it may not return.

It's an oddly spelled 110MB, which doesn't seem unreasonable.

> 
> It's far more likely that you've just got poorly written queries. I'd
> make a post with explain analyze output etc. Here's a good resource
> for reporting slow queries:
> 
> https://wiki.postgresql.org/wiki/Slow_Query_Questions

+1

Cheers,
  Steve




Re: [GENERAL] PostgreSQL Database performance

2016-09-06 Thread Scott Marlowe
On Fri, Sep 2, 2016 at 9:38 PM, Pradeep  wrote:
> Dear Team,
>
>
>
> Could you please help me, after changing the below parameters in PostgreSQL
> configuration file it was not reflecting in OS level and also Database
> performance is degrading.
>
>
>
> Example: I am using Windows 2008 R2 server .For PostgreSQL I have allocated
> 24GB RAM out of 32GB.

Actually, effective_cache_size allocates nothing. It tells the PostgreSQL
server roughly how much memory the machine it is running on is using for
OS-level caching. On a 32G machine with 1G or so of shared_buffers, that
number is about right.

> However after changing the below parameters, In task bar it is showing 2.7GB
> Utilization even though my utilization is more.

2.7G is OK. PostgreSQL expects the OS to help out with caching, so it
doesn't need to grab all the memory in the machine. In fact, that would
be counterproductive in most situations.

> So kindly suggest us, whether it will impact or not in Open source
> PostgreSQL database
>
> max_connections = 100
> shared_buffers = 512MB
> effective_cache_size = 24GB
> work_mem = 110100kB

This is WAY too high for work_mem. Work_mem is how much memory a
single sort can grab at once. Each query may run > 1 sort, and you
could have 100 queries running at once.

This setting is 110GB. That's about 109.9GB too high for safety. When
things go wrong with this too big, they go very wrong, sending the
machine into a swap storm from which it may not return.

> maintenance_work_mem = 2GB
>
> checkpoint_segments = 64
>
> checkpoint_completion_target = 0.9

Too high a checkpoint_completion_target may cause buffers to be written
out more often than needed, but it varies based on load.

> wal_buffers = 16MB
>
> default_statistics_target = 100

It's far more likely that you've just got poorly written queries. I'd
make a post with explain analyze output etc. Here's a good resource
for reporting slow queries:

https://wiki.postgresql.org/wiki/Slow_Query_Questions

-- 
To understand recursion, one must first understand recursion.




Re: [GENERAL] Help with slow query - Pgsql 9.2

2016-09-06 Thread Jeff Janes
On Mon, Sep 5, 2016 at 6:53 PM, Patrick B  wrote:

> Hi guys,
>
> I got this query:
>
>> SELECT id,jobid,description,serialised_data
>> FROM logtable
>> WHERE log_type = 45
>> AND clientid = 24011
>> ORDER BY gtime desc
>
>

What is really going to help you here is a multicolumn index on
(clientid, log_type) or (log_type, clientid).

It will not cost you much, because you can get rid of whichever
single-column index is on the column you list first in your multi-column
index.

>
>
> Explain analyze: https://explain.depesz.com/s/XKtU
>
> So it seems the very slow part is into:
>
>   ->  Bitmap Index Scan on "ix_client"  (cost=0.00..5517.96
>> rows=367593 width=0) (actual time=2668.246..2668.246 rows=356327 loops=1)
>> Index Cond: ("clientid" = 24011)
>
>
> Am I right? The query is already using an index on that table... how could
> I improve the performance in a query that is already using an index?
>

Right, that is the slow step.  Probably the index is not already in memory
and had to be read from disk, slowly.  You could turn track_io_timing on
and then run explain (analyze, buffers) to see if that is the case.  But
once you build a multi-column index, it shouldn't really matter anymore.
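A sketch of the multicolumn index suggested above. The table, column, and
index names are taken from the quoted query and plan; CONCURRENTLY avoids
blocking writes but cannot run inside a transaction block:

```sql
CREATE INDEX CONCURRENTLY ix_logtable_clientid_logtype
    ON logtable (clientid, log_type);

-- The new index's leading column matches the old single-column index
-- from the plan, which then becomes largely redundant:
DROP INDEX ix_client;
```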

Cheers,

Jeff


Re: [GENERAL] PostgreSQL Database performance

2016-09-06 Thread Naveed Shaikh
On Windows, large values for shared_buffers aren't as effective. You may
find better results keeping the setting relatively low and using the
operating system cache more instead. The useful range for shared_buffers on
Windows systems is generally from 64MB to 512MB.
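In postgresql.conf terms, the advice above might look like this (values
illustrative, not a tuned recommendation for any particular box):

```
shared_buffers = 512MB        # keep modest on Windows; 64MB-512MB range
effective_cache_size = 24GB   # planner hint about OS caching, not an allocation
```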

---
Warm Regards,
--
Naveed Shaikh

On Tue, Sep 6, 2016 at 11:22 PM, Ilya Kazakevich <
ilya.kazakev...@jetbrains.com> wrote:

> Hi.
>
>
>
> “shared_buffers” should be set to 30-40% of your system RAM.
>
> This param controls how much memory database may use.
>
>
>
> Please see https://www.postgresql.org/docs/9.1/static/runtime-
> config-resource.html
>
>
>
>
>
>
>
> Ilya Kazakevich
>
>
>
> JetBrains
>
> http://www.jetbrains.com
>
> The Drive to Develop
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:pgsql-general-owner@
> postgresql.org] *On Behalf Of *Pradeep
> *Sent:* Saturday, September 03, 2016 6:39 AM
> *To:* pgsql-general@postgresql.org
> *Subject:* [GENERAL] PostgreSQL Database performance
>


Re: [GENERAL] Privileges on public schema can't be revoked?

2016-09-06 Thread Tom Lane
Greg Fodor  writes:
> Apologies in advance about this since it is likely something obvious,
> but I am seeing some very basic behavior that does not make sense.
> I've tested this on a fresh build of 9.6rc1 and also 9.1.24 (just to
> see if it was a regression). After creating a test database and a test
> user and revoking all privileges on the public schema from that user,
> the user is still able to create tables in the public schema.

You would need to revoke the default grant of privileges to PUBLIC;
revokes against any particular user have no effect on his being
a member of PUBLIC.

IOW, revoke only revokes a previous matching grant, and there was
no such grant in this case.  What there was was a grant to PUBLIC;
see the relevant bit in initdb.c:

"GRANT CREATE, USAGE ON SCHEMA public TO PUBLIC;\n\n",

regards, tom lane




Re: [GENERAL] Privileges on public schema can't be revoked?

2016-09-06 Thread Stephen Frost
Greg,

* Greg Fodor (gfo...@gmail.com) wrote:
> Apologies in advance about this since it is likely something obvious,
> but I am seeing some very basic behavior that does not make sense.
> I've tested this on a fresh build of 9.6rc1 and also 9.1.24 (just to
> see if it was a regression). After creating a test database and a test
> user and revoking all privileges on the public schema from that user,
> the user is still able to create tables in the public schema. Revokes
> on other schemas work as expected; it seems the public schema is
> treated specially.
> 
> https://gist.github.com/gfodor/c360683f25f55497c8c657255fd0e0f8
> 
> Any help appreciated!

The privilege on the public schema was granted to "public" and therefore
must be revoked from "public" to remove that privilege.  Revoking from
"guy" doesn't have any effect.

Note that if you revoke all privileges from 'public' then only users who
have been explicitly granted access will be able to create or *use* any
objects in the public schema.

Generally, I revoke CREATE rights from the public schema, but leave
USAGE rights, as I then put trusted extensions and other tools into the
public schema.
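In SQL, that revoke-CREATE-keep-USAGE pattern is roughly the following
(a sketch, run as a superuser in the target database):

```sql
-- Remove the default ability for every role to create objects in public:
REVOKE CREATE ON SCHEMA public FROM PUBLIC;

-- USAGE is part of the default grant, so it normally remains; shown
-- explicitly here in case it was revoked wholesale earlier:
GRANT USAGE ON SCHEMA public TO PUBLIC;
```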

Thanks!

Stephen




Re: [GENERAL] PostgreSQL Database performance

2016-09-06 Thread Ilya Kazakevich
Hi.

 

"shared_buffers" should be set to 30-40% of your system RAM.

This param controls how much memory database may use.

 

Please see
https://www.postgresql.org/docs/9.1/static/runtime-config-resource.html

 

 

 

Ilya Kazakevich

 

JetBrains

  http://www.jetbrains.com

The Drive to Develop

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Pradeep
Sent: Saturday, September 03, 2016 6:39 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] PostgreSQL Database performance 



Re: [GENERAL] PostgreSQL Database performance

2016-09-06 Thread Naveed Shaikh
Which version of PostgreSQL are you using on Windows?

Increasing work_mem can lead to far less disk-swapping, and therefore far
quicker queries. However, it can cause problems if set too high, and should
be constrained taking into account max_connections. The following
calculation is what is typically recommended to determine a decent
work_mem value:

Total RAM * 0.25 / max_connections

If there are large reporting queries that run on the database which require
more work memory than a typical connection, work_mem can be set for those
particular queries. If, for example, there is a reporting user that only
runs infrequent but large reports, a specific work_mem setting can be
applied to that particular role.
e.g.

ALTER ROLE reporting SET work_mem = '64MB';
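
The rule of thumb above can be sketched numerically (the 32 GB / 100-connection figures come from the original post; the helper name is illustrative, and note that work_mem applies per sort/hash operation, not per connection as a hard cap):

```python
def suggested_work_mem_mb(total_ram_gb: float, max_connections: int) -> float:
    """Rule of thumb from this thread: Total RAM * 0.25 / max_connections."""
    total_ram_mb = total_ram_gb * 1024
    return total_ram_mb * 0.25 / max_connections

# For the 32 GB server with max_connections = 100 described in the thread:
print(suggested_work_mem_mb(32, 100))  # 81.92 -> roughly '80MB' in postgresql.conf
```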

---
Warm Regards,
--
Naveed Shaikh

On Sat, Sep 3, 2016 at 9:08 AM, Pradeep  wrote:

> Dear Team,
>
>
>
> Could you please help me, after changing the below parameters in
> PostgreSQL configuration file it was not reflecting in OS level and also
> Database performance is degrading.
>
>
>
> *Example*: I am using Windows 2008 R2 server .For PostgreSQL I have
> allocated 24GB RAM out of 32GB.
>
> However after changing the below parameters, In task bar it is showing
> 2.7GB Utilization even though my utilization is more.
>
> So kindly suggest us, whether it will impact or not in Open source
> PostgreSQL database
>
>
>
> max_connections = 100
>
> shared_buffers = 512MB
>
> effective_cache_size = 24GB
>
> work_mem = 110100kB
>
> maintenance_work_mem = 2GB
>
> checkpoint_segments = 64
>
> checkpoint_completion_target = 0.9
>
> wal_buffers = 16MB
>
> default_statistics_target = 100
>
>
>
> Thanks & Regards
>
> Pradeep Kanth
>
>
>


Re: [GENERAL] IDE for function/stored proc development.

2016-09-06 Thread Edson Richter
I do use Dbwrench, but it is pretty basic, with no fancy procedure/function 
development support.

Sent from my Sony Xperia™ smartphone

 Pavel Stehule wrote 

Hi

2016-09-03 11:36 GMT+02:00 Tim Uckun :

> Does anybody use an IDE for doing heavy duty stored proc development?
> PGadmin is decent but I am looking for something better.
>
> I have tried jetbrains with the db browser plugin and on the surface it
> seems like a good choice but it's really buggy when working with procs.
>
> I also tried datagrip by jetbrains and that too seems to be all over the
> place. It has some amazing features for working with the database but some
> of the simplest stuff is lacking or half baked.
>
> I looked at atom and could not find any useful plugins for PG.
>
> Anybody have experience with something awesome?
>

I am using Emacs, but any editor should be OK. There is one rule: edit the
file first, then import it into the database as the next step. PGadmin is a
pretty bad tool for maintaining stored procedures.

Regards

Pavel


[GENERAL] Help with slow query - Pgsql 9.2

2016-09-06 Thread Patrick B
Hi guys,

I got this query:

> SELECT id,jobid,description,serialised_data
> FROM logtable
> WHERE log_type = 45
> AND clientid = 24011
> ORDER BY gtime desc



Explain analyze: https://explain.depesz.com/s/XKtU

So it seems the very slow part is here:

  ->  Bitmap Index Scan on "ix_client"  (cost=0.00..5517.96
> rows=367593 width=0) (actual time=2668.246..2668.246 rows=356327 loops=1)
> Index Cond: ("clientid" = 24011)


Am I right? The query is already using an index on that table... how could
I improve the performance in a query that is already using an index?
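
One approach that is sometimes suggested for this shape of query (column names taken from the query above; the index name is illustrative) is a composite index whose leading columns match the equality filters and whose trailing column matches the ORDER BY, so the planner can replace the large bitmap heap scan with a single ordered index scan:

```sql
-- Equality columns first, then the sort key; CONCURRENTLY avoids
-- blocking writes while the index builds (supported in 9.2).
CREATE INDEX CONCURRENTLY ix_logtable_client_type_gtime
    ON logtable (clientid, log_type, gtime DESC);
```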

Thanks
Patrick


[GENERAL] pgAdmin 4 records limit of 2000

2016-09-06 Thread dudedoe01
Hi,

I have pgAdmin 9.4. I can only view 2000 rows of data at a time in pgAdmin 4,
while in the previous version (9.3) there was no limit. Is there any way to
remove the limit on the rows of data so I can see all the data at once?

Please provide insights.

Thanks in Advance,

 



--
View this message in context: 
http://postgresql.nabble.com/pgAdmin-4-records-limit-of-2000-tp5919751.html




[GENERAL] Privileges on public schema can't be revoked?

2016-09-06 Thread Greg Fodor
Apologies in advance about this since it is likely something obvious,
but I am seeing some very basic behavior that does not make sense.
I've tested this on a fresh build of 9.6rc1 and also 9.1.24 (just to
see if it was a regression). After creating a test database and a
test user from whom I revoke all privileges on the public schema,
that user is still able to create tables in the public schema. Revokes
on other schemas work as expected; it seems the public schema is
treated specially.

https://gist.github.com/gfodor/c360683f25f55497c8c657255fd0e0f8

Any help appreciated!

-Greg




Re: [GENERAL] IDE for function/stored proc development.

2016-09-06 Thread Edson Richter


 Martijn Tonies (Upscene Productions) wrote 

> Good morning,
> 
>  >I looked at your purchase, and did not see any Postgres version. Am I 
> missing (/misunderstanding) something here?
> 
>  It’s not yet available, please wait until the end of the week 
> 
>  That being said, the pricing will be the same as for MySQL.
> 
>  With regards,
> 
> Martijn Tonies
> Upscene Productions
> http://www.upscene.com


It would be nice to have a table listing all features (in rows) and databases 
(in columns), with the "since date ...", "from date ..." or "planned" status at 
each intersection...

You could split the table by bundle (basic/enterprise) to help your 
visitors better understand your offer... 

Just my 2c. 

Sorry for being off topic... 

Edson Richter 

[GENERAL] PostgreSQL Database performance

2016-09-06 Thread Pradeep
Dear Team,

 

Could you please help me? After changing the parameters below in the PostgreSQL
configuration file, the change is not reflected at the OS level and database
performance is degrading.

 

Example: I am using a Windows 2008 R2 server. For PostgreSQL I have allocated
24GB RAM out of 32GB.

However, after changing the parameters below, the task bar shows 2.7GB
utilization even though actual utilization is higher.

So kindly advise whether this will impact the open-source PostgreSQL
database or not.

 

max_connections = 100

shared_buffers = 512MB

effective_cache_size = 24GB

work_mem = 110100kB

maintenance_work_mem = 2GB

checkpoint_segments = 64

checkpoint_completion_target = 0.9

wal_buffers = 16MB

default_statistics_target = 100

 

Thanks & Regards

Pradeep Kanth

 



Re: [GENERAL] pgadmin4 rc1 query tool performance

2016-09-06 Thread Attila Soki

> Am 06.09.2016 um 15:23 schrieb Adrian Klaver :
> 
> On 09/06/2016 02:35 AM, Attila Soki wrote:
>> Hi,
>> 
>> i testing the latest release of pgadmin4 (rc1) and noticed that the query 
>> tool is significantly slower than the query tool in pgadmin3.
>> i am not sure if this occurs only on my computer or only under os x 
>> (10.10.5) or is this a known behavior.
> 
> I don't know. You might get an answer quicker at the link below though:
> 
> https://www.postgresql.org/list/pgadmin-hackers/

Sorry for the noise; I accidentally posted this here instead of 
pgadmin-support. I already got an answer there.

Attila



Re: [GENERAL] Clustered index to preserve data locality in a multitenant application?

2016-09-06 Thread Nicolas Grilly
On Tue, Aug 30, 2016 at 1:10 PM, Nicolas Grilly 
 wrote:

> We are developing a multitenant application which is currently based on
> MySQL, but we're thinking of migrating to PostgreSQL.
>
> We rely on clustered indexes to preserve data locality for each tenant.
> Primary keys start with the tenant ID. This way, rows belonging to the same
> tenant are stored next to each other. Because all requests hit only one
> tenant, this is a great performance improvement.
>
> PostgreSQL doesn't have clustered indexes — I'm aware of the CLUSTER
> command but it's a one-time operation — and I'm wondering if this can be a
> problem or not.
>
> Let's say we have a table containing data for 10,000 tenants and 10,000
> rows per tenant, for a total of 100,000,000 rows. Let's say each 8 KB block
> contains ~10 rows. Let's way we want to compute the sum of an integer
> column for all rows belonging to a given tenant ID.
>
> In MySQL/InnoDB, rows are stored in the leaf nodes of a B-tree. To compute
> the sum, MySQL has to read at least 1,000 blocks (each block containing ~10
> rows). I deliberately neglect the cost of walking the B-tree intermediate
> nodes.
>
> By comparison, PostgreSQL has to read at least 10,000 blocks (each block
> containing ~10 rows, but most of the time, only one row will match the
> tenant ID, other rows belonging to other tenants).
>
> A few questions:
>
> - Am I missing something?
> - Am I overestimating the benefit of a clustered index in our case, and
> the cost of not having one in PostgreSQL?
> - Is there another technical solution to this problem?
>


After a few days thinking and researching about this, I've come to the
following conclusion:

1) Trying to preserve data locality in a multi-tenant application is a
legitimate requirement (because it can significantly improve performance).
2) But index organized tables are not the right answer.
3) Partitioning by tenant_id, sorting heap rows by tenant_id, and/or
creating covering indexes (for index-only scans) are better solutions.

I found at least two examples of well-known companies using PostgreSQL at
scale for a multi-tenant application, and taking specific steps to preserve
data locality:

- Heap shards its data by customer_id (all rows in a logical shard belong
to the same customer — except for small customers, but it's easy to make
their queries fast anyway) [1].
- Instagram uses pg_reorg (the ancestor of pg_repack) to keep likes created
by the same user contiguous on disk [2].

At first, I thought that index organized tables (aka clustered indexes)
were the solution, and that missing them in PostgreSQL could be an issue,
but I was wrong. They are not the right tool for the job.

Index organized tables are a powerful tool for an application that needs
very fast lookups and range scans on the primary key. But they also have
significant drawbacks [3]:

- Lookups on secondary keys are slower (because they need one more
indirection).
- The index is bigger (because rows are stored directly in the index,
instead of a heap).
- InnoDB can only use the primary key as clustering key, which is very
restrictive (for example, in PostgreSQL, most recently inserted/updated
rows are naturally clustered together, independently of the chosen primary
key).

So, PostgreSQL uses heap organized tables instead of index organized
tables, and this is a good thing, at least for the kind of queries my
application needs. But, at scale, I still need to find a way to preserve
data locality for each tenant.

The solutions I've identified are:

- Partition by tenant_id as suggested by Thomas Kellerer earlier in this
thread. Declarative partitioning will make this easier in a future version
of PostgreSQL. It's also possible to "shard" using Citus Data (like Heap or
CloudFlare).
- Periodically sort rows by tenant_id in the heap, using something like
pg_repack, as suggested by Kenneth Marshall and Ben Chobot.
- Create covering indexes, which let PostgreSQL do index-only scans
(exactly like an index organized table), as suggested by Eduardo Morras and
engineers at Heap [4].

It looks like I can move forward with our migration from MySQL to
PostgreSQL, without worrying about the lack of clustered indexes, because
there are better solutions to keep tenant data contiguous!
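
The second and third options above can be sketched as follows (table, index, and column names are illustrative):

```sql
-- One-time physical reordering of the heap by tenant; must be repeated
-- (e.g. via pg_repack) as new rows arrive:
CLUSTER events USING events_tenant_id_idx;

-- Covering index so per-tenant aggregates can be answered by an
-- index-only scan, without touching the heap at all:
CREATE INDEX events_tenant_amount_idx ON events (tenant_id, amount);
SELECT sum(amount) FROM events WHERE tenant_id = 42;  -- index-only scan candidate
```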

Thanks for all your comments.

Nicolas Grilly

[1] https://news.ycombinator.com/item?id=12412216
[2] http://instagram-engineering.tumblr.com/post/40781627982
/handling-growth-with-postgres-5-tips-from
[3] https://dzone.com/articles/unreasonable-defaults-primary
[4] "Powering Heap", https://www.youtube.com/watch?v=NVl9_6J1G60


Re: [GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-06 Thread Vick Khera
On Sun, Sep 4, 2016 at 4:37 PM, Patrick B  wrote:
> That's great news! My only concern is about the "RSYNC" - Hope that doesn't
> take long!!!
>
> This all steps must be performed by me on the next few days/weeks - I'll
> keep you guys updated... Keen to see the new DB running in a SSD environment
> :)

That will depend on your database's change velocity. If the initial copy
takes a long time, re-run the rsync just before the shutdown.
Try to pick the lowest-load time.

For sure rsync can take a long time. I've also done similar moves
using ZFS snapshots, which are crazy fast to sync... but that assumes
you're using ZFS already on the current data directory.




Re: [GENERAL] Check if there 6 last records of same type without gaps

2016-09-06 Thread Alexander Farber
Sandor, this has worked, thank you -

On Tue, Sep 6, 2016 at 3:35 PM, Sándor Daku  wrote:

>
> Of course you need the played field you relied on it in the order by
> clause. You can use the result of a select in a from clause of another
> select.
>
> SELECT SUM(skips) from (SELECT CASE WHEN action='skip' THEN 1 ELSE 0 END
> as skips, played
>  FROM words_moves
>  WHERE gid = 3
>  ORDER BY played DESC
>  LIMIT 6) as skipscount;
>
>
words=> SELECT SUM(skips) from (SELECT CASE WHEN action='skip' THEN 1 ELSE
0 END as skips, played
words(>  FROM words_moves
words(>  WHERE gid = 3
words(>  ORDER BY played DESC
words(>  LIMIT 6) as skipscount;
 sum
-----
   6
(1 row)


Re: [GENERAL] Check if there 6 last records of same type without gaps

2016-09-06 Thread Sándor Daku
On 6 September 2016 at 15:19, Alexander Farber 
wrote:

> Hello Charles and other, please excuse my stupidity, but -
>
> On Tue, Sep 6, 2016 at 2:52 PM, Charles Clavadetscher <
> clavadetsc...@swisspug.org> wrote:
>
>>
>> You must group by played, as the message suggests. You are implicitly
>> selecting the column through order by, although you don't have it in the
>> list of selected columns.
>>
>>
>
> Here I have 7 "skip" events for gid=3 ("game id") in the table:
>
> words=> select mid, action, gid, uid from words_moves order by played desc;
>  mid | action | gid | uid
> -++-+-
>   15 | skip   |   3 |   1
>   14 | skip   |   3 |   2
>   13 | skip   |   3 |   1
>   12 | skip   |   3 |   2
>   11 | skip   |   3 |   1
>   10 | skip   |   3 |   2
>9 | skip   |   3 |   1
>6 | play   |   3 |   2
>5 | play   |   4 |   1
>3 | swap   |   3 |   1
>2 | play   |   2 |   1
>1 | play   |   1 |   1
> (12 rows)
>
> And then I try the suggestion I got in this mailing list:
>
> words=> SELECT SUM(CASE WHEN action='skip' THEN 1 ELSE 0 END)
> words-> FROM words_moves
> words-> WHERE gid = 3
> words-> GROUP BY played
> words-> ORDER BY played DESC
> words-> LIMIT 6;
>  sum
> -----
>    1
>    1
>    1
>    1
>    1
>    1
> (6 rows)
>
> I guess I need ASC in the last statement, but main problem is how to get
> the total sum...
>
> Regards
> Alex
>
>
My bad. I didn't pay attention.

Of course you need the played field you relied on it in the order by
clause. You can use the result of a select in a from clause of another
select.

SELECT SUM(skips) from (SELECT CASE WHEN action='skip' THEN 1 ELSE 0 END as
skips, played
 FROM words_moves
 WHERE gid = 3
 ORDER BY played DESC
 LIMIT 6) as skipscount;

And now I really hope I didn't miss something important again!

Regards,
Sándor


Re: [GENERAL] Check if there 6 last records of same type without gaps

2016-09-06 Thread Alexander Farber
Also tried the second suggestion:

words=> select count(action='skip') from words_moves where gid=3 group by
played order by played desc limit 6;
 count
-------
     1
     1
     1
     1
     1
     1
(6 rows)


Re: [GENERAL] pgadmin4 rc1 query tool performance

2016-09-06 Thread Adrian Klaver

On 09/06/2016 02:35 AM, Attila Soki wrote:

Hi,

I am testing the latest release of pgAdmin4 (rc1) and noticed that the query tool 
is significantly slower than the query tool in pgAdmin3.
I am not sure whether this occurs only on my computer, only under OS X (10.10.5), 
or whether this is known behavior.


I don't know. You might get an answer quicker at the link below though:

https://www.postgresql.org/list/pgadmin-hackers/




how to repeat:
create table test1 (a int, t text);
insert into test1 (select generate_series(0,25000,1),'Lorem ipsum');

create table test2 (a int, t text);
insert into test2 (select generate_series(0,25000,1),'Lorem ipsum dolor sit 
amet, consectetur adipiscing elit. Praesent sit amet auctor lectus. Nulla 
condimentum maximus mauris, ac mattis leo blandit eget.');


select * from test1;
pgadmin3: Total query runtime: 518 msec
25001 rows retrieved.

pgadmin4 1.0 rc1:
Total query runtime: 2 secs.
25001 rows retrieved.



select * from test2;
pgadmin3:
Total query runtime: 2.8 secs
25001 rows retrieved

pgadmin4 1.0 rc1:
Total query runtime: 14 secs.
25001 rows retrieved.

client: os x 10.10.5
server: debian jessie in vmware, pg9.6-rc1

thanks,

Attila






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Check if there 6 last records of same type without gaps

2016-09-06 Thread Alexander Farber
Hello Charles and other, please excuse my stupidity, but -

On Tue, Sep 6, 2016 at 2:52 PM, Charles Clavadetscher <
clavadetsc...@swisspug.org> wrote:

>
> You must group by played, as the message suggests. You are implicitly
> selecting the column through order by, although you don't have it in the
> list of selected columns.
>
>

Here I have 7 "skip" events for gid=3 ("game id") in the table:

words=> select mid, action, gid, uid from words_moves order by played desc;
 mid | action | gid | uid
-++-+-
  15 | skip   |   3 |   1
  14 | skip   |   3 |   2
  13 | skip   |   3 |   1
  12 | skip   |   3 |   2
  11 | skip   |   3 |   1
  10 | skip   |   3 |   2
   9 | skip   |   3 |   1
   6 | play   |   3 |   2
   5 | play   |   4 |   1
   3 | swap   |   3 |   1
   2 | play   |   2 |   1
   1 | play   |   1 |   1
(12 rows)

And then I try the suggestion I got in this mailing list:

words=> SELECT SUM(CASE WHEN action='skip' THEN 1 ELSE 0 END)
words-> FROM words_moves
words-> WHERE gid = 3
words-> GROUP BY played
words-> ORDER BY played DESC
words-> LIMIT 6;
 sum
-----
   1
   1
   1
   1
   1
   1
(6 rows)

I guess I need ASC in the last statement, but main problem is how to get
the total sum...

Regards
Alex


Re: [GENERAL] Check if there 6 last records of same type without gaps

2016-09-06 Thread Charles Clavadetscher
Hello

> On 06.09.2016, at 14:35, Alexander Farber  wrote:
> 
> No, I am sorry - for struggling with probably basic questions, but without 
> GROUP BY I get another error:
> 
> org.postgresql.util.PSQLException: ERROR: column "words_moves.played" must 
> appear in the GROUP BY clause or be used in an aggregate function|  
> Where: PL/pgSQL function words_skip_game(integer,integer) line 27 at SQL 
> statement

You must group by played, as the message suggests. You are implicitly selecting 
the column through order by, although you don't have it in the list of selected 
columns.

Charles

> 
> 
>> On Tue, Sep 6, 2016 at 2:30 PM, Sándor Daku  wrote:
>>> On 6 September 2016 at 14:23, Alexander Farber  
>>> wrote:
>>> 
 On Tue, Sep 6, 2016 at 1:30 PM, Sándor Daku  wrote:
 
 Get the last 6 record and 
 
 1.  ... action='SKIP' as isskip ... then you can group on and count the 
 skip moves. If there is 6 of them the game ends.
 
 2.  ... sum(case when action='SKIP' then 1 else 0 end) ... If the result 
 is 6 the game ends
>>> 
>>> SELECT SUM(CASE WHEN action='skip' THEN 1 ELSE 0 END)
>>> FROM words_moves
>>> GROUP BY action
>>> ORDER BY played DESC
>>> LIMIT 6
>>> INTO _sum;
>>> 
>>> RAISE NOTICE '_sum = %', _sum;
>>> 
>>> IF _sum = 6 THEN
>>> _finished = CURRENT_TIMESTAMP;
>>> END IF;
>>> 
>>> but get the error -
>>> 
>>> org.postgresql.util.PSQLException: ERROR: 
>>> column "words_moves.played" must appear in the GROUP BY clause or be used 
>>> in an aggregate function|  
>>> Where: PL/pgSQL function words_skip_game(integer,integer) line 27 at SQL 
>>> statement
>  
>>> P.S: Here is the table in question
>>> 
>>>   Table "public.words_moves"
>>>  Column |   Type   | Modifiers
>>> +--+---
>>>  mid| integer  | not null default 
>>> nextval('words_moves_mid_seq'::regclass)
>>>  action | words_action | not null
>>>  gid| integer  | not null
>>>  uid| integer  | not null
>>>  played | timestamp with time zone | not null
>>>  tiles  | jsonb|
>>>  score  | integer  |
>>> Indexes:
>>> "words_moves_pkey" PRIMARY KEY, btree (mid)
>>> Check constraints:
>>> "words_moves_score_check" CHECK (score > 0)
>>> Foreign-key constraints:
>>> "words_moves_gid_fkey" FOREIGN KEY (gid) REFERENCES words_games(gid) ON 
>>> DELETE CASCADE
>>> "words_moves_uid_fkey" FOREIGN KEY (uid) REFERENCES words_users(uid) ON 
>>> DELETE CASCADE
>>> Referenced by:
>>> TABLE "words_games" CONSTRAINT "words_mid_fk" FOREIGN KEY (mid) 
>>> REFERENCES words_moves(mid)
>>> TABLE "words_scores" CONSTRAINT "words_scores_mid_fkey" FOREIGN KEY 
>>> (mid) REFERENCES words_moves(mid) ON DELETE CASCADE
>> 
>> Sry! I wasn't clear enough.
>> 
>> Those are two separate solutions. Pick one!
>> 
>> In this case you don't need the group by 
>> 
>> SELECT SUM(CASE WHEN action='skip' THEN 1 ELSE 0 END)
>> FROM words_moves
>> ORDER BY played DESC
>> LIMIT 6
>> INTO _sum


Re: [GENERAL] Check if there 6 last records of same type without gaps

2016-09-06 Thread Alexander Farber
No, and I am sorry for struggling with probably basic questions, but without
GROUP BY I get another error:

org.postgresql.util.PSQLException: ERROR: column "words_moves.played" must
appear in the GROUP BY clause or be used in an aggregate function|
Where: PL/pgSQL function words_skip_game(integer,integer) line 27 at SQL
statement


On Tue, Sep 6, 2016 at 2:30 PM, Sándor Daku  wrote:

> On 6 September 2016 at 14:23, Alexander Farber  > wrote:
>
>>
>> On Tue, Sep 6, 2016 at 1:30 PM, Sándor Daku 
>> wrote:
>>
>>>
>>> Get the last 6 record and
>>>
>>> 1.  ... action='SKIP' as isskip ... then you can group on and count the
>>> skip moves. If there is 6 of them the game ends.
>>>
>>> 2.  ... sum(case when action='SKIP' then 1 else 0 end) ... If the result
>>> is 6 the game ends
>>>
>>>
>> SELECT SUM(CASE WHEN action='skip' THEN 1 ELSE 0 END)
>> FROM words_moves
>> GROUP BY action
>> ORDER BY played DESC
>> LIMIT 6
>> INTO _sum;
>>
>> RAISE NOTICE '_sum = %', _sum;
>>
>> IF _sum = 6 THEN
>> _finished = CURRENT_TIMESTAMP;
>> END IF;
>>
>> but get the error -
>>
>> org.postgresql.util.PSQLException: ERROR:
>> column "words_moves.played" must appear in the GROUP BY clause or be used
>> in an aggregate function|
>> Where: PL/pgSQL function words_skip_game(integer,integer) line 27 at SQL
>> statement
>>
>

> P.S: Here is the table in question
>>
>>   Table "public.words_moves"
>>  Column |   Type   | Modifiers
>> +--+
>> ---
>>  mid| integer  | not null default
>> nextval('words_moves_mid_seq'::regclass)
>>  action | words_action | not null
>>  gid| integer  | not null
>>  uid| integer  | not null
>>  played | timestamp with time zone | not null
>>  tiles  | jsonb|
>>  score  | integer  |
>> Indexes:
>> "words_moves_pkey" PRIMARY KEY, btree (mid)
>> Check constraints:
>> "words_moves_score_check" CHECK (score > 0)
>> Foreign-key constraints:
>> "words_moves_gid_fkey" FOREIGN KEY (gid) REFERENCES words_games(gid)
>> ON DELETE CASCADE
>> "words_moves_uid_fkey" FOREIGN KEY (uid) REFERENCES words_users(uid)
>> ON DELETE CASCADE
>> Referenced by:
>> TABLE "words_games" CONSTRAINT "words_mid_fk" FOREIGN KEY (mid)
>> REFERENCES words_moves(mid)
>> TABLE "words_scores" CONSTRAINT "words_scores_mid_fkey" FOREIGN KEY
>> (mid) REFERENCES words_moves(mid) ON DELETE CASCADE
>>
>>
> Sry! I wasn't clear enough.
>
> Those are two separate solutions. Pick one!
>
> In this case you don't need the group by
>
> SELECT SUM(CASE WHEN action='skip' THEN 1 ELSE 0 END)
> FROM words_moves
> ORDER BY played DESC
> LIMIT 6
> INTO _sum
>
>


Re: [GENERAL] Check if there 6 last records of same type without gaps

2016-09-06 Thread Sándor Daku
On 6 September 2016 at 14:23, Alexander Farber 
wrote:

> Thank you, Sandor -
>
> On Tue, Sep 6, 2016 at 1:30 PM, Sándor Daku  wrote:
>
>>
>> Get the last 6 record and
>>
>> 1.  ... action='SKIP' as isskip ... then you can group on and count the
>> skip moves. If there is 6 of them the game ends.
>>
>> 2.  ... sum(case when action='SKIP' then 1 else 0 end) ... If the result
>> is 6 the game ends
>>
>>
> I am trying
>
> SELECT SUM(CASE WHEN action='skip' THEN 1 ELSE 0 END)
> FROM words_moves
> GROUP BY action
> ORDER BY played DESC
> LIMIT 6
> INTO _sum;
>
> RAISE NOTICE '_sum = %', _sum;
>
> IF _sum = 6 THEN
> _finished = CURRENT_TIMESTAMP;
> END IF;
>
> but get the error -
>
> org.postgresql.util.PSQLException: ERROR:
> column "words_moves.played" must appear in the GROUP BY clause or be used
> in an aggregate function|
> Where: PL/pgSQL function words_skip_game(integer,integer) line 27 at SQL
> statement
>
> Regards
> Alex
>
> P.S: Here is the table in question
>
>   Table "public.words_moves"
>  Column |   Type   | Modifiers
> +--+
> ---
>  mid| integer  | not null default
> nextval('words_moves_mid_seq'::regclass)
>  action | words_action | not null
>  gid| integer  | not null
>  uid| integer  | not null
>  played | timestamp with time zone | not null
>  tiles  | jsonb|
>  score  | integer  |
> Indexes:
> "words_moves_pkey" PRIMARY KEY, btree (mid)
> Check constraints:
> "words_moves_score_check" CHECK (score > 0)
> Foreign-key constraints:
> "words_moves_gid_fkey" FOREIGN KEY (gid) REFERENCES words_games(gid)
> ON DELETE CASCADE
> "words_moves_uid_fkey" FOREIGN KEY (uid) REFERENCES words_users(uid)
> ON DELETE CASCADE
> Referenced by:
> TABLE "words_games" CONSTRAINT "words_mid_fk" FOREIGN KEY (mid)
> REFERENCES words_moves(mid)
> TABLE "words_scores" CONSTRAINT "words_scores_mid_fkey" FOREIGN KEY
> (mid) REFERENCES words_moves(mid) ON DELETE CASCADE
>
>
Sry! I wasn't clear enough.

Those are two separate solutions. Pick one!

In this case you don't need the group by

SELECT SUM(CASE WHEN action='skip' THEN 1 ELSE 0 END)
FROM words_moves
ORDER BY played DESC
LIMIT 6
INTO _sum

Regards,
Sándor


Re: [GENERAL] Check if there 6 last records of same type without gaps

2016-09-06 Thread Alexander Farber
Thank you, Sandor -

On Tue, Sep 6, 2016 at 1:30 PM, Sándor Daku  wrote:

>
> Get the last 6 record and
>
> 1.  ... action='SKIP' as isskip ... then you can group on and count the
> skip moves. If there is 6 of them the game ends.
>
> 2.  ... sum(case when action='SKIP' then 1 else 0 end) ... If the result
> is 6 the game ends
>
>
I am trying

SELECT SUM(CASE WHEN action='skip' THEN 1 ELSE 0 END)
FROM words_moves
GROUP BY action
ORDER BY played DESC
LIMIT 6
INTO _sum;

RAISE NOTICE '_sum = %', _sum;

IF _sum = 6 THEN
_finished = CURRENT_TIMESTAMP;
END IF;

but get the error -

org.postgresql.util.PSQLException: ERROR:
column "words_moves.played" must appear in the GROUP BY clause or be used
in an aggregate function|
Where: PL/pgSQL function words_skip_game(integer,integer) line 27 at SQL
statement

Regards
Alex

P.S: Here is the table in question

  Table "public.words_moves"
 Column |   Type   | Modifiers
+--+---
 mid| integer  | not null default
nextval('words_moves_mid_seq'::regclass)
 action | words_action | not null
 gid| integer  | not null
 uid| integer  | not null
 played | timestamp with time zone | not null
 tiles  | jsonb|
 score  | integer  |
Indexes:
"words_moves_pkey" PRIMARY KEY, btree (mid)
Check constraints:
"words_moves_score_check" CHECK (score > 0)
Foreign-key constraints:
"words_moves_gid_fkey" FOREIGN KEY (gid) REFERENCES words_games(gid) ON
DELETE CASCADE
"words_moves_uid_fkey" FOREIGN KEY (uid) REFERENCES words_users(uid) ON
DELETE CASCADE
Referenced by:
TABLE "words_games" CONSTRAINT "words_mid_fk" FOREIGN KEY (mid)
REFERENCES words_moves(mid)
TABLE "words_scores" CONSTRAINT "words_scores_mid_fkey" FOREIGN KEY
(mid) REFERENCES words_moves(mid) ON DELETE CASCADE


Re: [GENERAL] Check if there 6 last records of same type without gaps

2016-09-06 Thread Sándor Daku
On 6 September 2016 at 12:32, Alexander Farber 
wrote:

> Good afternoon,
>
> for a 2-player game I store moves in the following 9.5.4 table:
>
> CREATE TYPE words_action AS ENUM ('play', 'skip', 'swap', 'resign');
>
> CREATE TABLE words_moves (
> mid SERIAL PRIMARY KEY,
> action words_action NOT NULL,
> gid integer NOT NULL REFERENCES words_games ON DELETE CASCADE,
> uid integer NOT NULL REFERENCES words_users ON DELETE CASCADE,
> played timestamptz NOT NULL,
> tiles jsonb,
> score integer CHECK (score > 0)
> );
>
> ALTER TABLE words_games ADD CONSTRAINT words_mid_fk FOREIGN KEY (mid)
> REFERENCES words_moves;
>
> And then I have a custom function for skipping a move (and inserting a
> 'skip' into the above table):
>
> CREATE OR REPLACE FUNCTION words_skip_game(
> IN in_uid integer,
> IN in_gid integer,
> OUT out_gid integer)
> RETURNS integer AS
> $func$
> DECLARE
> _finished timestamptz;
> _score1   integer;
> _score2   integer;
> _uid2 integer;
> BEGIN
> INSERT INTO words_moves (
> action,
> gid,
> uid,
> played
> ) VALUES (
> 'skip',
> in_gid,
> in_uid,
> CURRENT_TIMESTAMP
> );
>
> Could you please suggest a good way to check that the last 6 moves where
> 'skip', so that I can end the game when each player skipped her move 3
> times in a row?
>
> IF /* there are 6 'skip's - how to do it please? */ THEN
> _finished = CURRENT_TIMESTAMP;
> END IF;
>
> Below is the rest of my function, thank you for any ideas -
>
> Regards
> Alex
>
> UPDATE words_games SET
> finished = _finished,
> played1  = CURRENT_TIMESTAMP
> WHERE
> gid  = in_gid AND
> player1  = in_uid AND
> finished IS NULL AND
> -- and it is first player's turn
> (played1 IS NULL OR played1 < played2)
> RETURNING
> gid,
> score1,
> score2,
> player2
> INTO
> out_gid,
> _score1, -- my score
> _score2, -- her score
> _uid2;
>
> IF NOT FOUND THEN
> UPDATE words_games SET
> finished = _finished,
> played2  = CURRENT_TIMESTAMP
> WHERE
> gid  = in_gid AND
> player2  = in_uid AND
> finished IS NULL AND
> -- and it is second player's turn
> (played2 IS NULL OR played2 < played1);
> RETURNING
> gid,
> score2,  -- swapped
> score1,
> player1
> INTO
> out_gid,
> _score1,  -- my score
> _score2,  -- her score
> _uid2;
> END IF;
>
> IF NOT FOUND THEN
> RAISE EXCEPTION 'Game % not found for user %', in_gid,
> in_uid;
> END IF;
>
> -- game over, update win/loss/draw stat for both players
> IF _finished IS NOT NULL THEN
> IF _score1 > _score2 THEN
>
> UPDATE words_users SET
> win = win + 1
> WHERE uid = in_uid;
>
> UPDATE words_users SET
> loss = loss + 1
> WHERE uid = _uid2;
>
> ELSIF _score1 < _score2 THEN
>
> UPDATE words_users SET
> loss = loss + 1
> WHERE uid = in_uid;
>
> UPDATE words_users SET
> win = win + 1
> WHERE uid = _uid2;
> ELSE
> UPDATE words_users SET
> draw = draw + 1
> WHERE uid = in_uid OR uid = _uid2;
> END IF;
> END IF;
> END
> $func$ LANGUAGE plpgsql;
>

Get the last 6 records and

1.  ... action='skip' as isskip ... then you can group on it and count the
skip moves. If there are 6 of them, the game ends.

2.  ... sum(case when action='skip' then 1 else 0 end) ... If the result is
6, the game ends.
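
Suggestion 2 could be sketched inside words_skip_game(), right after the
INSERT. This is an untested sketch; note that the enum value is lowercase
'skip' in the words_moves definition, so 'SKIP' would not match:

```sql
-- Untested sketch: end the game when the game's last 6 moves are all skips.
-- If fewer than 6 moves exist yet, the count is below 6 and nothing happens.
IF (SELECT count(*)
    FROM (SELECT action
          FROM words_moves
          WHERE gid = in_gid
          ORDER BY played DESC, mid DESC   -- mid breaks timestamp ties
          LIMIT 6) AS last6
    WHERE last6.action = 'skip') = 6 THEN
    _finished := CURRENT_TIMESTAMP;
END IF;
```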

Regards,
Sándor


[GENERAL] Check if there 6 last records of same type without gaps

2016-09-06 Thread Alexander Farber
Good afternoon,

for a 2-player game I store moves in the following 9.5.4 table:

CREATE TYPE words_action AS ENUM ('play', 'skip', 'swap', 'resign');

CREATE TABLE words_moves (
mid SERIAL PRIMARY KEY,
action words_action NOT NULL,
gid integer NOT NULL REFERENCES words_games ON DELETE CASCADE,
uid integer NOT NULL REFERENCES words_users ON DELETE CASCADE,
played timestamptz NOT NULL,
tiles jsonb,
score integer CHECK (score > 0)
);

ALTER TABLE words_games ADD CONSTRAINT words_mid_fk FOREIGN KEY (mid)
REFERENCES words_moves;

And then I have a custom function for skipping a move (and inserting a
'skip' into the above table):

CREATE OR REPLACE FUNCTION words_skip_game(
IN in_uid integer,
IN in_gid integer,
OUT out_gid integer)
RETURNS integer AS
$func$
DECLARE
_finished timestamptz;
_score1   integer;
_score2   integer;
_uid2 integer;
BEGIN
INSERT INTO words_moves (
action,
gid,
uid,
played
) VALUES (
'skip',
in_gid,
in_uid,
CURRENT_TIMESTAMP
);

Could you please suggest a good way to check that the last 6 moves were
'skip', so that I can end the game when each player skipped her move 3
times in a row?

IF /* there are 6 'skip's - how to do it please? */ THEN
_finished = CURRENT_TIMESTAMP;
END IF;

Below is the rest of my function, thank you for any ideas -

Regards
Alex

UPDATE words_games SET
finished = _finished,
played1  = CURRENT_TIMESTAMP
WHERE
gid  = in_gid AND
player1  = in_uid AND
finished IS NULL AND
-- and it is first player's turn
(played1 IS NULL OR played1 < played2)
RETURNING
gid,
score1,
score2,
player2
INTO
out_gid,
_score1, -- my score
_score2, -- her score
_uid2;

IF NOT FOUND THEN
UPDATE words_games SET
finished = _finished,
played2  = CURRENT_TIMESTAMP
WHERE
gid  = in_gid AND
player2  = in_uid AND
finished IS NULL AND
-- and it is second player's turn
(played2 IS NULL OR played2 < played1)
RETURNING
gid,
score2,  -- swapped
score1,
player1
INTO
out_gid,
_score1,  -- my score
_score2,  -- her score
_uid2;
END IF;

IF NOT FOUND THEN
RAISE EXCEPTION 'Game % not found for user %', in_gid,
in_uid;
END IF;

-- game over, update win/loss/draw stat for both players
IF _finished IS NOT NULL THEN
IF _score1 > _score2 THEN

UPDATE words_users SET
win = win + 1
WHERE uid = in_uid;

UPDATE words_users SET
loss = loss + 1
WHERE uid = _uid2;

ELSIF _score1 < _score2 THEN

UPDATE words_users SET
loss = loss + 1
WHERE uid = in_uid;

UPDATE words_users SET
win = win + 1
WHERE uid = _uid2;
ELSE
UPDATE words_users SET
draw = draw + 1
WHERE uid = in_uid OR uid = _uid2;
END IF;
END IF;
END
$func$ LANGUAGE plpgsql;


[GENERAL] pgadmin4 rc1 query tool performance

2016-09-06 Thread Attila Soki
Hi,

I am testing the latest release of pgadmin4 (rc1) and noticed that the query tool 
is significantly slower than the query tool in pgadmin3.
I am not sure whether this occurs only on my computer or only under OS X (10.10.5),
or whether this is known behavior.


how to repeat:
create table test1 (a int, t text);
insert into test1 (select generate_series(0,25000,1),'Lorem ipsum');

create table test2 (a int, t text);
insert into test2 (select generate_series(0,25000,1),'Lorem ipsum dolor sit 
amet, consectetur adipiscing elit. Praesent sit amet auctor lectus. Nulla 
condimentum maximus mauris, ac mattis leo blandit eget.');


select * from test1;
pgadmin3: Total query runtime: 518 msec
25001 rows retrieved.

pgadmin4 1.0 rc1:
Total query runtime: 2 secs.
25001 rows retrieved.



select * from test2;
pgadmin3:
Total query runtime: 2.8 secs
25001 rows retrieved

pgadmin4 1.0 rc1:
Total query runtime: 14 secs.
25001 rows retrieved.

client: os x 10.10.5
server: debian jessie in vmware, pg9.6-rc1
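
One way to narrow this down might be to time the same query in psql, which
excludes any GUI grid-rendering overhead; this is an untested sketch, assuming
the same test tables as above:

```sql
-- psql session: \timing reports the client-side round-trip time
-- (server execution + transfer, no result-grid rendering), giving a
-- baseline to compare against the pgadmin3/pgadmin4 figures above.
\timing on
SELECT * FROM test2;
-- psql prints: Time: <n> ms
```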

thanks,

Attila



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres driver for mysql

2016-09-06 Thread Glyn Astill

> From: Mimiko 
>To: Postgresql-general  
>Sent: Monday, 5 September 2016, 19:38
>Subject: [GENERAL] postgres driver for mysql
> 
>
>Hello to all.
>
>I want to move applications to postgres. But there are applications 
>which can use only mysql, local mdb, or mssql. For now I run a mysql 
>server to which these applications are connected. A long time ago I 
>encountered a federated module for mysql that allowed storing its data in 
>postgres. That site no longer exists.
>
>Can anyone suggest a federated db plugin for mysql/mariadb that stores data 
>in pg? Changing the applications is impossible; they are proprietary and 
>work only with specific databases.
>


I have no real answer to your question; however, it looks like at some point a 
while back I bookmarked this page:

http://www.pinaraf.info/2013/03/my-post-engine-a-postgres-engine-for-mysql/

Is that what you were using? If so, the git repository is still accessible via 
gitweb:


https://www.gitorious.org/my-post-engine/my-post-engine

If you're desperate to use it you could pull a snapshot from there.




Re: [GENERAL] Restricted access on DataBases

2016-09-06 Thread Durumdara
Dear Everybody!

I'm sorry for the lack of an answer - I will try to answer now.

2016-09-05 16:19 GMT+02:00 Adrian Klaver :

> On 09/05/2016 05:45 AM, Durumdara wrote:
>
>> Dear PG-masters!
>>
>> We want to put more databases to one server, to "public" schema:
>> DB_A, DB_B, DB_C.
>>
>
> The PUBLIC schema is contained within a database not the other way around,
> so further explanation is necessary.


Thank you, this is important information. I will read about it.


>
>
> And users:
>> US_A, US_B, US_C, and Main_Admin.
>> We want to setup the environment.
>> Every simple user can access his database:
>> DB_A - US_A
>> DB_B - US_B
>> DB_C - US_C
>>
>> They can't access other databases only theirs.
>>
>> Main_Admin can access all databases.
>>
>
> Is Main_Admin created as a superuser?
>

It is not really a superuser, but we want to use it as a "super user of
these databases".

For example: there are customers a, b, c, d. Each of them has one
database, and all of the databases are on one server.
We create a different user for each customer so that none of them can see
any data except their own.

Main_Admin is a login role for us. We know each role's password, but we
want to use our own role to manage everything. It's not a superuser like
postgres, but it can do everything that A + B + C + D can.

If I want to write this in pseudo code, I would do as:

1.) Double owning:
set owner on DB_A to US_A, Main_Admin
set owner on DB_B to US_B, Main_Admin
...

2.) Grant
Grant all rights to US_A on DB_A to all objects;
Grant all rights to Main_Admin on DB_A to all objects;
Grant all rights to US_B on DB_B to all objects;
Grant all rights to Main_Admin on DB_B to all objects;


The owner has rights to every object in the database, including future
objects.
"Double owning", or "all rights in the database and its contained objects",
would be good for us, so that Main_Admin can access everything in every
database.

Like a superuser, but without the right to step outside these databases.
Just like sandboxes: US_A has sandbox DB_A, US_B has sandbox DB_B.
Main_Admin has all the sandboxes (A + B + C + D), but can't cause any
problems in other databases.
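
One way this is commonly set up in PostgreSQL (an untested sketch, not the
only approach, with placeholder passwords): instead of "double owning",
grant each per-customer role to Main_Admin, so it inherits their privileges
but gains nothing beyond their databases:

```sql
-- Sketch for one customer; repeat for db_b/us_b, db_c/us_c, etc.
CREATE ROLE us_a LOGIN PASSWORD '...';
CREATE ROLE main_admin LOGIN PASSWORD '...';
CREATE DATABASE db_a OWNER us_a;

-- keep every other user out of db_a
REVOKE CONNECT ON DATABASE db_a FROM PUBLIC;
GRANT CONNECT ON DATABASE db_a TO us_a;

-- main_admin becomes a member of us_a and inherits its rights,
-- including ownership of objects us_a creates in db_a
GRANT us_a TO main_admin;
```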


I will read your answers.

Very-very thank you.

dd


Re: [GENERAL] Materialized view auto refresh

2016-09-06 Thread hari.prasath
Dear Nguyen Tran Quoc Vinh,

  The source link is broken. Please check this one: 
http://it.ued.udn.vn/myprojects/pgTriggerGen/PgSQLMVTriggersGenerator_src.rar



cheers

- Harry






 On Mon, 05 Sep 2016 11:50:49 +0530 Nguyễn Trần Quốc Vinh 
wrote  




Dear Harry.



You can try this while you are waiting for the better solution from Prof. Kevin 
Grittner. We did not improve it further because we do not know if we are on the 
right track...



https://www.postgresql.org/message-id/caau9oxsb5gy8lz12kqaa3r1iv19c7famnefixdac1fhrfyt...@mail.gmail.com



You can download the source and the binary from here: 
http://it.ued.udn.vn/myprojects/pgTriggerGen/.



Best regards,




TS. Nguyễn Trần Quốc Vinh
---
Chủ nhiệm khoa Tin học
Trường ĐH Sư phạm - ĐH Đà Nẵng
Website: http://it.ued.udn.vn; http://www.ued.udn.vn
LLKH: http://scv.ued.udn.vn/~ntquocvinh
ĐT: 0511.6-512-586
DĐ: 0914.78-08-98

Nguyen Tran Quoc Vinh, PhD
Dean
Faculty of Information Technology
Danang University of Education
Website: http://it.ued.udn.vn; http://www.ued.udn.vn
SCV: http://scv.ued.udn.vn/~ntquocvinh
Phone: (+84) 511.6-512-586
Mobile: (+84) 914.78-08-98


On Tue, Aug 9, 2016 at 8:49 PM, Kevin Grittner  wrote:






On Tue, Aug 9, 2016 at 4:50 AM, hari.prasath  
wrote:
 
 >   Is there any tentative schedule for real-time or incremental(only
 > applying delta changes) refresh of materialized views.?.
 
 There is work in progress, but no hard schedule.  Unfortunately, it
 has often been set aside to address more immediate issues for

 particular end users; but I expect to get back to it Real Soon Now.
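
 Until that work lands, the built-in refresh remains non-incremental; a
 minimal sketch, where mv_stats is a hypothetical materialized view
 (CONCURRENTLY requires PostgreSQL 9.4+ and a unique index on the view):

```sql
-- Full (non-incremental) refresh that does not block concurrent SELECTs.
-- CONCURRENTLY needs at least one UNIQUE index covering all rows of the
-- materialized view, and the view must already be populated.
CREATE UNIQUE INDEX mv_stats_uidx ON mv_stats (id);
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_stats;
```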

 

 --

 Kevin Grittner

 EDB: http://www.enterprisedb.com

 The Enterprise PostgreSQL Company

 
 