Re: [GENERAL] Nice to have features: Percentage function

2017-04-16 Thread Melvin Davidson
On Sun, Apr 16, 2017 at 12:23 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 04/15/2017 10:47 PM, Ron Ben wrote:
>
>> Hi,
>> I'm always finding myself writing many variations of functions to
>> calculate percentages.
>> I think it would be nice if PostgreSQL had built-in functions for
>> that.
>> I think the major functionality is something like the 3 options here:
>> https://percentagecalculator.net/
>>
>> It may help to keep code simple and clean, and it seems like something
>> simple to implement.
>>
>
> Plan B, CREATE your own extension. What follows is my first attempt at
> creating an extension, and the functions included are simple placeholders
> more than anything else:
>
> File name: calc_percents--1.0.sql
>
> -- complain if script is sourced in psql, rather than via CREATE EXTENSION
> \echo Use "CREATE EXTENSION calc_percents" to load this file. \quit
>
> CREATE OR REPLACE FUNCTION public.percent_of(val1 numeric, val2 numeric)
>  RETURNS numeric
>  LANGUAGE sql
> AS $function$
>   select (val1 / 100) *  val2;
> $function$
> ;
>
> CREATE OR REPLACE FUNCTION public.what_percent(val1 numeric, val2 numeric)
>  RETURNS numeric
>  LANGUAGE sql
> AS $function$
>   SELECT (val1 / val2) * 100;
> $function$
> ;
>
> CREATE OR REPLACE FUNCTION public.percent_diff(val1 numeric, val2 numeric)
>  RETURNS numeric
>  LANGUAGE sql
> AS $function$
>   select (val2 - val1) / val1 * 100;
> $function$
> ;
>
> File name: calc_percents.control
>
> # calc_percents extension
> comment = 'Functions for calculating percentages'
> default_version = '1.0'
> relocatable = true
>
>
> Install the above in $SHARE/extension, in my case
> /usr/local/pgsql/share/extension/
>
> Then:
>
> test=# create extension calc_percents;
> CREATE EXTENSION
> test=# \df percent_of
>                                   List of functions
>  Schema |    Name    | Result data type |    Argument data types     |  Type
> --------+------------+------------------+----------------------------+--------
>  public | percent_of | numeric          | val1 numeric, val2 numeric | normal
>
> test=# select * from round(percent_of(10, 100), 2) ;
>  round
> -------
>  10.00
> (1 row)
>
> test=# \df percent_diff
>                                    List of functions
>  Schema |     Name     | Result data type |    Argument data types     |  Type
> --------+--------------+------------------+----------------------------+--------
>  public | percent_diff | numeric          | val1 numeric, val2 numeric | normal
>
>
> test=# select * from round(percent_diff(100, 109), 2) ;
>  round
> -------
>   9.00
> (1 row)
>
>
> test=# \df what_percent
>                                    List of functions
>  Schema |     Name     | Result data type |    Argument data types     |  Type
> --------+--------------+------------------+----------------------------+--------
>  public | what_percent | numeric          | val1 numeric, val2 numeric | normal
> (1 row)
>
> test=# select * from round(what_percent(10, 109), 2) ;
>  round
> -------
>   9.17
> (1 row)
>
>
>
>> If you think it's a good idea, it would be nice if someone could
>> implement this.
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

Or, you could just as easily compute inline in SQL:

SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size_pretty,
       pg_database_size(datname) AS size,
       (SELECT pg_size_pretty(SUM(pg_database_size(datname))::bigint)
          FROM pg_database) AS total,
       ((pg_database_size(datname) / (SELECT SUM(pg_database_size(datname))
                                        FROM pg_database)) * 100)::numeric(6,3) AS pct
  FROM pg_database
 ORDER BY datname;
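
One caveat for both the extension functions and the inline version: a zero
denominator raises "division by zero". A minimal hardened sketch of
what_percent (assuming the same numeric signature as above):

CREATE OR REPLACE FUNCTION public.what_percent(val1 numeric, val2 numeric)
 RETURNS numeric
 LANGUAGE sql
 IMMUTABLE
AS $function$
  -- Return NULL instead of erroring when the base value is zero.
  SELECT CASE WHEN val2 = 0 THEN NULL
              ELSE (val1 / val2) * 100
         END;
$function$
;

The same CASE guard applies to percent_diff, whose divisor is val1.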

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Service outage: each postgres process use the exact amount of the configured work_mem

2017-04-14 Thread Melvin Davidson
On Fri, Apr 14, 2017 at 2:42 PM, Chris Mair <ch...@1006.org> wrote:

>> you should increase shared_buffers to 40GB. General philosophy is to
>> allocate 80% of system memory to shared_buffers
>>
>
> Uhm...
>
> 80% is too much, likely:
>
> https://www.postgresql.org/docs/9.5/static/runtime-config-resource.html
>
> Bye,
> Chris.
>
>
>
Sorry, I had a brain fart; shared_buffers should be 25% of system memory, or
13GB.


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Service outage: each postgres process use the exact amount of the configured work_mem

2017-04-14 Thread Melvin Davidson
On Fri, Apr 14, 2017 at 2:28 PM, Moreno Andreo <moreno.and...@evolu-s.it>
wrote:

> Melvin,
> Sorry for top-posting.
> I'm going ahead in troubleshooting. As Jeff said, there's probably nothing
> wrong with my values (at the end of the message you can find minimal info
> you requested).
> I tried running some queries against psql server and response times are
> good, so I'm moving my attentions to Windows server, which hosts a WCF
> service, that is the one that actually server customers.
>
> Thanks for now
> Moreno
>
>
> Il 14/04/2017 20:01, Melvin Davidson ha scritto:
>
>
> On Fri, Apr 14, 2017 at 1:50 PM, Moreno Andreo <moreno.and...@evolu-s.it>
> wrote:
>
>> Sorry,
>>  my mistake (I'm a bit nervous...)
>>
>> that's not work_mem, but shared_buffers
>>
>> Thanks
>>
>>
>> Il 14/04/2017 19:33, Melvin Davidson ha scritto:
>>
>>
>>
>> On Fri, Apr 14, 2017 at 1:12 PM, Moreno Andreo <moreno.and...@evolu-s.it>
>> wrote:
>>
>>> Hi all,
>>> About two and a half hours ago, suddenly (on the late afternoon of
>>> Easter Friday), customers reported failing, or even very slow,
>>> connections to our server.
>>> After a bit of checking (which also involved a server reboot) I noticed
>>> (using top) that every postgres process is using exactly the
>>> amount I configured as work_mem (3 GB). And AFAIK that's not good.
>>>
>>> 30085 postgres 20 0 *3370048* 156656 153876 S 6.7 0.3 0:00.20 postgres
>>> 29833 postgres 20 0 *337* 65260 62416 S 1.7 0.1 0:00.17 postgres
>>> 29632 postgres 20 0 *3372468* 11712 6028 S 0.7 0.0 0:00.60 postgres
>>>
>>> What could have happened?
>>> Nothing has been touched:
>>> postgresql 9.5.6 on debian 8, just apt-get upgrade'd
>>>
>>> Any help would be appreciated.
>>> Moreno.
>>>
>>
>>
>>
>> *>using exactly the amount I configured as work_mem (3 GB). *
>>
>>
>> *You are right, that is bad, but that is your own fault. 3GB of work_mem
>> is very bad. Try lowering it to something more reasonable, like 20MB. *
>>
>> *https://www.postgresql.org/docs/9.5/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY
>> <https://www.postgresql.org/docs/9.5/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY>
>> "several running sessions could be doing such operations concurrently.
>> Therefore, the total memory used could be many times the value of work_mem;
>> it is necessary to keep this fact in mind when choosing the value."*
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>>
>>
> Moreno,
> we are working with minimal information here.
> Perhaps if you provided us with the following information it would be more
> useful:
>
> A. Total *SYSTEM MEMORY*
>
> 52 GB
>
> B. shared_memory
>
> 3 GB (was 13 GB)
>
> C. work_memory
>
> default
>
> D. max_connections
>
> 1000
>
> E. How many users were connected when the problem occurred?
>
> About 350 connections
>
> Thanks
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>
>
>
*Moreno,*

*I understand you will troubleshoot on your own, but based on total system
memory, you should increase shared_buffers to 40GB. General philosophy is
to allocate 80% of system memory to shared_buffers. Of course you will have
to do a PostgreSQL restart for that to take effect.*

*Also, with 1000 max_connections, you would be much better off with a
connection pooler like PgBouncer: https://pgbouncer.github.io/downloads/
<https://pgbouncer.github.io/downloads/>*


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Service outage: each postgres process use the exact amount of the configured work_mem

2017-04-14 Thread Melvin Davidson
On Fri, Apr 14, 2017 at 1:50 PM, Moreno Andreo <moreno.and...@evolu-s.it>
wrote:

> Sorry,
>  my mistake (I'm a bit nervous...)
>
> that's not work_mem, but shared_buffers
>
> Thanks
>
>
> Il 14/04/2017 19:33, Melvin Davidson ha scritto:
>
>
>
> On Fri, Apr 14, 2017 at 1:12 PM, Moreno Andreo <moreno.and...@evolu-s.it>
> wrote:
>
>> Hi all,
>> About two and a half hours ago, suddenly (on the late afternoon of
>> Easter Friday), customers reported failing, or even very slow,
>> connections to our server.
>> After a bit of checking (which also involved a server reboot) I noticed
>> (using top) that every postgres process is using exactly the
>> amount I configured as work_mem (3 GB). And AFAIK that's not good.
>>
>> 30085 postgres 20 0 *3370048* 156656 153876 S 6.7 0.3 0:00.20 postgres
>> 29833 postgres 20 0 *337* 65260 62416 S 1.7 0.1 0:00.17 postgres
>> 29632 postgres 20 0 *3372468* 11712 6028 S 0.7 0.0 0:00.60 postgres
>>
>> What could have happened?
>> Nothing has been touched:
>> postgresql 9.5.6 on debian 8, just apt-get upgrade'd
>>
>> Any help would be appreciated.
>> Moreno.
>>
>
>
>
> *>using exactly the amount I configured as work_mem (3 GB). *
>
>
> *You are right, that is bad, but that is your own fault. 3GB of work_mem
> is very bad. Try lowering it to something more reasonable, like 20MB. *
>
> *https://www.postgresql.org/docs/9.5/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY
> <https://www.postgresql.org/docs/9.5/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY>
> "several running sessions could be doing such operations concurrently.
> Therefore, the total memory used could be many times the value of work_mem;
> it is necessary to keep this fact in mind when choosing the value."*
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>
>
>
Moreno,
we are working with minimal information here.
Perhaps if you provided us with the following information it would be more
useful:

A. Total *SYSTEM MEMORY*
B. shared_memory
C. work_memory
D. max_connections
E. How many users were connected when the problem occurred?

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Service outage: each postgres process use the exact amount of the configured work_mem

2017-04-14 Thread Melvin Davidson
On Fri, Apr 14, 2017 at 1:12 PM, Moreno Andreo <moreno.and...@evolu-s.it>
wrote:

> Hi all,
> About two and a half hours ago, suddenly (on the late afternoon of
> Easter Friday), customers reported failing, or even very slow,
> connections to our server.
> After a bit of checking (which also involved a server reboot) I noticed
> (using top) that every postgres process is using exactly the
> amount I configured as work_mem (3 GB). And AFAIK that's not good.
>
> 30085 postgres 20 0 *3370048* 156656 153876 S 6.7 0.3 0:00.20 postgres
> 29833 postgres 20 0 *337* 65260 62416 S 1.7 0.1 0:00.17 postgres
> 29632 postgres 20 0 *3372468* 11712 6028 S 0.7 0.0 0:00.60 postgres
>
> What could have happened?
> Nothing has been touched:
> postgresql 9.5.6 on debian 8, just apt-get upgrade'd
>
> Any help would be appreciated.
> Moreno.
>



*>using exactly the amount I configured as work_mem (3 GB).*


*You are right, that is bad, but that is your own fault. 3GB of work_mem is
very bad. Try lowering it to something more reasonable, like 20MB.*

*https://www.postgresql.org/docs/9.5/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY
<https://www.postgresql.org/docs/9.5/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY>
"several running sessions could be doing such operations concurrently.
Therefore, the total memory used could be many times the value of work_mem;
it is necessary to keep this fact in mind when choosing the value."*
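
A quick way to see which setting actually explains the identical ~3.3GB
VIRT figures in the quoted top output (a sketch, runnable on any 9.x):

SHOW shared_buffers;   -- the shared segment is mapped into every backend,
                       -- so each process's VIRT includes it once
SHOW work_mem;         -- per-sort/hash budget; total use can be many times
                       -- this under concurrency

As Moreno clarifies in the follow-ups above, the 3 GB here was in fact
shared_buffers, which is why every backend appeared to "use" the same
amount.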

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Best way to alter a foreign constraint

2017-03-20 Thread Melvin Davidson
On Mon, Mar 20, 2017 at 11:07 AM, Sylvain MARECHAL <
marechal.sylva...@gmail.com> wrote:

> Le 19/03/2017 à 09:54, Sylvain Marechal a écrit :
>
> 2017-03-18 20:40 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:
>
>> On 03/18/2017 12:05 PM, Sylvain Marechal wrote:
>>
>>> Hello all,
>>>
>>> Some of my tables were badly designed and have 2 indexes, like the
>>> following example (lots of tables have same problem):
>>>
>>> <<<
>>> postgres=# \d test1
>>>  Table "public.test1"
>>>  Column |  Type   | Modifiers
>>> +-+---
>>>  t1 | integer | not null
>>> Indexes:
>>> "test1_pkey" PRIMARY KEY, btree (t1)
>>> "test1_t1_key" UNIQUE CONSTRAINT, btree (t1)
>>> Referenced by:
>>> TABLE "test2" CONSTRAINT "test2_t1_fkey" FOREIGN KEY (t1) REFERENCES
>>> test1(t1)
>>>
>>> postgres=# \d test2
>>>  Table "public.test2"
>>>  Column |  Type   | Modifiers
>>> +-+---
>>>  t2 | integer | not null
>>>  t1 | integer |
>>> Indexes:
>>> "test2_pkey" PRIMARY KEY, btree (t2)
>>> Foreign-key constraints:
>>> "test2_t1_fkey" FOREIGN KEY (t1) REFERENCES test1(t1)
>>>
>>>>
>>>>>>
>>> It is not possible to remove the "test1_t1_key" constraint because the
>>> "test2_t1_fkey"  internally references it:
>>> <<<
>>> postgres=# ALTER TABLE test1 DROP CONSTRAINT test1_t1_key;
>>> ERROR:  cannot drop constraint test1_t1_key on table test1 because other
>>> objects depend on it
>>> DETAIL:  constraint test2_t1_fkey on table test2 depends on index
>>> test1_t1_key
>>> HINT:  Use DROP ... CASCADE to drop the dependent objects too.
>>>
>>>>
>>>>>>
>> Why not CASCADE?:
>
> In fact, CASCADE is not enough, because I don't know how the test2_t1_fkey
> is built: does it use the test1_pkey primary key or the test1_t1_key
> unique key?
> I am sure this information can be found in the system catalogs, but I find
> it safer to explicitly delete and then recreate the foreign constraint.
>
> Sylvain
>

This query might help you determine which key is referenced.
The key information is confkey, which is reported as the key position in the
referenced table.
Refer to the system catalog descriptions in the documentation for more info.

SELECT cn.conname,
       CASE WHEN cn.contype = 'c' THEN 'check'
            WHEN cn.contype = 'f' THEN 'foreign key'
            WHEN cn.contype = 'p' THEN 'primary key'
            WHEN cn.contype = 'u' THEN 'unique'
            WHEN cn.contype = 't' THEN 'trigger'
            WHEN cn.contype = 'x' THEN 'exclusion'
       END AS type,
       cn.condeferrable,
       CASE WHEN cn.conrelid > 0
            THEN (SELECT nspname || '.' || relname
                    FROM pg_class c
                    JOIN pg_namespace n ON n.oid = c.relnamespace
                   WHERE c.oid = cn.conrelid)
            ELSE ''
       END AS table,
       confkey,
       consrc
  FROM pg_constraint cn
 ORDER BY 1;
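
For the specific question (which unique index does test2_t1_fkey depend
on?), pg_depend records the link directly. A sketch using the names from
the example above:

SELECT c.conname,
       i.relname AS referenced_index
  FROM pg_constraint c
  JOIN pg_depend d ON d.classid = 'pg_constraint'::regclass
                  AND d.objid = c.oid
                  AND d.refclassid = 'pg_class'::regclass
  JOIN pg_class i ON i.oid = d.refobjid
 WHERE c.conname = 'test2_t1_fkey';

This is the same dependency that the DROP CONSTRAINT error message reported.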


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Constraint + where

2017-03-19 Thread Melvin Davidson
On Sun, Mar 19, 2017 at 8:53 PM, Patrick B <patrickbake...@gmail.com> wrote:

>
>
> 2017-03-20 13:27 GMT+13:00 Melvin Davidson <melvin6...@gmail.com>:
>
>>
>>
>> On Sun, Mar 19, 2017 at 8:16 PM, Patrick B <patrickbake...@gmail.com>
>> wrote:
>>
>>> Hi guys,
>>>
>>> I've got a column 'type_note' on a new table that it's being designed:
>>>
>>> type_note varchar(32) NOT NULL;
>>>
>>> On that column, there will be three different data:
>>>
>>> 1. yes
>>> 2. no
>>> 3. maybe
>>>
>>> I wanna create a FK but just when the data on that column is = maybe.
>>>
>>> How can I do that? Thanks!
>>>
>>>
>> Why just "maybe"? Since there can only be three valid answers, why not FK
>> for all three?
>>
>> --
>>
>
>
>
> I was able to get what I needed this way:
>
> create table testing_fk_conditional_1 (
>> id serial NOT NULL PRIMARY KEY,
>> account_id bigint,
>> user_id bigint,
>> type_note integer NOT NULL,
>> CHECK (type_note = 100 AND user_id IS NOT NULL OR type_note = 200 AND
>> account_id IS NOT NULL)
>> );
>
>
That is not what you asked for originally, but as long as you are happy.
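
For the original "FK only when type_note = 'maybe'" requirement, the usual
pattern is a nullable referencing column plus a CHECK tying it to the type.
A sketch (the maybe_details table and its id column are illustrative):

CREATE TABLE notes (
    id        serial PRIMARY KEY,
    type_note varchar(32) NOT NULL
              CHECK (type_note IN ('yes', 'no', 'maybe')),
    maybe_id  integer REFERENCES maybe_details (id),
    CHECK (type_note <> 'maybe' OR maybe_id IS NOT NULL)
);

Rows with a NULL maybe_id skip the foreign key check entirely, so the
reference is enforced only for 'maybe' rows.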

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Constraint + where

2017-03-19 Thread Melvin Davidson
On Sun, Mar 19, 2017 at 8:27 PM, Melvin Davidson <melvin6...@gmail.com>
wrote:

>
>
> On Sun, Mar 19, 2017 at 8:16 PM, Patrick B <patrickbake...@gmail.com>
> wrote:
>
>> Hi guys,
>>
>> I've got a column 'type_note' on a new table that it's being designed:
>>
>> type_note varchar(32) NOT NULL;
>>
>> On that column, there will be three different data:
>>
>> 1. yes
>> 2. no
>> 3. maybe
>>
>> I wanna create a FK but just when the data on that column is = maybe.
>>
>> How can I do that? Thanks!
>>
>>
> Why just "maybe"? Since there can only be three valid answers, why not FK
> for all three?
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>

Also "type_note varchar(32) NOT NULL;" is pointless.
You should make it either type_note(1)  with values of Y,N or M.


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Constraint + where

2017-03-19 Thread Melvin Davidson
On Sun, Mar 19, 2017 at 8:16 PM, Patrick B <patrickbake...@gmail.com> wrote:

> Hi guys,
>
> I've got a column 'type_note' on a new table that it's being designed:
>
> type_note varchar(32) NOT NULL;
>
> On that column, there will be three different data:
>
> 1. yes
> 2. no
> 3. maybe
>
> I wanna create a FK but just when the data on that column is = maybe.
>
> How can I do that? Thanks!
>
>
Why just "maybe"? Since there can only be three valid answers, why not FK
for all three?

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Maximum of connections in PG

2017-03-14 Thread Melvin Davidson
On Tue, Mar 14, 2017 at 12:08 PM, Durumdara <durumd...@gmail.com> wrote:

> Dear Melvin!
>
> What is the benefit of PgBouncer with persistent, non-interruptible
> connections? I am asking so that I can learn.
>
> These are not web connections (request, get connection, result, drop
> connection); they are pure, native applications which keep their
> connection from start to termination.
>
> Thank you!
>
> dd
>
>
>
>
> 2017-03-14 15:29 GMT+01:00 Melvin Davidson <melvin6...@gmail.com>:
>
>>
>>
>> On Tue, Mar 14, 2017 at 10:15 AM, Durumdara <durumd...@gmail.com> wrote:
>>
>>> Dear Members!
>>>
>>> On a very strong Linux machine (with 16-30 GB RAM), what is the
>>> limit of the PGSQL server (9.4-9.5) "max_connections"?
>>>
>>> 1000?
>>> 2000?
>>>
>>> The clients are native applications (Windows executables) with
>>> persistent connections, across more than 100 databases (every client
>>> has only one database connection).
>>>
>>> Now we must determine the upper limit, to know when we must buy a
>>> new machine for our customers (who will have to migrate in the future).
>>>
>>> I know my question is too general without precise numbers, but what I
>>> need is your experience in this area.
>>> What areas are problematic when we increase the "max_connections" number?
>>>
>>> Thanks for any info!
>>>
>>> Best wishes
>>>dd
>>>
>>
>>
>> *>On a very strong Linux machine (with 16-30 GB RAM), what is the
>> limit of the PGSQL server (9.4-9.5) "max_connections"?*
>>
>>
>> *It all depends on the amount of shared memory, which in turn is
>> dependent on the O/S memory.*
>>
>>
>> *But if you are going to have thousands of users, you are better off
>> using a connection pooler. My preference is for
>> PgBouncer.https://pgbouncer.github.io/ <https://pgbouncer.github.io/>*
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>
*>What is the benefit of PgBouncer with persistent, non-interruptible
connections? *
*Adrian has provided you with the most pertinent link. I strongly suggest
you rtfm.*
*If that does not answer your question, then start here
https://wiki.postgresql.org/wiki/PgBouncer
<https://wiki.postgresql.org/wiki/PgBouncer>*
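
For reference, a minimal pgbouncer.ini sketch for this kind of workload
(all values are illustrative, not tuned):

[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction    ; lets many mostly-idle persistent clients share
                           ; a few real backends; only safe if the app avoids
                           ; session state (prepared statements, temp tables)
max_client_conn = 2000     ; client connections PgBouncer will accept
default_pool_size = 50     ; real server connections per database/user pair

That last point is the answer to the persistent-connection question: the
clients keep their connection to PgBouncer, while actual backends are only
tied up while a transaction is in flight.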


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Maximum of connections in PG

2017-03-14 Thread Melvin Davidson
On Tue, Mar 14, 2017 at 10:15 AM, Durumdara <durumd...@gmail.com> wrote:

> Dear Members!
>
> On a very strong Linux machine (with 16-30 GB RAM), what is the limit
> of the PGSQL server (9.4-9.5) "max_connections"?
>
> 1000?
> 2000?
>
> The clients are native applications (Windows executables) with persistent
> connections, across more than 100 databases (every client has only one
> database connection).
>
> Now we must determine the upper limit, to know when we must buy a
> new machine for our customers (who will have to migrate in the future).
>
> I know my question is too general without precise numbers, but what I
> need is your experience in this area.
> What areas are problematic when we increase the "max_connections" number?
>
> Thanks for any info!
>
> Best wishes
>dd
>


*>On a very strong Linux machine (with 16-30 GB RAM), what is the limit
of the PGSQL server (9.4-9.5) "max_connections"?*


*It all depends on the amount of shared memory, which in turn is dependent
on the O/S memory.*


*But if you are going to have thousands of users, you are better off using
a connection pooler. My preference is for
PgBouncer.https://pgbouncer.github.io/ <https://pgbouncer.github.io/>*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Unable to start postgresql

2017-03-10 Thread Melvin Davidson
On Thu, Mar 9, 2017 at 10:01 PM, John Iliffe <john.ili...@iliffe.ca> wrote:

> On Thursday 09 March 2017 16:37:28 you wrote:
> > On 03/08/2017 10:12 AM, John Iliffe wrote:
> > > I think you may have hit it but I'm even more confused now.
> > >
> > > I looked at the running Postgesql on the current server and there is a
> > > 5th sub-directory called /data.  That is on the same level as the
> > > /bin, /share, etc.  In this new installation it is not present and
> > > neither is the postgresql.conf file, nor are the hba files that
> > > restrict logins.
> > >
> > > So, my question:  is this a change between version 9.2.1 and 9.6.2 and
> > > if so where is the postgresql.conf file (I can't find it on a scan
> > > but it could be renamed I suppose)?
> > >
> > > Or, is there something wrong with the installation?  I went by the
> > > successful conclusion message from make install and assumed everything
> > > would be as expected in the directories.
> > >
> > > Any ideas as to what may have happened?
> >
> > Just realized this was an old post that just came through. I also
> > realized that my explanation of where the *.conf files are was specific
> > to a source default install. Package installs may locate them elsewhere.
> > In that case as a Postgres superuser(postgres for example) do:
> >
> > test=# select * from pg_settings where sourcefile is not null;
> >   .
> >
> > name| DateStyle
> > setting | ISO, MDY
> > unit| NULL
> > category| Client Connection Defaults / Locale and Formatting
> > short_desc  | Sets the display format for date and time values.
> > extra_desc  | Also controls interpretation of ambiguous date inputs.
> > context | user
> > vartype | string
> > source  | configuration file
> > min_val | NULL
> > max_val | NULL
> > enumvals| NULL
> > boot_val| ISO, MDY
> > reset_val   | ISO, MDY
> > sourcefile  | /usr/local/pgsql/data/postgresql.conf
> > sourceline  | 538
> > pending_restart | f
> >   .
> >
> > And look for the sourcefile location.
> >
> > > John
>
>
> Thanks Adrian.
>
> I had figured that out eventually, and found the missing files where you
> said
> they were.
>
> Appreciate the info on how to get the configuration values.  A lot seems to
> have changed since I started using Postgresql and I guess I should really
> read up on how things work now and not in 2008!
>
> Regards,
>
> John
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



*John,*




*If you are really serious about reading up, I suggest you pick from the
books at this URL:
https://www.packtpub.com/all-books?search==84==
<https://www.packtpub.com/all-books?search==84==>*

*Probably "PostgreSQL Administration Essentials" would be good to start
with.*
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Unable to start postgresql

2017-03-08 Thread Melvin Davidson
On Wed, Mar 8, 2017 at 10:00 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> John Iliffe <john.ili...@iliffe.ca> writes:
> > When the programme exited it left another postmaster.pid file so I
> deleted
> > that one too.
>
> You haven't shown us the program actually exiting, and basically the only
> way to get the postmaster to exit without removing its pid file is to
> kill -9 it.  Now I am suspicious that you in fact haven't killed any
> postmasters, but only removed their pidfiles out from under them, which is
> an incredibly dangerous thing to do.  Check "ps ax" output to see if any
> postgres processes are lurking in background.
>
> regards, tom lane
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

Hey, looking closer, I see this in your original log

LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

That means Postgres WAS started; it is just that the postgres port could not
be opened.
So if you do a pg_ctl stop, change the port in postgresql.conf to 5433 (or
5434) and then attempt to restart, is your problem resolved?
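
A quick way to test that without editing postgresql.conf (using the data
directory from your message; 5433 is arbitrary):

pg_ctl -D /usr/pgsql_tablespaces stop
postgres -D /usr/pgsql_tablespaces -p 5433

If that starts cleanly, the problem is specific to binding the 5432
address.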

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Unable to start postgresql

2017-03-07 Thread Melvin Davidson
On Tue, Mar 7, 2017 at 11:17 PM, John Iliffe <john.ili...@iliffe.ca> wrote:

> I was unable to get postgres started so I did a very basic compile/install
> to test it.  The configuration line was:
>
> ./configure --prefix=/usr/postgres-9.6.2
>
> 
> Then I ran make which completed as expected:
>
> make[1]: Leaving directory '/tmp/postgresql-9.6.2/config'
> All of PostgreSQL successfully made. Ready to install.
>
> --
> Then make check
>
> make check
>
> ===
>  All 167 tests passed.
> ===
>
> make[1]: Leaving directory '/tmp/postgresql-9.6.2/src/test/regress'
> 
>
> Then make install as user root because of directory permissions.  I have
> the expected files in /usr/postgres_9.6.2
>
> Now, running as user postgres I try and start as stated in the manual
>
> postgres -D /usr/pgsql_tablespaces
>
> initdb has already been run and the directory pgsql_tablespaces has a
> number of files as expected.  As yet no database has been defined because
> psql won't start.
>
> The result is:
> [postgres@prod04 postgresql-9.6.2]$ postgres -D /usr/pgsql_tablespaces
> LOG:  could not bind IPv4 socket: Cannot assign requested address
> HINT:  Is another postmaster already running on port 5432? If not, wait a
> few seconds and retry.
> LOG:  database system was shut down at 2017-03-07 22:22:57 EST
> LOG:  MultiXact member wraparound protections are now enabled
> LOG:  database system is ready to accept connections
> LOG:  autovacuum launcher started
>
> Same results if I use pg_ctl to start the process.
>
> I checked with lsof and there is no process bound to socket 5432.  There is
> no entry in /var/run for a socket related to postgresql.
>
> I thought it might be a security issue so I put SELinux in permissive mode
> but the result is the same.  The SELinux journal does not show any warnings
> on this process.
>
> [root@prod04 postgresql-9.6.2]# sestatus
> SELinux status: enabled
> SELinuxfs mount:/sys/fs/selinux
> SELinux root directory: /etc/selinux
> Loaded policy name: targeted
> Current mode:   permissive  <--permissive mode**
> Mode from config file:  permissive
> Policy MLS status:  enabled
> Policy deny_unknown status: allowed
> Max kernel policy version:  30
> -
>
> One thing that I haven't been able to find any the log files.  Where are
> they
> normally stored?
>
> So, any ideas as to where to go next to debug this would be appreciated!
> This is a brand new server that I am trying to configure so I have a fair
> amount of security clearance to chase things.
>
> Thanks in advance.
>
> John
> =
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

This is very suspicious.

>LOG:  could not bind IPv4 socket: Cannot assign requested address
>HINT:  Is another postmaster already running on port 5432? If not, wait a
>few seconds and retry.

So check to see if the file "postmaster.pid" exists.
If it does, and postgres is NOT running, just delete or rename it and try to
start.
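
A sketch of those checks (the data directory is the one from John's
message):

cat /usr/pgsql_tablespaces/postmaster.pid   # first line is the postmaster PID
ps ax | grep postgres                       # per Tom's advice: any survivors?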
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] SELECT x'00000000F'::int leading zeros causes "integer out of range"

2017-02-24 Thread Melvin Davidson
On Fri, Feb 24, 2017 at 2:13 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Justin Pryzby <pry...@telsasoft.com> writes:
> > Is this expected behavior ?
> >   ts=# SELECT x'00000000F'::int;
> >   ERROR:  22003: integer out of range
> >   LOCATION:  bittoint4, varbit.c:1575
>
> Yes.  The provided operation is "convert a bitstring of up to 32 bits
> to an integer".  It's not "guess whether it's okay to throw away some
> bits to make an integer".
>
> > As an example, even if you think it's obvious that it would be okay
> > to convert that bitstring to "15", would it be all right to convert
> > x'FFFFFFFFF' to "-1" ?
>
> regards, tom lane
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


In addition to what Tom said:

SELECT x'00000000F'::bigint;

works just fine.

IOW, int = 4 bytes, while bigint = 8 bytes.
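
A few illustrative cases (it is the bit-string width, not the numeric
value, that the cast checks):

SELECT x'0000000F'::int;      -- exactly 32 bits -> 15
SELECT x'FFFFFFFF'::int;      -- exactly 32 bits, sign bit set -> -1
SELECT x'00000000F'::bigint;  -- 36 bits fit in a 64-bit bigint -> 15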
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] ShmemAlloc maximum size

2017-02-23 Thread Melvin Davidson
On Thu, Feb 23, 2017 at 9:47 AM, hari.prasath <hari.pras...@zohocorp.com>
wrote:

> I am trying to get some 15MB of shared memory using *ShmemAlloc*.
>
> I cross-checked all my kernel variables set at the OS level and also in
> postgresql.conf.
>
> I can't get the required memory (15MB), but if I try with some small
> amount of memory (1MB), it works fine.
>
> Am I missing something?
>
>
>
> cheers
> *- *Harry
>
>
>Am I missing something?
Probably. Perhaps if you told us the exact version of PostgreSQL & O/S, it
might be more helpful.
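
That said, the usual reason a sizable ShmemAlloc() request fails is that
the space was never reserved: the shared memory segment is sized once at
startup, and add-ins must ask for their share from a library listed in
shared_preload_libraries. A sketch of the standard pattern (9.x-era hooks;
"my_extension_area" is an illustrative name; error handling omitted):

#include "postgres.h"
#include "fmgr.h"
#include "storage/ipc.h"
#include "storage/shmem.h"

PG_MODULE_MAGIC;

void _PG_init(void);

static void *my_area = NULL;
static shmem_startup_hook_type prev_shmem_startup_hook = NULL;

static void
my_shmem_startup(void)
{
    bool found;

    if (prev_shmem_startup_hook)
        prev_shmem_startup_hook();

    /* Allocate (or attach to) a named 15MB area inside the reserved space. */
    my_area = ShmemInitStruct("my_extension_area", 15 * 1024 * 1024, &found);
}

void
_PG_init(void)
{
    /* Only effective when loaded via shared_preload_libraries, while the
     * segment is still being sized. */
    RequestAddinShmemSpace(15 * 1024 * 1024);

    prev_shmem_startup_hook = shmem_startup_hook;
    shmem_startup_hook = my_shmem_startup;
}

A bare ShmemAlloc() from a regular backend can only scavenge the small
amount of slack left in the segment, which would explain why 1MB happens
to work while 15MB does not.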

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Move rows from one database to other

2017-02-21 Thread Melvin Davidson
On Tue, Feb 21, 2017 at 11:10 AM, Thomas Güttler <
guettl...@thomas-guettler.de> wrote:

>
>> Depending on how much data you want to move, and if the tables have the
>> same structure, you might also want to consider
>> using
>> pg_dump -a
>>
>> OR
>> multiple instances of
>>
>> on satellite
>> COPY { table_name [ ( column_name [, ...] ) ] | ( query ) }
>> TO { 'filename' | PROGRAM 'command' | STDOUT }
>> [ [ WITH ] ( option [, ...] ) ]
>>
>> on central
>> COPY table_name [ ( column_name [, ...] ) ]
>> FROM { 'filename' | PROGRAM 'command' | STDIN }
>> [ [ WITH ] ( option [, ...] ) ]
>>
>> A BIG consideration is:
>> Does the Central DB have the same table structures as all satellite DB's?
>>
>
> yes, same schema
>
> Does the Central DB already have records in the tables.
>>
>
> yes, movement of rows should happen every ten minutes.
>
> Do all Satellite tables have unique records for each other?
>>
>
> Yes, UUID primary key.
>
> As Adrian stated, it would be very helpful if you provided us with all O/S
>> and PostgreSQL vesions involved.
>>
>
> Versions are 9.5 and 9.6
>
> I have other concerns: atomic transactions. The movement should happen
> completely or not at all.
>
> I don't think you can do this reliably (as an atomic transaction) with
> "copy table_name".
>
> Regards,
>   Thomas
>
>
>
> --
> Thomas Guettler http://www.thomas-guettler.de/
>

>I have other concerns: atomic transactions. The movement should happen
completely or not at all.
>I don't think you can do this reliably (as an atomic transaction) with
"copy table_name".

You can if you wrap it in a transaction:
E.g.:
BEGIN;
COPY { table_name [ ( column_name [, ...] ) ] | ( query ) }
TO { 'filename' | PROGRAM 'command' | STDOUT }
[ [ WITH ] ( option [, ...] ) ]
COMMIT;

BEGIN;
COPY table_name [ ( column_name [, ...] ) ]
FROM { 'filename' | PROGRAM 'command' | STDIN }
[ [ WITH ] ( option [, ...] ) ]
COMMIT;
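
For a truly all-or-nothing move driven from the satellite side, another
option is a writable CTE pushing into a postgres_fdw foreign table. A
sketch (central_log_ft is an assumed foreign table mapped to the central
table; table and column names are illustrative):

BEGIN;
WITH moved AS (
    DELETE FROM local_log
    RETURNING host, log_date, payload
)
INSERT INTO central_log_ft (host, log_date, payload)
SELECT host, log_date, payload FROM moved;
COMMIT;

The local DELETE and the remote INSERT then stand or fall as one local
transaction (postgres_fdw ties its remote transaction to the local commit),
which addresses the atomicity concern short of full two-phase commit.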

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Move rows from one database to other

2017-02-21 Thread Melvin Davidson
On Tue, Feb 21, 2017 at 9:27 AM, William Ivanski <william.ivan...@gmail.com>
wrote:

> You can try OmniDB: http://www.omnidb.com.br/en_index.aspx
>
> OmniDB has a Convert feature, where you can set a data transfer, even if
> the target table exists.
>
> Em ter, 21 de fev de 2017 às 11:18, Adrian Klaver <
> adrian.kla...@aklaver.com> escreveu:
>
>> On 02/21/2017 12:53 AM, Thomas Güttler wrote:
>> > I want to move table rows from one database to an central database.
>>
>> You actually talking about moving from ~100 databases to the central
>> database, correct?
>>
>> >
>> > Both run PostgreSQL.
>>
>> Are all the Postgres instances the same version and what is the version
>> or versions?
>>
>> >
>> > My use case looks like this:
>> >
>> > There are N satellite databases in different data centers. N is about
>> > 100 at the moment.
>> >
>> > There is one central database.
>> >
>> > I need a way to reliably move rows from the satellite databases to the
>> > central one
>>
>> Two ways I can think of:
>>
>> https://www.postgresql.org/docs/9.6/static/dblink.html
>>
>> https://www.postgresql.org/docs/9.6/static/postgres-fdw.html
>>
>> >
>> > Example
>> >
>> > The rows of host1 look like this:
>> >
>> >  host1, 2017-02-21, abc
>> >  host1, 2017-02-20, def
>> >  host1, 2017-02-19, ghi
>> >
>> > The rows of host2 look like this:
>> >
>> >  host2, 2017-02-21, foo
>> >  host2, 2017-02-20, bar
>> >  host2, 2017-02-19, blu
>> >
>> > After syncing, all lines which were transferred should be deleted on the
>> > satellite databases.
>> >
>> > The central table should look like this (it has the same schema)
>> >
>> >  host1, 2017-02-21, abc
>> >  host1, 2017-02-20, def
>> >  host1, 2017-02-19, ghi
>> >  host2, 2017-02-21, foo
>> >  host2, 2017-02-20, bar
>> >  host2, 2017-02-19, blu
>>
>> Is there a Primary Key on the satellite tables or some way of
>> determining unique rows?
>>
>> Is there any existing overlap between the data in the central database
>> and the satellite databases?
>>
>>
>> >
>> >
>> > I don't want to code this myself, since there a tons of possible race
>> > conditions:
>>
>> How much data are you talking about moving from each database?
>>
>> How active are the satellite databases?
>>
>> >
>> >  - inserts can happen during syncing.
>>
>> Can UPDATEs happen?
>>
>> >  - Network can break during syncing.
>> >  - inserts into the central table can break (e.g. disk full): No loss at
>> > the satellite database must happen.
>> >  - ...
>> >
>> > How to solve this with PostgreSQL?
>> >
>> > Regards,
>> >   Thomas Güttler
>> >
>> >
>> >
>> >
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
> --
>
> William Ivanski - Microsoft MVP
>
Depending on how much data you want to move, and if the tables have the
same structure, you might also want to consider using
pg_dump -a

OR
multiple instances of

on satellite
COPY { table_name [ ( column_name [, ...] ) ] | ( query ) }
TO { 'filename' | PROGRAM 'command' | STDOUT }
[ [ WITH ] ( option [, ...] ) ]

on central
COPY table_name [ ( column_name [, ...] ) ]
FROM { 'filename' | PROGRAM 'command' | STDIN }
[ [ WITH ] ( option [, ...] ) ]

A BIG consideration is:
Does the Central DB have the same table structures as all satellite DB's?
Does the Central DB already have records in the tables.
Do all Satellite tables have unique records for each other?

As Adrian stated, it would be very helpful if you provided us with all
O/S and PostgreSQL vesions involved.


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] No space left on device

2017-02-19 Thread Melvin Davidson
2% /run
>> tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
>> /dev/vdb1 100G 100G 28K 100% /opt1
>> tmpfs 799M 0 799M 0% /run/user/1002
>>
>>
>>
>>
>>
>>
>> 2017-02-19 03:46:53.513 UTC [24101]: [36911-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.522 UTC [2240]: [157092-1]
>> host=192.168.60.103,user=agent2,db=pem ERROR: could not extend file
>> "base/16393/112735.5": wrote only 4096 of 8192 bytes at block 722454
>> 2017-02-19 03:46:53.522 UTC [2240]: [157093-1]
>> host=192.168.60.103,user=agent2,db=pem HINT: Check free disk space.
>> 2017-02-19 03:46:53.522 UTC [2240]: [157094-1]
>> host=192.168.60.103,user=agent2,db=pem CONTEXT: SQL statement "INSERT
>> INTO pemhistory.table_frozenxid (recorded_time, server_id, database_name,
>> schema_name, table_name, frozenxid) VALUES (NEW.recorded_time,
>> NEW.server_id, NEW.database_name, NEW.schema_name, NEW.table_name,
>> NEW.frozenxid)"
>> PL/pgSQL function pemdata.copy_table_frozenxid_to_history() line 4 at
>> SQL statement
>> 2017-02-19 03:46:53.522 UTC [2240]: [157095-1]
>> host=192.168.60.103,user=agent2,db=pem STATEMENT: UPDATE
>> pemdata.table_frozenxid SET "frozenxid" = '76483784', recorded_time = now()
>> WHERE "schema_name" = 't_0e2c170dc92d4d05bcf533b6e719cae4' AND
>> "table_name" = 'd20150809_x_c73db8eb0037481fa1f10371179fbf3e' AND
>> "server_id" = '4' AND "database_name" = 'cloud'
>> 2017-02-19 03:46:53.523 UTC [24101]: [36912-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.523 UTC [24101]: [36913-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.524 UTC [24101]: [36914-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.524 UTC [24101]: [36915-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.524 UTC [24101]: [36916-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.525 UTC [24101]: [36917-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.607 UTC [24101]: [36918-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.607 UTC [24101]: [36919-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.713 UTC [24101]: [36920-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.714 UTC [24101]: [36921-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.817 UTC [24101]: [36922-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.818 UTC [24101]: [36923-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.818 UTC [24101]: [36924-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.818 UTC [24101]: [36925-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.818 UTC [24101]: [36926-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.819 UTC [24101]: [36927-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.913 UTC [24101]: [36928-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.913 UTC [24101]: [36929-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.982 UTC [24101]: [36930-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.982 UTC [24101]: [36931-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.985 UTC [24101]: [36932-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:53.985 UTC [24101]: [36933-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device
>> 2017-02-19 03:46:54.045 UTC [24101]: [36934-1] host=,user=,db= LOG: could
>> not open temporary statistics file "pg_stat_tmp/global.tmp": No space left
>> on device2017-02-19 04:39:33.813 UTC [24094]: [8-1] host=,user=,db= FATAL:
>> could not access status of transaction 0
>> 2017-02-19 04:39:33.813 UTC [24094]: [9-1] host=,user=,db= DETAIL: Could
>> not open file "pg_notify/": No space left on device.
>>
>>
>>
>>
>>
>> Regards,
>> SSR
>>
>>
>>
>>
>>
>>
>>
>>
You did not specify what O/S you are using or your Postgres version.
However, as advised by Alvaro,
I strongly suggest you take the following additional steps.

1. Add an additional large hard drive and make a tablespace on it.
2. Move a few of your larger tables to the new tablespace.
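
The commands for those steps, as a sketch (the path and tablespace name are
illustrative; the table name is taken from your log):

CREATE TABLESPACE bigdisk LOCATION '/mnt/newdisk/pg_tblspc';
ALTER TABLE pemhistory.table_frozenxid SET TABLESPACE bigdisk;

Note that ALTER TABLE ... SET TABLESPACE rewrites the table on the new disk
and holds an exclusive lock on it while it runs.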

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgresql how to duplicate rows in result.

2017-02-14 Thread Melvin Davidson
On Tue, Feb 14, 2017 at 1:04 PM, Alessandro Baggi <
alessandro.ba...@gmail.com> wrote:

> Hi list,
> sorry for my english, I will try to example as well. I've a query that
> joins multiple tables and return a result like:
>
>
> id,customers,phone,code,number
> 1 , ,3,123 , 2
> 2 , aassdsds,33322,211 , 1
> 3 , ,21221,221 , 1
>
>
> I need, where the "number" field is > 1, to duplicate the row N times
> (N being the number field value), with a result like this:
>
> id,customers,phone,code,number
> 1 , ,3,123 , 2
> 1 , ,3,123 , 2
> 2 , aassdsds,33322,211 , 1
> 3 , ,21221,221 , 1
>
> How can I solve this problem?
>
> I'm using postgresql 9.3.15.
>
> thanks in advance.
>
> Alessandro.
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

It would be immensely helpful if you provided the schema of the tables
involved, along with the original query.
In the meantime, I suggest you look into the use of UNION ALL.
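
Alternatively, a generate_series() join avoids writing one UNION ALL branch
per copy. A sketch, with a stand-in CTE in place of your original joined
query (valid on 9.3, where a function call in FROM may reference earlier
FROM items):

WITH t AS (
    -- stand-in for your original joined query
    SELECT 1 AS id, 'aaa' AS customers, '3' AS phone, '123' AS code, 2 AS number
    UNION ALL
    SELECT 2, 'aassdsds', '33322', '211', 1
)
SELECT t.*
  FROM t, generate_series(1, t.number)
 ORDER BY t.id;

Each row is emitted "number" times, because the series contributes that
many join partners for it.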

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2017-02-14 Thread Melvin Davidson
On Tue, Feb 14, 2017 at 7:46 AM, Michael Librodo
<mike.librodo(at)gmail(dot)com> <mike.libr...@gmail.com> wrote:

> I had to modify the portion that copies FK constraint:
>
> https://gist.github.com/mlibrodo/6f246c483e650dc716eba752a9d3c79a
>
> Basically, the issue on my end was that the FK constraints on the
> cloned(destination) schema seem to reference the source_schema
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

Michael,
Perhaps you can combine that code with the original by Emanuel '3manuek',
which I modified to be more inclusive of additional objects. See attached.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
-- Function: clone_schema(text, text, boolean)

-- DROP FUNCTION clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2015-09-20 by Melvin Davidson
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid          oid;
  tbl_oid          oid;
  func_oid         oid;
  con_oid          oid;
  v_path           text;
  v_func           text;
  v_args           text;
  v_conname        text;
  v_rule           text;
  v_trig           text;
  object           text;
  buffer           text;
  srctbl           text;
  default_         text;
  v_column         text;
  qry              text;
  dest_qry         text;
  v_def            text;
  v_stat           integer;
  seqval           bigint;
  sq_last_value    bigint;
  sq_max_value     bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value     bigint;
  sq_cache_value   bigint;
  sq_log_cnt       bigint;
  sq_is_called     boolean;
  sq_is_cycled     boolean;
  sq_cycled        char(10);

BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
  IF NOT FOUND
THEN 
RAISE NOTICE 'source schema % does not exist!', source_schema;
RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
  IF FOUND
THEN 
RAISE NOTICE 'dest schema % already exists!', dest_schema;
RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;

  -- Add schema comment
  SELECT description INTO v_def
FROM pg_description
   WHERE objoid = src_oid
   AND objsubid = 0;
  IF FOUND 
THEN 
  EXECUTE 'COMMENT ON SCHEMA ' || quote_ident(dest_schema) || ' IS ' || 
quote_literal(v_def);
  END IF;
  
  -- Create sequences
  -- TODO: Find a way to make this sequence's owner the correct table.
  FOR object IN
SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)
  LOOP
EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object);
srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

EXECUTE 'SELECT last_value, max_value, start_value, increment_by, 
min_value, cache_value, log_cnt, is_cycled, is_called 
  FROM ' || quote_ident(source_schema) || '.' || 
quote_ident(object) || ';' 
  INTO sq_last_value, sq_max_value, sq_start_value, 
sq_increment_by, sq_min_value, sq_cache_value, sq_log_cnt, sq_is_cycled, 
sq_is_called ; 

IF sq_is_cycled 
  THEN 
sq_cycled := 'CYCLE';
ELSE
sq_cycled := 'NO CYCLE';
END IF;

EXECUTE 'ALTER SEQUENCE '   || quote_ident(dest_schema) || '.' || 
quote_ident(object) 
|| ' INCREMENT BY ' || sq_increment_by
|| ' MINVALUE ' || sq_min_value 
|| ' MAXVALUE ' || sq_max_value
|| ' START WITH '   || sq_start_value
|| ' RESTART '  || sq_min_value 
|| ' CACHE '|| sq_cache_value 
|| sq_cycled || ' ;' ;

buffer := quote_ident(dest_schema) || '.' || quote_ident(object);
IF include_recs 
THEN
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_last_value || 
', ' || sq_is_called || ');' ; 
ELSE
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_start_value 
|| ', ' || sq_is_called || ');' ;
END IF;

  -- add sequence comments
SELECT oid INTO tbl_oid
  FROM pg_class 
 WHERE relkind = 'S'
   AND relnamespace = src_oid
   AND relname = quote_ident(object);

SELECT description INTO v_def
  FROM pg_description
 WHE

Re: [GENERAL] Auto-Rollback option

2017-02-13 Thread Melvin Davidson
On Mon, Feb 13, 2017 at 1:19 PM, Moreno Andreo <moreno.and...@evolu-s.it>
wrote:

> Il 13/02/2017 18:59, John R Pierce ha scritto:
>
>> option?   query editor window? what software are you talking about?
>>
>> I'm using 1.22.1 version.
>>>
>>
>> 1.22.1 version? PostgreSQL versions currently supported are 9.2.x to
>> 9.6.x
>>
>>
>> I think he's talking about pgAdmin III
>
> Cheers
>
> Moreno
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

In PgAdmin III, AUTO ROLLBACK and AUTO COMMIT are on by default in version
1.22.1
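
If the patches can be run through psql instead of the GUI, the same safety
needs no option clicks at all. A sketch (patch.sql and mydb are
illustrative):

psql --single-transaction -v ON_ERROR_STOP=1 -f patch.sql mydb

ON_ERROR_STOP aborts the script at the first error, and --single-transaction
wraps the whole file in BEGIN/COMMIT, so an abort rolls everything back.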

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Auto-Rollback option

2017-02-13 Thread Melvin Davidson
On Mon, Feb 13, 2017 at 1:10 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 02/13/2017 09:59 AM, John R Pierce wrote:
>
>> On 2/13/2017 7:15 AM, mpomykacz wrote:
>>
>>> So my problem is like this:
>>>
>>> I start the transaction with BEGIN TRANSACTION;
>>> Then I have for example some INSERTs to DB
>>> and at the end COMMIT; and END TRANSACTION;
>>>
>>
>> COMMIT ends the transaction.   In PostgreSQL, END TRANSACTION is
>> redundant, equivalent to COMMIT, do one or the other, not both.
>>
>>
>>> But if one of these INSERTs causes an error, the transaction will stop
>>> (but it is still open, and the next patch is applied within the same
>>> transaction).
>>>
>>
>> Patch ?
>>
>
> Pretty sure the OP is applying a series of SQL driven patches to
> add/change/remove database schema.
>
>
>>
>>> When I turn Auto-Rollback on, everything is OK: in a situation like
>>> this my transaction is automatically rolled back and closed.
>>>
>>
>> there is no autorollback option in standard PostgreSQL ?
>>
>>
>>> But I do not want to set Auto-Rollback manually in the Option menu
>>> or the query editor window (because I cannot be sure that the person
>>> who will run the patch would remember to click it).
>>>
>>
>> option?   query editor window? what software are you talking about?
>>
>
> I would say pgAdmin3:
>
> https://www.pgadmin.org/download/source.php
>
>
>> I'm using 1.22.1 version.
>>>
>>
>> 1.22.1 version? PostgreSQL versions currently supported are 9.2.x to
>>
>
> See above.
>
> 9.6.x
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

Version 1.22.1 refers to pgAdmin III; it is not the PostgreSQL version.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Custom shuffle function stopped working in 9.6

2017-02-11 Thread Melvin Davidson
I don't use an array, but perhaps you can adapt this function, which works
in 9.6.1:

CREATE OR REPLACE FUNCTION public.scramble(text)
  RETURNS text AS
$BODY$
DECLARE
    p_in    ALIAS FOR $1;
    v_out   TEXT DEFAULT '';
    v_mod   TEXT;
    v_ctr   INT4;
    v_pos   INT4;
    v_array CHAR[];

BEGIN

    -- Mark every character position as not yet used.
    v_ctr := 1;
    WHILE v_ctr <= LENGTH(p_in) LOOP
        v_array[v_ctr] := NULL;
        v_ctr := v_ctr + 1;
    END LOOP;

    -- Draw random unused positions until every character is consumed.
    v_ctr := 1;
    WHILE v_ctr <= LENGTH(p_in) LOOP

        -- floor() keeps the position in 1..LENGTH(p_in) for any input
        -- length; the original INT4(random() * 100) could return 0 and
        -- never reached positions beyond 100.
        v_pos := floor(random() * LENGTH(p_in))::INT4 + 1;
        IF v_array[v_pos] IS NOT NULL THEN
            CONTINUE;
        END IF;

        v_mod := substring(p_in from v_pos for 1);

        v_array[v_pos] := '*';
        v_out := v_out || v_mod;
        v_ctr := v_ctr + 1;

    END LOOP;

    RETURN v_out;
END;
-- SAMPLE CALL
-- SELECT scramble('Honor');
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION public.scramble(text)
  OWNER TO postgres;


On Sat, Feb 11, 2017 at 12:17 PM, Alexander Farber <
alexander.far...@gmail.com> wrote:

> I think ORDER BY RANDOM() has stopped working in 9.6.2:
>
> words=> select unnest(ARRAY['a','b','c','d','e','f']) order by random();
>  unnest
> 
>  a
>  b
>  c
>  d
>  e
>  f
> (6 rows)
>
>
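
A workaround sketch for the quoted case: move the set-returning function
into FROM, so the ORDER BY unambiguously applies to the expanded rows:

words=> select u from unnest(ARRAY['a','b','c','d','e','f']) AS u order by random();

This shuffles reliably, because each produced row gets its own random()
sort key.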


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Making changes to PostgreSQL's configure logic so as to have contrib modules installed in a specific directory and make them use PGXS?

2017-01-31 Thread Melvin Davidson
Probably your best bet is to install normally, then move the contribs to a
directory of your choice and make a symbolic link from the old directory.
That way it should work with both PostgreSQL and PGXS.

On Tue, Jan 31, 2017 at 7:35 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 01/31/2017 03:04 PM, postgres user wrote:
>
>> Hi,
>>
>> I want to configure my PostgreSQL installation in such a manner such
>> that the contrib modules of the PostgreSQL distribution are stored in a
>> specific directory and they should use the PGXS extensions management
>> system supported by Postgres, as they are currently packaged along with
>> Postgres and follow a specific directory structure they are installed
>> along with Postgres but I don't want that. I want all those contrib
>> module extensions to use a separate pg_config and hence want their
>> makefiles to use PGXS. How do I go about doing that.
>>
>
> Not sure, though some more information would help:
>
> 1) Am I right in assuming you are building everything from source?
>
> 2) If not what is your install procedure?
>
> 3) When you say all the contrib modules do really mean all or all in some
> list?
>
> 4) Where is the directory you want them to be installed in located?
>
> 5) What OS are you using?
>
> 6) Have you looked at an OS packaging systems to do this?
>
>
>> Thanks
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] psql only works with -h (even localhost)

2017-01-25 Thread Melvin Davidson
On Wed, Jan 25, 2017 at 11:07 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 01/25/2017 08:02 AM, ProPAAS DBA wrote:
>
>> Hi all;
>>
>>
>> we have a client server where 'psql -h localhost' is the only way psql
>> will connect when ssh'd onto the db server. Would like to be able to
>> just run psql but not sure what the issue/fix is. Anyone have any
>> insight hot to fix this?
>>
>
>
> What is the error you get if you use something other then -h localhost?
>
> What is in pg_hba.conf?
>
>
>>
>> Thanks in advance
>>
>>
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

What O/S does -h localhost point to?
What is the version of PostgreSQL?
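
Also worth checking: without -h, psql connects over the Unix socket, so a
mismatch between the server's socket directory and the one libpq expects is
a common cause. A quick check from a working "-h localhost" session
(the setting is unix_socket_directory, singular, on pre-9.3):

SHOW unix_socket_directories;  -- where the server actually puts its socket

If that differs from libpq's compiled-in default, "psql -h /that/directory"
will connect without TCP.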




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Queries are taking way longer in 9.6 than 9.5

2017-01-18 Thread Melvin Davidson
On Wed, Jan 18, 2017 at 3:06 PM, Merlin Moncure <mmonc...@gmail.com> wrote:

> On Wed, Jan 18, 2017 at 1:04 PM, Ravi Tammineni
> <rtammin...@partner.aligntech.com> wrote:
> > Hi Chris,
> >
> > Here is the query and execution plan in 9.5 and 9.6.
>
> Can you verify tblpuorderstatus and tblpuorderstatushistory have all
> indexes accounted for on both servers?  It seems incredible the server
> would prefer wading through 11M records over a 1298-row nestloop.  I'm
> curious
> what plans you get if you try playing around with:
>
> set enable_seqscan=false;
> set enable_hashjoin=false;
>
> ...but I think we have two possibilities here:
> 1. schema mismatch
> 2. planner bug
>
> merlin
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


*I never got an answer to my question.*

*Have you verified that postgresql.conf is the same on both 9.5 & 9.6?*
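
If you want to compare the two servers' settings side by side, a query
along these lines, run on each server and diffed, can help (a generic
sketch, not from the original thread):

SELECT name, setting
  FROM pg_settings
 WHERE source <> 'default'
 ORDER BY name;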

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Queries are taking way longer in 9.6 than 9.5

2017-01-17 Thread Melvin Davidson
exists(SELECT 1
> >
> > FROM
> >
> > tblCnAccounts a
> >
> > WHERE a.master_user_id = d.master_user_id
> > AND
> >
> >   a.user_name = 'dd'))
> >
> > AND osh.vip_order_type IN (17)--assist
> >
> > --AND osh.tx_submit_date IS NOT NULL
> >
> > AND osh.cancelled_date IS NULL
> >
> > AND osh.cc_accept_date IS NOT NULL;
> >
> >
> >
> >
> >
> > ##  9.5 Execution plan
> >
> >
> >
> >
> > QUERY PLAN
> >
> > --
> > --
> > -
> >
> > Aggregate  (cost=1177.25..1177.26 rows=1 width=0)
> >
> >->  Nested Loop  (cost=67.83..1177.25 rows=5 width=0)
> >
> >  ->  Nested Loop  (cost=67.71..254.81 rows=1322 width=4)
> >
> >->  Nested Loop  (cost=67.63..104.45 rows=1322 width=4)
> >
> >  ->  HashAggregate  (cost=67.54..68.12 rows=192
> > width=4)
> >
> >Group Key: d.vip_patient_id
> >
> >
> >
> >->  Nested Loop  (cost=0.17..67.44 rows=192
> > width=4)
> >
> >  ->  Index Scan using unq_user_name on
> > tblcnaccounts a  (cost=0.08..4.09 rows=1 width=4)
> >
> >Index Cond: ((user_name)::text
> > =
> > 'dd'::text)
> >
> >
> >
> >  ->  Index Only Scan using
> > idx_tblcndoctorpatientmap on tblcndoctorpatientmap d
> > (cost=0.09..62.78
> > rows=192 width=8)
> >
> >Index Cond: (master_user_id =
> > a.master_user_id)
> >
> >  ->  Index Scan using idx_tblcnpatientordermap on
> > tblcnpatientordermap po  (cost=0.09..0.17 rows=7 width=8)
> >
> >Index Cond: (vip_patient_id =
> > d.vip_patient_id)
> >
> >->  Index Scan using tblpuorderstatus_pkey on
> > tblpuorderstatus os  (cost=0.09..0.11 rows=1 width=8)
> >
> >  Index Cond: (vip_order_id = po.vip_order_id)
> >
> >  ->  Index Scan using tblpuorderstatushistory_pkey on
> > tblpuorderstatushistory osh  (cost=0.11..0.69 rows=1 width=4)
> >
> >Index Cond: (order_status_history_id =
> > os.order_status_history_id)
> >
> >Filter: ((cancelled_date IS NULL) AND (cc_accept_date
> > IS NOT NULL) AND (vip_order_type = 17))
> >
> >
> >
> > ##
> > ###
> >
> >
> >
> > ##  9.6 Execution plan
> >
> >  QUERY
> > PLAN
> >
> > --
> > ---
> >
> > Aggregate  (cost=3185034.17..3185034.17 rows=1 width=8)
> >
> >->  Nested Loop Semi Join  (cost=3012833.92..3185010.91 rows=46511
> > width=0)
> >
> >
> >
> >  ->  Nested Loop  (cost=3012833.75..3137291.51 rows=46511
> > width=4)
> >
> >
> >
> >->  Hash Join  (cost=3012833.67..3117860.77 rows=46511
> > width=4)
> >
> >  Hash Cond: (os.order_status_history_id =
> > osh.order_status_history_id)
> >
> >
> >
> >  ->  Seq Scan on tblpuorderstatus os
> > (cost=0.00..96498.46 rows=11185486 width=8)
> >
> >  ->  Hash  (cost=3010979.77..3010979.77
> > rows=529686
> > width=4)
> >
> >->  Gather  (cost=1000.00..3010979.77
> > rows=529686 width=4)
> >
> >  Workers Planned: 2
> >
> >  ->  Parallel Seq Scan on
> > tblpuorderstatushistory osh  (cost=0.00..2957011.17 rows=220702
> > width=4)
> >
> >
> >
> >Filter: ((cancelled_date IS
> > NULL) AND (cc_accept_date IS NOT NULL) AND (vip_order_type = 17))
> >
> >
> >
> >->  Index Scan using tblcnpatientordermap_pkey on
> > tblcnpatientordermap po  (cost=0.09..0.41 rows=1 width=8)
> >
> >
> >
> >  Index Cond: (vip_order_id = os.vip_order_id)
> >
> >
> >
> >  ->  Nested Loop Semi Join  (cost=0.17..1.02 rows=1 width=4)
> >
> >->  Index Scan using tblcndoctorpatientmap_pkey on
> > tblcndoctorpatientmap d  (cost=0.09..0.39 rows=1 width=8)
> >
> >  Index Cond: (vip_patient_id = po.vip_patient_id)
> >
> >->  Index Scan using tblcnaccounts_pkey on
> > tblcnaccounts a  (cost=0.08..0.36 rows=1 width=4)
> >
> >  Index Cond: (master_user_id = d.master_user_id)
> >
> >  Filter: ((user_name)::text = 'dd'::text)
> >
> > (19 rows)
> >
> >
> >
> >
> >
> > Regards,
> >
> > ravi
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

>I am not sure whether they are doing the vacuum after the upgrade or not
So just run an ANALYZE on the database!
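
For example (a minimal sketch — ANALYZE refreshes the planner statistics,
which pg_upgrade does not carry over):

ANALYZE VERBOSE;

-- optionally confirm when tables were last analyzed
SELECT relname, last_analyze, last_autoanalyze
  FROM pg_stat_user_tables
 ORDER BY relname;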

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Queries are taking way longer in 9.6 than 9.5

2017-01-17 Thread Melvin Davidson
r_status_history_id =
> osh.order_status_history_id)
>
>
>
>  ->  Seq Scan on tblpuorderstatus os
> (cost=0.00..96498.46 rows=11185486 width=8)
>
>  ->  Hash  (cost=3010979.77..3010979.77 rows=529686
> width=4)
>
>->  Gather  (cost=1000.00..3010979.77
> rows=529686 width=4)
>
>  Workers Planned: 2
>
>  ->  Parallel Seq Scan on
> tblpuorderstatushistory osh  (cost=0.00..2957011.17 rows=220702 width=4)
>
>
>
>Filter: ((cancelled_date IS NULL)
> AND (cc_accept_date IS NOT NULL) AND (vip_order_type = 17))
>
>
>
>->  Index Scan using tblcnpatientordermap_pkey on
> tblcnpatientordermap po  (cost=0.09..0.41 rows=1 width=8)
>
>
>
>  Index Cond: (vip_order_id = os.vip_order_id)
>
>
>
>  ->  Nested Loop Semi Join  (cost=0.17..1.02 rows=1 width=4)
>
>    ->  Index Scan using tblcndoctorpatientmap_pkey on
> tblcndoctorpatientmap d  (cost=0.09..0.39 rows=1 width=8)
>
>  Index Cond: (vip_patient_id = po.vip_patient_id)
>
>->  Index Scan using tblcnaccounts_pkey on tblcnaccounts a
> (cost=0.08..0.36 rows=1 width=4)
>
>  Index Cond: (master_user_id = d.master_user_id)
>
>  Filter: ((user_name)::text = 'dd'::text)
>
> (19 rows)
>
>
>
>
>
> Regards,
>
> ravi
>

1. Have you run ANALYZE on the database after upgrading?
2. Have you ensured that the proper changes were made to the
postgresql.conf in 9.6?


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] temporarily disable autovacuum on a database or server ?

2017-01-13 Thread Melvin Davidson
On Thu, Jan 12, 2017 at 12:09 PM, Jonathan Vanasco <postg...@2xlp.com>
wrote:

>
> On Jan 11, 2017, at 8:19 PM, Melvin Davidson wrote:
>
>
>>
> *Yes, you're right about ALTER SYSTEM. Unfortunately, the OP provided
> neither the PostgreSQL version nor the O/S, so we can't even be sure that
> is an option. That is why I stated "I cannot confirm".*
>
>
>
> I didn't think that would matter, but postgres 9.6.1 and ubuntu 16.04
>
> anyways, thanks.  i'll test that approach.
>
>
>


Jonathan,

I've tested this in PostgreSQL 9.4.6, so it should work for 9.6.1 also.

Edit postgresql.conf and change

    #autovacuum = on

to

    autovacuum = off

and save it. Then run:

    psql -U postgres -c "SELECT pg_reload_conf();"

No need to restart postgres.

After you finish your processing, do not forget to re-edit postgresql.conf
and change

    autovacuum = off

back to

    autovacuum = on

then save and run:

    psql -U postgres -c "SELECT pg_reload_conf();"
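
On 9.4 and later you can make the same toggle from psql instead of editing
the file by hand, using the ALTER SYSTEM approach discussed elsewhere in
this thread (a minimal sketch; superuser required):

    ALTER SYSTEM SET autovacuum = off;
    SELECT pg_reload_conf();

    -- ... run your maintenance window ...

    ALTER SYSTEM RESET autovacuum;
    SELECT pg_reload_conf();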




-- 

*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] temporarily disable autovacuum on a database or server ?

2017-01-11 Thread Melvin Davidson
On Wed, Jan 11, 2017 at 8:09 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Jan 11, 2017 at 5:50 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>> On Wed, Jan 11, 2017 at 7:26 PM, Jonathan Vanasco <postg...@2xlp.com>
>> wrote:
>>
>>>
>>>
>> *I can't confirm this, but have you tried :*
>>
>> *SELECT set_config('autovacuum', 'off', false);*
>>
>> *SELECT pg_reload_conf(); *
>>
>> *note: you must be a superuser for above*
>>
>
> I'm hoping you meant "ALTER SYSTEM" instead of "set_config()"...
>
> ​The documentation on these parameters is unclear:  the only way to change
> the values is to edit postgresql.conf (or on the command line) but there is
> nothing said regarding whether pg_reload_conf() will work for them.  I'd be
> surprised if it did...
>
> If it does - and for other cases where you can, instead of set_config you
> could use "ALTER SYSTEM" and at least avoid having to manually edit the
> config file.  When done simply:
>  ALTER SYSTEM RESET autovacuum;  SELECT pg_reload_conf();
> to get back to normal operation.
>
> David J.
>
>

*Yes, you're right about ALTER SYSTEM. Unfortunately, the OP provided
neither the PostgreSQL version nor the O/S, so we can't even be sure that
is an option. That is why I stated "I cannot confirm".*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] temporarily disable autovacuum on a database or server ?

2017-01-11 Thread Melvin Davidson
On Wed, Jan 11, 2017 at 7:26 PM, Jonathan Vanasco <postg...@2xlp.com> wrote:

> I've run into a performance issue, and I think autovacuum may be involved.
>
> does anyone know if it's possible to temporarily stop autovacuum without a
> server restart ?
>
> It seems that it either requires a server restart, or specific tables to
> be configured.
>
> Several times a day/week, I run a handful of scripts to handle database
> maintenance and backups:
>
> * refreshing materialized views
> * calculating analytics/derived/summary tables and columns
> * backing up the database (pg_dumpall > bz2 > archiving)
>
> These activities have occasionally overlapped with autovacuum, and the
> performance seems to be affected.
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


*I can't confirm this, but have you tried :*

*SELECT set_config('autovacuum', 'off', false);*

*SELECT pg_reload_conf(); *

*note: you must be a superuser for above*
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Materialized view vs. view

2017-01-10 Thread Melvin Davidson
On Tue, Jan 10, 2017 at 2:53 PM, Kevin Grittner <kgri...@gmail.com> wrote:

> On Tue, Jan 10, 2017 at 1:36 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
> > IMHO, I disagree. I feel a better name would be "materialized
> > table".
>
> The dictionary defines "materialize" as meaning "become actual
> fact" or "appear in bodily form".  In the database environment, it
> generally means that the data is actually stored, rather than being
> something which can be generated.  For example, in query execution
> the relation produced by an execution node may feed into a
> Materialize node if the generated relation is expected to be
> scanned multiple times by a higher-level node and scanning a stored
> copy of the relation each time is expected to be faster than
> regenerating the relation each time.  "Materialized table" would be
> redundant; a table is always materialized.  A view is data
> generated by running a query. In the simple case, the resulting
> relation is not stored, but is regenerated on each reference.  The
> "materialized view" feature lets you materialize it, like a table.
>
> If you don't think materializing data means storing a copy of it
> for re-use, I'm not sure what you think it means.
>
> This is not to beat up on you, but to try to keep terminology
> clear, to facilitate efficient communication.  There are some terms
> we have been unable to avoid using with different meanings in
> different contexts (e.g., "serialization"); that's unfortunate, but
> hard to avoid.  I want to keep it to the minimum necessary by
> avoiding creep of other terms to multiple definitions.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
"A rose by any other name would still smell as sweet".
I have expressed my opinion. We are getting off the point of the topic,
which is "What is faster, a view or a materialized view?"
Can we all agree that the materialized view should be faster and stop
this pointless bickering about the naming convention, which, as I have
already stated, is just an opinion and too late to change at this point?

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Materialized view vs. view

2017-01-10 Thread Melvin Davidson
On Tue, Jan 10, 2017 at 2:31 PM, Kevin Grittner <kgri...@gmail.com> wrote:

> On Tue, Jan 10, 2017 at 12:44 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
> > fyi, a view is nothing more than just that, a view.
> > A materialized view, afaic, is a misleading name, it is actually
> > a valid table and you can create indexes on them,
>
> I disagree with the notion that defining a relation in terms of a
> query (like a view) and materializing the results (like a table)
> makes "materialized view" a misleading name.  I don't think I can
> say it better than others already have, so I recommend reading the
> first three paragraphs of the "Introduction" section of this paper:
>
> http://homepages.inf.ed.ac.uk/wenfei/qsx/reading/gupta95maintenance.pdf
> Ashish Gupta and Inderpal Singh Mumick.
> Maintenance of Materialized Views: Problems, Techniques, and Applications.
>
> > so theoretically you should be able to reduce response time on
> > them.
>
> As the above-referenced text suggests, a materialized view is
> essentially a cache of the results of the specified query.  While,
> in rare cases, this may be captured to provide the query results as
> of some particular moment in time, the overwhelming reason for
> creating a materialized view is to improve performance over a
> non-materialized view.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
>I disagree with the notion that defining a relation in terms of a
>query (like a view) and materializing the results (like a table)
>makes "materialized view" a misleading name.


*IMHO, I disagree. I feel a better name would be "materialized table". *
*However, it is too late to change that now. Just my personal opinion.*
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Materialized view vs. view

2017-01-10 Thread Melvin Davidson
On Tue, Jan 10, 2017 at 1:36 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 01/10/2017 10:27 AM, Job wrote:
>
>> Hi guys,
>>
>> I am making some tests with a massive number of "select" queries (only
>> for reading data) on a view and a materialized view.
>> We use Postgresql 9.6.1 on a 64bit server.
>>
>> Only for "select" queries, which one is faster and less expensive as
>> resources cost?
>> The view or the materialized view?
>>
>
> If you are running tests as you state above, you should know, or am I
> missing something?
>
>
>
>> The view has about 500K lines.
>>
>> Thank you!
>> /F
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

FYI, a view is nothing more than just that: a view.
A materialized view, AFAIC, is a misleading name; it is actually a valid
table, and you can create indexes on it, so theoretically you should be
able to reduce response time with it.
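
As a minimal illustration of that point (a generic sketch — the table and
view names here are made up, not from this thread):

CREATE MATERIALIZED VIEW mv_sales_summary AS
  SELECT region, sum(amount) AS total
    FROM sales
   GROUP BY region;

CREATE INDEX idx_mv_sales_summary_region
    ON mv_sales_summary (region);

-- re-populate the stored result set when the base data changes
REFRESH MATERIALIZED VIEW mv_sales_summary;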

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] i got a process holding the lock

2017-01-10 Thread Melvin Davidson
On Tue, Jan 10, 2017 at 1:28 PM, Edmundo Robles <edmu...@sw-argos.com>
wrote:

> OK, then it is better to kill them one by one!
>
> On Tue, Jan 10, 2017 at 12:16 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>>
>>
>> On Tue, Jan 10, 2017 at 1:07 PM, Edmundo Robles <edmu...@sw-argos.com>
>> wrote:
>>
>>> I have a lot of processes in PARSE waiting. Can I kill those processes
>>> safely? If I restart PostgreSQL, will it have problems at recovery?
>>>
>>> On Tue, Jan 10, 2017 at 12:02 PM, Melvin Davidson <melvin6...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> 2017-01-10 12:58 GMT-05:00 Edmundo Robles <edmu...@sw-argos.com>:
>>>>
>>>>> whe i do a ps -fea | grep postgres this is the  output
>>>>> ostgres 11436  2467  0 11:41 ?00:00:01 postgres: argos_admin
>>>>> vacia 127.0.0.1(54880) idle
>>>>>
>>>>> postgres 19648  2467  0 11:46 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(60858) idle
>>>>>
>>>>> postgres 19649  2467  0 11:46 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(60859) idle
>>>>>
>>>>> postgres 19653  2467  0 11:46 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(60863) idle
>>>>>
>>>>> postgres 19656  2467  0 11:46 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(60866) idle
>>>>>
>>>>> postgres 19657  2467  0 11:46 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(60867) idle
>>>>>
>>>>> postgres 20253  2467  0 11:46 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(32961) idle
>>>>>
>>>>> postgres 21280  2467  0 11:47 ?00:00:00 postgres: argos_admin
>>>>> vacia ::1(33804) idle
>>>>>
>>>>> postgres 21692  2467  0 11:48 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(33840) idle
>>>>>
>>>>> postgres 21817  2467  0 11:48 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(33948) idle
>>>>>
>>>>> postgres 21819  2467  0 11:48 ?00:00:01 postgres: argos_admin
>>>>> vacia 127.0.0.1(33949) idle
>>>>>
>>>>> postgres 21823  2467  0 11:48 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(33953) idle
>>>>>
>>>>> postgres 21825  2467  0 11:48 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(33955) idle
>>>>>
>>>>> postgres 21844  2467  0 11:48 ?00:00:01 postgres: argos_admin
>>>>> vacia 127.0.0.1(33974) idle
>>>>>
>>>>> postgres 22576  2467  0 11:49 ?00:00:00 postgres: argos_admin
>>>>> vacia ::1(34604) idle
>>>>>
>>>>> postgres 22713  2467  0 11:49 ?00:00:00 postgres: argos_admin
>>>>> vacia ::1(34673) idle
>>>>>
>>>>> postgres 22811  2467  0 11:49 ?00:00:00 postgres: argos_admin
>>>>> vacia ::1(34718) SELECT waiting
>>>>>
>>>>> postgres 22825  2467  0 11:49 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(34512) idle
>>>>>
>>>>> postgres 22828  2467  0 11:49 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(34515) idle
>>>>>
>>>>> postgres 22903  2467  0 11:49 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(34578) idle
>>>>>
>>>>> postgres 22925  2467  0 11:49 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(34600) idle
>>>>>
>>>>> postgres 22926  2467  0 11:49 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(34601) idle
>>>>>
>>>>> postgres 22928  2467  0 11:49 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(34604) idle
>>>>>
>>>>> postgres 22929  2467  0 11:49 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(34605) idle
>>>>>
>>>>> postgres 22939  2467  0 11:49 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.1(34614) idle
>>>>>
>>>>> postgres 23510  2467  0 11:50 ?00:00:00 postgres: argos_admin
>>>>> vacia 127.0.0.

Re: [GENERAL] i got a process holding the lock

2017-01-10 Thread Melvin Davidson
 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38590) PARSE waiting
>
> postgres 28983  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38591) PARSE waiting
>
> postgres 28984  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38592) PARSE waiting
>
> postgres 28986  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38594) PARSE waiting
>
> postgres 28987  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38595) PARSE waiting
>
> postgres 28988  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38596) PARSE waiting
>
> postgres 28991  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38599) PARSE waiting
>
> postgres 28992  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38600) PARSE waiting
>
> postgres 28993  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38601) PARSE waiting
>
> postgres 28995  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38603) PARSE waiting
>
> postgres 28999  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38605) PARSE waiting
>
> postgres 29002  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38608) PARSE waiting
>
> postgres 29003  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38609) PARSE waiting
>
> postgres 29005  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38611) PARSE waiting
>
> postgres 29013  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38615) SELECT waiting
>
> postgres 29046  2467  0 11:56 ?00:00:00 postgres: argos_admin
> vacia 127.0.0.1(38634) SELECT waiting
>
> root 29071  6468  0 11:56 pts/200:00:00 grep postgres
>
>
> On Tue, Jan 10, 2017 at 11:56 AM, Edmundo Robles <edmu...@sw-argos.com>
> wrote:
>
>> How can I detect and cancel the process?
>>
>>
>> --
>>
>>
>
>
> --
>
>
Perhaps this script will help:

SELECT c.datname,
       c.pid AS pid,
       c.client_addr,
       c.usename AS user,
       c.query,
       CASE WHEN c.waiting = TRUE
            THEN 'BLOCKED'
            ELSE 'no'
       END AS waiting,
       l.pid AS blocked_by,
       c.query_start,
       current_timestamp - c.query_start AS duration
  FROM pg_stat_activity c
  LEFT JOIN pg_locks l1 ON (c.pid = l1.pid AND NOT l1.granted)
  LEFT JOIN pg_locks l2 ON (l1.relation = l2.relation AND l2.granted)
  LEFT JOIN pg_stat_activity l ON (l2.pid = l.pid)
  LEFT JOIN pg_stat_user_tables t ON (l1.relation = t.relid)
 WHERE pg_backend_pid() <> c.pid
 ORDER BY datname, query_start;
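
To actually cancel one of the blocking backends the query reports in
blocked_by, substitute its pid below (a generic sketch):

SELECT pg_cancel_backend(12345);      -- cancel the current query only
SELECT pg_terminate_backend(12345);   -- terminate the whole backend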

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-06 Thread Melvin Davidson
On Fri, Jan 6, 2017 at 4:56 PM, Alban Hertroys <haram...@gmail.com> wrote:

>
> > On 6 Jan 2017, at 16:56, Job <j...@colliniconsulting.it> wrote:
> >
> > We use a function; the explain analyze is quite similar:
> > POSTGRESQL 8.4.22:
> >  Total runtime: 1.531 ms
> >
> > POSTGRES 9.6.1:
> >  Execution time: 4.230 ms
>
> Are you being serious? You're complaining about a "big slowdown" for a
> query that goes from 1.5ms to 4ms?
> What is the actual problem you're trying to solve? Because I don't see one
> in the above.
>
> Just saying, you're obviously worried about something, but should you be?
>
> Alban Hertroys
> --
> If you can't see the forest for the trees,
> cut the trees and you'll find there is no forest.
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



In addition to what Alban says, make sure you are comparing apples to
apples.

IOW, have you tuned the postgresql.conf on 9.6.1 to the same values as
8.4.22? Are you working on the same server & O/S for both versions?
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Difficulty modelling sales taxes

2017-01-02 Thread Melvin Davidson
On Mon, Jan 2, 2017 at 9:58 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 01/02/2017 06:38 AM, Melvin Davidson wrote:
>
>>
>> On Mon, Jan 2, 2017 at 6:29 AM, Frank Millman <fr...@chagford.com
>> <mailto:fr...@chagford.com>> wrote:
>>
>>
>> *From:* amul sul
>> *Sent:* Monday, January 02, 2017 12:42 PM
>> *To:* Frank Millman
>> *Cc:* pgsql-general
>> *Subject:* Re: [GENERAL] Difficulty modelling sales taxes
>>
>> > On Mon, Jan 2, 2017 at 4:03 PM, Frank Millman <fr...@chagford.com
>> <mailto:fr...@chagford.com>> wrote:
>> >
>> > Hi all
>> >
>> >
>> >
>> > It is a bit ugly, because I have to use the ‘NVARCHAR code’ column
>> from
>> >
>> > tax_codes, not the primary key, but I think it would work.
>> >
>> >
>> >
>> NVARCHAR ?  Are you using PostgreSQL as database server?
>> >
>>
>>
>> Oops, sorry.
>>
>> I am testing with PostgreSQL and with SQL Server, so I was in the
>> wrong mindset when I posted.
>>
>> I should have said VARCHAR.
>>
>> Frank
>>
>>
>>
>>
>>
>> *First, there is no need to make row_id's when you already have a valid
>> primary key.
>>
>
> In a perfect world yes, but this is a world with ORM's as I found out the
> hard way:
>
> https://docs.djangoproject.com/en/1.10/ref/models/fields/#primary-key
>
> "The primary key field is read-only. If you change the value of the
> primary key on an existing object and then save it, a new object will be
> created alongside the old one."
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


>https://docs.djangoproject.com/en/1.10/ref/models/fields/#primary-key

You are quoting from a Django document. Please don't muddy the waters.
So you are saying this will not work?

UPDATE mytable
   SET mykey = 'new_value'
 WHERE mykey = 'old_value';

DELETE FROM mytable
 WHERE mykey = 'old_value';

Happy New Year, Adrian

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Difficulty modelling sales taxes

2017-01-02 Thread Melvin Davidson
On Mon, Jan 2, 2017 at 6:29 AM, Frank Millman  wrote:

>
> *From:* amul sul
> *Sent:* Monday, January 02, 2017 12:42 PM
> *To:* Frank Millman
> *Cc:* pgsql-general
> *Subject:* Re: [GENERAL] Difficulty modelling sales taxes
>
> > On Mon, Jan 2, 2017 at 4:03 PM, Frank Millman 
> wrote:
> >
> > Hi all
> >
> >
> >
> > It is a bit ugly, because I have to use the ‘NVARCHAR code’ column from
> >
> > tax_codes, not the primary key, but I think it would work.
> >
> >
> >
> NVARCHAR ?  Are you using PostgreSQL as database server?
> >
>
>
> Oops, sorry.
>
> I am testing with PostgreSQL and with SQL Server, so I was in the wrong
> mindset when I posted.
>
> I should have said VARCHAR.
>
> Frank
>
>
>
>

First, there is no need to make row_id's when you already have a valid
primary key. Next, DO NOT begin object names with underscores. So try this
model instead:

CREATE TABLE tax_categories (
    tax_category VARCHAR NOT NULL,
    description  VARCHAR NOT NULL,
    CONSTRAINT tax_cats_pk PRIMARY KEY (tax_category)
);

CREATE TABLE tax_codes (
    tax_category VARCHAR NOT NULL,
    code         VARCHAR NOT NULL,
    description  VARCHAR NOT NULL,
    CONSTRAINT tax_codes_pk PRIMARY KEY (tax_category, code),
    CONSTRAINT tax_category_fk FOREIGN KEY (tax_category)
        REFERENCES tax_categories (tax_category)
);

CREATE INDEX idx_tax_category
    ON tax_codes
 USING BTREE (tax_category);

CREATE INDEX idx_code
    ON tax_codes
 USING BTREE (code);
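
A quick usage sketch against that model (the values are made up for
illustration):

INSERT INTO tax_categories (tax_category, description)
VALUES ('VAT', 'Value added tax');

INSERT INTO tax_codes (tax_category, code, description)
VALUES ('VAT', 'STD', 'Standard rate');

-- this fails, because 'GST' is not present in tax_categories
INSERT INTO tax_codes (tax_category, code, description)
VALUES ('GST', 'STD', 'Standard rate');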


*Melvin DavidsonI reserve the right to fantasize.  Whether or not you wish
to share my fantasy is entirely up to you. *


Re: [GENERAL] LYDB: Feasible to use PG roles instead of application-level security?

2016-12-30 Thread Melvin Davidson
On Fri, Dec 30, 2016 at 10:23 AM, Stephen Frost <sfr...@snowman.net> wrote:

> Greetings,
>
> * Guyren Howe (guy...@gmail.com) wrote:
> > it occurs to me to wonder whether it is practical to use PG’s own roles
> and security model in lieu of using an application-level one.
>
> The short answer is yes.
>
> > It seems that the role system in PG is sufficient for most general
> purposes. One could presumably also have a table with role names and
> associated metainformation (email address etc) as needed.
>
> Yup.  That can get a bit awkward if you have multiple databases inside
> of a single cluster, as you would have to pick which database to put
> that metainformation in, but that isn't a very big issue.
>
> > If I have a system with many thousands of users, is it practical to
> manage these users’ authentication and authorization using *just* Postgres?
>
> For this, it really depends on if the PG authorization model matches the
> requirements you have.  The PG auth model, particularly with RLS, is
> extremely flexible but you would really need to evaluate what the exact
> requirements are and how you would handle that with the PG auth model.
> Of course, if there are just a few exceptions or complicated cases that
> can't be satisfied directly with PG today, you could use security
> definer functions.
>
> One area that isn't fully addressed with the PG auth model today is
> partial access to a certain column.  Consider a table where you want
> users to have access to all of the rows and all of the columns *except*
> for column X for rows where ID is > 1000.  The PG auth model today can
> be used to say "you can't access column X" or to say "you can't access
> rows where ID > 1000" but you can't combine those, yet.
>
> I'm hopeful that we'll get there as there are definitely use-cases for
> that kind of access control, but it's unlikely to happen for PG10.
>
> > It occurs to me that some client frameworks might have issues with their
> connection pools if those connections keep switching users, assuming they
> even can, but let’s set that aside for now. Or perhaps every connection
> could immediately do a SET USER before executing its connection?
>
> Again, yes, connection poolers can be an issue, but it's possible to use
> the role system and do a 'set role X' after having connected as some
> user that has very little access.  The issue here is controlling that
> role change- there's no direct way in PG today to require a password to
> be provided when doing the role change, which is unfortunate.  One
> approach to solving that with RLS is to use a security definer function
> to change a record in a table that is then used in all RLS policies.
> It's a bit complicated and doesn't involve doing 'set role' though, so
> there are some trade-offs there.
>
> If you really want connection pooling and independent users in PG's role
> system then you'll end up having to have the app code do the
> authentication (or maybe auth to PG as the user and, if successful,
> reconnect as the regular user and set role...  that's pretty awkward
> though) and then connect and do the 'set role'.
>
> One big question here, however, is if you're going to have thousands of
> *concurrently connected* users.  Thousands of users shouldn't be too
> much of an issue, but if they're all connected using PG's main auth
> system then you'll have thousands of backend processes running.  That'll
> end up causing some amount of overhead even if they're mostly idle.  If
> your application can handle connecting/disconnecting pretty easily and
> you have a relativly short timeout (though, ideally, not too short) then
> perhaps your number of concurrent connections won't be too bad.
>
> > This seems an attractive proposition from a security standpoint: if I
> use row-level security pervasively, I can have a security system that’s
> nestled nice and close to the data and presumably tricky to work around
> from a hacker given direct access only to the client application.
>
> If that's the threat model you want to address then you'll have to work
> out the concurrent connections question.  One thing which can help is to
> use a common user for 'read-only/public-access (or at least low-value)'
> queries from the app, if there are such.
>
> > Is this practical? Has anyone here done it? What might the caveats be?
>
> Yes, yes, see above.
>
> Thanks!
>
> Stephen
>


>Postgres roles are global to the cluster,

Well, that is true by default; however, you can make roles database
specific:
https://www.postgresql.org/docs/9.4/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SECURITY

Re: [GENERAL] Book or other resource on Postgres-local code?

2016-12-30 Thread Melvin Davidson
On Fri, Dec 30, 2016 at 2:50 AM, Pavel Stehule <pavel.steh...@gmail.com>
wrote:

>
>
> 2016-12-30 8:04 GMT+01:00 Guyren Howe <guy...@gmail.com>:
>
>>
>> > On Dec 29, 2016, at 23:01 , Regina Obe <l...@pcorp.us> wrote:
>> >
>> >
>> >> As an aside from my last question about my LYDB effort:
>> >
>> >> https://medium.com/@gisborne/love-your-database-lydb-23c69f4
>> 80a1d#.4jngp2rcb
>> >
>> >> I would like to find a book or other resource about SQL server-side
>> programming (stored procedures etc) best practices in general and for
>> Postgres in particular.
>> >
>> > Shameless plug
>> >
>> > Have you checked out our book?  The 2nd edition covered PostgreSQL 9.2
>> - 9.4.
>> > http://shop.oreilly.com/product/0636920052715.do
>> >
>> > We are working on the 3rd edition which is a bit fatter (probably will
>> be about 50 pages fatter when we are done) than the 2nd edition.
>> >
>> > http://shop.oreilly.com/product/0636920052715.do
>> >
>> > The 3rd focuses on PostgreSQL 9.5-9.6 (and is in prerelease sale at
>> moment).  By the time of release, we'll probably have some PostgreSQL 10
>> content in there as well.
>> >
>> > It covers fancy SQL constructs and data types (both ANSI ones and ones
>> unique to PostgreSQL), general administration, and writing stored functions
>> with SQL, PLPGSQL and PL/V8.
>> >
>> > In the 3rd we are adding an additional PL/V8 example how to build a
>> window function in PL/V8 (aka PL/JavaScript)
>>
>> I’m sure the book is great. But it looks like much of the material I can
>> find about Postgres: how to write a function, how to write a query, etc.
>>
>> What I’m more looking for is “System Design with Postgres”: *when* to
>> write a function, *when* to use a stored procedure over a client-side
>> function.
>>
>
> Lot of Oracle's books related to this topic is valid for PostgreSQL too.
> The design of stored procedures in PostgreSQL is conceptually similar to
> Oracle.
>
> The theme "stored procedures" is strongly controversial -  from "stored
> procedures are evil" to "do all in procedures".
>
> I like the strategy - what you can do easy in database, do it there - the
> client should to get a results. But don't do communication server from
> database. PostgreSQL is ACID, stored procedures are ACID. Outer world is
> not ACID - and interface ACID/NOACID is better to implement outside
> database.
>
> Regards
>
> Pavel
>
>
>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>


> I would like to find a book or other resource about SQL server-side
> programming (stored procedures etc) best practices in general and for
> Postgres in particular.

I'll start off with: enumerated types are evil. You are much better off
with foreign key constraints. That being said, there are four excellent
books that I always recommend to my clients:

PostgreSQL Development Essentials:
https://www.amazon.com/PostgreSQL-Development-Essentials-Manpreet-Kaur/dp/1783989009/ref=sr_1_2?s=books=UTF8=1483111853=1-2=PostgreSQL#reader_B01LFAN8B6

PostgreSQL 9 Administration Cookbook:
https://www.packtpub.com/big-data-and-business-intelligence/postgresql-9-administration-cookbook-second-edition

PostgreSQL Server Programming:
https://www.packtpub.com/big-data-and-business-intelligence/postgresql-server-programming-second-edition

PostgreSQL 9.0 High Performance:
https://www.packtpub.com/big-data-and-business-intelligence/postgresql-90-high-performance

In addition, I've attached my own controversial PostgreSQL Developer Best
Practices.
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


PostgreSQL Developer Best Practices.doc
Description: MS-Word document

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Securing Information

2016-12-28 Thread Melvin Davidson
On Wed, Dec 28, 2016 at 11:49 AM, Rich Shepard <rshep...@appl-ecosys.com>
wrote:

> On Tue, 27 Dec 2016, Chris Weekes wrote:
>
> I was wondering what steps if any need to be taken to ensure that the
>> patient and operational data is secure on a machine and or across the
>> network.
>>
>
> Chris,
>
>   I'm far from an expert but until more knowledgeable folks respond I'll
> offer a couple of quick suggestions. First, in your postgres data/ directory
> modify pg_hba.conf. Specifying hostssl and an appropriate authentication
> method will help. Second, partition your users into postgres roles,
> specifying what each role can access and do.
>
>   The postgresql-9.6-US.pdf manual has all the details.
>
> HTH,
>
> Rich
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>




Another thing to consider: DO NOT create your tables in the public schema.
Instead, create a separate schema, then grant access to that schema and its
tables only to users that are authorized to access them.
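
A minimal sketch of that pattern (the schema and role names here are made
up for illustration):

CREATE SCHEMA clinical;
CREATE ROLE clinical_staff;

GRANT USAGE ON SCHEMA clinical TO clinical_staff;
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA clinical TO clinical_staff;

-- cover tables created later as well
ALTER DEFAULT PRIVILEGES IN SCHEMA clinical
    GRANT SELECT, INSERT, UPDATE ON TABLES TO clinical_staff;
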
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Avoid using index scan backward when limit order desc

2016-12-19 Thread Melvin Davidson
> (30231,30230,30104) AND type IN ('foo', 'bar') ORDER BY created_at DESC
> limit 20;
>
> 
> 
> 
>  Limit  (cost=0.43..3341.84 rows=20 width=187) (actual
> time=60133.701..60133.701 rows=0 loops=1)
>->  Index Scan Backward using 
> index_notifications_on_created_at_and_bucket_id
> on notifications  (cost=0.43..344332.66 rows=2061 width=187) (actual
> time=60133.695..60133.695 rows=0 loops=1)
>  Filter: (((type)::text = ANY ('{foo,bar}'::text[])) AND
> (bucket_id = ANY ('{30231,30230,30104}'::integer[])))
>  Rows Removed by Filter: 3441510
>  Planning time: 1.034 ms
>  Execution time: 60133.740 ms
>
> ** With limit 50 **
>
> EXPLAIN ANALYZE SELECT * FROM notifications WHERE bucket_id IN
> (30231,30230,30104) AND type IN ('foo', 'bar') ORDER BY created_at DESC
> limit 50;
>
> 
> 
> --
>  Limit  (cost=7429.94..7430.06 rows=50 width=187) (actual
> time=0.111..0.111 rows=0 loops=1)
>->  Sort  (cost=7429.94..7435.09 rows=2061 width=187) (actual
> time=0.110..0.110 rows=0 loops=1)
>  Sort Key: created_at
>  Sort Method: quicksort  Memory: 25kB
>  ->  Bitmap Heap Scan on notifications  (cost=71.68..7361.47
> rows=2061 width=187) (actual time=0.107..0.107 rows=0 loops=1)
>Recheck Cond: (((type)::text = ANY ('{foo,bar}'::text[]))
> AND (bucket_id = ANY ('{30231,30230,30104}'::integer[])))
>->  Bitmap Index Scan on 
> index_notifications_on_type_and_bucket_id
>  (cost=0.00..71.16 rows=2061 width=0) (actual time=0.105..0.105 rows=0
> loops=1)
>  Index Cond: (((type)::text = ANY
> ('{foo,bar}'::text[])) AND (bucket_id = ANY ('{30231,30230,30104}'::intege
> r[])))
>  Planning time: 0.151 ms
>  Execution time: 0.139 ms
>
>
> As you can see, when I have the LIMIT 20, the execution time takes around
> 1 minutes (on a very small subset of the entire table).
> Actually I have tried different LIMIT, and when the LIMIT is <= 45, it
> will use the index scan backward.
>
> Removing the index 'index_notifications_on_created_at_and_bucket_id' may
> prevent the planner from choosing the index scan backward for this query,
> but this index is used for other querying on that table...
>
> 1) Why is the planner changing index scanning at the threshold of 45 for
> the LIMIT ? Why not 50 ? 100 ? I may take the solution in my application to
> have a LIMIT > 45 in order to prevent the performance issue, but am I sure
> that this threshold will always be the same ?
>
> 2) Is it possible for a specific query to force the planner on choosing a
> given index or preventing it from choosing one ?
>
> What kind of other options do I have to solve this performance issue ?
>
> Thanks in advance for any help,
>
> Regards,
>
> --
> Christophe Escobar
>


You can temporarily disable index scanning for a session with

SET enable_indexscan = off;

and/or

SET enable_indexonlyscan = off;
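
If you only want to influence one query while testing plans, a
transaction-scoped variant avoids leaving the setting off for the rest of
the session (a sketch, reusing the query from this thread):

BEGIN;
SET LOCAL enable_indexscan = off;
EXPLAIN ANALYZE SELECT * FROM notifications
 WHERE bucket_id IN (30231,30230,30104)
   AND type IN ('foo', 'bar')
 ORDER BY created_at DESC
 LIMIT 20;
ROLLBACK;
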
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] pgAdmin 4 - auto disconnect

2016-12-19 Thread Melvin Davidson
On Mon, Dec 19, 2016 at 7:28 AM, Paolo Saudin <paolosau...@gmail.com> wrote:

> Hi,
>
> I deployed pgAdmin4 on a server and I have a script that every day restores
> a database from a dump file. The problem I am facing is that if somebody
> forgets to disconnect from the database, the restore command fails with an
> error "database Test is being accessed by other users".
>
> Is there a way to tell pgAdmin to auto disconnect from all databases?
>
> Thanks
> Paolo Saudin
>

PgAdmin4 actually has nothing to do with it, as it is nothing more than a
tool to monitor PostgreSQL.
However, you can kill all user processes (except your own) by submitting
the following query.

SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pg_backend_pid() <> pid;

CAVEAT EMPTOR: You must be a superuser for this to work properly.
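
If the goal is only to free up the database being restored, you can narrow
the kill to that database rather than the whole cluster (a sketch;
substitute your database name):

SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE datname = 'Test'
   AND pg_backend_pid() <> pid;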

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Concatenating NULL with JSONB value return NULL

2016-12-18 Thread Melvin Davidson
On Sun, Dec 18, 2016 at 6:08 PM, John R Pierce <pie...@hogranch.com> wrote:

> On 12/18/2016 2:52 PM, Jong-won Choi wrote:
>
>>
>> I have a NULL-able JSONB type column and want to perform upsert,
>> concatenating with the existing value.
>>
>
> NULL does not mean 'NO' value in SQL; it means UNKNOWN value, sort of
> like 'indeterminate' in math.
>
> maybe you want a NOT NULL json value that you set to '' or something when
> it's empty.
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

Have you tried using CASE?

INSERT INTO Fan (oid, campaigns, facts)
VALUES (189, '{"campaign-id": "12345"}', '{"attended": false}')
ON CONFLICT (oid)
DO UPDATE SET campaigns = EXCLUDED.campaigns,
              facts = CASE WHEN fan.facts IS NULL
                           THEN EXCLUDED.facts
                           ELSE fan.facts || EXCLUDED.facts
                      END
RETURNING *;
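
Equivalently, the NULL case can be folded in with COALESCE (a sketch under
the same assumptions as above):

INSERT INTO Fan (oid, campaigns, facts)
VALUES (189, '{"campaign-id": "12345"}', '{"attended": false}')
ON CONFLICT (oid)
DO UPDATE SET campaigns = EXCLUDED.campaigns,
              facts = COALESCE(fan.facts, '{}'::jsonb) || EXCLUDED.facts
RETURNING *;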

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] SQL query problem of a Quiz program

2016-12-17 Thread Melvin Davidson
On Sat, Dec 17, 2016 at 10:25 AM, Arup Rakshit <aruprakshit1...@outlook.com>
wrote:

> Hi,
>
> Here is a sample data from table "quiz_results":
>
> id | question_id |  user_id
> +-+
>   2 |  25 | 5142670086
>   3 |  26 |
>   4 |  26 |
>   5 |  27 |
>   6 |  25 | 5142670086
>   7 |  25 | 5142670086
>   8 |  25 | 5142670086
>   9 |  26 |
>  10 | 40 | 5142670086
>  11 |  29 | 5142670086
>
>
> As you see above, question_id 25 appeared more than once. This is basically
> a quiz result table recording users' answers as they gave them. question_id
> 25 is always the first question. Any user can go through the quiz N number
> of times. So, I want to find the last occurrence of question_id 25 for any
> specific user in the table, and select that and all answers the user gave
> after this till the end of the quiz. Any idea how to solve it in a single
> efficient query? All my tries didn't work out.
>
>
>
> 
> Regards,
> Arup Rakshit
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Will this work?

WITH quest AS
  (SELECT id, question_id, user_id
     FROM quiz_results
    WHERE user_id = 5142670086   --> substitute any user_id value
      AND question_id = 25       --> substitute any question_id value
    ORDER BY 1, 2, 3)
SELECT * FROM quest
 WHERE id IN (SELECT max(id) FROM quest);
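
That returns only the row for the last occurrence of question_id 25 itself.
Since the original question also asks for everything the user answered from
that point to the end of the quiz, a variant along these lines (a sketch,
same table and columns as above) returns the whole final attempt:

SELECT *
  FROM quiz_results
 WHERE user_id = 5142670086
   AND id >= (SELECT max(id)
                FROM quiz_results
               WHERE user_id = 5142670086
                 AND question_id = 25)
 ORDER BY id;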

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] does postgres log the create/refresh of a materialized view anywhere?

2016-12-13 Thread Melvin Davidson
On Tue, Dec 13, 2016 at 8:50 PM, Kevin Grittner <kgri...@gmail.com> wrote:

> On Tue, Dec 13, 2016 at 7:37 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
> > On Tue, Dec 13, 2016 at 7:36 PM, Kevin Grittner <kgri...@gmail.com>
> wrote:
> >> On Tue, Dec 13, 2016 at 5:57 PM, Jonathan Vanasco <postg...@2xlp.com>
> wrote:
> >>
> >>> Is there a way to find out when a materialized view was
> >>> created/refreshed?
> >>
> >>> I can log this manually in postgresql if needed, but was hoping
> >>> there was some "timestamp" on the view in a system table.
> >>
> >> This is not currently tracked in the system catalogs.
>
> > This goes back to a discussion of my request to add relcreated
> > column to pg_class.
> > https://www.postgresql.org/message-id/CANu8FiyiRPGZ+gB=
> 1jayryx3hxcuqnflsfgdzfxsvlmj-jt...@mail.gmail.com
> > Apparently the naysayers do not feel it is worthwhile.
>
> Do you see relcreated as being something to set anew whenever the
> data contents of a materialized view change due to the REFRESH
> command?  I wouldn't have thought so, but I guess the problem with
> that proposal is that everyone has a different idea of what the
> semantics of the column would be.  Suggesting that field as the
> solution here seems to reinforce that perception, anyway.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>

Originally, all I wanted was a column to record the creation date/time of
an object. One reason it was rejected was that it would lead to a request
for an additional column to record changes to objects. I maintain that
both can be done, but others disagree.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] does postgres log the create/refresh of a materialized view anywhere?

2016-12-13 Thread Melvin Davidson
On Tue, Dec 13, 2016 at 7:36 PM, Kevin Grittner <kgri...@gmail.com> wrote:

> On Tue, Dec 13, 2016 at 5:57 PM, Jonathan Vanasco <postg...@2xlp.com>
> wrote:
>
> > Is there a way to find out when a materialized view was
> > created/refreshed?
>
> > I can log this manually in postgresql if needed, but was hoping
> > there was some "timestamp" on the view in a system table.
>
> This is not currently tracked in the system catalogs.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

Kevin,
This goes back to a discussion of my request to add relcreated column to
pg_class.
https://www.postgresql.org/message-id/CANu8FiyiRPGZ+gB=1jayryx3hxcuqnflsfgdzfxsvlmj-jt...@mail.gmail.com
Apparently the naysayers do not feel it is worthwhile.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Index size

2016-12-11 Thread Melvin Davidson
On Sat, Dec 10, 2016 at 5:42 PM, Peter J. Holzer <hjp-pg...@hjp.at> wrote:

> On 2016-12-09 21:45:35 -0500, Melvin Davidson wrote:
> > On Fri, Dec 9, 2016 at 6:40 PM, Samuel Williams <
> space.ship.travel...@gmail.com>
> > wrote:
> > >I also read that when you change a column which is not index, all the
> > >indexes for that row need to be updated anyway. Is that correct?
> >
> > That is not correct. Indexes are changed under the following conditions:
> > A. An insert is done to the table which involves an index.
> > B. A delete is done to the table which involves an index.
> > C. An update is done that involves columns included in an index.
> > D. An index is REINDEXed
> >
> > Indexes point to the tid of the row for which the column(s) in the index
> > are involved. So if columns updated are not involved in the index,
> > there is no need to change the index.
>
> I don't think this is generally correct. The TID is a (block,item)
> > tuple. If the updated version of the row doesn't fit into the same block
> it has to be stored in a different block, so the TID will change (AIUI
> there is a bit of trickery to avoid changing the TID if the new version
> is stored in the same block). This means that all the index entries for
> this row (not just for the changed field) will have to be updated. You
> can set fillfactor to a smaller value to make this less likely.
>
> hp
>
> --
>_  | Peter J. Holzer| A coding theorist is someone who doesn't
> |_|_) || think Alice is crazy.
> | |   | h...@hjp.at | -- John Gordon
> __/   | http://www.hjp.at/ |http://downlode.org/Etext/alicebob.html
>






Yes, I see your point, but the case where the row does not fit into the
same block would only occur with unlimited field types such as varchar,
bytea, etc. I believe that to be the exception, and not the rule, so can
we agree that we are both right: for the general case indexes are updated
as I have described, and for the exception they act as you describe?

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Index size

2016-12-09 Thread Melvin Davidson
On Fri, Dec 9, 2016 at 6:40 PM, Samuel Williams <
space.ship.travel...@gmail.com> wrote:

> Thanks Kevin, that makes sense. Yeah, I understand the architectural
> difference a bit more now. I also read that when you change a column
> which is not index, all the indexes for that row need to be updated
> anyway. Is that correct?
>
> On 7 December 2016 at 05:27,  <kbran...@pwhome.com> wrote:
> > Samuel Williams <space.ship.travel...@gmail.com> wrote:
> >> So, uh, my main question was, does MySQL add null values to an index,
> and is this different from Postgres...
> >
> > Samuel,
> >
> > A quick google says that Mysql does index NULLs. Ask a Mysql group to
> get a more definitive answer.
> >
> > More relevant to your original question, I'll go out on a limb as I
> struggle to recall a fuzzy memory.
> >
> > The difference between Mysql and Postgresql is fundamental architecture,
> so yes the index creation will be very different, as others have said. IIRC
> (and I may not be), Mysql stores where a row is on the disk via the PK
> index. That means that secondary indexes point to the proper row in the PK
> index, which does mean that when you use a secondary index to get data that
> there is a double look up. They claim that's faster for updates and other
> stuff because a change to a row only requires 1 index to be changed.
> >
> > Postgresql stores the direct disk location in each index, which slows
> down updates a little, but makes selects faster (and I find this really
> amusing because so many people say Mysql is great because it's fast at
> reads, yet architecturally PG is faster). If I'm wrong here, I'm sure I'll
> be corrected. :)
> >
> > So you can see that Mysql indexes should be smaller than PG indexes
> because of what they carry. Personally, I think the diff is small enough
> I'm not going to worry about it, but math is such that some numbers are
> smaller than others. :) So that should explain what you're seeing.
> >
> > My opinion is that you shouldn't worry about the index size. Which DB
> does what you want the best? That obviously depends on what your needs are,
> but after using both Mysql and PG, I'll take PG whenever possible, thank
> you. Mysql has gotten better over the last 5-8 years, but there are still
> many pits of quicksand ready to swallow you up there that PG doesn't have.
> If you know where those pits are and/or don't travel into that part of the
> jungle, great for you; personally, I prefer to get the job done without
> having to watch where I'm walking. ;)
> >
> > HTH,
> > Kevin
> >
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

>I also read that when you change a column which is not indexed, all the
>indexes for that row need to be updated anyway. Is that correct?

That is not correct. Indexes are changed under the following conditions:

A. An insert is done to the table which involves an index.
B. A delete is done to the table which involves an index.
C. An update is done that involves columns included in an index.
D. An index is REINDEXed.

Indexes point to the tid of the row for which the column(s) in the index
are involved. So if the columns updated are not involved in the index,
there is no need to change the index.

https://www.postgresql.org/docs/9.4/static/sql-createindex.html
https://www.postgresql.org/docs/9.4/static/sql-reindex.html
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Who dropped a role?

2016-12-08 Thread Melvin Davidson
On Thu, Dec 8, 2016 at 4:34 AM, Charles Clavadetscher <
clavadetsc...@swisspug.org> wrote:

> Hello
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:pgsql-general-owner@
> postgresql.org] *On Behalf Of *Durumdara
> *Sent:* Donnerstag, 8. Dezember 2016 10:13
> *To:* pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] Who dropped a role?
>
>
>
> Hello!
>
>
>
> Sorry, meanwhile I found it as Group Role.
>
> I never used this kind of role. How to restore it to normal login role?
>
>
>
> ALTER ROLE ... LOGIN;
>
>
>
> You probably need to reassign the password (I don’t remember right now if
> the pw is dropped when changing from LOGIN to NOLOGIN).
>
>
>
> \password ...
>
>
>
> Thanks for it!
>
>
>
> dd
>
>
>
> 2016-12-08 9:53 GMT+01:00 Durumdara <durumd...@gmail.com>:
>
> Dear PG Masters!
>
>
>
> In a server today I didn't find an important role.
>
> I don't know what my colleagues did with it, but it seems to be lost.
>
> Do you know a trick, how get info about it?
>
>
>
> 1.) Who dropped it?
>
> 2.) If not dropped what happened with it?
>
>
>
> Does role dropping logged somewhere?
>
>
>
> Thanks for your help!
>
>
>
> dd
>
>
>


In addition to the info on how to reset the role to login, you might want
to make sure that, at minimum, you have log_statement = 'ddl' in
postgresql.conf and do a SELECT pg_reload_conf();

That way, all DDL-type statements will be recorded in the postgres log.
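
On 9.4 and later the same change can be made from psql without editing the
file (a minimal sketch; superuser required):

ALTER SYSTEM SET log_statement = 'ddl';
SELECT pg_reload_conf();
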
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Problems Importing table to pgadmin

2016-12-07 Thread Melvin Davidson
On Wed, Dec 7, 2016 at 8:25 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 12/07/2016 05:19 PM, metaresolve wrote:
>
>> Uh, yeah, it was a SELECT * from cc_20161207;
>>
>> I know, it was dumb. I didn't realize it would break it or at least run
>> for
>> a while. I tend to do things in small steps, run a query, check my
>> results,
>> then tweak.
>>
>> You're right, I wouldn't want to be viewing those million. so I guess I
>> could just be doing the top 1 rows to get a sense, then be doing my
>> aggregate group bys/etc to get my final numbers? That's how I hacked
>> Access
>> to be my data analysis program until I got to Alteryx. But I was also
>> never
>> using files of 1M then.
>>
>>
> FYI pgAdmin, if I remember correctly, has a setting that limits the
> maximum number of rows that it will fetch at a time.
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

With regard to John's comments
>Postgres, on a properly scaled and tuned database server, can handle
billions of records.
>Obviously, doing something silly like querying all billions at once will
never be fast; that's a lot of data to marshal and process.

On a similar note, your table structure is not exactly optimized for a
relational database.
I see no primary key to identify each row. In addition, the election & vote
columns
belong in a separate table that is a child of cc_20161207, since they are
repeating fields.

eg:
ALTER TABLE cc_20161207
  ADD COLUMN cc_20161207_pk serial;

UPDATE cc_20161207
   SET cc_20161207_pk = nextval('cc_20161207_pk_seq')
 WHERE cc_20161207_pk IS NULL;

ALTER TABLE cc_20161207
  ALTER COLUMN cc_20161207_pk SET NOT NULL;
ALTER TABLE cc_20161207
  ADD CONSTRAINT cc_20161207_pkey PRIMARY KEY (cc_20161207_pk);


CREATE TABLE election_data (
    cc_20161207_pk bigint NOT NULL,
    election_id serial NOT NULL,
    election varchar(4),
    vote_type varchar(1),
    CONSTRAINT election_data_pk PRIMARY KEY (cc_20161207_pk, election_id)
);
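
A foreign key tying the child rows back to the parent would complete the
design (a sketch, using the same names as above):

ALTER TABLE election_data
  ADD CONSTRAINT election_data_cc_fk
      FOREIGN KEY (cc_20161207_pk)
      REFERENCES cc_20161207 (cc_20161207_pk);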

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres pg_restore append data

2016-12-05 Thread Melvin Davidson
On Mon, Dec 5, 2016 at 4:36 AM, dhanuj hippie <dhanuj.hip...@gmail.com>
wrote:

> Hi,
> I have a pg backup created using pg_dump custom format. I'm trying to
> restore into a DB which already has some data, using pg_restore data-only.
> This fails in scenarios where same data (some rows) is present in DB as
> well as dump. Is there a way to ignore such rows and proceed with restoring
> the rest of data ? I don't want to lose the present data in DB. I need to
> append the backup onto current content in DB.
>
> Thanks,
> Dhanuj
>












The best I can advise you is this.

1. Create a new database x.
2. Restore the backup to the new database.
3. Rename the table to something like table_old.
4. Rename the primary index and any other index so that they
   will not be the same as the ones in the original table.
5. Dump table_old only -> pg_dump -t table_old x > table_old.sql
6. Restore table_old to your database with your original table (table_orig).

Then you can:

INSERT INTO table_orig
  SELECT * FROM table_old
   WHERE your_primary_key NOT IN (SELECT your_primary_key FROM table_orig);
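
On a large table, an anti-join form of that INSERT is usually faster than
NOT IN (a sketch, same illustrative names as above):

INSERT INTO table_orig
SELECT o.*
  FROM table_old o
 WHERE NOT EXISTS (SELECT 1
                     FROM table_orig t
                    WHERE t.your_primary_key = o.your_primary_key);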

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Where would I be able to get instructions regarding postgresql installation on Windows 10?

2016-12-04 Thread Melvin Davidson
Just click on the version of Windows x86-32 (or x86-64) PostgreSQL you
want from here and use the downloaded installer.

http://www.enterprisedb.com/products-services-training/pgdownload#windows

On Sun, Dec 4, 2016 at 7:57 PM, Varuna Seneviratna <
varunasenevira...@gmail.com> wrote:

> Hello,
> I looked through the documentation But was not able to find any
> information. If possible please guide me
>
> Varuna
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Index size

2016-12-04 Thread Melvin Davidson
On Sun, Dec 4, 2016 at 4:43 AM, Samuel Williams <
space.ship.travel...@gmail.com> wrote:

> Melvin, of course there are differences. However, I suspect there are at
> least SOME tangible differences which can be identified.
>
> On 4 December 2016 at 15:53, Melvin Davidson <melvin6...@gmail.com> wrote:
>
>>
>>
>> On Sat, Dec 3, 2016 at 9:32 PM, Steve Atkins <st...@blighty.com> wrote:
>>
>>>
>>> > On Dec 3, 2016, at 3:57 PM, Samuel Williams <
>>> space.ship.travel...@gmail.com> wrote:
>>> >
>>> > Thanks everyone for your feedback so far. I've done a bit more digging:
>>> >
>>> > MySQL in MBytes (about 350 million rows):
>>> >
>>> > index_user_event_on_what_category_id_created_at_latlng | 22806.00
>>> > index_user_event_for_reporting | 18211.00
>>> > index_user_event_on_created_at | 9519.00
>>> > index_user_event_on_user_id | 6884.00
>>> > index_user_event_on_poi_id | 4891.00
>>> > index_user_event_on_deal_id | 3979.00
>>> >
>>> > Postgres (about 250 million rows):
>>> >
>>> > index_user_event_on_what_category_id_created_at_latlng | 25 GB
>>> > index_user_event_for_reporting | 19 GB
>>> > index_user_event_on_created_at | 7445 MB
>>> > index_user_event_on_user_id | 7274 MB
>>> > index_user_event_on_deal_id | 7132 MB
>>> > index_user_event_on_poi_id | 7099 MB
>>> >
>>> > So, the index is a bit bigger, plus there is also the PKEY index which
>>> > increases disk usage by another whole index. Keep in mind in the
>>> > above, MySQL has about 40% more data.
>>> >
>>> > With some indexes, it looks like MySQL might not be adding all data to
>>> > the index (e.g. ignoring NULL values). Does MySQL ignore null values
>>> > in an index? Can we get the same behaviour in Postgres to minimise
>>> > usage? What would be the recommendation here?
>>>
>>> It's unlikely anyone will be able to usefully answer the questions you
>>> should be asking without seeing the schema and index definitions,
>>> and maybe some clues about how you're querying the data.
>>>
>>> Cheers,
>>>   Steve
>>>
>>> --
>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgsql-general
>>>
>>
>> You are comparing apples to oranges. MySQL and PostgreSQL engines are
>> different by design, so likewise
>> the size of the indexes will be different.
>> You may as well ask why a 2015 Porsche 911 Carrera 6 cyl, 3.4 L,
>> Auto(AM-S7) gets 22 MPG,
>> but the 2016 Jaguar XF MPG gets 24 mpg.
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>

Samuel,

Please note that the accepted convention in this forum is to bottom post.
Please do not top post.

>Melvin, of course there are differences. However, I suspect there are at
least SOME tangible differences which can be identified.

I have to ask, WHY do you think it is important to identify any
differences in the index methods?

What is important is that you understand why you need indexes and how they
are used.

Tutorial - Indexes
https://www.tutorialspoint.com/postgresql/postgresql_indexes.htm
CREATE INDEX
https://www.postgresql.org/docs/9.4/static/sql-createindex.html
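
As an aside, on the NULL question raised above: PostgreSQL b-tree indexes
do store NULL entries, but a partial index can exclude them and shrink the
index. A sketch, with illustrative table/column names:

CREATE INDEX index_user_event_on_deal_id
    ON user_event (deal_id)
 WHERE deal_id IS NOT NULL;

The planner can still use this index for queries such as
WHERE deal_id = 123, since equality implies the column is not NULL.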
-- 

*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Index size

2016-12-03 Thread Melvin Davidson
On Sat, Dec 3, 2016 at 9:32 PM, Steve Atkins <st...@blighty.com> wrote:

>
> > On Dec 3, 2016, at 3:57 PM, Samuel Williams <space.ship.traveller@gmail.
> com> wrote:
> >
> > Thanks everyone for your feedback so far. I've done a bit more digging:
> >
> > MySQL in MBytes (about 350 million rows):
> >
> > index_user_event_on_what_category_id_created_at_latlng | 22806.00
> > index_user_event_for_reporting | 18211.00
> > index_user_event_on_created_at | 9519.00
> > index_user_event_on_user_id | 6884.00
> > index_user_event_on_poi_id | 4891.00
> > index_user_event_on_deal_id | 3979.00
> >
> > Postgres (about 250 million rows):
> >
> > index_user_event_on_what_category_id_created_at_latlng | 25 GB
> > index_user_event_for_reporting | 19 GB
> > index_user_event_on_created_at | 7445 MB
> > index_user_event_on_user_id | 7274 MB
> > index_user_event_on_deal_id | 7132 MB
> > index_user_event_on_poi_id | 7099 MB
> >
> > So, the index is a bit bigger, plus there is also the PKEY index which
> > increases disk usage by another whole index. Keep in mind in the
> > above, MySQL has about 40% more data.
> >
> > With some indexes, it looks like MySQL might not be adding all data to
> > the index (e.g. ignoring NULL values). Does MySQL ignore null values
> > in an index? Can we get the same behaviour in Postgres to minimise
> > usage? What would be the recommendation here?
>
> It's unlikely anyone will be able to usefully answer the questions you
> should be asking without seeing the schema and index definitions,
> and maybe some clues about how you're querying the data.
>
> Cheers,
>   Steve
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

You are comparing apples to oranges. MySQL and PostgreSQL engines are
different by design, so likewise
the size of the indexes will be different.
You may as well ask why a 2015 Porsche 911 Carrera 6 cyl, 3.4 L,
Auto(AM-S7) gets 22 MPG,
but the 2016 Jaguar XF gets 24 MPG.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Any work being done on materialized view?

2016-12-03 Thread Melvin Davidson
What exactly are the features you are looking for? Materialized views are
treated as tables in PostgreSQL and you can create indexes on them.

https://www.postgresql.org/docs/9.4/static/sql-creatematerializedview.html

"CREATE MATERIALIZED VIEW is *similar to **CREATE TABLE* AS, except that it
also remembers the query used to initialize the view, so that it can be
refreshed later upon demand. *A materialized view has many of the same
properties as a table*, but there is no support for temporary materialized
views or automatic generation of OIDs."
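
For example, a minimal sketch (table and view names are illustrative;
REFRESH ... CONCURRENTLY requires 9.4+ and a unique index on the view):

CREATE MATERIALIZED VIEW order_totals AS
  SELECT customer_id, sum(amount) AS total
    FROM orders
   GROUP BY customer_id;

CREATE UNIQUE INDEX order_totals_customer_idx
    ON order_totals (customer_id);

-- repopulate on demand without blocking concurrent readers
REFRESH MATERIALIZED VIEW CONCURRENTLY order_totals;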

On Sat, Dec 3, 2016 at 8:55 AM, inspector morse <inspectormors...@gmail.com>
wrote:

> Is there any work being done on materialized views for version 9.7? This
> postgresql feature is severely lacking compared to similar features like
> indexed views by sql server.
>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Moving pg_xlog

2016-12-01 Thread Melvin Davidson
On Thu, Dec 1, 2016 at 10:17 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, Dec 1, 2016 at 7:59 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>
>> On Thu, Dec 1, 2016 at 5:55 AM, Robert Inder <rob...@interactive.co.uk>
>> wrote:
>>
>>>
>>> I'd really like to read an explicit discussion of this in the official
>>> documentation, rather than just glean what I can from answers to
>>> questions.
>>>
>>
>> The official documentation cannot have a dissertation on every
>> combination of hardware, OS, file-system type, version of that file-system,
>> and your usage pattern.  That is inherently the realm of the wiki or the
>> blogs.
>>
>>
> ​The documentation has enough information at this level of detail that I
> wouldn't object to adding commentary addressing the above should someone
> take the time to write it.​
>
> Given that the location of pg_xlog is not "configurable" placing such
> commentary in Server Configuration would be a no-go, however.  At a quick
> glance a new section under "Server Setup and Operation - Creating a
> Database Cluster" would probably be a better home.  It already discusses
> Secondary File Systems and in many ways this is just an extension of that
> discussion.
>
> David J.
>
>
>Given that the location of pg_xlog is not "configurable"

 Well, while the location of pg_xlog is not currently configurable, on a
Linux system the way to do it is to:
 1. stop PostgreSQL
 2. move the pg_xlog directory to a separate partition
 3. create a symbolic link to point to the new partition
 4. restart PostgreSQL
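
A sketch of those steps (the data directory and new partition paths are
illustrative; adjust them to your installation):

# stop the server
pg_ctl -D /var/lib/postgresql/9.4/main stop -m fast
# move the WAL directory to the new partition and link it back
mv /var/lib/postgresql/9.4/main/pg_xlog /mnt/waldisk/pg_xlog
ln -s /mnt/waldisk/pg_xlog /var/lib/postgresql/9.4/main/pg_xlog
# restart
pg_ctl -D /var/lib/postgresql/9.4/main start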

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Overwrite pg_catalog?

2016-12-01 Thread Melvin Davidson
On Thu, Dec 1, 2016 at 7:07 AM, Francisco Olarte <fola...@peoplecall.com>
wrote:

> Juliano:
>
> On Thu, Dec 1, 2016 at 12:16 PM, Juliano <jpli...@protonmail.com> wrote:
> > I tried to restore pg_catalog to my new database, but the existing
> > pg_catalog can't be overwritten or dropped, and postgres auto creates
> > pg_catalog when I create a new DB.
>
> This is because, in general, pg_catalog is maintained by DML
> statements, restoring it is not going to do what you think.
>
> I.e., if you create a table, a row goes into pg_class, but if you
> somehow manage to insert into pg_class a table is not properly created
> ( more things need to be done ).
>
> Copying pg_catalog from one db to other is like trying to copy the
> root dir and FAT from one floppy ( how old fashioned ) to other, it
> cannot be done with normal tools and probably won't do what you think
> it does.
>
>
> > So, there is a way to restore the pg_catalog to a new database?
>
> Probably not, but this has the faint smell of http://xyproblem.info/ ,
> what are you truing to achieve by doing that?
>
> Francisco Olarte.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



Juliano,

The right way to do what you want is to:

A:

1. pg_dump from the database you want to copy
   https://www.postgresql.org/docs/9.6/static/app-pgdump.html

2. pg_restore the dump file to the new database.
   https://www.postgresql.org/docs/9.6/static/app-pgrestore.html

OR

B:

1. pg_dumpall from the cluster you want to copy
   https://www.postgresql.org/docs/9.6/static/app-pg-dumpall.html

2. use psql to restore the dump file into the new cluster
   https://www.postgresql.org/docs/9.6/static/app-psql.html
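
A sketch of option A from the command line (database names are
illustrative):

pg_dump -Fc -U postgres source_db -f source_db.dump
createdb -U postgres new_db
pg_restore -U postgres -d new_db source_db.dump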

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Monitoring Replication - Postgres 9.2

2016-11-30 Thread Melvin Davidson
On Wed, Nov 30, 2016 at 8:04 AM, Cachique <cachi...@gmail.com> wrote:

> You can try pg_cron.
> https://github.com/citusdata/pg_cron
> "pg_cron is a simple cron-based job scheduler for PostgreSQL (9.5 or
> higher) that runs inside the database as an extension. It uses the same
> syntax as regular cron, but it allows you to schedule PostgreSQL commands
> directly from the database"
>
> It looks like what you want.
>
> Walter.
>
> On Tue, Nov 29, 2016 at 10:40 PM, Patrick B <patrickbake...@gmail.com>
> wrote:
>
>>
>>
>> 2016-11-30 14:21 GMT+13:00 John R Pierce <pie...@hogranch.com>:
>>
>>> On 11/29/2016 5:10 PM, Patrick B wrote:
>>>
>>>
>>> Yep.. once a minute or so. And yes, I need to store a history with
>>> timestamp.
>>>
>>> Any idea? :)
>>>
>>>
>>> so create a table with a timestamptz, plus all the fields you want, have
>>> a script (perl?  python?  whatever your favorite poison is with database
>>> access) that once a minute executes those two queries (you'll need two
>>> database connections since only the slave knows how far behind it is), and
>>> inserts the data into your table.
>>>
>>>
>>> --
>>> john r pierce, recycling bits in santa cruz
>>>
>>>
>>
>> Can't I do it on the DB size? Using a trigger maybe? instead of using
>> Cron?
>>
>> Patrick
>>
>>
>
>The OP wants to run queries on the master and the slave, and combine them.

Another option, although a bit convoluted, would be to extract the data to
a csv file, scp it to destination server, and then copy in from there
eg:
Contents of bash script
=======================
#!/bin/bash
# run the query locally and spool it to a CSV file
psql -U postgres <<'EOF'
\t
\a
\f ','
\o /tmp/results.csv
select now() as time_pk,
       client_addr,
       state,
       sent_location,
       write_location,
       flush_location,
       replay_location,
       sync_priority
  from pg_stat_replication;
EOF

# ship the file to the destination server
scp /tmp/results.csv destination_server:/tmp/

# load it into the history table on the destination
psql -U postgres -h destination_server <<'EOF'
COPY data_table
FROM '/tmp/results.csv'
WITH csv;
EOF
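
An alternative sketch that avoids the intermediate file by piping COPY
between the two servers (host and table names are illustrative):

psql -U postgres -c "COPY (SELECT now(), client_addr, state, sent_location,
                                  write_location, flush_location,
                                  replay_location, sync_priority
                             FROM pg_stat_replication) TO STDOUT WITH CSV" \
| psql -U postgres -h destination_server -c "COPY data_table FROM STDIN WITH CSV"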

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Monitoring Replication - Postgres 9.2

2016-11-29 Thread Melvin Davidson
On Tue, Nov 29, 2016 at 8:55 PM, John R Pierce <pie...@hogranch.com> wrote:

> On 11/29/2016 5:40 PM, Patrick B wrote:
>
>>
>>
>> Can't I do it on the DB size? Using a trigger maybe? instead of using
>> Cron?
>>
>
> triggers are only called on database events like insert, update, select.
>  even something like the pgagent scheduler thats frequently bundled with
> pgadmin uses cron to run its master time process, which checks to see if
> there are any pending pgagent jobs and invokes them.
>
>
>
> for a every-minute event, i wouldn't use cron, I would write a little
> script/application in something like perl or python, which keeps persistent
> connections open, samples your data, inserts it, and sleeps til the next
> minute then repeats. running it from cron would require multiple
> process forks every sample, which is fairly expensive.
>
>
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



There is no reason you can't execute a cron job on production to a remote
db.

eg:

contents of cron
================
*/5 * * * *  psql -U postgres -h 123.4.56.789 -d remote_db_name -f /path_to/exec.sql

contents of exec.sql
====================
INSERT INTO your_table
  SELECT now(),
         client_addr,
         state,
         sent_location,
         write_location,
         flush_location,
         replay_location,
         sync_priority
    FROM pg_stat_replication;
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] pg_dump system catalog

2016-11-28 Thread Melvin Davidson
On Mon, Nov 28, 2016 at 10:50 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 11/28/2016 07:44 AM, Melvin Davidson wrote:
>
>>
>>
>
>>
>> *To clarify, you cannot dump the pg_catalog schema. It is the main
>> control of how all other objects are
>>
>
> Actually you can. I would not of thought so, but tried Achilleas's
> suggestion and it worked. Example:
>
> --
> -- Name: pg_aggregate; Type: TABLE; Schema: pg_catalog; Owner: postgres
> --
>
> CREATE TABLE pg_aggregate (
> aggfnoid regproc NOT NULL,
> aggkind "char" NOT NULL,
> aggnumdirectargs smallint NOT NULL,
> aggtransfn regproc NOT NULL,
> aggfinalfn regproc NOT NULL,
> aggmtransfn regproc NOT NULL,
> aggminvtransfn regproc NOT NULL,
> aggmfinalfn regproc NOT NULL,
> aggfinalextra boolean NOT NULL,
> aggmfinalextra boolean NOT NULL,
> aggsortop oid NOT NULL,
> aggtranstype oid NOT NULL,
> aggtransspace integer NOT NULL,
> aggmtranstype oid NOT NULL,
> aggmtransspace integer NOT NULL,
> agginitval text,
> aggminitval text
> );
>
> COPY pg_aggregate (aggfnoid 
>
>
> and you do get errors:
>
> pg_dump: WARNING: typtype of data type "any" appears to be invalid
> pg_dump: WARNING: typtype of data type "anyarray" appears to be invalid
> pg_dump: WARNING: typtype of data type "anyelement" appears to be invalid
> pg_dump: WARNING: typtype of data type "anyenum" appears to be invalid
> pg_dump: WARNING: typtype of data type "anynonarray" appears to be invalid
> pg_dump: WARNING: typtype of data type "anyrange" appears to be invalid
> pg_dump: WARNING: typtype of data type "cstring" appears to be invalid
>
> 
>
> Still not sure why you would want to, but you can.
>
>
> stored in the cluster. There is no point in dumping it and all it's
>> tables and views are already clearly
>> documented.
>> https://www.postgresql.org/docs/9.6/static/catalogs.html
>>
>> pg_catalog itself is generated with the initdb command when a new
>> postgresql cluster is generated.
>> https://www.postgresql.org/docs/9.6/static/creating-cluster.html
>> https://www.postgresql.org/docs/9.6/static/app-initdb.html*
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Hmmm, well you learn something new every day. However, although you can
view the dump file, I'm pretty sure you cannot restore it to a database
because, by definition, those tables would already exist in the database.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] pg_dump system catalog

2016-11-28 Thread Melvin Davidson
On Mon, Nov 28, 2016 at 9:33 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 11/28/2016 03:40 AM, Juliano wrote:
>
>> I'm not knew that system catalogs could be saved as schema, thanks
>> Achilleas.
>>
>> So, I would like to get the data and the schema of the system catalogs.
>> How can I include the data in a dump file with system schemas? (The
>> parameter -a makes sense together the parameter --schema?)
>>
>
> No.
>
> Use Achilleas's command and see what happens.
>
>
>
>
>> Regards,
>> Juliano
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>








To clarify, you cannot dump the pg_catalog schema. It is the main control
of how all other objects are stored in the cluster. There is no point in
dumping it, and all its tables and views are already clearly documented.
https://www.postgresql.org/docs/9.6/static/catalogs.html

pg_catalog itself is generated with the initdb command when a new
postgresql cluster is created.
https://www.postgresql.org/docs/9.6/static/creating-cluster.html
https://www.postgresql.org/docs/9.6/static/app-initdb.html

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] 'pg_ctl restart' does not terminate

2016-11-26 Thread Melvin Davidson
On Sat, Nov 26, 2016 at 2:52 PM, twoflower <standa.ku...@gmail.com> wrote:

> That makes perfect sense.
>
> Thank you for a great help, Adrian!
>
>
>
> --
> View this message in context: http://postgresql.nabble.com/
> pg-ctl-restart-does-not-terminate-tp5932070p5932095.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


I feel it important to add one note to this thread. The version of
PostgreSQL has not been stated, but it is very important to note that the
default behavior of pg_ctl changed between v9.4 and v9.5.
https://www.postgresql.org/docs/9.5/static/release-9-5.html

   - Change pg_ctl's default shutdown mode from smart to fast (Bruce
     Momjian)
     https://www.postgresql.org/docs/9.5/static/app-pg-ctl.html

     This means the default behavior will be to forcibly cancel existing
     database sessions, not simply wait for them to exit.

So if the PostgreSQL version is 9.4 or lower, and there are any open
connections, pg_ctl will wait until all connections terminate (either
normally or are forced off). That "might" explain why pg_ctl never
terminated (unless -m fast was also specified). However, if it is version
9.5 or above, then -m fast is the default, so existing connections will
not be a problem.
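
So on 9.4 and older, a sketch of forcing the fast shutdown explicitly (the
data directory path is illustrative):

pg_ctl restart -D /usr/local/pgsql/data -m fast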
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Invoice Table Design

2016-11-24 Thread Melvin Davidson
On Thu, Nov 24, 2016 at 9:17 AM, Robert Heinen <r...@216software.com> wrote:

> I was wondering if anyone might be able to help me out with a table design
> question.
>
> A quick intro -- I'm helping a company switch from a mongo database over
> to postgresql (yay!). The company is a marketplace app for musicians and
> hosts. The basic idea is that a host can book a musician for an event, like
> a wedding or a birthday. Also, an artist and a host can be either basic or
> "pro" accounts -- if they're "pro" then they pay a little bit more and get
> some extra features.
>
> The design I'm struggling with is how to handle invoices and transactions
> in postgres. In mongo, everything is stuffed into a single 'invoices' table
> that includes sender and receiver addresses, the amount of the invoice,
> taxes, etc. It also contains a reference to the booked event, the artist
> and the host, as well as some state information through nullable columns --
> created date, sent date, paid date.
>
> At the same time the table also tracks the above mentioned "pro"
> subscriptions by utilizing a type field (so 'concertfee' vs
> 'subscription'). So both type of invoices are stuffed into the table and
> it's up to the application to understand the difference in the types.
>
> To translate this to postgres, I'm leaning towards breaking out the
> different types of invoices into their own tables but keeping the basics of
> an invoice (sender, receiver, amount) and then referencing from specific
> tables like -- subscription_invoices and event_invoices.
>
> so tables would be:
> invoices (invoice_uuid primary key)
> event_invoices (invoice_uuid FK, event_uuid FK)
> artist_subscription_invoices (invoice_uuid FK, artist_uuid FK)
>
> There is one last interesting part. When an event is booked, two invoices
> are generated -- one from the artist to the host for the payment of the
> concert, and then a second one from my company to the artist for the
> booking fee. Again, these seem like two separate tables, with, I suppose,
>  a kind of a parent-child relationship (we can't have a booking fee unless
> we have the original invoice for the booking).
>
> Thanks for reading --any insight, comments, or questions are appreciated!
>
> Rob
>



Maybe it's just me, but I would go with a different design.

tables:
 artist
    artist_uuid
    artist_type -- pro, basic
    artist_name
    artist...

 event
    event_uuid
    event_type -- wedding, birthday, etc;
    event_...

 invoice
    inv_uuid
    inv_type    -- event, artist
    artist_uuid -- can be NULL (depends on inv_type)
    event_uuid  -- can be NULL (depends on inv_type)
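
A rough DDL sketch of that layout (all names, types, and checks are
illustrative, not a finished design):

CREATE TABLE artist (
    artist_uuid uuid PRIMARY KEY,
    artist_type text NOT NULL CHECK (artist_type IN ('pro', 'basic')),
    artist_name text NOT NULL
);

CREATE TABLE event (
    event_uuid uuid PRIMARY KEY,
    event_type text NOT NULL  -- wedding, birthday, etc.
);

CREATE TABLE invoice (
    inv_uuid    uuid PRIMARY KEY,
    inv_type    text NOT NULL CHECK (inv_type IN ('event', 'artist')),
    artist_uuid uuid REFERENCES artist (artist_uuid),  -- NULL depending on inv_type
    event_uuid  uuid REFERENCES event (event_uuid)     -- NULL depending on inv_type
);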

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] Thank you

2016-11-23 Thread Melvin Davidson
*This is not a problem or a query.*

*Here in the U.S.A. it is the day before Thanksgiving. In keeping with
that, I'd like to thank all those that have helped me, be thankful for all
those I have helped, and forgive all those that have refused to see my
point of view.*


*Happy Thanksgiving to everyone.*
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to change order sort of table in HashJoin

2016-11-19 Thread Melvin Davidson
On Sat, Nov 19, 2016 at 12:46 AM, Man Trieu <man.tr...@gmail.com> wrote:

> Hi Experts,
>
> As in the example below, i think the plan which hash table is created on
> testtbl2 (the fewer tuples) should be choosen.
> Because creating of hash table should faster in testtbl2. But it did not.
>
> I have tried to change the ordering of table by tuning parameter even if
> using pg_hint_plan but not success.
>
> Why does planner do not choose the plan which hash table is created on
> testtbl2 (which can take less time)?
> And how to change the order?
>
> # I also confirm planner info by rebuild postgresql but not found related
> usefull info about hash table
>
> ---
> postgres=# create table testtbl1(id integer, c1 text, c2 text, c3 text,
> primary key (c1,c2,c3));
> CREATE TABLE
> postgres=# create table testtbl2(id integer, c1 text, c2 text, c3 text,
> primary key (c1,c2,c3));
> CREATE TABLE
> postgres=# insert into testtbl1 select generate_series(1,100),
> random()::text,random()::text,random()::text;
> INSERT 0 100
> postgres=# insert into testtbl2 select * from testtbl1 where id%7 = 0;
> INSERT 0 142857
>
> postgres=# explain analyze select * from testtbl1 inner join testtbl2
> using(c1,c2,c3);
>QUERY PLAN
> 
> -
>  Hash Join  (cost=38775.00..47171.72 rows=1 width=59) (actual
> time=1120.824..1506.236 rows=142857 loops=1)
>Hash Cond: ((testtbl2.c1 = testtbl1.c1) AND (testtbl2.c2 = testtbl1.c2)
> AND (testtbl2.c3 = testtbl1.c3))
>->  Seq Scan on testtbl2  (cost=0.00..3039.57 rows=142857 width=56)
> (actual time=0.008..27.964 rows=142857 loops=1)
>->  Hash  (cost=21275.00..21275.00 rows=100 width=55) (actual
> time=1120.687..1120.687 rows=100 loops=1)
>  Buckets: 131072  Batches: 1  Memory Usage: 89713kB
>  ->  Seq Scan on testtbl1  (cost=0.00..21275.00 rows=100
> width=55) (actual time=0.035..458.522 rows=100 loops=1)
>  Planning time: 0.922 ms
>  Execution time: 1521.258 ms
> (8 rows)
>
> postgres=# set pg_hint_plan.enable_hint to on;
> SET
> postgres=# /*+
> postgres*# HashJoin(testtbl1 testtbl2)
> postgres*# Leading(testtbl1 testtbl2)
> postgres*# */
> postgres-# explain analyze select * from testtbl1 inner join testtbl2
> using(c1,c2,c3);
>QUERY PLAN
> 
> -
>  Hash Join  (cost=48541.00..67352.86 rows=1 width=59) (actual
> time=1220.625..1799.709 rows=142857 loops=1)
>Hash Cond: ((testtbl2.c1 = testtbl1.c1) AND (testtbl2.c2 = testtbl1.c2)
> AND (testtbl2.c3 = testtbl1.c3))
>->  Seq Scan on testtbl2  (cost=0.00..3039.57 rows=142857 width=56)
> (actual time=0.011..58.649 rows=142857 loops=1)
>->  Hash  (cost=21275.00..21275.00 rows=100 width=55) (actual
> time=1219.295..1219.295 rows=100 loops=1)
>  Buckets: 8192  Batches: 32  Memory Usage: 2851kB
>  ->  Seq Scan on testtbl1  (cost=0.00..21275.00 rows=100
> width=55) (actual time=0.021..397.583 rows=100 loops=1)
>  Planning time: 3.971 ms
>  Execution time: 1807.710 ms
> (8 rows)
>
> postgres=#
> ---
>
>
> Thanks and best regard!
>




AFAIK, the only way to change a sort order is to use the ORDER BY clause
in the SELECT.
https://www.postgresql.org/docs/9.4/static/sql-select.html

"8. If the ORDER BY clause is specified, the returned rows are sorted in
the specified order. If ORDER BY is not given, the rows are returned in
whatever order the system finds fastest to produce."

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] pg_class (system) table increasing size.

2016-11-17 Thread Melvin Davidson
On Thu, Nov 17, 2016 at 1:33 PM, dhaval jaiswal <dhava...@hotmail.com>
wrote:

> select * from pg_stat_sys_tables where relname = 'pg_class';
>
> -[ RECORD 1 ]---+---
> relid   | 1259
> schemaname  | pg_catalog
> relname | pg_class
> seq_scan| 1838
> seq_tup_read| 3177416
> idx_scan| 1027456557
> idx_tup_fetch   | 959682909
> n_tup_ins   | 0
> n_tup_upd   | 0
> n_tup_del   | 0
> n_tup_hot_upd   | 0
> n_live_tup  | 0
> n_dead_tup  | 0
> n_mod_since_analyze | 0
> last_vacuum |
> last_autovacuum |
> last_analyze|
> last_autoanalyze|
> vacuum_count| 0
> autovacuum_count| 0
> analyze_count   | 0
> autoanalyze_count   | 0
>
>
> Yes, the size of pg_class table is of 5 GB.  However, the existing row is
> only 2380 only. It's got fragmented.
>
> --
> *From:* Adrian Klaver <adrian.kla...@aklaver.com>
> *Sent:* Thursday, November 17, 2016 8:29 PM
> *To:* dhaval jaiswal; David G. Johnston
> *Cc:* pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] pg_class (system) table increasing size.
>
> On 11/16/2016 07:08 PM, dhaval jaiswal wrote:
> >
> >>> Because you are creating (specific) objects.
> >
> > I have gone through the link and  how would i figure out which
> > specific object is causing this.  Can you please elaborate more here.
> >
> >
> > We do not have the much temporary table usage.
> >
> >
> > Since the size is bigger (5 GB) to maintain. does it requires
> > maintenance as well for thepg_class.
>
> Should have added to my previous post. What does:
>
> select * from pg_stat_sys_tables where relname = 'pg_class';
>
> show?
>
> >
> >
> > It seems its affecting performance.
> >
> >
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
























>Yes, the size of pg_class table is of 5 GB.  However, the existing row is
only 2380 only. It's got fragmented.

I strongly believe you are incorrect about the size of the pg_class table.
The correct way to determine that size is:

SELECT n.nspname as schema,
       c.relname as table,
       a.rolname as owner,
       c.relfilenode as filename,
       c.reltuples::bigint,
       pg_size_pretty(pg_relation_size(n.nspname || '.' || c.relname)) as size,
       pg_size_pretty(pg_total_relation_size(n.nspname || '.' || c.relname)) as total_size,
       pg_relation_size(n.nspname || '.' || c.relname) as size_bytes,
       pg_total_relation_size(n.nspname || '.' || c.relname) as total_size_bytes,
       CASE WHEN c.reltablespace = 0
            THEN 'pg_default'
            ELSE (SELECT t.spcname
                    FROM pg_tablespace t
                   WHERE t.oid = c.reltablespace)
        END as tablespace
  FROM pg_class c
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_authid a ON (a.oid = c.relowner)
 WHERE c.relname = 'pg_class';

What does that show for reltuples and total_size?
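
If the query does confirm catalog bloat, a sketch of compacting it
(superuser only; VACUUM FULL takes an exclusive lock on pg_class, so run
it in a quiet window):

VACUUM FULL VERBOSE pg_class;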

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] which work memory parameter is used for what?

2016-11-09 Thread Melvin Davidson
On Tue, Nov 8, 2016 at 1:53 PM, Hector Yuen <hec...@infer.com> wrote:

> Hello,
>
> I am confused on which are the parameters for different queries. I am
> trying to run VACUUM on a big table, and it is easier for me to set the
> work memory for the specific session instead of tuning it in
> postgresql.conf.
>
> I noticed that if I do:
>
> set work_mem='1GB';
>
> it doesn't help VACUUM, I have to do:
>
> set maintenance_work_mem='1GB';
>
> to accelerate the operation. I could notice that by running VACUUM VERBOSE
> and see that the table was scanned less times an the operation finished a
> lot faster.
>
> My question is, for which operations does work_mem matter and for which
> ones does maintenance_work_mem do? I am specially interested in operations
> like ANALYZE and VACUUM, I believe ANALYZE depends on work_mem and VACUUM
> on maintenance_work_mem.
>
> Can you confirm my understanding?
>
> Thanks
>
> --
> -h
>


>I believe ANALYZE depends on work_mem and VACUUM on maintenance_work_mem.

No, ANALYZE is part of VACUUM. They both use maintenance_work_mem.

https://www.postgresql.org/docs/9.4/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY
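
A sketch of the per-session approach discussed in the thread (the table
name is illustrative):

SET maintenance_work_mem = '1GB';
VACUUM (VERBOSE, ANALYZE) big_table;
RESET maintenance_work_mem;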
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Role and grants

2016-11-09 Thread Melvin Davidson
On Wed, Nov 9, 2016 at 2:05 PM, Fran ... <bryan...@hotmail.com> wrote:

> Hi,
>
>
> I am a new user with PostgreSQL, I came from MySQL and I am experiencing
> some issues with roles and privileges.
>
>
> I have created a DB,user and grant privilege on this DB to this user. How
> could I check what is the privileges/permissions for this user?
>
>
> Transcript:
>
>
> postgres=# create database test;
> CREATE DATABASE
> postgres=# create user test with password 'test';
> CREATE ROLE
> postgres=# grant all privileges on database test to test;
> GRANT
> postgres=# \l
>   List of databases
>Name|  Owner   | Encoding |   Collate   |Ctype|   Access
> privileges
> ---+--+--+-+
> -+---
>  postgres  | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
>  template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres
>  +
>|  |  | | |
> postgres=CTc/postgres
>  template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres
>  +
>|  |  | | |
> postgres=CTc/postgres
>  test  | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
> =Tc/postgres +
>|  |  | | |
> postgres=CTc/postgres+
>|  |  | | |
> test=CTc/postgres
>  test1 | test1| UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/test1
>  +
>|  |  | | |
> test1=CTc/test1
> (5 rows)
>
> With "\l" command It's no clear.
>
>
> Finally, I don't find some command like "show grants for..." in MySQL.
>
>
> Regards.
>
> Bryan
>

You can use the following query to show what tables <username> can access.
Just replace <username> with the actual username you want.

SELECT *
  FROM information_schema.table_privileges
 WHERE grantee = '<username>'
ORDER BY table_schema, table_name, privilege_type;

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] which work memory parameter is used for what?

2016-11-09 Thread Melvin Davidson
On Wed, Nov 9, 2016 at 10:05 AM, Andreas Joseph Krogh <andr...@visena.com>
wrote:

> På onsdag 09. november 2016 kl. 15:54:13, skrev Adrian Klaver <
> adrian.kla...@aklaver.com>:
>
> On 11/08/2016 06:45 PM, Andreas Joseph Krogh wrote:
> > På onsdag 09. november 2016 kl. 03:02:54, skrev John R Pierce
> > <pie...@hogranch.com <mailto:pie...@hogranch.com>>:
> >
> > On 11/8/2016 2:34 PM, Hector Yuen wrote:
> > > I am confused on which are the parameters for different queries. I
> am
> > > trying to run VACUUM on a big table, and it is easier for me to set
> > > the work memory for the specific session instead of tuning it in
> > > postgresql.conf.
> > >
> > > I noticed that if I do:
> > >
> > > set work_mem='1GB';
> > >
> > > it doesn't help VACUUM, I have to do:
> > >
> > > set maintenance_work_mem='1GB';
> > >
> > > to accelerate the operation. I could notice that by running VACUUM
> > > VERBOSE and see that the table was scanned less times an the
> operation
> > > finished a lot faster.
> > >
> > > My question is, for which operations does work_mem matter and for
> > > which ones does maintenance_work_mem do? I am specially interested
> in
> > > operations like ANALYZE and VACUUM, I believe ANALYZE depends on
> > > work_mem and VACUUM on maintenance_work_mem.
> > >
> > > Can you confirm my understanding?
> > >
> >
> > https://www.postgresql.org/docs/current/static/runtime-conf
> ig-resource.html
> >
> >
> > maintenance_work_mem is used by vacuum and create index operations
> > (including implicit index creation such as add foreign key).
> >
> >
> > There is no such thing in PG.
>
> Can you be more specific as;
>
> https://www.postgresql.org/docs/9.5/static/runtime-config-
> resource.html#GUC-MAINTENANCE-WORK-MEM
>
>
> "implicit index creation such as add foreign key"
> No implicit index will be created.
>
> --
> *Andreas Joseph Krogh*
> CTO / Partner - Visena AS
> Mobile: +47 909 56 963
> andr...@visena.com
> www.visena.com
> <https://www.visena.com>
>
>










> There is no such thing in PG.
> No implicit index will be created.

I believe he is referring to: "(including implicit index creation such as
add foreign key)".

To clarify, indexes are NOT implicitly created for foreign keys. Indexes
ARE created for PRIMARY KEYs when specified in CREATE TABLE ... or ALTER
TABLE ADD CONSTRAINT PRIMARY KEY ... I believe the confusion is caused by
the wording "ALTER TABLE ADD FOREIGN KEY". In that case,
maintenance_work_mem is used to create the FK _CONSTRAINT_, but an
associated index is not created implicitly.
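
A sketch with illustrative names: the FK below creates only the
constraint, so if queries need it, the index on the referencing column
must be created explicitly:

ALTER TABLE orders
  ADD CONSTRAINT orders_customer_fk
      FOREIGN KEY (customer_id) REFERENCES customer (id);

CREATE INDEX orders_customer_id_idx ON orders (customer_id);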

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Must I create an index for child table if master table have one?

2016-11-08 Thread Melvin Davidson
To clarify, if you are talking about partitioning, then you almost
certainly want to create a similar index on the child table.
If you are referring to a Foreign Key Constraint, then it is also a good
idea to make that index, but not necessary.
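
For the partitioning case, a sketch with illustrative names (9.x
inheritance-based partitioning, assuming a parent table measurement with a
logdate column; each child needs its own index):

CREATE TABLE measurement_y2016 (
    CHECK (logdate >= DATE '2016-01-01' AND logdate < DATE '2017-01-01')
) INHERITS (measurement);

CREATE INDEX measurement_y2016_logdate_idx
    ON measurement_y2016 (logdate);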

On Tue, Nov 8, 2016 at 1:46 PM, Melvin Davidson <melvin6...@gmail.com>
wrote:

>
>
> On Tue, Nov 8, 2016 at 1:33 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Tue, Nov 8, 2016 at 11:08 AM, Edmundo Robles <edmu...@sw-argos.com>
>> wrote:
>>
>>> Hi!
>>>
>>>  i have a  simple question,  if the master table have an index, must  i
>>> have create  the same index in a child  table?
>>> what is  the best practice  for  indexes  in table partition ?
>>>
>>
>> Indexes are self-contained within the physical table on which they are
>> defined.  The index on the master contains no rows that are present in the
>> child table.
>>
>> David J.​
>>
>>
> >Indexes are self-contained within the physical table on which they are
> defined
> *IOW, yes. *
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Must I create an index for child table if master table have one?

2016-11-08 Thread Melvin Davidson
On Tue, Nov 8, 2016 at 1:33 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, Nov 8, 2016 at 11:08 AM, Edmundo Robles <edmu...@sw-argos.com>
> wrote:
>
>> Hi!
>>
>>  i have a  simple question,  if the master table have an index, must  i
>> have create  the same index in a child  table?
>> what is  the best practice  for  indexes  in table partition ?
>>
>
> Indexes are self-contained within the physical table on which they are
> defined.  The index on the master contains no rows that are present in the
> child table.
>
> David J.​
>
>
>Indexes are self-contained within the physical table on which they are
defined
*IOW, yes. *

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] timestamp without timezone to have timezone

2016-11-06 Thread Melvin Davidson
On Sun, Nov 6, 2016 at 9:11 AM, Benjamin Adams <benjamindad...@gmail.com>
wrote:

> I have a server that has a column timestamp without timezone.
>
> Is the time still saved?
> if I select column with timestamp it will show server timestamp with
> timezone.
>
> But If I move the data from EST to Central will the timestamp with
> timezone be correct?
> Or will it just not make the adjustment?
>
> Thanks
> Ben
>







>But If I move the data from EST to Central will the timestamp with
timezone be correct?

The correct way to do that is to use the AT TIME ZONE function.
https://www.postgresql.org/docs/9.4/static/functions-datetime.html#FUNCTIONS-DATETIME-ZONECONVERT

eg:

postgres=> SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'EST' as Eastern,
postgres->        TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'CST' as Central,
postgres->        TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'MST' as Mountain,
postgres->        TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'PST' as Pacific;
        eastern         |        central         |        mountain        |        pacific
------------------------+------------------------+------------------------+------------------------
 2001-02-16 20:38:40-05 | 2001-02-16 21:38:40-05 | 2001-02-16 22:38:40-05 | 2001-02-16 23:38:40-05
(1 row)

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Statistics on index usage

2016-11-01 Thread Melvin Davidson
On Tue, Nov 1, 2016 at 10:43 AM, François Battail <
francois.batt...@sipibox.fr> wrote:

> Dear List,
>
> does pgsql maintains statistics on index usage? I mean just a counter
> for each index in the database, incremented each time time it is used.
> It would be useful to help cleaning almost unused index and to avoid
> poisoning the global cache.
>
> I've found nothing so far but may be I've been missing something.
>
> Best regards.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

>does pgsql maintains statistics on index usage?
Yes, try this query. Look at idx_scan and idx_tup_fetch.

SELECT n.nspname as schema,
   i.relname as table,
   i.indexrelname as index,
   i.idx_scan,
   i.idx_tup_read,
   i.idx_tup_fetch,
   CASE WHEN idx.indisprimary
THEN 'pkey'
WHEN idx.indisunique
THEN 'uidx'
ELSE 'idx'
END AS type,
   idx.indisexclusion,
   pg_get_indexdef(idx.indexrelid),
   CASE WHEN idx.indisvalid
THEN 'valid'
ELSE 'INVALID'
END as status,
   pg_relation_size(quote_ident(n.nspname)|| '.' ||
quote_ident(i.relname)) as size_in_bytes,
   pg_size_pretty(pg_relation_size(quote_ident(n.nspname)|| '.' ||
quote_ident(i.relname))) as size
  FROM pg_stat_all_indexes i
  JOIN pg_class c ON (c.oid = i.relid)
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_index idx ON (idx.indexrelid =  i.indexrelid )
 WHERE i.relname LIKE '%%'
   AND n.nspname NOT LIKE 'pg_%'
ORDER BY 1, 2, 3;


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Way to quickly detect if database tables/columns/etc. were modified?

2016-10-31 Thread Melvin Davidson
On Mon, Oct 31, 2016 at 4:41 PM, Evan Martin <postgres...@realityexists.net>
wrote:

> On 31/10/2016 8:26 PM, Melvin Davidson wrote:
>
> I have tried using an event trigger to detect table creation (ie:
> tg_event_audit_all ) however, that does not parse the schema_name and objid
> as does pg_event_trigger_dropped_objects(), so I am not sure that is a
> practical way to audit.
>
>
> Event triggers seem like the most promising suggestion so far (given that
> I only really need to know that *something* has changed, not necessarily
> what).
>
> Still, I was hoping for a solution that doesn't rely on modifying the
> database at all, i.e. something built into Postgres, but it's pretty clear
> from everyone's answers nothing like this exists. (I wasn't looking for a
> creation date, exactly, because I'd want to know when something was
> modified, too).
>

Evan,

I did a little digging. Note that PostgreSQL 9.5 and above is required for
this solution, but it should provide most of what you need.

If you go to
http://stackoverflow.com/questions/23488228/how-to-get-sql-text-from-postgres-event-trigger/35616026
and scroll toward the bottom, you will see an example of how to trap
object mods.

Additional info as to what else can be captured is detailed in
https://www.postgresql.org/docs/9.5/static/functions-event-triggers.html

You can also use now() or transaction_timestamp(), etc. to record the
date/time the change occurred.

Finally, although the schema_name is provided in
pg_event_trigger_ddl_commands(), you will need something like
  SELECT relname INTO tbl_var FROM pg_class WHERE oid = objid;
just to get the table name. I'm not sure why they didn't just give the
table name directly, but hey, at least there is a solution.
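
A minimal sketch of such a trigger, assuming 9.5+ (names are illustrative;
it just logs what changed and when):

CREATE OR REPLACE FUNCTION log_ddl()
RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    r record;
BEGIN
    -- pg_event_trigger_ddl_commands() is only callable from a
    -- ddl_command_end event trigger
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands()
    LOOP
        RAISE NOTICE 'DDL: % on % at %',
                     r.command_tag, r.object_identity, now();
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER log_ddl_trig
    ON ddl_command_end
    EXECUTE PROCEDURE log_ddl();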
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Way to quickly detect if database tables/columns/etc. were modified?

2016-10-31 Thread Melvin Davidson
On Mon, Oct 31, 2016 at 2:09 PM, John DeSoi <de...@pgedit.com> wrote:

>
> > On Oct 31, 2016, at 8:14 AM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
> >
> > That would certainly work, but the problem is, that trigger would have
> to be created for every table in the database.
> > When you have more than a couple dozen tables, as in hundreds, it
> becsmes a huge undertaking.
>
> Unless I'm misunderstanding the documentation, you create the trigger on
> the "ddl event" not a  table. The events are ddl_command_start,
> ddl_command_end, table_rewrite and sql_drop. I have not used this feature,
> but it seems like you would just need one function.
>
> https://www.postgresql.org/docs/current/static/event-
> trigger-definition.html
>
> John DeSoi, Ph.D.
>
>
I have tried using an event trigger to detect table creation (ie:
tg_event_audit_all ) however, that does not parse the schema_name and objid
as does pg_event_trigger_dropped_objects(), so I am not sure that is a
practical way to audit.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] initdb createuser commands

2016-10-31 Thread Melvin Davidson
On Mon, Oct 31, 2016 at 10:50 AM, Christofer C. Bell <
christofer.c.b...@gmail.com> wrote:

> On Sun, Oct 30, 2016 at 11:10 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>>
>>
>>
>>
>> On Sun, Oct 30, 2016 at 8:08 PM, Samuel Williams <
>> space.ship.travel...@gmail.com> wrote:
>>
>>> Sorry, just to clarify, b "worst" I don't mean functionality, I mean
>>> the way the commands are named and organised.
>>>
>>> On 31 October 2016 at 13:07, Samuel Williams
>>> <space.ship.travel...@gmail.com> wrote:
>>> > Mike, I agree with "the postgres way of doing things". I'm suggesting
>>> that
>>> >
>>> >>  these commands are sufficiently generic that they might clash
>>> > with other commands.
>>> >
>>> >> It's also not obvious they are part of postgresql.
>>> >
>>> >> Wouldn't it make more sense to make them subcommand, of, say, a top
>>> > level pga (postgres admin) command, a bit like how `mysqladmin` works
>>> >
>>> > and finally
>>> >
>>> >> the naming of these commands seems overly generic
>>> > and for a new user it's hard to know what commands are available since
>>> > there is no common prefix (e.g. pg_) for these commands
>>> >
>>> > Just because things are working how they currently are doesn't mean
>>> > they can't be improved.
>>> >
>>> >> If someone isn’t skilled in sql, the requests you’ve made won’t
>>> assist them at all.
>>> >
>>> > This isn't just about someone who is or isn't skilled. I work with
>>> > MySQL, CouchDB, Redis, and various other technologies. Out of those
>>> > three, I'd say that Postgres has the worst and most inconsistently
>>> > named command line tools. It's a large overhead for day to day
>>> > operation to deal with inconsistency at any level.
>>> >
>>> > It's not a particularly hard problem to fix and thus I think it's
>>> > worthy of some attention.
>>> >
>>> > On 31 October 2016 at 12:51, Mike Sofen <mso...@runbox.com> wrote:
>>> >> From: Samuel Williams  Sent: Sunday, October 30, 2016 3:42 PM
>>> >> As a community I'd think that having feedback from a new user would be
>>> >> valuable since as you say, sometimes when you get ingrained into the
>>> "way of
>>> >> doing things" that you don't see how they could be improved or
>>> different.
>>> >>
>>> >> Samuel
>>> >>
>>> >>
>>> >>
>>> >> I’d take a different tack.  I spent 20 years with SQL Server and
>>> easily
>>> >> (almost gleefully) hopped over to Postgres and especially pgplsql and
>>> >> PgAdmin III, from using SqlServer Management Studio (SSMS – their
>>> >> admin/coding app).
>>> >>
>>> >>
>>> >>
>>> >> Sure, I had to learn the PG way of doing things, but really, it was a
>>> >> no-brainer.  I had to spend a few extra cycles learning the PG best
>>> >> practices and particular way of doing things but it was
>>> trivial…google and
>>> >> done.  The vast community has created massive amounts of examples for
>>> nearly
>>> >> everything imaginable – and some things I would never have imagined
>>> anyone
>>> >> would try to do – such that I don’t have to Lewis and Clark it but
>>> just dive
>>> >> right in and write code.
>>> >>
>>> >>
>>> >>
>>> >> IMO, nothing major needs changing in the language or command syntax –
>>> it’s
>>> >> logical and easy for anyone skilled in sql.  If someone isn’t skilled
>>> in
>>> >> sql, the requests you’ve made won’t assist them at all.
>>> >>
>>> >>
>>> >>
>>> >> Mike Sofen (Synthetic Genomics)
>>>
>>>
>>> --
>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgsql-general
>>>
>>
>>
>>
>> *Samuel,*
>>
>> *I believe you are over simplifying things. Simply renaming a command
>> does not make it easier to learn or clarify its use.

Re: [GENERAL] Way to quickly detect if database tables/columns/etc. were modified?

2016-10-31 Thread Melvin Davidson
On Mon, Oct 31, 2016 at 9:48 AM, Karsten Hilbert <karsten.hilb...@gmx.net>
wrote:

> On Mon, Oct 31, 2016 at 09:14:07AM -0400, Melvin Davidson wrote:
>
> >> Maybe create an event trigger that updates a simple table with the last
> >> modification time or sends a notification?
> ...
> > That would certainly work, but
> > the problem is, that trigger would have to be created for every table in
> > the database. When you have more than a couple dozen tables, as in
> > hundreds, it becsmes a huge undertaking.*--
>
> Well, it could be generated.
>
> Karsten
> --
> GPG key ID E4071346 @ eu.pool.sks-keyservers.net
> E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

>Well, it could be generated.
True, but it still is more code to maintain, whereas I maintain that the
addition of one column in pg_class and pg_attribute with a default of
now() would make more sense. That being said, there were so many naysayers
grasping at corner cases the last time I brought this up that I have given
up pursuing it.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Way to quickly detect if database tables/columns/etc. were modified?

2016-10-31 Thread Melvin Davidson
On Mon, Oct 31, 2016 at 8:54 AM, John DeSoi <de...@pgedit.com> wrote:

>
> > On Oct 30, 2016, at 4:45 AM, Evan Martin <postgres...@realityexists.net>
> wrote:
> >
> > If I have a query that reads from system tables like pg_class,
> pg_namespace, pg_attribute, pg_type, etc. and I'd like to cache the results
> in my application is there any fast way to detect when any changes have
> been made to these system catalogs? I don't  need to know exactly what has
> changed. Some kind of a global "database version" would do, just so I know
> that I need to invalidate my cache (the database definition is rarely
> modified in practice).
>
> Maybe create an event trigger that updates a simple table with the last
> modification time or sends a notification?
>
> https://www.postgresql.org/docs/current/static/sql-createeventtrigger.html
>
> John DeSoi, Ph.D.
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>






>Maybe create an event trigger that updates a simple table with the last
modification time or sends a notification?

That would certainly work, but the problem is, that trigger would have to
be created for every table in the database. When you have more than a
couple dozen tables, as in hundreds, it becomes a huge undertaking.
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] initdb createuser commands

2016-10-30 Thread Melvin Davidson
On Sun, Oct 30, 2016 at 8:08 PM, Samuel Williams <
space.ship.travel...@gmail.com> wrote:

> Sorry, just to clarify, b "worst" I don't mean functionality, I mean
> the way the commands are named and organised.
>
> On 31 October 2016 at 13:07, Samuel Williams
> <space.ship.travel...@gmail.com> wrote:
> > Mike, I agree with "the postgres way of doing things". I'm suggesting
> that
> >
> >>  these commands are sufficiently generic that they might clash
> > with other commands.
> >
> >> It's also not obvious they are part of postgresql.
> >
> >> Wouldn't it make more sense to make them subcommand, of, say, a top
> > level pga (postgres admin) command, a bit like how `mysqladmin` works
> >
> > and finally
> >
> >> the naming of these commands seems overly generic
> > and for a new user it's hard to know what commands are available since
> > there is no common prefix (e.g. pg_) for these commands
> >
> > Just because things are working how they currently are doesn't mean
> > they can't be improved.
> >
> >> If someone isn’t skilled in sql, the requests you’ve made won’t assist
> them at all.
> >
> > This isn't just about someone who is or isn't skilled. I work with
> > MySQL, CouchDB, Redis, and various other technologies. Out of those
> > three, I'd say that Postgres has the worst and most inconsistently
> > named command line tools. It's a large overhead for day to day
> > operation to deal with inconsistency at any level.
> >
> > It's not a particularly hard problem to fix and thus I think it's
> > worthy of some attention.
> >
> > On 31 October 2016 at 12:51, Mike Sofen <mso...@runbox.com> wrote:
> >> From: Samuel Williams  Sent: Sunday, October 30, 2016 3:42 PM
> >> As a community I'd think that having feedback from a new user would be
> >> valuable since as you say, sometimes when you get ingrained into the
> "way of
> >> doing things" that you don't see how they could be improved or
> different.
> >>
> >> Samuel
> >>
> >>
> >>
> >> I’d take a different tack.  I spent 20 years with SQL Server and easily
> >> (almost gleefully) hopped over to Postgres and especially pgplsql and
> >> PgAdmin III, from using SqlServer Management Studio (SSMS – their
> >> admin/coding app).
> >>
> >>
> >>
> >> Sure, I had to learn the PG way of doing things, but really, it was a
> >> no-brainer.  I had to spend a few extra cycles learning the PG best
> >> practices and particular way of doing things but it was trivial…google
> and
> >> done.  The vast community has created massive amounts of examples for
> nearly
> >> everything imaginable – and some things I would never have imagined
> anyone
> >> would try to do – such that I don’t have to Lewis and Clark it but just
> dive
> >> right in and write code.
> >>
> >>
> >>
> >> IMO, nothing major needs changing in the language or command syntax –
> it’s
> >> logical and easy for anyone skilled in sql.  If someone isn’t skilled in
> >> sql, the requests you’ve made won’t assist them at all.
> >>
> >>
> >>
> >> Mike Sofen (Synthetic Genomics)
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



Samuel,

I believe you are oversimplifying things. Simply renaming a command does not
make it easier to learn or clarify its use. That is the purpose of
documentation. A beginner does not get a better understanding of command
usage from the name of a command; they get it by actually using the command.
In addition, I don't know any DBA who is in favor of longer command names
(as you propose by prefixing with pg_). The fact is, the commands are
already self-explanatory. The _only_ way to learn how to be a good DBA is to
actually use the commands, and that also includes pg_ctl and psql commands.
I agree that GUI tools make it easier to learn, but it is essential to learn
the command line tools and how to use them. So again, it is not the name
that is important, but the actual usage.
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Way to quickly detect if database tables/columns/etc. were modified?

2016-10-30 Thread Melvin Davidson
On Sun, Oct 30, 2016 at 8:04 AM, Alban Hertroys <haram...@gmail.com> wrote:

>
> > On 30 Oct 2016, at 10:45, Evan Martin <postgres...@realityexists.net>
> wrote:
> >
> > If I have a query that reads from system tables like pg_class,
> pg_namespace, pg_attribute, pg_type, etc. and I'd like to cache the results
> in my application is there any fast way to detect when any changes have
> been made to these system catalogs? I don't  need to know exactly what has
> changed. Some kind of a global "database version" would do, just so I know
> that I need to invalidate my cache (the database definition is rarely
> modified in practice).
>
> I think the usual practice for such situations is to do database changes
> through SQL scripts[1] that are under version control. Since they are under
> VC, you can automatically write the version[2] into the SQL script on
> commit of changes to said script through a commit hook.
> That version in the SQL script can then be used in an UPDATE statement to
> some database-global settings table[3].
>
> And there you have your database version.
>
> Ad 1. Never do changes directly in the database when you go this route!
> Ad 2. Those are often hashes these days.
> Ad 3. You could even have the UPDATE statement be automatically added by
> the commit hook of your VC of choice.
>
> Regards,
>
> Alban Hertroys
> --
> If you can't see the forest for the trees,
> cut the trees and you'll find there is no forest.
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

Evan,

FWIW, a long time ago I made a request in Customer Feedback
https://postgresql.uservoice.com/forums/21853-general
to add the creation time for all objects.
https://postgresql.uservoice.com/forums/21853-general/suggestions/5587129-add-relcreated-timestamp-column-to-pg-class-cata

I made the same request in this forum.
Unfortunately, few people agree that it would be worthwhile, despite the
fact that the creation times are available in Oracle & MS SQL.
What you are asking would require a similar mod to pg_attribute, but based
on my request, that seems unlikely. So the current solution is to implement
version control software. However, that does not solve the problem of
gremlins (developers) who like to play and make changes while bypassing
version control.



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] initdb createuser commands

2016-10-29 Thread Melvin Davidson
On Sat, Oct 29, 2016 at 6:25 PM, John R Pierce <pie...@hogranch.com> wrote:

> On 10/29/2016 3:02 PM, Samuel Williams wrote:
>
>> FYI,https://wiki.archlinux.org/index.php/PostgreSQL  mentions initdb,
>> createuser, createdb and several others. I think my suggestion is
>> still relevant and something that would improve the system for new
>> users
>>
>
> and it would break it for the existing community that has up to 20 years
> of experience with it, as well as breaking virtually all existing
> documentation, books, howto articles, etc, including the one you linked.
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

I'd like to put my two cents in at this point. IMHO, the whole concept of
renaming commands "to make it easier for beginners to learn" is ludicrous.
If we rename a saw to woodcutter, or a drill to holemaker, it would do
nothing to make learning carpentry easier. Likewise, renaming commands would
not make it easier to learn PostgreSQL. If someone truly wants to learn to
be a PostgreSQL DBA, there are many fine books already in existence that
guide them step by step. In addition, as others have already pointed out,
the PostgreSQL documentation clearly describes how to use all the commands
in question, and renaming (or prefixing) them would add little to that. In
fact, I'm pretty sure existing DBAs like myself would be quite upset if
commands were arbitrarily renamed. So, as a wise man once said: if it ain't
broke, don't fix it!

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Remove Standby (SLAVE) from Primary (MASTER) - Postgres9.1

2016-10-28 Thread Melvin Davidson
On Fri, Oct 28, 2016 at 1:54 PM, Joanna Xu <joanna...@amdocs.com> wrote:

> > All you have to do on the slave is:
>
> > sudo su postgres
> > touch /opt/postgres/9.1/data/trigger
>
> > Then the slave will go to standalone mode.
>
> --
>
> > *Melvin Davidson*
>
> Thanks Melvin.  It works.
>
> One question:  Before touch the trigger file on the slave, do I need to
> run “pg_ctl stop” on the MASTER first?
>
>
>
> Thanks,
>
> Joanna
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:pgsql-general-owner@po
> stgresql.org] *On Behalf Of *Melvin Davidson
> *Sent:* Thursday, October 27, 2016 11:04 AM
> *To:* Scott Mead <sco...@openscg.com>
> *Cc:* pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] Remove Standby (SLAVE) from Primary (MASTER) -
> Postgres9.1
>
>
>
>
>
> On Thu, Oct 27, 2016 at 10:27 AM, Scott Mead <sco...@openscg.com> wrote:
>
>
>
> On 10/27/16 10:12 AM, Joanna Xu wrote:
> > We need to keep the data as it is so remving data fold is not feasible.
> > If I run "pg_ctl promote" on the slave to make it fail over, this will
> break the replication and then clean up the configuration related to
> replication.  Any feedback?
>
> Just to [hopefully] clear up some of the confusion..
>
> Somewhere I saw you mention 9.1 .  Since you're using 9.1, you won't
> have any replication slots in use, if you were using replication slots,
> then the master server could started to queue up WAL files waiting for
> that node to come back.  This could cause you to run out of disk space.
> If you were using a replication slot, you would need to (after
> promotion) connect to the master and drop it:
> - SELECT * FROM pg_replication_slots;
> - SELECT pg_drop_replication_slot(slot_name);
>
> https://www.postgresql.org/docs/9.4/static/functions-admin.
> html#FUNCTIONS-REPLICATION
> https://www.postgresql.org/docs/9.4/static/catalog-pg-replic
> ation-slots.html
>
> Again, since you're on 9.1, this doesn't matter, BUT, it's important to
> keep in mind for the future.
>
> If you are using pure streaming replication, there is nothing that needs
> 'clean-up' on the master.  Doing a promote will indeed do what you want
> to accomplish (NB: See below if you are also archiving your WAL).
>
> That being said, many setups of streaming will also include WAL
> (pg_xlog) archiving (check the archive_mode and archive_command in your
> postgresql.conf on the master).  If you have configured archiving,
> chances are, you will have some sort of cleanup job on the slave that
> deals with old archived logs.  You would either
>
> A. Want to stop archiving (for the node you are promoting)
> B. Ensure that cleanup is still taking place
>
>   These steps are not done for you when you promote a slave, it's
> something that you need to deal with yourself.
>
> There is one other thing to keep in mind.  If you are promoting a
> database, it's going to become read-write.  I would make sure to either
> erect a firewall around the promoted slave or edit your pg_hba.conf file
> to reject traffic from your applications.  If, for some reason your
> application were to connect, you could end up with a split-brain
> scenario which would mean that some of your data is in the old master
> and some is in the newly promoted slave.  This can be very, very
> dangerous.  As long as you exercise care, you will be fine, it's just an
> important point that I've seen haunt people in the past.
>
> > Thanks,
> > Joanna
> >
> > -Original Message-
> > From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
> > Sent: Wednesday, October 26, 2016 7:50 PM
> > To: Michael Paquier <michael.paqu...@gmail.com>; Joanna Xu <
> joanna...@amdocs.com>
> > Cc: pgsql-general@postgresql.org
> > Subject: Re: [GENERAL] Remove Standby (SLAVE) from Primary (MASTER) -
> Postgres9.1
> >
> > On 10/26/2016 04:43 PM, Michael Paquier wrote:
> >> On Wed, Oct 26, 2016 at 11:18 PM, Joanna Xu <joanna...@amdocs.com>
> wrote:
> >>> I’ve been looking for a procedure (step by step) on how to remove
> >>> Standby
> >>> (SLAVE) from Primary (MASTER) for Postgres9.1 in google and the
> >>> archived postings, but no luck.
> >> Standby and master servers are two separate Postgres instances. So if
> >> you would like to remove a standby from a cluster. You just need to
> >> basically stop it, then remove its data folder. And you are done.
> >> There is no complicated science here.
> > Not sure that is what OP wants. From the original post:
> >
> > "

Re: [GENERAL] Remove Standby (SLAVE) from Primary (MASTER) - Postgres9.1

2016-10-27 Thread Melvin Davidson
On Thu, Oct 27, 2016 at 10:27 AM, Scott Mead <sco...@openscg.com> wrote:

>
>
> On 10/27/16 10:12 AM, Joanna Xu wrote:
> > We need to keep the data as it is so remving data fold is not feasible.
> > If I run "pg_ctl promote" on the slave to make it fail over, this will
> break the replication and then clean up the configuration related to
> replication.  Any feedback?
>
> Just to [hopefully] clear up some of the confusion..
>
> Somewhere I saw you mention 9.1 .  Since you're using 9.1, you won't
> have any replication slots in use, if you were using replication slots,
> then the master server could started to queue up WAL files waiting for
> that node to come back.  This could cause you to run out of disk space.
> If you were using a replication slot, you would need to (after
> promotion) connect to the master and drop it:
> - SELECT * FROM pg_replication_slots;
> - SELECT pg_drop_replication_slot(slot_name);
>
> https://www.postgresql.org/docs/9.4/static/functions-admin.html#FUNCTIONS-
> REPLICATION
> https://www.postgresql.org/docs/9.4/static/catalog-pg-
> replication-slots.html
>
> Again, since you're on 9.1, this doesn't matter, BUT, it's important to
> keep in mind for the future.
>
> If you are using pure streaming replication, there is nothing that needs
> 'clean-up' on the master.  Doing a promote will indeed do what you want
> to accomplish (NB: See below if you are also archiving your WAL).
>
> That being said, many setups of streaming will also include WAL
> (pg_xlog) archiving (check the archive_mode and archive_command in your
> postgresql.conf on the master).  If you have configured archiving,
> chances are, you will have some sort of cleanup job on the slave that
> deals with old archived logs.  You would either
>
> A. Want to stop archiving (for the node you are promoting)
> B. Ensure that cleanup is still taking place
>
>   These steps are not done for you when you promote a slave, it's
> something that you need to deal with yourself.
>
> There is one other thing to keep in mind.  If you are promoting a
> database, it's going to become read-write.  I would make sure to either
> erect a firewall around the promoted slave or edit your pg_hba.conf file
> to reject traffic from your applications.  If, for some reason your
> application were to connect, you could end up with a split-brain
> scenario which would mean that some of your data is in the old master
> and some is in the newly promoted slave.  This can be very, very
> dangerous.  As long as you exercise care, you will be fine, it's just an
> important point that I've seen haunt people in the past.
> > Thanks,
> > Joanna
> >
> > -Original Message-
> > From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
> > Sent: Wednesday, October 26, 2016 7:50 PM
> > To: Michael Paquier <michael.paqu...@gmail.com>; Joanna Xu <
> joanna...@amdocs.com>
> > Cc: pgsql-general@postgresql.org
> > Subject: Re: [GENERAL] Remove Standby (SLAVE) from Primary (MASTER) -
> Postgres9.1
> >
> > On 10/26/2016 04:43 PM, Michael Paquier wrote:
> >> On Wed, Oct 26, 2016 at 11:18 PM, Joanna Xu <joanna...@amdocs.com>
> wrote:
> >>> I’ve been looking for a procedure (step by step) on how to remove
> >>> Standby
> >>> (SLAVE) from Primary (MASTER) for Postgres9.1 in google and the
> >>> archived postings, but no luck.
> >> Standby and master servers are two separate Postgres instances. So if
> >> you would like to remove a standby from a cluster. You just need to
> >> basically stop it, then remove its data folder. And you are done.
> >> There is no complicated science here.
> > Not sure that is what OP wants. From the original post:
> >
> > " ...
> > 2.   After the standby is removed from the primary, both of nodes
> > are in standalone configuration. "
> >
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com
> >
> > This message and the information contained herein is proprietary and
> confidential and subject to the Amdocs policy statement,
> > you may review at http://www.amdocs.com/email_disclaimer.asp
> >
>
> --
> Scott Mead
> Sr. Architect
> OpenSCG
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

All you have to do on the slave is:

sudo su postgres
touch /opt/postgres/9.1/data/trigger

Then the slave will go to standalone mode.
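
For context, the touch only promotes the standby because the standby's
recovery.conf names that exact path as its trigger file. A minimal 9.1-era
recovery.conf might look like this (the host and user are assumptions):

standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
# Touching this file tells the standby to exit recovery and open read-write:
trigger_file = '/opt/postgres/9.1/data/trigger'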
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Drop user cascade

2016-10-19 Thread Melvin Davidson
On Wed, Oct 19, 2016 at 10:03 AM, Alex Ignatov (postgrespro) <
a.igna...@postgrespro.ru> wrote:

>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Tom Lane
> Sent: Wednesday, October 19, 2016 4:31 PM
> To: Alex Ignatov (postgrespro) <a.igna...@postgrespro.ru>
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Drop user cascade
>
> "Alex Ignatov \(postgrespro\)" <a.igna...@postgrespro.ru> writes:
> > Why do Postgres have no such functionality as DROP USER CASCADE? Is
> > there any reasons in that absence?
>
> The short answer is that DROP USER couldn't reach across databases to get
> rid of owned objects in other databases.  See
>
> https://www.postgresql.org/docs/9.6/static/role-removal.html
>
> regards, tom lane
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
> changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
>
> Some security consideration bear in mind that DROP OWNED cant delete  own
> objects in other DBs? In general what stops  us  to do inter DBs connection
> like MSSQL?
>
> --
> Alex Ignatov
> Postgres Professional: http://www.postgrespro.com The Russian Postgres
> Company
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

>In general, what stops us from doing inter-DB connections like MSSQL?

Cross-database queries are not native to PostgreSQL, but you can do that
with the dblink extension/functions.
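
For example, a minimal sketch, assuming the dblink extension is installed
and a database named otherdb exists on the same cluster:

CREATE EXTENSION IF NOT EXISTS dblink;

-- Run a query in otherdb and treat its result as a local row source
SELECT *
  FROM dblink('dbname=otherdb', 'SELECT usename FROM pg_user')
       AS t(usename name);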

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] pg_sample

2016-10-18 Thread Melvin Davidson
On Tue, Oct 18, 2016 at 10:21 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 10/18/2016 06:30 PM, Patrick B wrote:
>
>>
>>
>> 2016-10-19 13:39 GMT+13:00 Michael Paquier <michael.paqu...@gmail.com
>> <mailto:michael.paqu...@gmail.com>>:
>>
>> On Wed, Oct 19, 2016 at 9:24 AM, Patrick B <patrickbake...@gmail.com
>> <mailto:patrickbake...@gmail.com>> wrote:
>> > However, this new database test server doesn't need to have all the
>> data. I
>> > would like to have only the first 100 rows(example) of each table
>> in my
>> > database.
>> >
>> > I'm using pg_sample to do that, but unfortunately it doesn't work
>> well.
>> > It doesn't get the first 100 rows. It gets random 100 rows.
>>
>> Why aren't 100 random rows enough to fulfill what you are looking for?
>> What you are trying here is to test the server with some sample data,
>> no? In this case, having the first 100 rows, or a set of random ones
>> should not matter much (never tried pg_sample to be honest).
>> --
>> Michael
>>
>>
>>
>> Actually it does matter because there is some essential data that has to
>> be in there so the code can work.
>>
>
> Well random does not know essential, it is after all random. If you want
> to test specific cases then you will need to build appropriate data sets.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

The following query generates COPY statements you can use to dump the first
100 rows of every user table. You may need to tweak a bit, as without an
ORDER BY inside the subselect the row order is not guaranteed.

SELECT 'COPY (SELECT * FROM '
    || quote_ident(n.nspname) || '.' || quote_ident(c.relname)
    || ' LIMIT 100) TO '''
    --|| 'C:\temp\'
    || '/tmp/'
    || n.nspname || '_' || c.relname || '.csv'''
    || ' WITH (FORMAT csv, HEADER, FORCE_QUOTE *);'
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND n.nspname NOT IN ('pg_catalog', 'information_schema');
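
Each generated statement will look roughly like this (hypothetical table
name):

COPY (SELECT * FROM public.accounts LIMIT 100)
  TO '/tmp/public_accounts.csv'
  WITH (FORMAT csv, HEADER, FORCE_QUOTE *);

With psql 9.6 or later you can also run the generator's output directly by
ending the generator query with \gexec instead of a semicolon.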

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Dump all the indexes/constraints/roles

2016-10-18 Thread Melvin Davidson
On Tue, Oct 18, 2016 at 3:11 PM, Patrick B <patrickbake...@gmail.com> wrote:

> Thank you guys... good to know that pg_dump does all the job for me :)
>
> So.. If I only dump using the --schema-only option, it will dump all the
> schemas, constraints, indexes and tables?
>
> Because probably, I'll have to import the data manually. NOt in a single
> pg_restore I mean. (AWS issue)
>

>...If I only dump using the --schema-only option, it will dump all the
schemas, constraints, indexes and tables?


Yes! Please RTFM, it's there for a reason. --->
https://www.postgresql.org/docs/9.5/static/app-pgdump.html

Also, please note you can use -a to dump only the data.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Problem changing default data_directory in PG 9.6 + CentOS6

2016-10-18 Thread Melvin Davidson
On Tue, Oct 18, 2016 at 10:20 AM, Edilmar LISTAS <edili...@intersite.com.br>
wrote:

> 1) I changed /etc/rc.d/init.d/postgresql-9.6 like this:
> PGDATA=/sistemas/sat4/bdpg
> #PGDATA=/var/lib/pgsql/9.6/data
>
> 2) I copied postgresql.conf and pg_hba.conf from /var/lib/pgsql/9.6/data
> to /sistemas/sat4/bdpg
>
> 3) I changed postgresql.conf like this:
> data_directory = '/sistemas/sat4/bdpg'
>
> 4) service postgresql-9.6 start
> Iniciando o serviço postgresql-9.6: [FAILED]
>
> In my devel machine, I only did step 3), PG starts lookup for default
> configs in /var/lib/pgsql/data and uses my databases in the alternative
> path /sistemas/sat4/bdpg.
>
>
> Em 17-10-2016 21:22, Adrian Klaver escreveu:
>
>> On 10/17/2016 01:38 PM, Edilmar LISTAS wrote:
>>
>>> I have an env running a changed data_directory fine in a devel machine
>>> PG 9.4 using Fedora23.
>>> Now, I have a server machine with CentOS where I downloaded the RPMs
>>> from repo
>>> https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhe
>>> l-6-x86_64/pgdg-centos96-9.6-3.noarch.rpm.
>>>
>>> All the configs run fine if I doesn't change the default data_directory.
>>> But I need to use the path /sistemas/sat4/bdpg.
>>>
>>> I did these commands:
>>>
>>> mkdir /sistemas/sat4/bdpg
>>> chown postgres /sistemas/sat4/bdpg
>>> chmod 700 /sistemas/sat4/bdpg
>>> su - postgres
>>> /usr/pgsql-9.6/bin/initdb -D /sistemas/sat4/bdpg
>>> exit
>>>
>>> Then, I changed data_directory to /sistemas/sat4/bdpg and tried to
>>>
>>
>> Changed data_directory where?
>>
>> restart PG:
>>> service postgresql-9.6 restart
>>> STOP => OK
>>> START => FAILED
>>>
>>> I disabled se_linux.
>>> The file /var/lib/pgsql/9.6/pgstartup.log just said to see future output
>>> in pg_log.
>>> The file data/pg_log/postgresql-Mon.log doesn't say anything about
>>> errors.
>>>
>>> The strange is that startup arises a FAILED message, but the process
>>> "/usr/pgsql-9.6/bin/postmaster -D /var/lib/pgsql/9.6/data" is running
>>>
>>
>> So /var/lib/pgsql/9.6/data is where the original install is?
>>
>> Best guess is that some script is starting the original install and when
>> you go to start your custom location it fails because the new cluster is
>> trying to use the port(5432 I am assuming) as the original cluster.
>>
>> Have you tried giving the new cluster a different port number, say 5442,
>> and the starting it?
>>
>>
>> (and the children logger/checkpointer/etc). But I don't get to connect
>>> using pgAdmin3. Then, I have to kill manually postmaster (because
>>> service script doesn't understand postmaster.pid in the new data dir),
>>> comment data_directory to use default place, start and connect to
>>> pgAdmin3. Then, start/stop/start/etc run fine lookup for postmaster.pid
>>> in /var/lib/pgsql/9.6/data.
>>>
>>
>> So you either need to change the start script to point to the new
>> cluster or create a new one for it.
>>
>>
>>
>>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Instead of step 4 (service postgresql-9.6 start), try:

sudo su postgres
pg_ctl start -D /sistemas/sat4/bdpg

If that works, then your problem is in the postgresql service file.
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Problem changing default data_directory in PG 9.6 + CentOS6

2016-10-17 Thread Melvin Davidson
On Mon, Oct 17, 2016 at 4:38 PM, Edilmar LISTAS <edili...@intersite.com.br>
wrote:

> I have an env running a changed data_directory fine in a devel machine PG
> 9.4 using Fedora23.
> Now, I have a server machine with CentOS where I downloaded the RPMs from
> repo https://download.postgresql.org/pub/repos/yum/9.6/redhat/
> rhel-6-x86_64/pgdg-centos96-9.6-3.noarch.rpm.
> All the configs run fine if I doesn't change the default data_directory.
> But I need to use the path /sistemas/sat4/bdpg.
>
> I did these commands:
>
> mkdir /sistemas/sat4/bdpg
> chown postgres /sistemas/sat4/bdpg
> chmod 700 /sistemas/sat4/bdpg
> su - postgres
> /usr/pgsql-9.6/bin/initdb -D /sistemas/sat4/bdpg
> exit
>
> Then, I changed data_directory to /sistemas/sat4/bdpg and tried to restart
> PG:
> service postgresql-9.6 restart
> STOP => OK
> START => FAILED
>
> I disabled se_linux.
> The file /var/lib/pgsql/9.6/pgstartup.log just said to see future output
> in pg_log.
> The file data/pg_log/postgresql-Mon.log doesn't say anything about errors.
>
> The strange is that startup arises a FAILED message, but the process
> "/usr/pgsql-9.6/bin/postmaster -D /var/lib/pgsql/9.6/data" is running (and
> the children logger/checkpointer/etc). But I don't get to connect using
> pgAdmin3. Then, I have to kill manually postmaster (because service script
> doesn't understand postmaster.pid in the new data dir), comment
> data_directory to use default place, start and connect to pgAdmin3. Then,
> start/stop/start/etc run fine lookup for postmaster.pid in
> /var/lib/pgsql/9.6/data.
>

What is the value of data_directory in postgresql.conf ?

Is there anything in /var/log/postgresql/{log_filename} ... where
{log_filename} is the value set in postgresql.conf ?

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] My DB has has 5TB, many operations are very slow (on Google Cloud Compute)

2016-10-11 Thread Melvin Davidson
On Tue, Oct 11, 2016 at 3:16 PM, Francisco Olarte <fola...@peoplecall.com>
wrote:

> Rakesh:
>
> On Tue, Oct 11, 2016 at 9:00 PM, Rakesh Kumar
> <rakeshkumar...@outlook.com> wrote:
> >>Cores do not help, postgres is single-threaded. RAM MAY help, but I
> > I hope this is no longer true from 9.6 for those queries where PG can
> use parallelism.
>
> It does, AFAIK, but for queries, not AFAIK for this kind of data
> moving ops ( and I doubt it will, as presently you can easily saturate
> the channels with a single core for that kind of simple ops, and
> normally if you want to optimize this kind of op is better to target
> concurrency ( table can be used while moving ) than pure speed .
>
> Francisco Olarte.
>

>Requiring an exclusive table lock does not imply slowness. Just try
>'lock table x in exclusive mode' on an idle system. Pretty fast.

Sure, on an idle system you will get a table lock right away, but the OP's
statements imply a large, busy system. And if there are transactions
occurring against that table, there is no telling how long it will take.
Since we do not have enough specific info, I stand by my statement.
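
One quick way to check whether lock contention, rather than raw speed, is
the bottleneck is to ask for the lock without waiting (a sketch, with a
hypothetical table name):

BEGIN;
-- Errors immediately instead of blocking if any conflicting lock is held
LOCK TABLE big_table IN ACCESS EXCLUSIVE MODE NOWAIT;
ROLLBACK;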


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] My DB has has 5TB, many operations are very slow (on Google Cloud Compute)

2016-10-11 Thread Melvin Davidson
On Tue, Oct 11, 2016 at 2:17 PM, Francisco Olarte <fola...@peoplecall.com>
wrote:

> On Thu, Oct 6, 2016 at 12:36 AM, David A <da...@scalaacademy.com> wrote:
> > My DB has has 5TB, it's on Google Cloud Compute, mostly on SSDs, part on
> > Standard Persistent Disks.
> > Querying is ok, but deleting, moving between tablespaces, dropping
> tables,
> > etc, etc is hugely slow
> > (note: I do have a number of indexes)
>
> Are you sure it's slow and not just lengthy? Hostings tend to have and
> publish limits on their bandwidths and similar things, and may be you
> are just saturating your capacity. If you can at least compare the
> time to move a tablespace against the time of copying a similarly
> sized file between the same disks someone may be able to say
> something.
>
> > Instance RAM: 60GB
> > Instance CPU: 16Cores
>
> Cores do not help, postgres is single-threaded. RAM MAY help, but I
> suspect your operations are IO bound. Of course, with the sparseness
> of the details, one can not say too much.
>
> Francisco Olarte.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

You have stated a very generic problem and not provided much detail.
Perhaps you could be a "little" more specific and give the needed info so we
might be able to help.
IE:
Which version of PostgreSQL?
What are the memory parameters in postgresql.conf (shared_buffers,
max_connections, work_mem, maintenance_work_mem, etc.)?
How big are the tables being moved?
What type of disks?
etc.

FYI, moving a table between tablespaces requires an exclusive table lock and
a physical copy of the data, so it's naturally going to be slow.
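
For reference, the move itself is a single statement, but it copies every
block of the table to the new tablespace while holding that lock (the table
and tablespace names here are hypothetical):

ALTER TABLE big_table SET TABLESPACE fast_ssd;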

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [SPAM] [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Melvin Davidson
On Mon, Oct 10, 2016 at 2:14 PM, Moreno Andreo <moreno.and...@evolu-s.it>
wrote:

>
> Il 10/10/2016 18:24, Periko Support ha scritto:
>
>> 2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
>> terminated by signal 9: Killed
>>
>
> 2016-09-12 10:00:01 PDT LOG:  server process (PID 30766) was
>> terminated by signal 9: Killed
>>
>
> 2016-09-12 15:00:01 PDT LOG:  server process (PID 22030) was
>> terminated by signal 9: Killed
>>
>>
>> These datetimes could be suspect. Every crash (kill) is done at
> "00"minutes and "01" minutes, that makes me ask "Isn't there something like
> cron running something that interfere with postgres?"
>
> Cheers,
> Moreno.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

The general philosophy is to start by setting shared_buffers to 1/4 of
system memory, so shared_buffers should be 20480 MB.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Multi tenancy : schema vs databases

2016-10-01 Thread Melvin Davidson
On Sat, Oct 1, 2016 at 4:52 PM, John R Pierce <pie...@hogranch.com> wrote:

> On 10/1/2016 12:52 PM, Rakesh Kumar wrote:
>
> Do your clients authenticate directly to the database, or to the app server?
>
> thru app server.
>
>
> do you run a separate instance of the app for each tenant, or is there one
> app that identifies the tenant and handles them accordingly ?
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>

I would like to comment on the multiple schemas vs. databases situation.
First of all, 1000's of databases is insanity and just asking for trouble.
Next, 1000's of schemas is a nightmare to maintain. I understand the
requirement for client data to be "isolated", but in reality, data is never
really separated. Once it's on the server, any good hacker with a knowledge
of SQL can find it. So, IMHO, the best solution is to isolate by a client
ID in the tables of one database. Then make sure you have sufficient and
correct security on those tables.
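
For the record, since version 9.5 row-level security can enforce that kind
of per-client isolation inside a single database. A minimal sketch (the
table, column, and setting names are hypothetical):

CREATE TABLE invoices (
    tenant_id integer NOT NULL,
    amount    numeric
);

ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;

-- The application identifies its tenant once per session, e.g.:
--   SET app.tenant_id = '42';
CREATE POLICY tenant_isolation ON invoices
    USING (tenant_id = current_setting('app.tenant_id')::integer);

Note that table owners and superusers bypass row-level security by default,
so the application should connect as an ordinary role.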

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Time travel?

2016-09-29 Thread Melvin Davidson
On Thu, Sep 29, 2016 at 3:20 PM, Thomas Munro <thomas.mu...@enterprisedb.com
> wrote:

> On Fri, Sep 30, 2016 at 8:17 AM, Kevin Grittner <kgri...@gmail.com> wrote:
> > On Thu, Sep 29, 2016 at 2:06 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
> >
> >> I am using PostgreSQL 9.4.0 (Yes, I know 9.4.9 is most recent but
> >> it's out of my control)
> >
> > As long as the decision-maker is aware that 9.4.0 has known bugs
> > (fixed in later minor releases) that can render the database
> > unusable without warning...
> >
> >> How is it possible for the WAL file to be accessed BEFORE it was
> >> created?
> >
> > Perhaps renaming it counts as "creation" without affecting access
> > time.
>
> Apparently some filesystems change the ctime for rename and others
> don't, and POSIX tolerates both.
>
> --
> Thomas Munro
> http://www.enterprisedb.com
>

>Apparently some filesystems change the ctime for rename and others
>don't, and POSIX tolerates both.

OK, thanks. Back to the future! :D

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] Time travel?

2016-09-29 Thread Melvin Davidson
So here is a strange question. I am using PostgreSQL 9.4.0 (Yes, I know
9.4.9 is most recent but it's out of my control). O/S is AWS Debian 3.16.7-2
(2014-11-06) x86_64 GNU/Linux.

I list the creation time for a WAL file and it shows:

/home/mdavidson/dba$ ls -l --time=ctime /d-log/pg_xlog/0001000D00C9
-rw--- 1 postgres postgres 16777216 Sep 29 07:14 /d-log/pg_xlog/0001000D00C9

but then I list the access time for the same WAL file and it shows:

/home/mdavidson/dba$ ls -l --time=atime /d-log/pg_xlog/0001000D00C9
-rw--- 1 postgres postgres 16777216 Sep  7 14:33 /d-log/pg_xlog/0001000D00C9

How is it possible for the WAL file to be accessed BEFORE it was created?
Am I overlooking something simple?



*Melvin Davidson*


Re: [GENERAL] how to monitor the progress of really large bulk operations?

2016-09-27 Thread Melvin Davidson
On Tue, Sep 27, 2016 at 5:03 PM, Mike Sofen <mso...@runbox.com> wrote:

> Hi gang,
>
>
>
> On PG 9.5.1, linux, I’m running some large ETL operations, migrate data
> from a legacy mysql system into PG, upwards of 250m rows in a transaction
> (it’s on a big box).  It’s always a 2 step operation – extract raw mysql
> data and pull it to the target big box into staging tables that match the
> source, the second step being read the landed dataset and transform it into
> the final formats, linking to newly generated ids, compressing big subsets
> into jsonb documents, etc.
>
>
>
> While I could break it into smaller chunks, it hasn’t been necessary, and
> it doesn’t eliminate my need:  how to view the state of a transaction in
> flight, seeing how many rows have been read or inserted (possible for a
> transaction in flight?), memory allocations across the various PG
> processes, etc.
>
>
>
> Possible or a hallucination?
>
>
>
> Mike Sofen (Synthetic Genomics)
>

AFAIK, it is not currently possible to monitor the progress/status of a
query. However, I do see that this is planned for "sometime in the future":
https://postgresql.uservoice.com/forums/21853-general/suggestions/249745-query-progress-reported-by-percentage-in-pg-stat

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Determining server load

2016-09-27 Thread Melvin Davidson
On Tue, Sep 27, 2016 at 2:46 PM, Israel Brewster <isr...@ravnalaska.net>
wrote:

> >I'm still curious as to how I can track concurrent connections, ...
>>
>
> Have you considered enabling the following in postgresql.conf?
> log_connections = on
> log_disconnections = on
>
> It will add a bit of bloat to your postgres log, but it will allow you to
> extract connects/disconnects over a time range. That should let you
> determine concurrent connections during that window.
>
>
> I do have those on, and I could write a parser that scans through the logs
> counting connections and disconnections to give a number of current
> connections at any given time. Trying to make it operate "in real time"
> would be interesting, though, as PG logs into different files by
> day-of-the-week (at least, with the settings I have), rather than into a
> single file that gets rotated out. I was kind of hoping such a tool, such
> as pgbadger (which, unfortunately, only seems to track connections per
> second and not consecutive connections), already existed, or that there was
> some way to have the database itself track this metric. If not, well, I
> guess that's another project :)
>
> ---
> Israel Brewster
> Systems Analyst II
> Ravn Alaska
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7293
> ---
>
>
>
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>
>
>


Does this help?

-- Total concurrent connections
SELECT count(*)
  FROM pg_stat_activity;

-- Concurrent connections by user
SELECT usename,
       count(*)
  FROM pg_stat_activity
 GROUP BY 1
 ORDER BY 1;

-- Concurrent connections by database
SELECT datname,
       usename,
       count(*)
  FROM pg_stat_activity
 GROUP BY 1, 2
 ORDER BY 1, 2;

-- Database connections by user
SELECT usename,
       datname,
       count(*)
  FROM pg_stat_activity
 GROUP BY 1, 2
 ORDER BY 1, 2;
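
To sample any of these over time from psql (9.3 or later), run the query
and then repeat it on an interval with \watch, e.g. every 5 seconds:

SELECT count(*) FROM pg_stat_activity;
\watch 5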
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Determining server load

2016-09-27 Thread Melvin Davidson
On Tue, Sep 27, 2016 at 2:25 PM, Israel Brewster <isr...@ravnalaska.net>
wrote:

> On Sep 27, 2016, at 10:07 AM, Adrian Klaver <adrian.kla...@aklaver.com>
> wrote:
> >
> > On 09/27/2016 09:54 AM, Israel Brewster wrote:
> >> I have a Postgresql (9.4.6) cluster that hosts several databases, used
> >> by about half-a-dozen different in-house apps. I have two servers set up
> >> as master-slave with streaming replication. Lately I've been running
> >> into an issue where one of the apps periodically can't connect to the
> >> db. The problem is always extremely short lived (less than a minute),
> >> such that by the time I can look into it, there is no issue. My
> >> *suspicion* is that I am briefly hitting the max connection limit of my
> >> server (currently set at 100). If so, I can certainly *fix* the issue
> >> easily by increasing the connection limit, but I have two questions
> >> about this:
> >
> > What does your Postgres log show around this time?
>
> So in looking further, I realized the actual error I was getting was "no
> route to host", which is obviously a networking issue and not a postgres
> issue - could not connect was only the end result. The logs then, of
> course, show normal operation. That said, now that I am thinking about it,
> I'm still curious as to how I can track concurrent connections, with the
> revised goal of simply seeing how heavily loaded my server really is, and
> when tools such as pgpool or the pgbouncer that another user mentioned
> start making sense for the number of connections I am dealing with. Thanks.
> ---
> Israel Brewster
> Systems Analyst II
> Ravn Alaska
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7293
> ---
>
> >
> >>
> >> 1) Is there some way I can track concurrent connections to see if my
> >> theory is correct? I know I can do a count(*) on pg_stat_activity to get
> >> the current number of connections at any point (currently at 45 BTW),
> >> but aside from repeatedly querying this at short intervals, which I am
> >> afraid would put undue load on the server by the time it is frequent
> >> enough to be of use, I don't know how to track concurrent connections.
> >>
> >> I did look at pgbadger, which tells me I have gotten as high as 62
> >> connections/second, but given that most of those connections are
> >> probably very short lived that doesn't really tell me anything about
> >> concurrent connections.
> >>
> >> 2) Is increasing the connection limit even the "proper" fix for this, or
> >> am I at a load point where I need to start looking at tools like pgpool
> >> or something to distribute some of the load to my hot standby server? I
> >> do realize you may not be able to answer that directly, since I haven't
> >> given enough information about my server/hardware/load, etc, but answers
> >> that tell me how to better look at the load over time and figure out if
> >> I am overloaded are appreciated.
> >>
> >> For reference, the server is running on the following hardware:
> >>
> >> 2x 8-core Xeon E5-2630 v3 2.4 GHz processor (so plenty of horsepower
> there)
> >> 32 GB Ram total, currently with 533144k showing as "free" and 370464k of
> >> swap used
> >> 371 GB SSD RAID 10 (currently only using 40GB of space)
> >> Dual Gigabit ethernet
> >>
> >> Thanks for any advice that can be provided!
> >> ---
> >> Israel Brewster
> >> Systems Analyst II
> >> Ravn Alaska
> >> 5245 Airport Industrial Rd
> >> Fairbanks, AK 99709
> >> (907) 450-7293
> >> ---
> >>
> >>
> >>
> >>
> >>
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com
> >
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

>I'm still curious as to how I can track concurrent connections, ...

Have you considered enabling the following in postgresql.conf?
log_connections = on
log_disconnections = on

It will add a bit of bloat to your postgres log, but it will allow you to
extract connects/disconnects over a time range. That should let you
determine concurrent connections during that window.
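
In postgresql.conf that is simply:

log_connections = on
log_disconnections = on

New sessions pick the change up after a reload (pg_ctl reload, or SELECT
pg_reload_conf() as a superuser).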


*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.

