Re: [GENERAL] Function PostgreSQL 9.2

2016-04-19 Thread Mike Sofen
This is such a poorly designed, hacked-together "thing" - it isn't a database,
it's someone's idea of how to store data when they don't know how to store
data, as if they moved it from Access or Excel.

Just start over and design a proper relational schema with best practices and
you'll save, oh, perhaps 10 years of wasted effort and 12 million emails.  This
is as close to bandaids on bandaids on steroids as it gets.  Really -
rethink your solution model.

 

Mike

 

From: drum.lu...@gmail.com  Sent: Tuesday, April 19, 2016 7:40 PM
Just forgot to say:


Does increment_client_code relate to users or some other table, say clients?

nope.. there is no link between them

If users.code is empty/null, then the trigger has to get the last number
from client_code_increment and put it in the users.code column



Re: [GENERAL] Function PostgreSQL 9.2

2016-04-19 Thread drum.lu...@gmail.com
Just forgot to say:


[...]


>> Does increment_client_code relate to users or some other table, say
>> clients?
>
> nope.. there is no link between them

If the users.code is empty/null, then the trigger has to get the last
number from client_code_increment and put on the users.code column



[...]


Re: [GENERAL] Function PostgreSQL 9.2

2016-04-19 Thread drum.lu...@gmail.com
>
> Information. eg.:
>

> The schema for the tables.
>
> Why is just adding a DEFAULT value to users.code not an option?
>
>

The customer can add their own value to the users.code column.
That's why I can't have a default value.



> What the default code should be or how it is to be calculated?
>

The default value is 1000.

So the customer can set their own code value. But if they don't do that,
I have to provide the next available value: 1001, 1002, 1003, etc.



>
> What is increment_client_code?
>

It's a column:

ALTER TABLE public.companies ADD COLUMN client_code_increment integer;
ALTER TABLE public.companies ALTER COLUMN client_code_increment SET DEFAULT 1000;



>
> Does increment_client_code relate to users or some other table, say
> clients?
>
>
nope.. there is no link between them



Table users:

> CREATE TABLE
> users
> (
> id INTEGER DEFAULT nextval('users_id_seq'::regclass) NOT NULL,
> email CHARACTER VARYING DEFAULT ''::CHARACTER VARYING NOT NULL,
> encrypted_password CHARACTER VARYING DEFAULT ''::CHARACTER VARYING
> NOT NULL,
> reset_password_token CHARACTER VARYING,
> reset_password_sent_at TIMESTAMP(6) WITHOUT TIME ZONE,
> remember_created_at TIMESTAMP(6) WITHOUT TIME ZONE,
> sign_in_count INTEGER DEFAULT 0 NOT NULL,
> current_sign_in_at TIMESTAMP(6) WITHOUT TIME ZONE,
> last_sign_in_at TIMESTAMP(6) WITHOUT TIME ZONE,
> current_sign_in_ip INET,
> last_sign_in_ip INET,
> created_at TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL,
> updated_at TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL,
> name CHARACTER VARYING,
> confirmation_token CHARACTER VARYING,
> confirmed_at TIMESTAMP(6) WITHOUT TIME ZONE,
> confirmation_sent_at TIMESTAMP(6) WITHOUT TIME ZONE,
> company_name CHARACTER VARYING,
> country CHARACTER VARYING,
> mobile_number CHARACTER VARYING,
> landline_number CHARACTER VARYING,
> staff_colour CHARACTER VARYING,
> company_id INTEGER,
> role_id INTEGER,
> active BOOLEAN DEFAULT false,
> deleted BOOLEAN DEFAULT false,
> avatar_file_name CHARACTER VARYING,
> avatar_content_type CHARACTER VARYING,
> avatar_file_size INTEGER,
> avatar_updated_at TIMESTAMP(6) WITHOUT TIME ZONE,
> fax CHARACTER VARYING,
> website CHARACTER VARYING,
> business_type CHARACTER VARYING,
> lead_source CHARACTER VARYING,
> code CHARACTER VARYING,
> notes TEXT,
> status CHARACTER VARYING,
> tsv TSVECTOR,
> origin CHARACTER VARYING,
> origin_id CHARACTER VARYING,
> first_name CHARACTER VARYING,
> last_name CHARACTER VARYING,
> billed_client_id INTEGER,
> username CHARACTER VARYING,
> is_client BOOLEAN DEFAULT false,
> job_share BOOLEAN DEFAULT true
> );


Table companies:

> CREATE TABLE
> companies
> (
> id INTEGER DEFAULT nextval('companies_id_seq'::regclass) NOT NULL,
> name CHARACTER VARYING,
> country CHARACTER VARYING,
> timezone CHARACTER VARYING,
> mobile_number CHARACTER VARYING,
> email CHARACTER VARYING,
> website CHARACTER VARYING,
> phone CHARACTER VARYING,
> created_at TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL,
> updated_at TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL,
> reference_increment INTEGER DEFAULT 1000,
> activated BOOLEAN DEFAULT true,
> enable_quotes BOOLEAN DEFAULT false,
> allow_billing_client BOOLEAN DEFAULT true,
> allow_templates_recurrence BOOLEAN DEFAULT true,
> recurrence_limit INTEGER DEFAULT 30,
> job_title_dropdown BOOLEAN DEFAULT false,
> default_reference_prefix CHARACTER VARYING,
> default_reference_increment INTEGER,
> default_visit_start_day INTEGER,
> default_visit_start_hour INTEGER,
> default_visit_start_min INTEGER,
> job_date_entry_duration BOOLEAN DEFAULT true,
> default_visit_duration_hour INTEGER DEFAULT 0,
> default_visit_duration_min INTEGER DEFAULT 30,
> date_entry_short BOOLEAN DEFAULT true,
> time_entry_24 BOOLEAN DEFAULT true,
> time_field_increment INTEGER DEFAULT 10,
> enable_job_share BOOLEAN DEFAULT true,
> token CHARACTER VARYING
> );
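
For reference, a minimal sketch of the kind of trigger body being discussed
(an illustration only: it assumes the client_code_increment column added
above and one counter per company; the UPDATE ... RETURNING also locks the
companies row, so two concurrent inserts cannot take the same code):

CREATE OR REPLACE FUNCTION auto_generate_client_code_if_empty()
RETURNS trigger AS $$
DECLARE
    next_code integer;
BEGIN
    IF NEW.code IS NULL OR NEW.code = '' THEN
        -- bump the per-company counter and read back the new value
        UPDATE companies
           SET client_code_increment = client_code_increment + 1
         WHERE id = NEW.company_id
        RETURNING client_code_increment INTO next_code;
        NEW.code := next_code::text;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

With the column default of 1000, the first generated code is 1001, then
1002, and so on, matching the sequence described above.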


Re: [GENERAL] [ADMIN] how to upgrade PG

2016-04-19 Thread David G. Johnston
On Mon, Apr 18, 2016 at 10:19 PM, Huang, Suya 
wrote:

> Hello,
>
>
>
> Our PG runs on Ubuntu boxes and it’s 9.3.4, thinking of upgrading to
> 9.3.12 or 9.5.2.
>
>
>
> Is the normal approach to upgrading something like the below, whether it's
> a major or a minor version upgrade? My focus is on pg_dump at the
> moment.
>
> #1 pg_dump
>
> Dump DB
>
> apt-get update
>
> apt-get install new version
>
> createdb
>
> Import DB
>
>
>
> #2 pg_upgrade (a little complicated, worth trying on test environment)
>
>
>
> #3 using trigger based replication tool (not interested at this moment)
>
>
>

How you upgrade largely depends on how you installed in the first place.

Minor version upgrades, i.e., where only the last digit changes, do not
require any kind of data transfer - only the binaries are changed, and they
can be replaced with minimal downtime.

Major version upgrades can be done via pg_dump/pg_restore or the utility
pg_upgrade.  These perform the data transfers - it is up to you to actually
install the newer version and set up the system properly before using those
commands.  The documentation for both is excellent, and should you have
specific questions, or just observations, this list is an excellent
resource.

In most cases, as you are showing apt-get commands, I am reasonably certain
that apt-get update/apt-get upgrade on the PostgreSQL package will
automatically stop and restart the postgresql service at the appropriate
time so as to minimize downtime during the minor version upgrade.
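
For a minor upgrade that might look roughly like this (a sketch;
postgresql-9.3 is the usual Ubuntu package name, but check what is actually
installed):

sudo apt-get update
sudo apt-get install --only-upgrade postgresql-9.3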

Please note that "pg_dump" does not export globals - you also need to use
the pg_dumpall utility to obtain those.  Again, the documentation - and the
Internet - have copious information on the topic; it is a very well
traveled path.
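
A rough sketch of the dump/restore path, globals included (assumes a single
database named mydb; adjust as needed):

pg_dumpall -g > globals.sql      # roles and tablespaces; pg_dump skips these
pg_dump -Fc mydb > mydb.dump     # custom-format dump of the database itself
# ... install 9.5, initdb and start the new cluster ...
psql -d postgres -f globals.sql  # restore globals first
createdb mydb
pg_restore -d mydb mydb.dump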

Typically on the major version upgrade you would perform the apt-get
install on the new version, put it through its paces, then perform the
upgrade commands.  After you are done you may choose to remove the old
version or just leave it around - the choice is yours.

David J.


Re: [GENERAL] Enhancement Request

2016-04-19 Thread David G. Johnston
On Tue, Apr 19, 2016 at 5:59 PM, Melvin Davidson 
wrote:

>
> Enhancement or feature requests should probably go to Customer Feedback:
> https://postgresql.uservoice.com/forums/21853-general
>

I'm on the PostgreSQL home page: what series of links would lead to
"uservoice.com" so that I can submit my feature request?

If that site is going to be recommended over posting to pgsql-general it
should feature prominently on the main website.

I've seen or heard nothing that suggests anywhere except pgsql-general,
this list, is the correct place to submit such requests.  I do think we are
lacking sufficient communication in this area but would rather see that
corrected and formalized before we go pointing people to third-party sites.

David J.


[GENERAL] [ADMIN] how to upgrade PG

2016-04-19 Thread Huang, Suya
Hello,

Our PG runs on Ubuntu boxes and it's 9.3.4, thinking of upgrading to 9.3.12 or 
9.5.2.

Is the normal approach to upgrading something like the below, whether it's a major
or a minor version upgrade? My focus is on pg_dump at the moment.
#1 pg_dump
Dump DB
apt-get update
apt-get install new version
createdb
Import DB

#2 pg_upgrade (a little complicated, worth trying on test environment)

#3 using trigger based replication tool (not interested at this moment)


Thanks,
Suya


Re: [GENERAL] How do BEGIN/COMMIT/ABORT operate in a nested SPI query?

2016-04-19 Thread David Bennett
> From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at]
> 
> > I am attempting to create a new language implementation. The language
> > is Andl (andl.org), so the handler is plandl.
> > This is a question about executing SPI queries from inside plandl.
> >
> > The documentation makes it clear that SPI allows nested queries; that
> > in some instances it will be necessary to call SPI_push() and
> > SPI_pop(), but in others this will be handled automatically. See
> > http://www.postgresql.org/docs/9.5/interactive/spi-spi-push.html.
> >
> > It is an important design feature of plandl to allow nested queries.
> >
> > My question is: where are the transaction boundaries if the
> > inner/outer query do or do not contain BEGIN/ABORT/COMMIT? Do they
> > nest, or does an inner COMMIT finish a transaction started by an outer
> BEGIN, or is it ignored?
> 
> You cannot have BEGIN or COMMIT inside a function.

Are you sure you meant it like that? I already have BEGIN/COMMIT inside a 
function and it works perfectly. If it did not, then it would be impossible to 
use BEGIN/COMMIT in any language handler, since every call to a language 
handler is a call to a function.

Did you mean 'inside a nested function'? Or something else?
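
(The word BEGIN is overloaded here; a minimal PL/pgSQL illustration of the
two meanings, with made-up function names:)

-- In PL/pgSQL, BEGIN ... END is only a block delimiter; this is fine:
CREATE OR REPLACE FUNCTION block_demo() RETURNS void AS $$
BEGIN
    PERFORM 1;
END;
$$ LANGUAGE plpgsql;

-- The SQL transaction command BEGIN/COMMIT is another matter: executing it
-- from inside a function fails in these releases, e.g.
CREATE OR REPLACE FUNCTION commit_demo() RETURNS void AS $$
BEGIN
    EXECUTE 'COMMIT';  -- ERROR: cannot begin/end transactions in PL/pgSQL
END;
$$ LANGUAGE plpgsql;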

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] Enhancement Request

2016-04-19 Thread Melvin Davidson
On Tue, Apr 19, 2016 at 4:37 PM, Rob Brucks 
wrote:

> I'd like to propose two enhancements to the PostgreSQL code, but I'm not
> sure if this is the correct mailing list.  So if it's not then please let
> me know where I need to post this.
>
> These are monitoring-centric enhancement requests since I'm trying to
> implement accurate monitoring in a secure fashion.
>
> * General monitoring:
> We have a need for a "monitoring" role in PostgreSQL that has read-only
> access to any "pg_stat" view.  As of 9.4, only a super-user can read all
> columns of "pg_stat_activity", "pg_stat_replication", and
> "pg_stat_archiver" (there may be other restricted views as well).  These
> views provide critical insight on how well the cluster is operating and
> what is going on.
>
> There appears to be only two ways to gain access to these views:
>
>1. grant super-user to my monitoring user
>2. write custom functions owned by a super-user with "SECURITY
>DEFINER" and grant access to my monitoring user
>
> Option 1 is too big of a security risk to grant to a monitoring user.  If
> the monitoring system gets compromised then the DB will be at risk too.
>
> Option 2 requires creating, deploying, and maintaining sets of monitoring
> functions, which is a decent chunk of work in a large environment with
> dozens or hundreds of deployments, many running different versions of
> postgres possibly needing custom versions of the functions.  When you add
> the bureaucracy of a large IT organization and SOX and PCI compliance
> requirements it ends up being a PITA implementing or changing these
> functions when you only have a small DBA team.
>
>
> * Streaming Replication Monitoring:
> Make the "pg_stat_replication" view more persistent (maybe keep the rows
> for 24 hours or have a registration process?).
>
> There appears to be no way to *accurately* monitor streaming replication
> via SQL alone currently.  This is due to three different problems:
>
>1. When a standby gets disconnected from the master then the
>corresponding row is immediately removed from pg_stat_replication on the
>master.  Once the row is gone you cannot tell via simple SQL whether a
>standby is behind or not (or that it even existed at all) without storing
>prior values and extrapolating from them.
>2. On the standby, if the WAL streaming process gets disconnected from
>the master then it does not report that it is behind
>because pg_last_xlog_receive_location() has not been updated from the
>master.  The standby has no idea how far ahead the master has gotten and
>just blindly reports the last value.
>3. On a "quiet" system there may not be any update activity on the
>master for a long time, which makes the pg_last_xact_replay_timestamp()
>    function report an ever-increasing interval.  So it is not usable for
>accurately measuring lag on quiet systems.
>
> Ideally the master should be able to report standby lag time via SQL, even
> when there has been a disruption in connectivity with a standby.
>
> The only accurate method I have found to measure standby lag is to create
> a synthetic update that runs periodically.  This works, but is less than
> ideal and requires adding a table to every cluster (which then has to be
> vacuumed frequently too) and writing and maintaining a process to update it
> and purge it.
>
>
> These two enhancements would go a long way in making it easier to monitor
> PostgreSQL clusters and replication because it would eliminate a lot of
> custom coding requirements and enable us to pull metrics directly via
> simple SQL.
>
> If anyone can provide insight on how I could accomplish these in a simple
> manner by other means then I'm all ears!
>
> Thanks,
> Rob
>

Rob,
Enhancement or feature requests should probably go to Customer Feedback:
https://postgresql.uservoice.com/forums/21853-general
-- 
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Function PostgreSQL 9.2

2016-04-19 Thread Adrian Klaver

On 04/19/2016 03:23 PM, drum.lu...@gmail.com wrote:

Hi all,

I've got two tables:

- users
- companies

I'm trying to create a function that:

  * if users.code is empty, it gives a default value
  * And the increment_client_code in company should auto increment for
the next client code

What I've done so far:

DROP FUNCTION IF EXISTS client_code_increment_count();
CREATE OR REPLACE FUNCTION "public"."client_code_increment_count" ()
RETURNS TABLE("code" INT) AS
$BODY$
SELECT MAX(CAST(users.code AS INT)) FROM users WHERE users.code ~
'^\d+$' AND company_id = 2
$BODY$
LANGUAGE sql;
SELECT * FROM "client_code_increment_count"();





CREATE OR REPLACE FUNCTION
"public"."auto_generate_client_code_if_empty" () RETURNS "trigger"
  VOLATILE
AS $dbvis$
BEGIN
END;
$dbvis$ LANGUAGE plpgsql;






CREATE TRIGGER "increment_client_code"
BEFORE INSERT OR UPDATE ON users
FOR EACH ROW
EXECUTE PROCEDURE "auto_generate_client_code_if_empty"();



But I still can't make it work... What am I missing?


Information. eg.:

The schema for the tables.

Why is just adding a DEFAULT value to users.code not an option?

What the default code should be or how it is to be calculated?

What is increment_client_code?

Does increment_client_code relate to users or some other table, say clients?





Cheers




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Function PostgreSQL 9.2

2016-04-19 Thread drum.lu...@gmail.com
On 20 April 2016 at 10:38, David G. Johnston 
wrote:

> On Tue, Apr 19, 2016 at 3:23 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> Hi all,
>>
>> I've got two tables:
>>
>> - users
>> - companies
>>
>> I'm trying to create a function that:
>>
>>
>>- if users.code is empty, it gives a default value
>>- And the increment_client_code in company should auto increment for
>>the next client code
>>
>> What I've done so far:
>>
>> DROP FUNCTION IF EXISTS client_code_increment_count();
>>> CREATE OR REPLACE FUNCTION "public"."client_code_increment_count" ()
>>> RETURNS TABLE("code" INT) AS
>>> $BODY$
>>> SELECT MAX(CAST(users.code AS INT)) FROM users WHERE users.code ~
>>> '^\d+$' AND company_id = 2
>>> $BODY$
>>> LANGUAGE sql;
>>> SELECT * FROM "client_code_increment_count"();
>>
>>
>>
> The need to do "WHERE users.code ~ '^\d+$'" means your model is poorly
> specified.
>
>
>>
>>
>>
>> CREATE OR REPLACE FUNCTION "public"."auto_generate_client_code_if_empty"
>>> () RETURNS "trigger"
>>>  VOLATILE
>>> AS $dbvis$
>>> BEGIN
>>> END;
>>> $dbvis$ LANGUAGE plpgsql;
>>
>>
>>
> It would be nice if you actually showed some work here...
>
>
>>
>>
>>> CREATE TRIGGER "increment_client_code"
>>> BEFORE INSERT OR UPDATE ON users
>>> FOR EACH ROW
>>> EXECUTE PROCEDURE "auto_generate_client_code_if_empty"();
>>
>>
>>
> I'd question the need to execute this trigger on UPDATE...
>
>
>>
>> But I still can't make it work... What am I missing?
>>
>>
>
> The stuff that goes between "BEGIN" and "END" in
> auto_generate_client_code_if_empty...?
>
>
That's all I've got, David. I'm working on it and would like some help if
possible...
Lucas


Re: [GENERAL] Function PostgreSQL 9.2

2016-04-19 Thread David G. Johnston
On Tue, Apr 19, 2016 at 3:23 PM, drum.lu...@gmail.com 
wrote:

> Hi all,
>
> I've got two tables:
>
> - users
> - companies
>
> I'm trying to create a function that:
>
>
>- if users.code is empty, it gives a default value
>- And the increment_client_code in company should auto increment for
>the next client code
>
> What I've done so far:
>
> DROP FUNCTION IF EXISTS client_code_increment_count();
>> CREATE OR REPLACE FUNCTION "public"."client_code_increment_count" ()
>> RETURNS TABLE("code" INT) AS
>> $BODY$
>> SELECT MAX(CAST(users.code AS INT)) FROM users WHERE users.code ~ '^\d+$'
>> AND company_id = 2
>> $BODY$
>> LANGUAGE sql;
>> SELECT * FROM "client_code_increment_count"();
>
>
>
The need to do "WHERE users.code ~ '^\d+$'" means your model is poorly
specified.


>
>
>
> CREATE OR REPLACE FUNCTION "public"."auto_generate_client_code_if_empty"
>> () RETURNS "trigger"
>>  VOLATILE
>> AS $dbvis$
>> BEGIN
>> END;
>> $dbvis$ LANGUAGE plpgsql;
>
>
>
It would be nice if you actually showed some work here...


>
>
>> CREATE TRIGGER "increment_client_code"
>> BEFORE INSERT OR UPDATE ON users
>> FOR EACH ROW
>> EXECUTE PROCEDURE "auto_generate_client_code_if_empty"();
>
>
>
I'd question the need to execute this trigger on UPDATE...


>
> But I still can't make it work... What am I missing?
>
>

The stuff that goes between "BEGIN" and "END" in
auto_generate_client_code_if_empty...?

David J.


[GENERAL] Function PostgreSQL 9.2

2016-04-19 Thread drum.lu...@gmail.com
Hi all,

I've got two tables:

- users
- companies

I'm trying to create a function that:


   - if users.code is empty, it gives a default value
   - And the increment_client_code in company should auto increment for the
   next client code

What I've done so far:

DROP FUNCTION IF EXISTS client_code_increment_count();
> CREATE OR REPLACE FUNCTION "public"."client_code_increment_count" ()
> RETURNS TABLE("code" INT) AS
> $BODY$
> SELECT MAX(CAST(users.code AS INT)) FROM users WHERE users.code ~ '^\d+$'
> AND company_id = 2
> $BODY$
> LANGUAGE sql;
> SELECT * FROM "client_code_increment_count"();





CREATE OR REPLACE FUNCTION "public"."auto_generate_client_code_if_empty" ()
> RETURNS "trigger"
>  VOLATILE
> AS $dbvis$
> BEGIN
> END;
> $dbvis$ LANGUAGE plpgsql;






> CREATE TRIGGER "increment_client_code"
> BEFORE INSERT OR UPDATE ON users
> FOR EACH ROW
> EXECUTE PROCEDURE "auto_generate_client_code_if_empty"();



But I still can't make it work... What am I missing?

Cheers


Re: [GENERAL] index build faster on 8G laptop than 30G server

2016-04-19 Thread Bill Ross

Thanks for the fast response!

Server was completely idle except the one client (and one doing a slow 
update that I forgot). Updating maintenance_work_mem to 8G I see more 
memory now in use:


  PID USER      PR  NI  VIRT  RES   SHR S %CPU %MEM   TIME+ COMMAND
 4531 ec2-user  20   0 10.1g 3.4g 1.7g R 99.8 11.4   2:02.17 postgres

When restarting postgres I noticed that I had a background process 
trying to update the table, which might have locked it and would also 
explain why postgres was running at 100%.


It now takes ~5min to build the index.

Bill

On 4/19/16 2:36 PM, Scott Mead wrote:



On Tue, Apr 19, 2016 at 5:28 PM, Bill Ross wrote:


I've been running an index build for almost an hour on my 30G
server that takes ~ 20 mins on my puny old macbook.

It seems like I've tuned all I can.. what am I missing?


Concurrent traffic on the server ? Locks / conflicts with running traffic?

From a parameter perspective, look at maintenance_work_mem.

--Scott

Thanks,
Bill

Records to index: 33305041

--- Server:

 PostgreSQL 9.2.15 on x86_64-redhat-linux-gnu, compiled by gcc
(GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit

shared_buffers = 8GB         # min 128kB
temp_buffers = 2GB           # min 800kB
work_mem = 8GB               # min 64kB
checkpoint_segments = 256    # in logfile segments, min 1, 16MB each
seq_page_cost = 1.0          # measured on an arbitrary scale
random_page_cost = 1.0       # same scale as above
effective_cache_size = 20GB

  PID   USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
 4069 ec2-user  20   0 8596m 1.7g 1.7g R 99.8  5.6  67:48.36 postgres

Macbook:
 PostgreSQL 9.4.4 on x86_64-apple-darwin14.3.0, compiled by Apple
LLVM version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn), 64-bit

shared_buffers = 2048MB      # min 128kB
temp_buffers = 32MB          # min 800kB
work_mem = 8MB               # min 64kB
dynamic_shared_memory_type = posix    # the default is the first option
checkpoint_segments = 32     # in logfile segments, min 1, 16MB each

PID    COMMAND  %CPU TIME     #TH  #WQ  #PORTS MEM    PURG CMPRS  PGRP
52883  postgres 91.0 02:16:14 1/1  0    7      1427M+ 0B   622M-  52883











Re: [GENERAL] index build faster on 8G laptop than 30G server

2016-04-19 Thread Scott Mead
On Tue, Apr 19, 2016 at 5:28 PM, Bill Ross  wrote:

> I've been running an index build for almost an hour on my 30G server that
> takes ~ 20 mins on my puny old macbook.
>
> It seems like I've tuned all I can.. what am I missing?
>
>
Concurrent traffic on the server ? Locks / conflicts with running traffic?

From a parameter perspective, look at maintenance_work_mem.

--Scott



> Thanks,
> Bill
>
> Records to index: 33305041
>
> --- Server:
>
>  PostgreSQL 9.2.15 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.3
> 20140911 (Red Hat 4.8.3-9), 64-bit
>
> shared_buffers = 8GB         # min 128kB
> temp_buffers = 2GB           # min 800kB
> work_mem = 8GB               # min 64kB
> checkpoint_segments = 256    # in logfile segments, min 1, 16MB each
> seq_page_cost = 1.0          # measured on an arbitrary scale
> random_page_cost = 1.0       # same scale as above
> effective_cache_size = 20GB
>
>   PID   USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
>  4069 ec2-user  20   0 8596m 1.7g 1.7g R 99.8  5.6  67:48.36 postgres
>
> Macbook:
>  PostgreSQL 9.4.4 on x86_64-apple-darwin14.3.0, compiled by Apple LLVM
> version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn), 64-bit
>
> shared_buffers = 2048MB      # min 128kB
> temp_buffers = 32MB          # min 800kB
> work_mem = 8MB               # min 64kB
> dynamic_shared_memory_type = posix    # the default is the first option
> checkpoint_segments = 32     # in logfile segments, min 1, 16MB each
>
> PID    COMMAND  %CPU TIME     #TH  #WQ  #PORTS MEM    PURG CMPRS  PGRP
> 52883  postgres 91.0 02:16:14 1/1  0    7      1427M+ 0B   622M-  52883
>
>
>
>
>


Re: [GENERAL] index build faster on 8G laptop than 30G server

2016-04-19 Thread Tory M Blue
On Tue, Apr 19, 2016 at 2:30 PM, John R Pierce  wrote:
> On 4/19/2016 2:28 PM, Bill Ross wrote:
>>
>> I've been running an index build for almost an hour on my 30G server that
>> takes ~ 20 mins on my puny old macbook.
>>
>> It seems like I've tuned all I can.. what am I missing?
>
>
>
>
> maintenance_work_mem is the main thing that affects index building times; I
> didn't see that in your list of tuning parameters. I generally set it to
> 1GB.
>
> --
> john r pierce, recycling bits in santa cruz

I've got some indexes that take well over 7 hours to create on a
256GB machine. Last time I posted for help, I was told they have yet
to really fix the tuple issue and that using a tiny number in work_mem
helped alleviate some of the issues. It still takes a long time, but I'm
at 512MB, no slower than 7GB+.

Here is what was told to me before

"Yes. There is a hard limit on the number of tuples than can be sorted
in memory prior to PostgreSQL 9.4. It's also the case that very large
work_mem or maintenance_work_mem settings are unlikely to help unless
they result in a fully internal sort.

There is evidence that the heap that tuple sorting uses benefits from
*lower* settings. Sometimes as low as 64MB.

We're working to make this better in 9.6."

Tory




Re: [GENERAL] index build faster on 8G laptop than 30G server

2016-04-19 Thread Peter Devoy
I see you are comparing 9.2 and 9.4 -- if you were not already aware
there is a kernel restriction on shared memory on Linux systems which,
if I recall correctly, must be adjusted in versions before 9.3...

https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#shared_buffers
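
On pre-9.3 servers (which still use System V shared memory) that typically
means raising the kernel limits before a multi-GB shared_buffers will fit,
e.g. (a sketch; values are illustrative only):

sysctl -w kernel.shmmax=17179869184   # largest single segment, in bytes
sysctl -w kernel.shmall=4194304       # total shared memory, in 4 kB pages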

Not saying it's the answer but maybe another thing to look at...

All the best


Peter




Re: [GENERAL] index build faster on 8G laptop than 30G server

2016-04-19 Thread John R Pierce

On 4/19/2016 2:28 PM, Bill Ross wrote:
I've been running an index build for almost an hour on my 30G server 
that takes ~ 20 mins on my puny old macbook.


It seems like I've tuned all I can.. what am I missing? 




maintenance_work_mem is the main thing that affects index building
times; I didn't see that in your list of tuning parameters. I
generally set it to 1GB.
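
For example, per session before the build (a sketch):

SET maintenance_work_mem = '1GB';
-- then run the CREATE INDEX in the same session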


--
john r pierce, recycling bits in santa cruz





[GENERAL] index build faster on 8G laptop than 30G server

2016-04-19 Thread Bill Ross
I've been running an index build for almost an hour on my 30G server 
that takes ~ 20 mins on my puny old macbook.


It seems like I've tuned all I can.. what am I missing?

Thanks,
Bill

Records to index: 33305041

--- Server:

 PostgreSQL 9.2.15 on x86_64-redhat-linux-gnu, compiled by gcc (GCC)
4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit

shared_buffers = 8GB         # min 128kB
temp_buffers = 2GB           # min 800kB
work_mem = 8GB               # min 64kB
checkpoint_segments = 256    # in logfile segments, min 1, 16MB each
seq_page_cost = 1.0          # measured on an arbitrary scale
random_page_cost = 1.0       # same scale as above
effective_cache_size = 20GB

  PID   USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
 4069 ec2-user  20   0 8596m 1.7g 1.7g R 99.8  5.6  67:48.36 postgres

Macbook:
 PostgreSQL 9.4.4 on x86_64-apple-darwin14.3.0, compiled by Apple LLVM 
version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn), 64-bit


shared_buffers = 2048MB      # min 128kB
temp_buffers = 32MB          # min 800kB
work_mem = 8MB               # min 64kB
dynamic_shared_memory_type = posix    # the default is the first option
checkpoint_segments = 32     # in logfile segments, min 1, 16MB each

PID    COMMAND  %CPU TIME     #TH  #WQ  #PORTS MEM    PURG CMPRS  PGRP
52883  postgres 91.0 02:16:14 1/1  0    7      1427M+ 0B   622M-  52883







[GENERAL] Enhancement Request

2016-04-19 Thread Rob Brucks
I'd like to propose two enhancements to the PostgreSQL code, but I'm not sure 
if this is the correct mailing list.  So if it's not then please let me know 
where I need to post this.

These are monitoring-centric enhancement requests since I'm trying to implement 
accurate monitoring in a secure fashion.

* General monitoring:
We have a need for a "monitoring" role in PostgreSQL that has read-only access 
to any "pg_stat" view.  As of 9.4, only a super-user can read all columns of 
"pg_stat_activity", "pg_stat_replication", and "pg_stat_archiver" (there may be 
other restricted views as well).  These views provide critical insight on how 
well the cluster is operating and what is going on.

There appears to be only two ways to gain access to these views:

  1.  grant super-user to my monitoring user
  2.  write custom functions owned by a super-user with "SECURITY DEFINER" and 
grant access to my monitoring user

Option 1 is too big of a security risk to grant to a monitoring user.  If the 
monitoring system gets compromised then the DB will be at risk too.

Option 2 requires creating, deploying, and maintaining sets of monitoring 
functions, which is a decent chunk of work in a large environment with dozens 
or hundreds of deployments, many running different versions of postgres 
possibly needing custom versions of the functions.  When you add the 
bureaucracy of a large IT organization and SOX and PCI compliance requirements 
it ends up being a PITA implementing or changing these functions when you only 
have a small DBA team.
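
For reference, the functions in option 2 look roughly like this (a sketch;
the "monitoring" role name is made up):

-- created by a superuser; executes with the owner's rights
CREATE FUNCTION monitoring_pg_stat_activity()
RETURNS SETOF pg_stat_activity
LANGUAGE sql
SECURITY DEFINER
SET search_path = pg_catalog      -- guard against search_path tricks
AS 'SELECT * FROM pg_stat_activity';

REVOKE ALL ON FUNCTION monitoring_pg_stat_activity() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION monitoring_pg_stat_activity() TO monitoring;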


* Streaming Replication Monitoring:
Make the "pg_stat_replication" view more persistent (maybe keep the rows for 24 
hours or have a registration process?).

There appears to be no way to *accurately* monitor streaming replication via 
SQL alone currently.  This is due to three different problems:

  1.  When a standby gets disconnected from the master then the corresponding 
row is immediately removed from pg_stat_replication on the master.  Once the 
row is gone you cannot tell via simple SQL whether a standby is behind or not 
(or that it even existed at all) without storing prior values and extrapolating 
from them.
  2.  On the standby, if the WAL streaming process gets disconnected from the 
master then it does not report that it is behind because 
pg_last_xlog_receive_location() has not been updated from the master.  The 
standby has no idea how far ahead the master has gotten and just blindly 
reports the last value.
  3.  On a "quiet" system there may not be any update activity on the master 
for a long time, which makes the pg_last_xact_replay_timestamp() function 
report an ever-increasing interval.  So it is not usable for accurately 
measuring lag on quiet systems.

Ideally the master should be able to report standby lag time via SQL, even when 
there has been a disruption in connectivity with a standby.

The only accurate method I have found to measure standby lag is to create a 
synthetic update that runs periodically.  This works, but is less than ideal 
and requires adding a table to every cluster (which then has to be vacuumed 
frequently too) and writing and maintaining a process to update it and purge it.
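
That synthetic-update approach, roughly (a sketch; table and column names
are made up):

-- on the master, once:
CREATE TABLE heartbeat (id int PRIMARY KEY, ts timestamptz NOT NULL);
INSERT INTO heartbeat VALUES (1, now());

-- on the master, periodically (e.g. from cron):
UPDATE heartbeat SET ts = now() WHERE id = 1;

-- on the standby, the lag is then simply:
SELECT now() - ts AS replication_lag FROM heartbeat;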


These two enhancements would go a long way in making it easier to monitor 
PostgreSQL clusters and replication because it would eliminate a lot of custom 
coding requirements and enable us to pull metrics directly via simple SQL.

If anyone can provide insight on how I could accomplish these in a simple 
manner by other means then I'm all ears!

Thanks,
Rob


Re: [GENERAL] Multimaster

2016-04-19 Thread Konstantin Knizhnik



On 19.04.2016 15:56, Craig Ringer wrote:
On 18 April 2016 at 16:28, Konstantin Knizhnik wrote:



I intend to make the same split in pglogical itself - a
receiver and apply worker split. Though my intent is to have them
communicate via a shared memory segment until/unless the apply
worker gets too far behind and spills to disk.



In the case of multimaster, the "too far behind" scenario can never happen.


I disagree. In the case of tightly coupled synchronous multi-master it 
can't happen, sure. But that's hardly the only case of multi-master 
out there.


Sorry, it is just a matter of what the term means. By multimaster I really
mean "synchronous multimaster", because from my point of view the main
characteristic of multimaster is symmetric access to all nodes. If there
is no guarantee that all cluster nodes have the same state, then, from my
point of view, it is not multimaster at all.  But I haven't registered a
"multimaster" trademark, so I can't insist on such a treatment of this term :)






2. Logical backup: transfer data to different database (including
new version of Postgres)


I think that's more HA than logical backup. Needs to be able to be 
synchronous or asynchronous, much like our current phys.rep.


Closely related but not quite the same is logical read replicas/standbys.


This is a use case from a real production system (a Galera use case). If a
customer wants to migrate data to a new data center, then multimaster is
one of the possible (and safest) ways to do it. You can, step by step and
transparently for users, redirect the workload to the new data center.



3. Change notification: there are many different subscribers which
can be interested in receiving notifications about database changes.


Yep. I suspect we'll want a json output plugin for this, separate to 
pglogical etc, but we'll need to move a bunch of functionality from 
pglogical into core so it can be shared rather than duplicated.


JSON is not an efficient format for it, and here performance may be
critical.



4. Synchronous replication: multimaster


"Synchronous multimaster". Not all multimastrer is synchronous, not 
all synchronous replication is multimaster.


We are not enforcing the order of commits as Galera does. Consistency
is enforced by the DTM, which ensures that transactions at all nodes
are given consistent snapshots and assigned the same CSNs. We also
have a global deadlock detection algorithm which builds a global lock
graph (but false positives are still possible, because this graph
is built incrementally and so it doesn't correspond to some global
snapshot).


OK, so you're relying on a GTM to determine safe, conflict-free apply 
orderings.


I'm ... curious ... about how you do that. Do you have a global lock 
manager too? How do you determine ordering for things that in a 
single-master case are addressed via unique b-tree indexes, not (just) 
heavyweight locking?





We have tried both a DTM with a global arbiter (an analogue of the XL GTM)
and a DTM based on timestamps. In the latter case there is no centralized
arbiter, but we use "raftable" - yet another of our plugins, which provides
consistent distributed storage based on the Raft protocol.

Using raftable, we build the global deadlock graph from local subgraphs.




Or need to add to somehow add original DDL statements to the log.


Actually you need to be able to add normalized statements to the xlog. 
The original DDL text isn't quite good enough due to issues with 
search_path among other things. Hence DDL deparse.


Yes, for general-purpose use we need some DBMS-independent representation of
DDL.

But for multimaster needs the original SQL statement will be enough.





--
 Craig Ringer http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Re: what's the exact command definition in read committed isolation level?

2016-04-19 Thread Jinhua Luo
2016-04-18 23:16 GMT+08:00 Tom Lane :
>
> No, that's not true: a trigger is a function and what it can see is
> determined by the rules of the PL it's written in.  Typically a
> function that's marked STABLE or IMMUTABLE will see the same snapshot
> as the calling query, but a function that's VOLATILE will take a new
> snapshot for each query it contains.

Thank you, Tom. I think I find the answer now.

Yes, in all the cases (trigger, CTE, other sub-queries), as long as
they use a function written in a PL and marked VOLATILE (the default),
each query contained in the function may see new data.

http://www.postgresql.org/docs/current/static/xfunc-volatility.html

"STABLE and IMMUTABLE functions use a snapshot established as of the
start of the calling query, whereas VOLATILE functions obtain a fresh
snapshot at the start of each query they execute."

I think this page:
http://www.postgresql.org/docs/current/static/transaction-iso.html
should refer to the above page to clarify how functions are affected.




Here is an example for CTE or other sub-query:

create or replace function test_fn() returns setof int as $$
begin
raise INFO 'sleep 30 secs, modify bar in another session...';
perform pg_sleep(30);
return query select * from bar;
end;
$$ language plpgsql;

=> create table foo(a int);
CREATE TABLE
=> insert into foo values(1);
INSERT 0 1
=> create table bar(a int);
CREATE TABLE
=> insert into foo values(100);
INSERT 0 1
=> with t as (select a from test_fn() as tt(a)) select * from foo, t
where foo.a = t.a;
INFO:  sleep 30 secs, modify bar in another session...
 a | a
---+---
(0 rows)

=> with t as (select a from test_fn() as tt(a)) select * from foo, t
where foo.a = t.a;
INFO:  sleep 30 secs, modify bar in another session...
< run below command in another session:
< insert into bar values(1);
  a  |  a
-+-
   1 |   1
 100 | 100
(2 rows)

=> select * from foo where exists (select a from test_fn() as tt(a)
where tt.a = foo.a);
INFO:  sleep 30 secs, modify bar in another session...
  a
-
   1
 100
(2 rows)

=> select * from foo where exists (select a from test_fn() as tt(a)
where tt.a = foo.a);
INFO:  sleep 30 secs, modify bar in another session...
< run below command in another session:
< delete from bar where a = 1;
  a
-
 100
(1 row)




Re: [GENERAL] error while installing auto_explain contrib module

2016-04-19 Thread Tom Lane
Sachin Kotwal  writes:
> On Tue, Apr 19, 2016 at 7:02 PM, Adrian Klaver 
> wrote:
>> Did you install the 9.3.11 server from source also?

> Yes. I have cloned the git repository and checked out pg-9.3.11

The reason for that question is that the most obvious explanation for this
failure is that the auto_explain build is seeing a different value for
PG_USE_INLINE than the main server build did.  So probably your include
path is picking up a pg_config.h that does not match the postgres
executable the link is happening against.
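
One quick way to check for such a mismatch, using the paths from the
pg_config output quoted elsewhere in the thread (a sketch):

# header seen by the in-tree contrib build
grep PG_USE_INLINE ../../src/include/pg_config.h
# header belonging to the installed server
grep PG_USE_INLINE /Users/sachin/pg_git_install/9.3.11/include/postgresql/server/pg_config.h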

regards, tom lane




Re: [GENERAL] How are files of tables/indexes/etc deleting?

2016-04-19 Thread Albe Laurenz
Иван Фролков wrote:
> When I drop a table, the file which contains the real data will be deleted,
> but only when I commit the transaction. It seems like such a job should be
> done by vacuum, but I cannot locate the code which does the real work.
> Could anybody explain in detail how it works?

See RelationDropStorage() in src/backend/catalog/storage.c

No vacuum is necessary; the files that belong to the relation are deleted at 
commit time.

Yours,
Laurenz Albe



Re: [GENERAL] Vacuum never completed....

2016-04-19 Thread Jeff Janes
On Apr 19, 2016 6:37 AM,  wrote:
>
> We had a server approach wraparound yesterday on a 9.3 box. I restarted
in single user mode and only one table was above autovacuum_freeze_max_age.
The dataset was ~750G. We left vacuum running for 21hrs and it still did
not complete. What was interesting is that disk IO graph kept cycling, lots
of reads for ~45min, then a few writes for another 45min.

What kind of indexes were on the table?

Cheers,

Jeff


Re: [GENERAL] error while installing auto_explain contrib module

2016-04-19 Thread Sachin Kotwal
On Tue, Apr 19, 2016 at 7:02 PM, Adrian Klaver 
wrote:

> On 04/18/2016 11:46 PM, Sachin Kotwal wrote:
>
>> Hi,
>>
>> I am trying to install auto_explain on PostgreSQL-9.3.11
>> OS X El Capitan - 10.11
>>
>
> Did you install the 9.3.11 server from source also?


Yes. I have cloned the git repository and checked out pg-9.3.11


>
>
>>
>> I am getting following error:
>>
>> --
>> $ pwd
>> /Users/sachin/postgres_git/postgres/contrib/auto_explain
>>
>
> What version of Postgres is the above pointing to?


This is my postgres git repository, and I have checked out pg-9.3.11,
so the above is pointing to the auto_explain directory under contrib for
the same postgres version.
Here is my pg_config output:

-
$ pg_config
BINDIR = /Users/sachin/pg_git_install/9.3.11/bin
DOCDIR = /Users/sachin/pg_git_install/9.3.11/share/doc/postgresql
HTMLDIR = /Users/sachin/pg_git_install/9.3.11/share/doc/postgresql
INCLUDEDIR = /Users/sachin/pg_git_install/9.3.11/include
PKGINCLUDEDIR = /Users/sachin/pg_git_install/9.3.11/include/postgresql
INCLUDEDIR-SERVER = /Users/sachin/pg_git_install/9.3.11/include/postgresql/server
LIBDIR = /Users/sachin/pg_git_install/9.3.11/lib
PKGLIBDIR = /Users/sachin/pg_git_install/9.3.11/lib/postgresql
LOCALEDIR = /Users/sachin/pg_git_install/9.3.11/share/locale
MANDIR = /Users/sachin/pg_git_install/9.3.11/share/man
SHAREDIR = /Users/sachin/pg_git_install/9.3.11/share/postgresql
SYSCONFDIR = /Users/sachin/pg_git_install/9.3.11/etc/postgresql
PGXS = /Users/sachin/pg_git_install/9.3.11/lib/postgresql/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--with-libxml' '--enable-cassert' '--enable-debug'
'--enable-profiling' '--prefix=/Users/sachin/pg_git_install/9.3.11/'
'--with-python' 'CFLAGS=-g -O0'
CC = gcc
CPPFLAGS = -I/usr/include/libxml2
CFLAGS = -g -O0 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv
-Wno-unused-command-line-argument -g -pg
CFLAGS_SL =
LDFLAGS = -L../../../src/common -Wl,-dead_strip_dylibs
LDFLAGS_EX =
LDFLAGS_SL =
LIBS = -lpgport -lpgcommon -lxml2 -lz -lreadline -lm
VERSION = PostgreSQL 9.3.11





>
>
>> $ make
>> gcc -g -O0 -Wall -Wmissing-prototypes -Wpointer-arith
>> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
>> -Wformat-security -fno-strict-aliasing -fwrapv
>> -Wno-unused-command-line-argument -g -pg   -I. -I. -I../../src/include
>> -I/opt/local/include/libxml2   -c -o auto_explain.o auto_explain.c
>> gcc -g -O0 -Wall -Wmissing-prototypes -Wpointer-arith
>> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
>> -Wformat-security -fno-strict-aliasing -fwrapv
>> -Wno-unused-command-line-argument -g -pg   -bundle -multiply_defined
>> suppress -o auto_explain.so auto_explain.o -L../../src/port
>> -L../../src/common -L/opt/local/lib  -Wl,-dead_strip_dylibs
>> -bundle_loader ../../src/backend/postgres
>> ld: warning: directory not found for option '-L/usr/local/lib'
>> Undefined symbols for architecture x86_64:
>>"_MemoryContextSwitchTo", referenced from:
>>_explain_ExecutorStart in auto_explain.o
>> ld: symbol(s) not found for architecture x86_64
>> clang: error: linker command failed with exit code 1 (use -v to see
>> invocation)
>> make: *** [auto_explain.so] Error 1
>>
>>
>> --
>>
>> Thanks and Regards,
>> Sachin Kotwal
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 

Thanks and Regards,
Sachin Kotwal


[GENERAL] Vacuum never completed....

2016-04-19 Thread rolf
We had a server approach wraparound yesterday on a 9.3 box. I restarted 
in single-user mode and only one table was above 
autovacuum_freeze_max_age. The dataset was ~750G. We left vacuum running 
for 21 hrs and it still did not complete. What was interesting is that the 
disk IO graph kept cycling: lots of reads for ~45 min, then a few writes 
for another 45 min. I failed to run VACUUM VERBOSE, so there was little 
other data to go on besides the IO graph. We ended up dropping the table 
and lowering autovacuum_freeze_max_age for the clean-up. We've had a 
similarly sized dataset complete vacuum in a few hours, but I've never 
seen this cycling IO before: https://imagebin.ca/v/2eIdB2bjFBxq


Is this normal? It gave me very little faith that VACUUM would ever 
complete.
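
(For reference, a sketch of one way to see which tables are closest to
wraparound in a database:)

SELECT c.oid::regclass AS table_name, age(c.relfrozenxid) AS xid_age
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY age(c.relfrozenxid) DESC
LIMIT 10;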





Re: [GENERAL] error while installing auto_explain contrib module

2016-04-19 Thread Adrian Klaver

On 04/18/2016 11:46 PM, Sachin Kotwal wrote:

Hi,

I am trying to install auto_explain on PostgreSQL-9.3.11
OS X El Capitan - 10.11


Did you install the 9.3.11 server from source also?




I am getting following error:

--
$ pwd
/Users/sachin/postgres_git/postgres/contrib/auto_explain


What version of Postgres is the above pointing to?



$ make
gcc -g -O0 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv
-Wno-unused-command-line-argument -g -pg   -I. -I. -I../../src/include
-I/opt/local/include/libxml2   -c -o auto_explain.o auto_explain.c
gcc -g -O0 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv
-Wno-unused-command-line-argument -g -pg   -bundle -multiply_defined
suppress -o auto_explain.so auto_explain.o -L../../src/port
-L../../src/common -L/opt/local/lib  -Wl,-dead_strip_dylibs
-bundle_loader ../../src/backend/postgres
ld: warning: directory not found for option '-L/usr/local/lib'
Undefined symbols for architecture x86_64:
   "_MemoryContextSwitchTo", referenced from:
   _explain_ExecutorStart in auto_explain.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see
invocation)
make: *** [auto_explain.so] Error 1


--

Thanks and Regards,
Sachin Kotwal



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] How are files of tables/indexes/etc deleting?

2016-04-19 Thread Иван Фролков

When I drop a table, the file which contains the real data will be deleted,
but only when I commit the transaction. It seems like such a job should be
done by vacuum, but I cannot locate the code which does the real work. Could
anybody explain in detail how it works?


Re: [GENERAL] BDR replication slots

2016-04-19 Thread Nikhil
On interface down:
--
<10.102.31.213(27599)postgres13082016-04-19 06:31:36
GMTprocess_journal%LOG:  terminating walsender process due to replication
timeout


Once interface is brought back

 425906 <12692016-04-19 08:32:58 GMT%LOG:  starting background worker
process "bdr (6275149074578269365,2,16386,)->bdr (6275135922714263763,1,"
 425907 <597732016-04-19 08:32:58 GMT%ERROR:  relation "mygroup" already
exists
 425908 <597732016-04-19 08:32:58 GMT%CONTEXT:  during DDL replay of ddl
statement: CREATE  TABLE  public.mygroup (id pg_catalog."varchar"(14)  NOT
NULL COLLATE pg_catalog."default", name pg_catalog."varchar"(100)   COLLATE
pg_catalog."default", device_type pg_catalog."varchar"(30)   COLLATE
pg_catalog."default", platform_type pg_catalog."varchar"(30)   COLLATE
pg_catalog."default",
CONSTRAINT mygroup_pkey PRIMARY KEY (id) )  WITH (oids=OFF)
 425909 <12692016-04-19 08:32:58 GMT%LOG:  worker process: bdr
(6275149074578269365,2,16386,)->bdr (6275135922714263763,1,(PID 59773)
exited with exit code 1
 425910 <10.102.31.213(13467)postgres597742016-04-19 08:32:59
GMTprocess_journal%LOG:  starting logical decoding for slot
"bdr_16386_6275135922714263763_1_16386__"
 425911 <10.102.31.213(13467)postgres597742016-04-19 08:32:59
GMTprocess_journal%DETAIL:  streaming transactions committing after
0/1014CEE8, reading WAL from 0/1014A920
 425912 <10.102.31.213(13467)postgres597742016-04-19 08:32:59
GMTprocess_journal%LOG:  logical decoding found consistent point at
0/1014A920
 425913 <10.102.31.213(13467)postgres597742016-04-19 08:32:59
GMTprocess_journal%DETAIL:  There are no running transactions.
 425914 <10.102.31.213(13467)postgres597742016-04-19 08:32:59
GMTprocess_journal%LOG:  unexpected EOF on standby connection


On Tue, Apr 19, 2016 at 10:29 AM, Alvaro Aguayo Garcia-Rada <
aagu...@opensysperu.com> wrote:

> Hello,
>
> What do you see on each node's log after enabling the interfaces?
>
> Regards,
>
> Alvaro Aguayo
> Jefe de Operaciones
> Open Comb Systems E.I.R.L.
>
> Oficina: (+51-1) 3377813 | RPM: #034252 / (+51) 995540103  | RPC:
> (+51) 954183248
> Website: www.ocs.pe
>
> Sent from my Sony Xperia™ smartphone
>
>
Nikhil wrote:
>
>
> Hello,
>
> I have a 2-node BDR group and replication is happening properly. If I
> bring down one of the nodes' interfaces, after some time the replication
> slots become inactive (pg_replication_slots view). Then if I bring the
> interface back up, the slots do not turn active again automatically and
> replication stops. Any idea why it is not re-established automatically?
>
> Best Regards,
> Nikhil
>


Re: [GENERAL] Re: what's the exact command definition in read committed isolation level?

2016-04-19 Thread Jinhua Luo
>
> First session:
> postgres=# create table t(id integer);
> CREATE TABLE
> postgres=# insert into t values (1);
> INSERT 0 1
> postgres=# begin;
> BEGIN
> postgres=# select count(*) from t;
>  count
> ---
>  1
> (1 row)
>
> 1 row in the table. In another session, insert another row in t:
>
> postgres=# insert into t values (2);
> INSERT 0 1
>
> And, then, back to the first session, still inside the previous transaction:
>
> postgres=# select count(*) from t;
>  count
> ---
>  2
> (1 row)
>
> That's how Read Committed behaved. You see changes commited by other
> transactions, even transactions started after yours. That has nothing to do
> with triggers.

Yes, this is exactly what the documentation tells us. The outer statement
("top-level" statements from the client session) is of course one of the
cases which satisfies the read committed isolation level rules. It's
easy to understand and well-documented.

But remember that I am talking about the embedded statements triggered
by the outer statement (which you cannot see explicitly; you need to
check whether some trigger is set, and what statements are contained
in the trigger). That's what the documentation doesn't clarify.

Theoretically and literally, the trigger should be considered part
of the outer command, just like a CTE or other forms of sub-query, but
in fact it is not! Instead, the commands in the trigger should be
considered virtual "outer" commands, as if you inlined the trigger
body after the outer command. So that's why I said the trigger is an
exception, which breaks/extends the rule.




Re: [GENERAL] BDR replication slots

2016-04-19 Thread Sylvain Marechal
2016-04-19 6:51 GMT+02:00 Nikhil :

> Hello,
>
> I have a 2-node BDR group and replication is happening properly. If I
> bring down one of the nodes' interfaces, after some time the replication
> slots become inactive (pg_replication_slots view). Then if I bring the
> interface back up, the slots do not turn active again automatically and
> replication stops. Any idea why it is not re-established automatically?
>

Maybe postgres does not detect that the link is down.
You could try modifying the connection strings so that postgres detects
the problem earlier, as described in this post:
https://github.com/2ndQuadrant/bdr/issues/173 :

<<<
[...]
Example of dsn_connection field:
host=192.168.1.140 port=5432 dbname=safewalk-server connect_timeout=10
keepalives_idle=5 keepalives_interval=1
>>>


Hope this helps,

Sylvain


[GENERAL] error while installing auto_explain contrib module

2016-04-19 Thread Sachin Kotwal
Hi,

I am trying to install auto_explain on PostgreSQL-9.3.11
OS X El Capitan - 10.11


I am getting following error:

--
$ pwd
/Users/sachin/postgres_git/postgres/contrib/auto_explain

$ make
gcc -g -O0 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv
-Wno-unused-command-line-argument -g -pg   -I. -I. -I../../src/include
-I/opt/local/include/libxml2   -c -o auto_explain.o auto_explain.c
gcc -g -O0 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv
-Wno-unused-command-line-argument -g -pg   -bundle -multiply_defined
suppress -o auto_explain.so auto_explain.o -L../../src/port
-L../../src/common -L/opt/local/lib  -Wl,-dead_strip_dylibs
-bundle_loader ../../src/backend/postgres
ld: warning: directory not found for option '-L/usr/local/lib'
Undefined symbols for architecture x86_64:
  "_MemoryContextSwitchTo", referenced from:
  _explain_ExecutorStart in auto_explain.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see
invocation)
make: *** [auto_explain.so] Error 1


-- 

Thanks and Regards,
Sachin Kotwal


Re: [GENERAL] Re: what's the exact command definition in read committed isolation level?

2016-04-19 Thread Guillaume Lelarge
2016-04-19 5:41 GMT+02:00 Jinhua Luo :

> > Does that mean a VOLATILE function runs in a different transaction?
>
> No, all statements triggered by the outer statement are within the same
> transaction. If the trigger fails (without trapping the error), all
> effects, including changes by the outer statement, would be rolled back.
>
> > And does that mean it will see committed data that the calling statement
> > would not see?
>
> Yes, that's what I said. The trigger is special: each statement within
> it gets a new snapshot, so it would see data from all committed
> transactions up to its execution instant. But that doesn't mean the
> trigger runs in a different transaction.
>
> Please check my example above, and try it yourself.
>
>
Well, that's not specific to the trigger. Try this:

First session:
postgres=# create table t(id integer);
CREATE TABLE
postgres=# insert into t values (1);
INSERT 0 1
postgres=# begin;
BEGIN
postgres=# select count(*) from t;
 count
---
 1
(1 row)

1 row in the table. In another session, insert another row in t:

postgres=# insert into t values (2);
INSERT 0 1

And, then, back to the first session, still inside the previous transaction:

postgres=# select count(*) from t;
 count
---
 2
(1 row)

That's how Read Committed behaves. You see changes committed by other
transactions, even transactions started after yours. That has nothing to do
with triggers.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com