[GENERAL] Built-in function to verify email addresses

2017-11-15 Thread Nick Dro
I believe that every information system needs to send emails.
Currently PostgreSQL doesn't have a function that takes TEXT and returns true if it is a valid email address (x...@yyy.com / .co.ZZ).
Do you believe such a function should exist in PostgreSQL, or is it best to let every user implement his own function?
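
For reference, a minimal sketch of such a function, using a deliberately loose regular expression (a full RFC 5322 validator is far more involved; the function name and pattern here are illustrative, not a proposed built-in):

```sql
-- Hypothetical helper: true if the text roughly looks like an email address.
CREATE OR REPLACE FUNCTION is_valid_email(addr text)
RETURNS boolean
LANGUAGE sql IMMUTABLE STRICT
AS $$
  SELECT addr ~* '^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}$';
$$;

SELECT is_valid_email('someone@example.com');  -- true
SELECT is_valid_email('not-an-email');         -- false
```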

Re: [GENERAL] Indexes being ignored after upgrade to 9.5

2017-07-27 Thread Nick Brennan
Hi Peter,

Many thanks for your response. I tried to cancel the thread; the issue was
unfortunately our own mistake. We'd been forced to manually analyze our
tables due to time constraints, and one of the table partitions read in the
query was missed. The plan was reporting a bitmap index scan on the parent,
so we thought all was ok, and this was then causing other tables to
sequential-scan.

A further misunderstanding was that an EXPLAIN ANALYZE would initiate stats
gathering on all queried tables; however, this is not the case.
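
For the archives, the distinction is: EXPLAIN ANALYZE executes and times the query but does not refresh planner statistics; those come from ANALYZE, which works per table, so each partition needs its own pass (the names below are hypothetical):

```sql
EXPLAIN ANALYZE SELECT count(*) FROM measurements;  -- runs and times; no stats update
ANALYZE measurements_2017_07;                       -- refreshes planner stats for one partition
```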

Thanks again for your response, we'll check the behaviour you report.

Best regards
Nick


> On 27 Jul 2017, at 00:40, Peter Geoghegan <p...@bowt.ie> wrote:
> 
>> On Wed, Jul 26, 2017 at 2:05 PM, Peter Geoghegan <p...@bowt.ie> wrote:
>>> On Tue, Jul 25, 2017 at 10:34 PM, Nick Brennan <nbrenna...@gmail.com> wrote:
>>> We've added duplicate indexes and analyzing, however the new indexes are
>>> still ignored unless we force using enable_seqscan=no or reduce
>>> random_page_cost to 2. The query response times using the new indexes are
>>> still as slow when we do this. Checking pg_stat_user_indexes the number of
>>> tuples returned per idx_scan is far greater after the upgrade than before.
>>> All indexes show valid in pg_indexes.
> 
> I assume that you mean that pg_stat_user_indexes.idx_tup_read is a lot
> higher than before, in proportion to pg_stat_user_indexes.idx_scan.
> What about the ratio between pg_stat_user_indexes.idx_tup_read and
> pg_stat_user_indexes.idx_tup_fetch? How much has that changed by?
> 
> -- 
> Peter Geoghegan




Re: [GENERAL] Interesting streaming replication issue

2017-07-27 Thread Gunnar "Nick" Bluth
(sorry for the top post, bitchy K9 Mail) 

James, 

are you sure you're scp'ing from the archive, not from pg_xlog? 

Regards, 

Gunnar "Nick" Bluth

On 27 July 2017 at 05:00:17 MESZ, James Sewell <james.sew...@jirotech.com> wrote:
>Hi all,
>
>I've got two servers (A,B) which are part of a streaming replication
>pair.
>A is the master, B is a hot standby. I'm sending archived WAL to a
>directory on A, B is reading it via SCP.
>
>This all works fine normally. I'm on Redhat 7.3, running EDB 9.6.2 (I'm
>currently working to reproduce with standard 9.6)
>
>We have recently seen a situation where B does not catch up when taken
>offline for maintenance.
>
>When B is started we see the following in the logs:
>
>2017-07-27 11:55:57 AEST [21432]: [979-1] user=,db=,client=
>(0:0)LOG:  restored log file "000C005A00AA" from
>archive
>2017-07-27 11:55:58 AEST [21432]: [980-1] user=,db=,client=
>(0:0)LOG:  restored log file "000C005A00AB" from
>archive
>2017-07-27 11:55:58 AEST [21432]: [981-1] user=,db=,client=
>(0:0)LOG:  restored log file "000C005A00AC" from
>archive
>2017-07-27 11:55:59 AEST [21432]: [982-1] user=,db=,client=
>(0:0)LOG:  restored log file "000C005A00AD" from
>archive
>2017-07-27 11:55:59 AEST [21432]: [983-1] user=,db=,client=
>(0:0)LOG:  restored log file "000C005A00AE" from
>archive
>2017-07-27 11:56:00 AEST [21432]: [984-1] user=,db=,client=
>(0:0)LOG:  restored log file "000C005A00AF" from
>archive
>2017-07-27 11:56:00 AEST [21432]: [985-1] user=,db=,client=
>(0:0)LOG:  restored log file "000C005A00B0" from
>archive
>2017-07-27 11:56:01 AEST [21432]: [986-1] user=,db=,client=
>(0:0)LOG:  restored log file "000C005A00B1" from
>archive
>2017-07-27 11:56:01 AEST [21432]: [987-1] user=,db=,client=
>(0:0)LOG:  restored log file "000C005A00B2" from
>archive
>2017-07-27 11:56:02 AEST [21432]: [988-1] user=,db=,client=
>(0:0)LOG:  restored log file "000C005A00B3" from
>archive
>2017-07-27 11:56:02 AEST [21432]: [989-1] user=,db=,client=
>(0:0)LOG:  restored log file "000C005A00B4" from
>archive
>2017-07-27 11:56:03 AEST [21432]: [990-1] user=,db=,client=
>(0:0)LOG:  restored log file "000C005A00B5" from
>archive
>scp: /archive/xlog//000C005A00B6: No such file or directory
>2017-07-27 11:56:03 AEST [46191]: [1-1] user=,db=,client=
>(0:0)LOG:  started streaming WAL from primary at 5A/B500 on
>timeline 12
>2017-07-27 11:56:03 AEST [46191]: [2-1] user=,db=,client=
>(0:XX000)FATAL:  could not receive data from WAL stream: ERROR:
>requested WAL segment 000C005A00B5 has already been
>removed
>
>scp: /archive/xlog//000D.history: No such file or directory
>scp: /archive/xlog//000C005A00B6: No such file or directory
>2017-07-27 11:56:04 AEST [46203]: [1-1] user=,db=,client=
>(0:0)LOG:  started streaming WAL from primary at 5A/B500 on
>timeline 12
>2017-07-27 11:56:04 AEST [46203]: [2-1] user=,db=,client=
>(0:XX000)FATAL:  could not receive data from WAL stream: ERROR:
>requested WAL segment 000C005A00B5 has already been
>removed
>
>This will loop indefinitely. At this stage the master reports no
>connected
>standbys in pg_stat_replication, and the standby has no running WAL
>receiver process.
>
>This can be 'fixed' by running pg_switch_xlog() on the master, at which
>time a connection is seen from the standby and the logs show the
>following:
>
>scp: /archive/xlog//000D.history: No such file or directory
>2017-07-27 12:03:19 AEST [21432]: [1029-1] user=,db=,client= 
>(0:0)LOG:
> restored log file "000C005A00B5" from archive
>scp: /archive/xlog//000C005A00B6: No such file or directory
>2017-07-27 12:03:19 AEST [63141]: [1-1] user=,db=,client= 
>(0:0)LOG:
> started streaming WAL from primary at 5A/B500 on timeline 12
>2017-07-27 12:03:19 AEST [63141]: [2-1] user=,db=,client= 
>(0:XX000)FATAL:
> could not receive data from WAL stream: ERROR:  requested WAL segment
>000C005A00B5 has already been removed
>
>scp: /archive/xlog//000D.history: No such file or directory
>2017-07-27 12:03:24 AEST [21432]: [1030-1] user=,db=,client= 
>(0:0)LOG:
> restored log file "000C005A00B5" from archive
>2017-07-27 12:03:24 AEST [21432]: [1031-1] user=,db=,client= 
>(0:0)LOG:
> restored log file "000C005A00B6" from archive
>scp: /archiv

[GENERAL] Indexes being ignored after upgrade to 9.5

2017-07-26 Thread Nick Brennan
Hi,

We have recently promoted our Prod DB slave (2TB) to migrate to new
hardware, and upgraded from v9.2.9.21 to 9.5.1.6 using pg_upgrade.


The upgrade went without incident and we have been running for a week, but
the optimizer is ignoring indexes on 2 of our largest partitioned tables
causing very slow response times.


The indexes are B-tree indexes on BIGINT columns, which the optimizer used
to answer queries with millisecond response times on 9.2. Post-upgrade, the
queries do sequential scans and do not use the indexes unless we force them.


We've added duplicate indexes and analyzing, however the new indexes are
still ignored unless we force using enable_seqscan=no or reduce
random_page_cost to 2. The query response times using the new indexes are
still as slow when we do this. Checking pg_stat_user_indexes the number of
tuples returned per idx_scan is far greater after the upgrade than before.
All indexes show valid in pg_indexes.
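
A session-level way to run the experiment described above (table and column names hypothetical):

```sql
EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM events WHERE account_id = 12345;   -- baseline plan

SET enable_seqscan = off;                          -- penalize sequential scans
EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM events WHERE account_id = 12345;   -- does the index get picked now?
RESET enable_seqscan;
```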


We have tried increasing effective_cache_size, but with no effect (the queries
appear to go slower). The DB runs 24x7, so we cannot reindex the tables/
partitions.


Can anyone suggest why this would be happening?


Many thanks

Nick


Re: [GENERAL] Coditional join of query using PostgreSQL

2017-05-15 Thread Nick Dro
This is a join in the middle of a query.
How can I use dynamic SQL in the middle of a query?

On May 15, 2017 20:26, David G. Johnston wrote:
On Mon, May 15, 2017 at 10:02 AM, Nick Dro <postgre...@walla.co.il> wrote:
> Hi, I'm new to PostgreSQL and couldn't find an answer to this situation anywhere. I asked it here: http://stackoverflow.com/questions/43984208/coditional-join-of-query-using-postgresql
> I hope there is a better solution than creating two separate functions :(

Generate the SQL itself in a string, then execute the string. It's called "Dynamic SQL".
https://www.postgresql.org/docs/9.6/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
David J.
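
A sketch of the dynamic-SQL approach David points to, with the join built conditionally inside a PL/pgSQL function (all table and column names hypothetical):

```sql
CREATE OR REPLACE FUNCTION orders_joined(use_suppliers boolean)
RETURNS TABLE (order_id int, partner_name text)
LANGUAGE plpgsql AS $$
DECLARE
  q text := 'SELECT o.id, p.name FROM orders o JOIN ';
BEGIN
  IF use_suppliers THEN
    q := q || 'suppliers p ON p.id = o.supplier_id';
  ELSE
    q := q || 'customers p ON p.id = o.customer_id';
  END IF;
  RETURN QUERY EXECUTE q;  -- assemble the string, then execute it
END;
$$;
```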



[GENERAL] Coditional join of query using PostgreSQL

2017-05-15 Thread Nick Dro

Hi, I'm new to PostgreSQL and couldn't find an answer to this situation anywhere. I asked it here: http://stackoverflow.com/questions/43984208/coditional-join-of-query-using-postgresql
I hope there is a better solution than creating two separate functions :(


Re: [GENERAL] Streaming Replication Without Downtime

2017-02-17 Thread Gunnar "Nick" Bluth
(sorry for the top post, mobile device)
What you're looking for is pg_basebackup with --xlog-method=stream, I guess.

Regards, 

Nick
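
Spelled out, a hedged example invocation (host and paths are placeholders; flags per the 9.x pg_basebackup docs):

```
pg_basebackup -h master.example.com -U replicator -D /var/lib/pgsql/data \
    -X stream -P -R
# -X stream: stream WAL while the base backup runs, so no segment is missed
# -P: show progress; -R: write a minimal recovery.conf for the standby
```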


On 17 February 2017 at 14:06:36 MEZ, Gabriel Ortiz Lour
<ortiz.ad...@gmail.com> wrote:
>Hi all,
>I've been searching for a way to initialize a new Hot Standby node with
>Streaming Replication without the need to stop or even restart the
>master.
>  Of course the master is already with the needed SR configs.
>
> I know I have to use pg_start_backup/pg_stop_backup, but I'd like some
>tips, or a link to some tutorial, with the order of the steps.
>
>  I assume it will be something like:
>
>-  configure Slave for SR
>-  pg_start_backup()
>-  rsync PGDATA to slave
>-  start PG on the slave
>-  pg_stop_backup()
>
>  Is anything in my thinking wrong?
>
>Thanks in advance,
>Gabriel

-- 
This message was sent from my Android mobile phone with K-9 Mail.

Re: [GENERAL] pglogical cascading replication (chaining replication)

2016-07-19 Thread Nick Babadzhanian
The solution was found thanks to Petr Jelinek from 2ndQ.

> Cascading wasn't much of a priority so far. 
> Currently you have to create the origin manually using 
> pg_replication_origin_create(). 
> I plan to make this work seamlessly in the future release.

So what needs to be done is:

on p2:

select * from pg_replication_origin;

will show all origins on p2, find the origin for p1;

on p3:

select pg_replication_origin_create('[origin name]');

Discussed here:

https://github.com/2ndQuadrant/pglogical/issues/23

Please ignore previous message;

Regards,
Nick.






Re: [GENERAL] Merging timeseries in postgres

2016-07-14 Thread Nick Babadzhanian
Nevermind, I misunderstood your question.

The answer is an outer join, and if you want the exact output you provided,
you can use the following clause:

coalesce(dx, dx1) as date

Is there any reason why these are two different tables? I'd consider changing
the data structure.
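
Putting those two pieces together, a sketch of the view over the tables Tim posted:

```sql
-- The full outer join keeps rows from both sides; coalesce merges the
-- two date columns into the single time index Tim wants.
CREATE VIEW merged AS
SELECT coalesce(dx, dx1) AS date, n, nx1
FROM test
FULL OUTER JOIN test1 ON test.dx = test1.dx1
ORDER BY 1;
```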

- Original Message -
From: "Tim Smith" 
To: "pgsql-general" 
Sent: Thursday, July 14, 2016 2:56:19 PM
Subject: [GENERAL] Merging timeseries in postgres

Hi,

I've got a bit of query-writers block ! I've tried various join styles
but can't get it to do what I want to achieve.

Assume I have a bunch of time-series tables :

create table test(dx date,n numeric);
create table test1(dx1 date,nx1 numeric);
insert into test values('2000-01-01','0.001');
insert into test1 values('2002-01-02','0.002');
insert into test1 values('2003-01-03','0.002');

What I want to do is create a view that merges these together with
time as the index, i.e the output would look like :


2000-01-01  0.001  (null)
2002-01-02  (null)   0.002
2003-01-03  (null)   0.003

I can't quite figure out how to keep the index independent and make a
clean join, typical outer join constructs end up with results like :

 dx | nx |    dx1     |  nx1
----+----+------------+-------
    |    | 2002-01-02 | 0.001
    |    | 2003-01-02 | 0.002
(2 rows)

     dx     |  nx   |    dx1     |  nx1
------------+-------+------------+-------
 2000-01-02 | 0.005 |            |
            |       | 2002-01-02 | 0.001
            |       | 2003-01-02 | 0.002


Which isn't very pretty and doesn't really achieve what I want.

As I said "sql-writers block !"   ;-(

Tim




Re: [GENERAL] Merging timeseries in postgres

2016-07-14 Thread Nick Babadzhanian
What exactly is wrong with the following query?

select
dx date,
nx,
nx1
from
test t
join test1 t1 on t.dx=t1.dx1
;



- Original Message -
From: "Tim Smith" 
To: "pgsql-general" 
Sent: Thursday, July 14, 2016 2:56:19 PM
Subject: [GENERAL] Merging timeseries in postgres

Hi,

I've got a bit of query-writers block ! I've tried various join styles
but can't get it to do what I want to achieve.

Assume I have a bunch of time-series tables :

create table test(dx date,n numeric);
create table test1(dx1 date,nx1 numeric);
insert into test values('2000-01-01','0.001');
insert into test1 values('2002-01-02','0.002');
insert into test1 values('2003-01-03','0.002');

What I want to do is create a view that merges these together with
time as the index, i.e the output would look like :


2000-01-01  0.001  (null)
2002-01-02  (null)   0.002
2003-01-03  (null)   0.003

I can't quite figure out how to keep the index independent and make a
clean join, typical outer join constructs end up with results like :

 dx | nx |    dx1     |  nx1
----+----+------------+-------
    |    | 2002-01-02 | 0.001
    |    | 2003-01-02 | 0.002
(2 rows)

     dx     |  nx   |    dx1     |  nx1
------------+-------+------------+-------
 2000-01-02 | 0.005 |            |
            |       | 2002-01-02 | 0.001
            |       | 2003-01-02 | 0.002


Which isn't very pretty and doesn't really achieve what I want.

As I said "sql-writers block !"   ;-(

Tim




Re: [GENERAL] pglogical cascading replication (chaining replication)

2016-07-14 Thread Nick Babadzhanian
I already tried to contact them, but no luck so far.

Although it seems to me that the change-set is actually forwarded; it's the
decoding that is failing on the receiving host. Check the log output for p2 and
p3 in my previous message.

Regards,
Nick.

- Original Message -
From: "Joshua D. Drake" <j...@commandprompt.com>
To: "Nick Babadzhanian" <n...@cobra.ru>, "pgsql-general" 
<pgsql-general@postgresql.org>
Sent: Wednesday, July 13, 2016 7:16:30 PM
Subject: Re: [GENERAL] pglogical cascading replication (chaining replication)

On 07/12/2016 07:20 AM, Nick Babadzhanian wrote:
> I apologize if this is the wrong place to ask the question.
>
> A quote from pglogical FAQ:
>
>> Q. Does pglogical support cascaded replication?
>> Subscribers can be configured as publishers as well thus cascaded 
>> replication can be achieved
>> by forwarding/chaining (again no failover though).
>
> The only mentions of forwarding on documentation page are:
>
>> Cascading replication is implemented in the form of changeset forwarding.
>
>> forward_origins - array of origin names to forward, currently only supported 
>> values are empty
>> array meaning don't forward any changes that didn't originate on provider 
>> node, or "{all}"
>> which means replicate all changes no matter what is their origin, default is 
>> "{all}"
>
> So my question is how to forward changesets using pglogical?

That's a great question. I have tried to figure out that information as 
well. Unfortunately it appears that it is locked up in 2ndQuadrant's 
head. I suggest contacting them.

Sincerely,

JD

-- 
Command Prompt, Inc.  http://the.postgres.company/
 +1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.




Re: [GENERAL] pglogical cascading replication (chaining replication)

2016-07-13 Thread Nick Babadzhanian
After checking logs I noticed this error on p3-node:

>ERROR:  cache lookup failed for replication origin 
>'pgl_test_node_p1_provider_sub_p1_t06410f8'

pgl_test_node_p1_provider_sub_p1_t06410f8 is origin for p1-node.

Here are the logs from all three servers (this happens every time I insert
something into the p1 table):

p1:

>LOG:  starting pglogical supervisor
>LOG:  starting pglogical database manager for database test
>LOG:  starting logical decoding for slot 
>"pgl_test_node_p1_provider_sub_p1_t06410f8"
>DETAIL:  streaming transactions committing after 0/3BDBFD0, reading WAL from 
>0/3BDBFD0
>LOG:  logical decoding found consistent point at 0/3BDBFD0
>DETAIL:  There are no running transactions.

p2:

>LOG:  starting pglogical supervisor
>LOG:  starting pglogical database manager for database test
>LOG:  starting apply for subscription sub_p1_to_p2_insert_only
>LOG:  starting logical decoding for slot 
>"pgl_test_node_p2_9face77_sub_p2_t6fd19a3"
>DETAIL:  streaming transactions committing after 0/35DD958, reading WAL from 
>0/35DD958
>LOG:  logical decoding found consistent point at 0/35DD958
>DETAIL:  There are no running transactions.
>LOG:  starting logical decoding for slot 
>"pgl_test_node_p2_9face77_sub_p2_t6fd19a3"
>DETAIL:  streaming transactions committing after 0/35DDA38, reading WAL from 
>0/35DDA00
>LOG:  logical decoding found consistent point at 0/35DDA00
>DETAIL:  There are no running transactions.
>LOG:  could not receive data from client: Connection reset by peer
>LOG:  unexpected EOF on standby connection
>LOG:  starting logical decoding for slot 
>"pgl_test_node_p2_9face77_sub_p2_t6fd19a3"
>DETAIL:  streaming transactions committing after 0/35DDA38, reading WAL from 
>0/35DDA00
>LOG:  logical decoding found consistent point at 0/35DDA00
>DETAIL:  There are no running transactions.
>LOG:  could not receive data from client: Connection reset by peer
>LOG:  unexpected EOF on standby connection 

p3:

>LOG:  starting pglogical supervisor
>LOG:  starting pglogical database manager for database test
>LOG:  starting apply for subscription sub_p2_to_p3_insert_only
>ERROR:  cache lookup failed for replication origin 
>'pgl_test_node_p1_provider_sub_p1_t06410f8'
>LOG:  worker process: pglogical apply 13294:1876007473 (PID 14180) exited with 
>exit code 1
>LOG:  starting apply for subscription sub_p2_to_p3_insert_only
>ERROR:  cache lookup failed for replication origin 
>'pgl_test_node_p1_provider_sub_p1_t06410f8'
>LOG:  worker process: pglogical apply 13294:1876007473 (PID 14189) exited with 
>exit code 1 

- Original Message -
From: "Nick Babadzhanian" <n...@cobra.ru>
To: "pgsql-general" <pgsql-general@postgresql.org>
Sent: Tuesday, July 12, 2016 5:20:59 PM
Subject: pglogical cascading replication (chaining replication)

I apologize if this is the wrong place to ask the question.

A quote from pglogical FAQ:

> Q. Does pglogical support cascaded replication?
> Subscribers can be configured as publishers as well thus cascaded replication 
> can be achieved
> by forwarding/chaining (again no failover though).

The only mentions of forwarding on documentation page are:

> Cascading replication is implemented in the form of changeset forwarding.

> forward_origins - array of origin names to forward, currently only supported 
> values are empty 
> array meaning don't forward any changes that didn't originate on provider 
> node, or "{all}" 
> which means replicate all changes no matter what is their origin, default is 
> "{all}"

So my question is how to forward changesets using pglogical?

Here's my setup:

There are 3 identical CentOS 7 servers: p1, p2 and p3. Postgres version is 
9.5.3.

p1:

select pglogical.create_node
(
node_name := 'node_p1_provider',
dsn := 'host=192.168.1.101 port=5432 dbname=test'
);

select pglogical.replication_set_add_all_tables('default_insert_only', 
array['public']);

p2:

select pglogical.create_node(
node_name := 'node_p2_provider_and_subscriber',
dsn := 'host=192.168.1.102 port=5432 dbname=test'
);

select pglogical.replication_set_add_all_tables('default_insert_only', 
array['public']);

select pglogical.create_subscription
(
subscription_name => 'sub_p1_to_p2_insert_only', 
provider_dsn => 'host=192.168.1.101 port=5432 dbname=test', 
replication_sets => array['default_insert_only']
);

p3:

select pglogical.create_node(
node_name := 'node_p3_subscriber',
dsn := 'host=192.168.1.103 port=5432 dbname=test'
);

select pglogical.create_subscription
(
subscription_name => 'sub_p2_to_p3_insert_only', 
provider_dsn => 'host=192.168.1.102 port=5432 dbname=test', 
replication_sets => array['default_insert_only']
);

Result:
p1:

insert into 

Re: [GENERAL] Replication with non-read-only standby.

2016-07-12 Thread Nick Babadzhanian
Thanks.

I ended up using pglogical, since I don't really need bi-directional
replication and the docs for UDR suggest using pglogical instead.
I ran into a problem there too, but pglogical seems to be the answer.

Regards,
Nick.

- Original Message -
From: "Sylvain Marechal" <marechal.sylva...@gmail.com>
To: "Nick Babadzhanian" <n...@cobra.ru>
Cc: "pgsql-general" <pgsql-general@postgresql.org>
Sent: Wednesday, July 6, 2016 11:00:05 PM
Subject: Re: [GENERAL] Replication with non-read-only standby.

2016-06-30 15:15 GMT+02:00 Nick Babadzhanian <n...@cobra.ru>:

> Setup:
> 2 PostgreSQL servers are geographically spread. The first one is used for
> an application that gathers data. It is connected to the second database
> that is used to process the said data. Connection is not very stable nor is
> it fast, so using Bidirectional replication is not an option. It is OK if
> data is shipped in batches rather than streamed.
>
> Question:
> Is there a way to make the standby server non-read-only, so that it can
> keep getting updates (mostly inserts) from the 'master', but users are able
> to edit the data stored on 'slave'? Is there some alternative solution to
> this?
>
> Regards,
> Nick.
>
> Hi Nick,

sorry for this silly question, but I am not sure I understand why BDR is
not an option.
As far as I know, it was designed to handle such cases.

My 2 cents,
Sylvain




[GENERAL] pglogical cascading replication (chaining replication)

2016-07-12 Thread Nick Babadzhanian
I apologize if this is the wrong place to ask the question.

A quote from pglogical FAQ:

> Q. Does pglogical support cascaded replication?
> Subscribers can be configured as publishers as well thus cascaded replication 
> can be achieved
> by forwarding/chaining (again no failover though).

The only mentions of forwarding on documentation page are:

> Cascading replication is implemented in the form of changeset forwarding.

> forward_origins - array of origin names to forward, currently only supported 
> values are empty 
> array meaning don't forward any changes that didn't originate on provider 
> node, or "{all}" 
> which means replicate all changes no matter what is their origin, default is 
> "{all}"

So my question is how to forward changesets using pglogical?

Here's my setup:

There are 3 identical CentOS 7 servers: p1, p2 and p3. Postgres version is 
9.5.3.

p1:

select pglogical.create_node
(
node_name := 'node_p1_provider',
dsn := 'host=192.168.1.101 port=5432 dbname=test'
);

select pglogical.replication_set_add_all_tables('default_insert_only', 
array['public']);

p2:

select pglogical.create_node(
node_name := 'node_p2_provider_and_subscriber',
dsn := 'host=192.168.1.102 port=5432 dbname=test'
);

select pglogical.replication_set_add_all_tables('default_insert_only', 
array['public']);

select pglogical.create_subscription
(
subscription_name => 'sub_p1_to_p2_insert_only', 
provider_dsn => 'host=192.168.1.101 port=5432 dbname=test', 
replication_sets => array['default_insert_only']
);

p3:

select pglogical.create_node(
node_name := 'node_p3_subscriber',
dsn := 'host=192.168.1.103 port=5432 dbname=test'
);

select pglogical.create_subscription
(
subscription_name => 'sub_p2_to_p3_insert_only', 
provider_dsn => 'host=192.168.1.102 port=5432 dbname=test', 
replication_sets => array['default_insert_only']
);

Result:
p1:

insert into public.test (col1) values (1);
select count(1) from public.test; -- returns 1;

p2:
insert into public.test (col1) values (2);
select count(1) from public.test; -- returns 2;

p3:
select count(1) from public.test; -- returns 1;

Expected:
p3 receives all inserts, thus the count on p3 should be 2 (same as on p2).




[GENERAL] Replication with non-read-only standby.

2016-06-30 Thread Nick Babadzhanian
Setup:
2 PostgreSQL servers are geographically spread. The first one is used for an 
application that gathers data. It is connected to the second database that is 
used to process the said data. Connection is not very stable nor is it fast, so 
using Bidirectional replication is not an option. It is OK if data is shipped 
in batches rather than streamed.

Question:
Is there a way to make the standby server non-read-only, so that it can keep 
getting updates (mostly inserts) from the 'master', but users are able to edit 
the data stored on 'slave'? Is there some alternative solution to this?

Regards,
Nick.




Re: [GENERAL] Slave claims requested WAL segment already removed - but it wasn't

2016-06-03 Thread Nick Cleaton
On 2 June 2016 at 02:43, Jeff Beck  wrote:
> Hi-
> We have a master (pg 9.4.4 on Ubuntu 14.10) and a slave (pg 9.4.8 on
> Centos 7). During a period of heavy use, the slave began complaining
> that the “requested WAL segment xx has already been removed”. But
> the WAL segment was still on the master. The issue was resolved by
> manually copying the pg_xlog directory over to the slave.
>
> I don’t see any errors on the master log file, or any other messages on
> the slave’s. What happened? How can this be prevented in the future?

I've seen this once. In my case there was a downstream slave of the
slave that I'd forgotten about, and the log entries on the slave were
the result of the slave-of-slave asking the slave for a WAL file that
was not present on the slave.




[GENERAL] Unexpected function behaviour with NULL and/or default NULL parameters

2015-05-06 Thread Gunnar Nick Bluth

Hello,

I'm experiencing odd behaviour with a function I wrote yesterday.

Background: function is supposed to deliver some terms and
conditions from a table; when the locale is found, deliver the
highest version of that, otherwise, deliver the highest version of the
default locale.

CREATE OR REPLACE FUNCTION
public.get_current_tac(userid bigint, sessionid uuid, locale character
varying, OUT current_tac json)
 RETURNS json
 LANGUAGE sql
 IMMUTABLE STRICT SECURITY DEFINER
AS $function$
SELECT json_agg(selected) FROM (
  SELECT * FROM (
SELECT *, 1 AS locale_specific FROM terms_and_conditions WHERE
locale = $3 ORDER BY version DESC LIMIT 1
  ) specific
UNION
  SELECT * FROM (
SELECT *, 0 AS locale_specific FROM terms_and_conditions WHERE
locale = 'default' ORDER BY version DESC LIMIT 1
  ) unspecific
ORDER BY locale_specific DESC
LIMIT 1
) selected;
$function$

This works fine when I give valid userid, sessionid and locale:
=# select
get_current_tac(userid:=38,sessionid:='79993643-ec3c-0359-f603-069b543ce4a8'::uuid,locale:='en');

                               get_current_tac
------------------------------------------------------------------------------
 [{"locale":"default","version":1,"updated_at":"2015-05-05T14:04:17.246684+02:00","terms_and_conditions":"Hier kommen die AGB rein","locale_specific":0}]
(1 row)


Then I realised I don't really need those first two parameters and
applied default values (NULL). As $1 and $2 are not used, it should
still work, right? Well, it returns one empty (!) row (behaviour is
the same when declaring NULL as default values in the function header,
can't show as there's another version with only locale as parameter):

# select get_current_tac(userid:=null, sessionid:=null::uuid,
locale:='en');
 get_current_tac
-----------------
 NULL
(1 row)

I'm completely puzzled by this behaviour; at least it should cast the
locale_specific value into the JSON output, shouldn't it?

What am I missing? Any hints appreciated!
-- 
Gunnar Nick Bluth
RHCE/SCLA

Mobil +49 172 8853339
Email: gunnar.bl...@pro-open.de
_
In 1984 mainstream users were choosing VMS over UNIX.
Ten years later they are choosing Windows over UNIX.
What part of that message aren't you getting? - Tom Payne



Re: [GENERAL] Unexpected function behaviour with NULL and/or default NULL parameters

2015-05-06 Thread Gunnar Nick Bluth

On 06.05.2015 at 09:57, David G. Johnston wrote:

Ooops, accidentaly replied to David directly...

 On Wednesday, May 6, 2015, Gunnar Nick Bluth gunnar.bl...@pro-open.de wrote:
 
 CREATE OR REPLACE FUNCTION
 public.get_current_tac(userid bigint, sessionid uuid, locale character varying, OUT current_tac json)
  RETURNS json
  LANGUAGE sql
  IMMUTABLE STRICT SECURITY DEFINER
 AS $function$
 SELECT json_agg(selected) FROM (
   SELECT * FROM (
     SELECT *, 1 AS locale_specific FROM terms_and_conditions
     WHERE locale = $3 ORDER BY version DESC LIMIT 1
   ) specific
 UNION
   SELECT * FROM (
     SELECT *, 0 AS locale_specific FROM terms_and_conditions
     WHERE locale = 'default' ORDER BY version DESC LIMIT 1
   ) unspecific
 ORDER BY locale_specific DESC
 LIMIT 1
 ) selected;
 $function$
 
 
 Also, I don't know why you would need security definer but
 defining

Nothing in the DB is accessible to the apache user directly, thus the
security definer.

 the functions as being immutable is a flat out lie.  Combined
 with your misuse of strict I would suggest you read up on
 function creation and usage in the documentation.

Well, you're of course right... I wrote a bunch of really immutable strict
functions these last days, so that just stuck. Always think before
writing... Without the two, the function behaves as expected. I'll
review the other functions as well now!
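
For the archives, the decisive part is STRICT: a STRICT function returns NULL immediately when any argument is NULL, so the body never runs at all. A minimal demonstration (function names made up):

```sql
CREATE FUNCTION echo_strict(t text) RETURNS text
LANGUAGE sql STRICT AS $$ SELECT coalesce(t, 'body ran') $$;

CREATE FUNCTION echo_lax(t text) RETURNS text
LANGUAGE sql AS $$ SELECT coalesce(t, 'body ran') $$;

SELECT echo_strict(NULL);  -- NULL: the body is skipped entirely
SELECT echo_lax(NULL);     -- 'body ran'
```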


 It also looks odd to define the OUT parameter along with RETURNS
 json - unless it is giving you some kind of output column name
 benefit that I cannot remember at the moment.

That's how it was put into my vim when doing an \ef...

Cheers,
-- 
Gunnar Nick Bluth
RHCE/SCLA

Mobil +49 172 8853339
Email: gunnar.bl...@pro-open.de
_
In 1984 mainstream users were choosing VMS over UNIX.
Ten years later they are choosing Windows over UNIX.
What part of that message aren't you getting? - Tom Payne



Re: [GENERAL] Collation problem?

2015-04-26 Thread Gunnar Nick Bluth

On 26.04.2015 at 10:02, Bjørn T Johansen wrote:
 My databases looks like this...:
 
 
                                 List of databases
   Name  | Owner | Encoding |   Collate   |    Ctype    | Access privileges
 --------+-------+----------+-------------+-------------+-------------------
  dbname | owner | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
  ...
 
 
 And my problem is that I am using Norwegian in some tables, and when
 using ORDER BY the sort order is not correct for the Norwegian
 letters. So my question is whether it is possible to get the correct
 sort order without recreating all my databases or reinitializing
 PostgreSQL?
 
Sure (i.e., if you're on a halfway recent version):

ALTER TABLE yourtable ALTER COLUMN yourcolumn TYPE text COLLATE "collation";
(the column's existing type has to be restated alongside the new collation)


http://www.postgresql.org/docs/9.4/static/sql-altertable.html
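
As an aside, the collation can also be applied per query, without altering the column (table, column and collation names here are illustrative; available collation names vary by platform):

```sql
SELECT name FROM people ORDER BY name COLLATE "nb_NO.utf8";
```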

-- 
Gunnar Nick Bluth
RHCE/SCLA

Mobil +49 172 8853339
Email: gunnar.bl...@pro-open.de
_
In 1984 mainstream users were choosing VMS over UNIX.
Ten years later they are choosing Windows over UNIX.
What part of that message aren't you getting? - Tom Payne



Re: [GENERAL] FK check implementation

2014-10-13 Thread Nick Barnes
On Sat, Oct 11, 2014 at 5:01 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 10/10/2014 10:41 AM, Nick Barnes wrote:


 I understand why the FK insert needs to lock on the PK row. But why is
 the PK delete trying to lock the FK row? If it finds one, won't the
 delete fail anyway? If it doesn't find one, what is there to lock?


 I would say this has to do with setting DEFERRABLE on a constraint.


Any guesses why this might be? I would have thought that by this point,
where we're actually performing the check, anything related to deferring
the check would be behind us.

And even if we do require a lock, why FOR KEY SHARE? As I understand it,
this won't lock the referencing field, which should be the only thing in
the FK relation that we're interested in.


[GENERAL] FK check implementation

2014-10-10 Thread Nick Barnes
I'm looking at the code behind the foreign key checks in ri_triggers.c, and
something's got me a little confused.

In both cases (FK insert/update checking the PK, and PK update/delete
checking the FK) the check is done with a SELECT ... FOR KEY SHARE.

This makes perfect sense for PK checks, but in the FK check, it seems
pointless at best; if it actually manages to find something to lock, it
will fail the check and error out moments later. And in any case, I don't
see how the key fields in the FK relation (to which the KEY SHARE lock
applies) are even relevant to the constraint in question.

What am I missing?


Re: [GENERAL] FK check implementation

2014-10-10 Thread Nick Barnes
On Sat, Oct 11, 2014 at 1:32 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Nick Barnes nickbarne...@gmail.com writes:
  I'm looking at the code behind the foreign key checks in ri_triggers.c,
 and
  something's got me a little confused.

  In both cases (FK insert/update checking the PK, and PK update/delete
  checking the FK) the check is done with a SELECT ... FOR KEY SHARE.

  This makes perfect sense for PK checks, but in the FK check, it seems
  pointless at best; if it actually manages to find something to lock, it
  will fail the check and error out moments later. And in any case, I don't
  see how the key fields in the FK relation (to which the KEY SHARE lock
  applies) are even relevant to the constraint in question.

  What am I missing?

 Race conditions.

 Example case: you're trying to delete the row for PK 'foo', while
 concurrently somebody is inserting a row that references foo.  With
 no locking, neither of you will see the other action, hence both
 will conclude their action is ok and commit.  Presto: FK violation.

 The point of the FOR SHARE lock (which also goes along with some
 cute games played with the query's snapshot) is to make sure there
 aren't uncommitted changes that would result in an FK violation.
 We could possibly have done it another way but that would just have
 resulted in two generally-similar mechanisms.

 regards, tom lane


I understand why the FK insert needs to lock on the PK row. But why is the
PK delete trying to lock the FK row? If it finds one, won't the delete fail
anyway? If it doesn't find one, what is there to lock?
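
A sequential sketch of the race Tom describes, and how the KEY SHARE lock closes it (schema hypothetical):

```sql
-- Assume: parent(id int primary key);
--         child(parent_id int references parent(id));

-- Session 1:
BEGIN;
INSERT INTO child (parent_id) VALUES (1);
-- The FK trigger runs roughly:
--   SELECT 1 FROM parent WHERE id = 1 FOR KEY SHARE;
-- which blocks a concurrent delete of that parent row until commit.

-- Session 2, before session 1 commits:
BEGIN;
DELETE FROM parent WHERE id = 1;
-- waits on session 1's lock instead of deleting a row that an
-- uncommitted child already references; without the locks both
-- transactions could commit and leave an orphaned child row.
```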


Re: [GENERAL] table versioning approach (not auditing)

2014-09-29 Thread Nick Guenther


On September 29, 2014 11:08:55 AM EDT, Jonathan Vanasco postg...@2xlp.com 
wrote:

- use a transaction log.  every write session gets logged into the
transaction table (serial, timestamp, user_id).  all updates to the
recorded tables include the transaction's serial.  then there is a
transactions table, that is just transaction_serial ,  object_id ,
object_action.  

A newbie tangent question: how do you access the transaction serial? Is it 
txid_current() as listed in 
http://www.postgresql.org/docs/9.3/static/functions-info.html?

And how do you actually make use of that information? I know from Bruce 
Momjian's excellent MVCC talk http://momjian.us/main/writings/pgsql/mvcc.pdf 
that postgres internally has a secret txid column on each row; can you somehow 
query on the secret column? And does your implementation worry about multiple 
timelines? 

My use case is dynamically allocated replication. Broadly, my algorithm is that 
for each client
1) download a full copy of the current table
2) keep the connection open and send deltas (which are just inserts and 
deletes, for me)

I need 2 to begin *as if immediately* after 1. txids sound like they are
exactly what I need, but without knowing how to handle them I fudged it by
opening a query for 1 and a query for 2 immediately after each other, so that
they should be plugged into the same txid before reading them. There's
definitely a race condition that will show under load, though. I think the
correct algorithm is:

1) ask the current txid X
2) start buffering deltas with txid > X
3) download the table as of X
4) download the buffer of deltas and listen for future ones
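
A sketch of how steps 1 and 3 might share one consistent point using the txid functions mentioned above (table name hypothetical, and this is only one way to do it):

```sql
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT txid_current_snapshot();  -- X; txids at or beyond its xmax are "future"
SELECT * FROM watched_table;     -- step 3: this copy is consistent with X
COMMIT;
-- If the delta-logging trigger stamps each row with txid_current(),
-- the client can discard any delta already visible in X.
```
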
-- 




Re: [GENERAL] Where art thou, plpython2.dll? (EDB installer)

2014-09-23 Thread Nick Guenther



Quoting Seref Arikan serefari...@gmail.com:


On Tue, Sep 23, 2014 at 9:36 AM, Craig Ringer cr...@2ndquadrant.com wrote:


Hi all

I've had some issues with how the procedural languages are packaged in
the Windows installer for a while, but I was very surprised to see that
plpython2 appears to be entirely absent in 9.3.

It doesn't seem to be provided via EDB's StackBuilder app either.

What's going on? It looks like it was dropped in 9.1.



I've gone through the same journey myself. Tried to juggle dlls etc.[...]




I've struggled with plpython on OpenBSD 5.5-amd64 as well. Could it be  
related? Maybe the amount of dependencies python pulls in gets  
overwhelming and things break?



$ psql -h localhost -d postgres
psql (9.3.2)
Type "help" for help.
postgres=# create language plpython2u;
ERROR:  could not load library "/usr/local/lib/postgresql/plpython2.so":
dlopen (/usr/local/lib/postgresql/plpython2.so) failed: Cannot load
specified object

postgres=# create language plperl;
CREATE LANGUAGE
postgres=#


This is strange because /usr/local/lib/postgresql/plpython2.so exists.  
Also, perl loads fine.

$ ls -l /usr/local/lib/postgresql/pl*
-rwxr-xr-x  1 root  bin  100948 Jul 31 02:05  
/usr/local/lib/postgresql/plperl.so
-rwxr-xr-x  1 root  bin  181287 Jul 31 02:05  
/usr/local/lib/postgresql/plpgsql.so
-rwxr-xr-x  1 root  bin  137951 Mar  4 12:45  
/usr/local/lib/postgresql/plpython2.so



I uninstalled all the postgres subpackages and rebuilt them from  
ports, and ended up with an identical plpython2.so, which has these  
checksums:
SHA256 (/usr/local/lib/postgresql/plpython2.so) =  
8c7ff6358d9bf0db342e3aca1762cd7c509075a6803b240291d60a21ca38198b
MD5 (/usr/local/lib/postgresql/plpython2.so) =  
bb6122f27f48f0b3672dbc79cef40eea
SHA1 (/usr/local/lib/postgresql/plpython2.so) =  
4dd79641cbad3f71466648559d74e6b0c4f174a3


Any other OpenBSD users that have run into this, here?

--
Nick Guenther
4B Stats/CS
University of Waterloo








Fwd: Re: [GENERAL] Where art thou, plpython2.dll? (EDB installer)

2014-09-23 Thread Nick Guenther



On September 23, 2014 2:27:29 PM EDT, Adrian Klaver adrian.kla...@aklaver.com wrote:

>On 09/23/2014 11:05 AM, Nick Guenther wrote:
>> Quoting Seref Arikan serefari...@gmail.com:
>>> On Tue, Sep 23, 2014 at 9:36 AM, Craig Ringer cr...@2ndquadrant.com wrote:
>>>> Hi all
>>>>
>>>> I've had some issues with how the procedural languages are packaged in
>>>> the Windows installer for a while, but I was very surprised to see that
>>>> plpython2 appears to be entirely absent in 9.3.
>>>>
>>>> It doesn't seem to be provided via EDB's StackBuilder app either.
>>>>
>>>> What's going on? It looks like it was dropped in 9.1.
>>>
>>> I've gone through the same journey myself. Tried to juggle dlls etc.[...]
>>
>> I've struggled with plpython on OpenBSD 5.5-amd64 as well. Could it be
>> related? Maybe the amount of dependencies python pulls in gets
>> overwhelming and things break?
>>
>> $ psql -h localhost -d postgres
>> psql (9.3.2)
>> Type "help" for help.
>> postgres=# create language plpython2u;
>> ERROR:  could not load library "/usr/local/lib/postgresql/plpython2.so":
>> dlopen (/usr/local/lib/postgresql/plpython2.so) failed: Cannot load
>> specified object
>> postgres=# create language plperl;
>> CREATE LANGUAGE
>> postgres=#
>
>What user are you running as?
>plpythonu is an untrusted language so it can only be installed by a
>superuser.


My user account, which is also the one that ran initdb and the one that
ran postgres. I discovered this when trying to work on a project I'd
started on Arch Linux, where everything is peachy.



Also languages are now extensions and should be installed using CREATE
EXTENSION:

http://www.postgresql.org/docs/9.3/interactive/sql-createlanguage.html


Ah, thanks for the tip. I will change over to that. But CREATE  
EXTENSION gives the identical error:


postgres=# create extension plpython2u;
ERROR:  could not load library "/usr/local/lib/postgresql/plpython2.so":
dlopen (/usr/local/lib/postgresql/plpython2.so) failed: Cannot load
specified object


On September 23, 2014 2:25:43 PM EDT, Alan Hodgson ahodg...@simkin.ca wrote:

>I'm not a current BSD user, but I'd suggest ldd'ing that library and
>seeing if it has any shared libraries it can't find, or if it references
>shared libraries that are not in whatever OpenBSD uses for a library
>search path (i.e. the equivalent of Linux's ld.so.conf), or if any of
>those shared libraries have permissions issues.


Ah, good point.

It turned out that the problem DLL was libpthread. I discovered this  
using LD_DEBUG and worked around it with LD_PRELOAD.
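
For searchability, the workaround amounted to something like the following (the exact libpthread path and version are system-specific, so treat them as placeholders):

```
$ LD_PRELOAD=/usr/lib/libpthread.so postgres -D ./data
```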



Full output is below, so that hopefully this becomes searchable:

$ uname -a
OpenBSD localhost.localdomain 5.5 GENERIC.MP#315 amd64

$ initdb data
The files belonging to this database system will be owned by user nguenthe.
This user must also own the server process.

The database cluster will be initialized with locale C.
The default database encoding has accordingly been set to SQL_ASCII.
The default text search configuration will be set to english.

Data page checksums are disabled.

creating directory data ... ok
creating subdirectories ... ok
selecting default max_connections ... 40
selecting default shared_buffers ... 128MB
creating configuration files ... ok
creating template1 database in data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... not supported on this platform
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling trust authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

postgres -D data
or
pg_ctl -D data -l logfile start



$ LD_DEBUG=1 postgres -D ./data
rtld loading: 'postgres'
exe load offset:  0x1c200040
 flags postgres = 0x0
head postgres
obj postgres has postgres as head
examining: 'postgres'
loading: libasn1.so.20.0 required by postgres
 flags /usr/lib/libasn1.so.20.0 = 0x0
obj /usr/lib/libasn1.so.20.0 has postgres as head
loading: libroken.so.1.0 required by postgres
 flags /usr/lib/libroken.so.1.0 = 0x0
obj /usr/lib/libroken.so.1.0 has postgres as head
loading: libwind.so.1.0 required by postgres
 flags /usr/lib/libwind.so.1.0 = 0x0
obj /usr/lib/libwind.so.1.0 has postgres as head
loading: libc.so.73.1 required by postgres
 flags /usr/lib/libc.so.73.1 = 0x0
obj /usr/lib/libc.so.73.1 has postgres as head
loading: libkrb5.so.20.0 required by postgres
 flags /usr/lib/libkrb5.so.20.0 = 0x0
obj /usr/lib/libkrb5.so.20.0 has postgres as head
loading: libcrypto.so.23.0 required by postgres
 flags /usr/lib/libcrypto.so.23.0 = 0x0
obj /usr/lib/libcrypto.so

Re: [GENERAL] Watching Views

2014-09-18 Thread Nick Guenther


Ah! Your reply was excellent, David. I only found it now, cleaning out  
my inbox. Comments inline!


Quoting David G Johnston david.g.johns...@gmail.com:


Nick Guenther wrote

As you said, attaching the trigger to a view is useless (for
BEFORE/AFTER, which I'm interested in, also only works on statement
level changes, which I would rather not have to deal with). I tried
attaching my trigger to a materialized view and found that postgres
does not support that; as you said, I need to write a trigger on the
source to keep the materialized view and the source in sync. But in
that case I don't think a materialized view helps me at all, since
without triggers on it I can't even have it compute the diffs for me
(and I suspect that internally postgres simply truncates the old table
and refills it, so there would be nothing to hook anyway).

My bottleneck is the size of my database and the network: I want to
take slices of a potentially gigabytes-large database and stream them
out to multiple clients. Thus I need to stream--not poll--for changes.
I think a materialized view would force me to poll for changes, and in
that case I would be doing REFRESH MATERIALIZED VIEW on each poll and
therefore not winning anything over a regualar view. Ideally, when an
update comes in to a parent table I want all views that have sliced it
to be notified; I'm interested in doing dataflow* programming,
essentially. Is there maybe some sort of extension that does
dependency tracking inside of postgres?


While PostgreSQL has materialized view functionality built in the concept is
general and can be done manually.  Instead of the main table having the link
to the FIFO I was thinking you would instead replicate record changes to all
active subset tables and then triggers on those tables would send the
relevant changes out to the world.

Keep in mind you can attach a where clause to your trigger, and I think you
can pass in arguments to it as well.  You should have on trigger per view
attached to the source table - though with good meta-data and some imposed
limits you can probably pool some clients into the same stream.


I actually ended up independently deciding on using triggers.
I spent a long time agonizing over trying to avoid triggers
because they tie me to postgres, but they ended up not being so bad,
and I even think that duplicating the work for MySQL at least should
not be that bad.

The hooks I wrote are in python (don't laugh) and are at  
https://github.com/kousu/modex/tree/databased/src/backend/db/replicate.pysql.

All the hooks do is export {INSERT,UPDATE,DELETE}s to JSON.
So far filtering is only in my frontend, in javascript. I did this because
 http://nytimes.github.io/pourover/
rightly points out that centralizing database slicing means creating
an artificial bottleneck. If you're curious, what I've got so far is
at https://github.com/kousu/modex/blob/databased/src/frontend/tables.js

But now I'm musing on how to do the same in the database.

For context, my use case has several game instances in parallel. So,
each game has many clients which should share a part of the database,
and each client individually further slices the data depending on what
its user wants to investigate. Sharing data between the games and then
having them use their in-javascript triggers to just ignore each other
is possible, and my current implementation supports that, but it would
be a lot better and less memory- and bandwidth-heavy if I could have
the database do the initial "share a part" slice; even better,
slices could be deduplicated, that is, if I could pool clients
onto a single table instead.


Right now, I foresee the final javascript API being something like

```
var DB = Remote_DB.table("buildings")          // create an AST representing
    .where(Column("run_id").eq(this.run_id))   //   select location, kind, cost
    .select("location", "kind", "cost");       //   from buildings where run_id = $1
DB = DB.execute();  // invoke the AST inside of postgres, which should
                    // wire up triggers doing dataflow

var points = DB.where(Column("location")  // create an in-javascript dataflow node
    .within("(x,y)", 500));               // which uses triggers (currently PourOver's
                                          // Events) to watch the source for changes.

var expensive = DB.max("expensive");  // Create another one, which will update
                                      // simultaneously with its sibling when the
                                      // source database changes
```

I would like the AST (and postgres hooks that back it) to have all the  
same methods as the javascript-dataflow part so that changing the  
boundary

Re: [GENERAL] Feature request: temporary schemas

2014-09-14 Thread Nick Guenther


On September 14, 2014 5:01:54 PM EDT, cowwoc cow...@bbs.darktech.org wrote:
Hi,

I'd like to propose the ability to create temporary schemas.

This would facilitate running unit tests, where developers would like
to run
the same creation script for unit tests and production code but do not

What if you ran a temporary database instead? You can use some combination of 
PGHOST, PGPORT, PGUSER and PGDATABASE to pg_ctl start an empty DB and then 
point your unit tests at it.  Then you aren't in danger of wiping out your 
production DB either.
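
A sketch of that flow from the shell (paths and port arbitrary):

```
initdb -D /tmp/test_pgdata
pg_ctl -D /tmp/test_pgdata -o "-p 5499" -w start
PGPORT=5499 createdb unittest_db
# ... run the test suite against port 5499 ...
pg_ctl -D /tmp/test_pgdata -w stop && rm -rf /tmp/test_pgdata
```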




Re: [GENERAL] Feature request: temporary schemas

2014-09-14 Thread Nick Guenther


On September 14, 2014 6:01:15 PM EDT, cowwoc cow...@bbs.darktech.org wrote:

On 14/09/2014 5:52 PM, Nick Guenther [via PostgreSQL] wrote:


 On September 14, 2014 5:01:54 PM EDT, cowwoc wrote:
 Hi,
 
 I'd like to propose the ability to create temporary schemas.
 
 This would facilitate running unit tests, where developers would
like
 to run
 the same creation script for unit tests and production code but do
not

 What if you ran a temporary database instead? You can use some 
 combination of PGHOST, PGPORT, PGUSER and PGDATABASE to pg_ctl
start 
 an empty DB and then point your unit tests at it.  Then you aren't in

 danger of wiping out your production DB either.


Hi Nick,

I don't think this would help for three reasons:

 1. I'm already planning to run unit tests against a separate (but
identical) database than production, so there's no danger of wiping
out the production database.
 2. I need to create a new temporary schema per test, and run 4-10
tests per second. I'm guessing this wouldn't perform well using
pg_ctl.
 3. And finally, I'd like to configure all this through JDBC (pure
Java). pg_ctl would require me to interact with native code.

Well, forget pg_ctl then.  Can you use DROP DATABASE instead?




Re: [GENERAL] change the value of unix_socket_directories , must used -h /xx/xx to use the Unix domain socket

2014-08-16 Thread Nick Guenther


On August 16, 2014 11:41:02 AM EDT, lin jlu...@163.com wrote:
Hi all,
I changed the value of unix_socket_directories in postgresql.conf and
then restarted the database, but now I cannot connect to the database with
psql -d postgres -p 5432; I must give the parameter -h /xx/xx
to use the Unix domain socket.
How can I fix this issue?

 I'll start by saying that your test case is very clear, and thank you for it. 
I am unsure what your goal is, however. I assume you are trying to set up 
parallel postgres processes, for debugging. I've done this recently, for that 
reason.

First thing to point out is that you need only one of -h and -p.  They are 
redundant options, because you only connect to postgres either over TCP (-p) or 
with a unix domain socket (-h).

Second, what you're seeing is necessary. If you change the default, then psql 
doesn't know where to look. However, you can recover the old behaviour with 
shell tricks:
$ alias psql='psql -h /xx/xx'
$ psql -d postgres

(Personally, I wrote a short wrapper script called client.sh which
depth-first searches for a postgres db directory and then runs 'psql -h' with
the first one it finds; equally well you could have this script install an
alias)

Are you perhaps confused about what a unix domain socket is? Why are you 
changing it? This is a decent description of it: 
http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man4/unix.4




Re: [GENERAL] Watching Views

2014-07-23 Thread Nick Guenther




Quoting David G Johnston david.g.johns...@gmail.com:


Nick Guenther wrote

Dear List,

In principle, I am looking for some way to say
```
CREATE VIEW view13131 AS select (id, name, bank_account) from actors
where age  22;
WATCH view13131;
```

and get output to stdout like
```

INSERT view13131 VALUES (241, Mortimer, 131.09);
...
INSERT view13131 VALUES (427, Schezwan, 95.89);
UPDATE view13131 SET bank_account = 1017.12 WHERE id = 427;
DELETE FROM view13131 WHERE id = 92;
...
```


9.4 - http://www.postgresql.org/docs/9.4/static/logicaldecoding.html

Though I doubt your requirement to obtain only a subset of data is something
that can be accommodated; especially in SQL form.

And, yes, you can create triggers on views.

http://www.postgresql.org/docs/9.4/static/sql-createtrigger.html

But assuming your view is meant to be dynamic, covering only the subset of
data you wish to watch, no one is going to be using your view to actually
Insert/Update/Delete against the underlying table(s) so it will not do you
any good to add triggers to it.

You probably need to create some kind of materialized view and add a trigger
to the relevant source table to maintain that view on an ongoing basis.
Then remove the trigger (and optionally the materialized view) when you no
longer care to watch.

This:
http://www.postgresql.org/docs/9.4/static/sql-creatematerializedview.html
 or roll your own.

You can also use the full power of whatever programming languages you can
install onto the server (e.g., pl/perl, pl/python, pl/sh) to link up with
the outside world from inside one of those triggers...


Thanks David, your tips were very helpful. I'm not a SQL expert, but  
these ideas springboarded me ahead. I've been playing with your ideas  
this week and I've come up with a way to extract the logical changes  
without using 9.4, which I'll share here in case anyone else is  
curious. This is preliminary, so don't rely on it for anything  
important. It doesn't write "DELETE FROM ..." lines, but it does write  
the data in a json-esque format which could, without too much work, be  
turned into my invented WATCH language.



```{psql}
-- watch.psql
-- This postgres + pl/python2 script demonstrates watching changes to
-- a table via a trigger.

-- Python opens a FIFO on which it writes lines like
--   + {'rating': 3, 'kind': 'Documentary', 'name': 'The Mail Man'}.
-- The FIFO part is the flakiest bit, because it requires you to load
-- the trigger, then immediately run a reader (e.g. `tail -f
-- /path/to/postgres/data/_changes_films`)
-- *before* any DB updates happen.
-- If you have no reader, updates will fail (the kernel will raise
-- ENXIO at "print FIFO").
-- The other option is to ignore the ENXIOs, but then changes will get
-- lost. I'm not sure.
-- Some more design (subprocess? calling this from a master script?)
-- can probably fix that awkwardness.
--
-- The second point of flakiness is that attaching the trigger is
-- rather verbose.
-- This can be solved with pl/pgsql subroutines.
--
-- This could probably be done in plpgsql, but I know python better,
-- and it comes with serialization (json, msgpack, pickle) available
-- easily.

-- these tips are due to
--   http://www.postgresql.org/message-id/1405660725952-5811931.p...@n5.nabble.com
-- and
--   http://www.postgresql.org/message-id/1405703990457-5811982.p...@n5.nabble.com
-- The reason I'm not using Logical Decoding
-- http://www.postgresql.org/docs/devel/static/logicaldecoding-example.html
-- is because it's still in devel---not even Arch Linux, usually full of
-- bleeding edge code, has this feature yet. Plus it requires fiddling
-- with the .conf file.



DROP TABLE IF EXISTS films CASCADE;
CREATE TABLE films (name text, kind text, rating int);


DROP FUNCTION IF EXISTS watch_table();
CREATE FUNCTION watch_table() RETURNS trigger AS $$
  tablename = TD["args"][0]

  FIFO = "_changes_%s" % (tablename,)
  if FIFO not in SD:
    # this is our first time running in this instance of the python
    # interpreter: run initializations

    # PL/Python is really meant for small one-off tasks, mostly. Most
    # data should probably just be stuffed straight into the database.
    # however, things like file descriptors don't work so well like that;
    # for these things, we need to use the facilities PL/python provides:
    #   http://www.postgresql.org/docs/9.3/static/plpython-sharing.html
    # summary is: SD stands for "static data" and behaves like
    # static locals in C (they must have some kind of trap table kicking
    # around that switches in values of SD when the appropriate function
    # is called).
    # GD stands for "global data" and is the same everywhere;
    # both begin as empty dictionaries.
    # note also that it seems that one python interpreter is
    # invoked ~per client connection~; not per-statement (which would be
    # too fine) nor per

    import sys, os

    if os.path.exists(FIFO):
      # TODO: check that, if it exists, it's a FIFO and we have perms

[GENERAL] Watching Views

2014-07-17 Thread Nick Guenther

Dear List,

I am interested in replicating views of my data in real time to a  
frontend visualizer. I've looked around, and it seems that most  
applications in this direction write some ad-hoc json-formatter that  
spits out exactly the columns it is interested in. I want something  
more like Cubes[1], where a user (or at least, some javascript) can  
say I am interested in this slice of the world, and then get updates  
to that slice, but unlike Cubes it must be near-real-time: I want to  
hook events, not just redownload tables.



In principle, I am looking for some way to say
```
CREATE VIEW view13131 AS select (id, name, bank_account) from actors  
where age < 22;

WATCH view13131;
```

and get output to stdout like
```

INSERT view13131 VALUES (241, Mortimer, 131.09);
...
INSERT view13131 VALUES (427, Schezwan, 95.89);
UPDATE view13131 SET bank_account = 1017.12 WHERE id = 427;
DELETE FROM view13131 WHERE id = 92;
...
```
(and then I would stick a SQL-to-JSON proxy in the middle to make it  
more web-happy, and handle disconnects with a corresponding UNWATCH  
operation)


I am stumped about the best way to go about this. Is there any  
extension that does this specific task for postgres?





CouchDB seems to have this implemented[2](!) but there are a lot of  
reasons I am wary of going that route. dat[3] is specifically for  
real-time replication and versioning for data sets; it's super new  
and shaky, but promising.


I spent a while trying to implement the replication protocol[4] before  
I found a tip[5] which pointed out that the confusing, opaque, binary  
data I was getting (like
b'x00\x98\x08\x00\x00\x00\x00\n\x00\x1f\x00\x10@bid\x00\x98\x08\x00\x00\x00\x00\n\x00\x1e\x00\x18@dicks\x00\x00\x00\x98\x08\x00\x00\x00\x00\x00\x00\x00\x00\n\x00\x10\x00\x18@event_2_2\x00\x00\x00\x15@\x00\x00\x00\x00\n\x00\x08\x00\x18@event_2_1\x00\x00\x00\x15@\x00\x00\x00\x00\n\x00\x01\x00\x18@event_2_0\x00\x00\x00\x15@\x00\x00\x00\x00\t\x00\x14\x00 @event_2_2_txid_idx\x00\x00\x15@\x00\x00\x00\x00\t\x00\x0f\x00\x18@event_2_2\x00\x00\x00\x15@\x00\x00\x00\x00\t\x00\x0e\x00 @event_2_1_txid_idx\x00\x00\x15@') is--I believe, please correct me if I'm wrong--a verbatim copy of postgres's internal data structures. Will it pay off to reverse and reimplement these data structures in javascript? The tipster seemed to think not, but that was 6 years ago. Also, this solution doesn't give me the ability to slice data, though I could hack it with some kind of ridiculous proxy database  
setup.


I discovered Skytools[6]'s Londiste, which will replicate only  
specific tables and seems very close to what I want, but it seems like  
it has a lot of administrative overhead and is targeted at  
postgres-to-postgres log shipping. Does anyone know if I can hook it  
somewhere in the middle in order to extract the CREATE, UPDATE and  
DELETE events?


My last option that I am considering is writing code myself which sets  
and unsets Postgres triggers corresponding to each WATCH statement. I  
could implement this as PL/pgSQL or on Python+SQLAlchemy. This seems  
like it might end up fragile, so if I do end up going this route, I  
would appreciate any tidbits and gotchas you might have to share. If I  
ALTER VIEW will the triggers all fire appropriately? Can I even set  
triggers on views?
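
For reference, a minimal sketch of the trigger route (names taken from the example view above; an assumption-laden outline, not a finished design): a plain AFTER trigger on the base table publishes row changes with NOTIFY, and the watching client applies the view's filter itself.

```
CREATE OR REPLACE FUNCTION notify_actor_change() RETURNS trigger AS $$
BEGIN
  -- TG_OP is 'INSERT', 'UPDATE' or 'DELETE'; the payload is the row as JSON
  PERFORM pg_notify('actors_changed',
                    TG_OP || ' ' || row_to_json(COALESCE(NEW, OLD))::text);
  RETURN NULL;  -- AFTER trigger; the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER actors_watch
AFTER INSERT OR UPDATE OR DELETE ON actors
FOR EACH ROW EXECUTE PROCEDURE notify_actor_change();

-- a watching session then runs: LISTEN actors_changed;
```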


Thanks in advance
-Nick Guenther
4B Stats/CS
University of Waterloo

[1] Cubes http://cubes.databrewery.org/
[2] CouchDB. Filtered Replication.  
http://couchdb.readthedocs.org/en/latest/replication/protocol.html#filter-replication

[3] https://github.com/maxogden/dat
[4] Postgres Streaming Replication Protocol  
http://www.postgresql.org/docs/current/static/protocol-replication.html /  
Guide http://guide.couchdb.org/draft/notifications.html#continuous
[5] Erik Jones, Re: reading WAL files in python  
http://www.postgresql.org/message-id/ce398e79-ffef-4219-9606-f5d28ad2e...@myemma.com

[6] SkyTools http://wiki.postgresql.org/wiki/Skytools





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Why does autovacuum clean fewer rows than I expect?

2014-07-03 Thread Nick Cabatoff
Hi,

I'm having trouble making sense of these two autovacuum log entries.
I'm running PostgreSQL 8.4.

[2014-05-22 04:56:43.486 EST]  {537cf2c6.30f9}  LOG:  automatic vacuum of
table postgres.globaldicom.requests: index scans: 1
pages: 0 removed, 163600 remain
tuples: 5916 removed, 8423598 remain
system usage: CPU 3.62s/9.84u sec elapsed 1060.49 sec

[2014-05-24 05:02:12.707 EST]  {537f90c3.7144}  LOG:  automatic vacuum of
table postgres.globaldicom.requests: index scans: 2
pages: 0 removed, 510725 remain
tuples: 1554 removed, 8583343 remain
system usage: CPU 9.19s/16.86u sec elapsed 2673.32 sec

I've verified that no vacuum (manual or auto) took place between them unless
they were basically no-ops, since log_autovacuum_min_duration=250.
 Otherwise
we're using the stock autovacuum config settings, thus 3 workers, 1min
naptime,
autovacuum_vacuum_scale_factor=0.2, vacuum_cost_delay=0ms, etc.

First question: how do we get a 3x page usage increase with a 1% tuple
increase?  The only thing I can think of is that a lot of very big values
were
stored.  To my knowledge that didn't happen, but I can't rule it out
entirely.
Since I judge it quite unlikely I'm looking for alternate explanations of
what
might have happened.

Second question: What triggered the above autovacuums?  From what I
understand
it must be one of imminent txid wraparound or exceeding
autovacuum_vacuum_scale_factor.  If it was the latter, suggesting that 20%
of
the 8.5M tuples had changed, then why were only a few thousand tuples
removed?
I'm not sure how to eliminate the possibility that it was the former after
the
fact.  However, I know that there was a lot of activity, going by the
auto-analyzes that took place (autovacuum_vacuum_analyze_factor=0.1):

[2014-05-20 11:08:59.632 EST]  {537aab0a.b704}  LOG:  automatic analyze of
table postgres.globaldicom.requests system usage: CPU 0.36s/0.45u sec
elapsed 32.98 sec
[2014-05-23 21:12:49.164 EST]  {537f2d12.459b}  LOG:  automatic analyze of
table postgres.globaldicom.requests system usage: CPU 0.08s/0.36u sec
elapsed 30.60 sec
[2014-05-24 00:56:00.927 EST]  {537f6161.2b77}  LOG:  automatic analyze of
table postgres.globaldicom.requests system usage: CPU 0.16s/0.38u sec
elapsed 31.42 sec
[2014-05-24 06:33:16.972 EST]  {537fb06d.e9e3}  LOG:  automatic analyze of
table postgres.globaldicom.requests system usage: CPU 0.15s/0.35u sec
elapsed 31.13 sec

This isn't an isolated incident: I see many autovacuum log entries where
only a few tuples are removed from a very large table, and I can't figure
out why it was being vacuumed if that's all there was to be removed.
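
One way to see what autovacuum is reacting to is to read the statistics counters directly; a sketch (the table name comes from the log lines above):

```
SELECT relname, n_live_tup, n_dead_tup, n_tup_upd, n_tup_hot_upd,
       last_autovacuum, last_autoanalyze
FROM pg_stat_all_tables
WHERE relname = 'requests';
```

Comparing n_dead_tup against autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples shows whether the scale-factor trigger could have fired.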


Re: [GENERAL] Why does autovacuum clean fewer rows than I expect?

2014-07-03 Thread Nick Cabatoff
Hi Tom,

Thanks for the quick reply.  8.4.9.  No db crashes, in fact not even a
restart in May until the 28th.  I don't believe any HOT updates were
occurring in that table: all updates modify one or more indexed fields
(update status from pending to done, or update to store a stack trace and
bump the time at which to retry the row into the future).  We don't delete from
the table, just insert and update.

During the time interval in question a bug was leading to an unusually high
level of activity, both inserts and updates.   I'm thinking this is what
was responsible for a huge amount of table and index bloat, but haven't yet
connected all the dots.  Since I don't see autovacuum failing, or failing
to run, it's a bit of a mystery.

I guess the next step is to attempt to reproduce the bug on a test system
and see if we get the bloat.  I'll let you know what turns up.


On Thu, Jul 3, 2014 at 5:35 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Nick Cabatoff nick.cabat...@gmail.com writes:
  I'm having trouble making sense of the these two autovacuum log entries.
  I'm running PostgreSQL 8.4.

 8.4.what?

 It'd probably be a good idea to monitor the counts in pg_stat_all_tables
 for the troublesome table(s).  I suspect there are a whole lot of HOT
 updates going on in this table, which'd account for autoanalyze activity
 happening even though autovacuum finds few dead tuples to remove.  That
 still doesn't directly explain physical table bloat, but it might be
 a piece of the puzzle.

 Also, have you had any database crashes in between these autovacuums?
 I'm wondering if the WAL-replay issue fixed in commits 6f2aead1f and
 siblings could have produced the bloat.

 regards, tom lane



Re: [GENERAL] Why does autovacuum clean fewer rows than I expect?

2014-07-03 Thread Nick Cabatoff
We've been running 9.2 on our later branches for a while now.  We're
overdue to bump the older branches to a later 8.4 though, you're right.
Thanks for the reminder.


On Thu, Jul 3, 2014 at 7:46 PM, Michael Paquier michael.paqu...@gmail.com
wrote:

 On Fri, Jul 4, 2014 at 7:41 AM, Nick Cabatoff nick.cabat...@gmail.com
 wrote:
  8.4.9.
 This has been released in 2011, so you are missing 3 years worth of
 bug fixes. It may be a good idea to first update to 8.4.21 before
 trying to work more on that (consider as well an upgrade as 8.4 will
 be EOL soon).

 Regards,
 --
 Michael



Re: [GENERAL] Patched odbc_fdw for PG >= 9.2 - compiles & works but has issues with NULL representation in ODBC?!?

2014-04-24 Thread Gunnar Nick Bluth

On 19.04.2014 12:44, Gunnar Nick Bluth wrote:

Heillo,

after stumbling across compile errors for odbc_fdw I realised it was 
still on the 9.1 level.


I've patched it, based on the tds_fdw code (so it's still read-only).

It does compile, and I think it is generally sane, however I'm 
_anything but a C hacker_, so whoever feels confident enough please 
review this, or even add the write capability :)


As I've been playing around with tds_fdw before and just wanted to see 
if I can get around my encoding issues with odbc_fdw, I will not be 
able to test it for a few days though.


Ok, finally got it tested against the SQL-Server (shiver). It generally 
works (picks up the DSN from freetds.conf etc.), but when accessing any 
remote table, I get:


ERROR:  invalid input syntax for integer: NULL

Now, I don't want to change my INT columns to varchar and rework the 
rubbish that's in there... does anybody know how to trick (unix)ODBC 
into sending proper NULL values instead of a char representation?


Any hints appreciated!

Regards,

--
Gunnar Nick Bluth
RHCE/SCLA

Mobil +49 172 8853339
Email: gunnar.bl...@pro-open.de
__
In 1984 mainstream users were choosing VMS over UNIX. Ten years later
they are choosing Windows over UNIX. What part of that message aren't you
getting? - Tom Payne



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Patched odbc_fdw for PG >= 9.2

2014-04-19 Thread Gunnar Nick Bluth

Heillo,

after stumbling across compile errors for odbc_fdw I realised it was 
still on the 9.1 level.


I've patched it, based on the tds_fdw code (so it's still read-only).

It does compile, and I think it is generally sane, however I'm _anything 
but a C hacker_, so whoever feels confident enough please review this, 
or even add the write capability :)


As I've been playing around with tds_fdw before and just wanted to see 
if I can get around my encoding issues with odbc_fdw, I will not be able 
to test it for a few days though.


It is available at
https://github.com/bluthg/odbc_fdw

Corresponding notes have been added to the Foreign_data_wrappers wiki page 
as well (no need for more people to try compiling it ;-).


Cheers,

--
Gunnar Nick Bluth
RHCE/SCLA

Mobil   +49 172 8853339
Email: gunnar.bl...@pro-open.de
__
In 1984 mainstream users were choosing VMS over UNIX.  Ten years later
they are choosing Windows over UNIX.  What part of that message aren't you
getting? - Tom Payne



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Inserting boolean types as an alias?

2013-12-02 Thread Nick
Hello, I am new to this site and also a student. I am working on an assignment
and was wondering if there is a way to make an alias for a boolean. For
example, I am making a table called club_games which must contain the
memberID, gameID, count, and status as column fields. When I enter the data
for status I want it to say either "unavailable" for false and "available"
for true, but I don't know how to do that.
This is what I have currently for the line that I am specifically talking
about:

INSERT INTO club_Games(memberID, gameID, hardwareID, count, status)
 VALUES ((SELECT memberID FROM members WHERE name = 'Fred Flinstone'),
(SELECT gameID FROM games WHERE name = 'Jurrasic Park'), NULL, 1, 'true');

when I do the SELECT * FROM club_Games this is my output:

 id | gameid | memberid | hardwareid | count | status
  1 | 12345z | A12345   | null       | 1     | t

Where it says "t" for status I wanted it to say "available" but mean true as
a boolean, but I don't know how to do that. Thanks to all for the help.
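
One way to get that label (a sketch, not from the assignment itself) is to translate the boolean at query time with CASE, or to wrap the translation in a view so every SELECT gets it:

```
SELECT id, gameid, memberid, hardwareid, count,
       CASE WHEN status THEN 'available' ELSE 'unavailable' END AS status
FROM club_Games;
```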

 My file is attached if you would like to view my assignment.  task5.txt
http://postgresql.1045698.n5.nabble.com/file/n5781328/task5.txt  



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Inserting-boolean-types-as-an-alias-tp5781328.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] High RAM usage on postgres

2013-03-17 Thread Gunnar Nick Bluth

On 17.03.2013 09:31, Scott Marlowe wrote:

On Thu, Mar 14, 2013 at 12:55 PM, prashantmalik
prashantmal...@gmail.com wrote:


*Query :* SELECT * FROM customer
---
top

top - 00:14:38 up 44 days, 12:06,  2 users,  load average: 3.57, 1.34, 0.69
Tasks: 243 total,   3 running, 240 sleeping,   0 stopped,   0 zombie
Cpu(s):  6.5%us,  0.6%sy,  0.0%ni, 92.5%id,  0.4%wa,  0.0%hi,  0.0%si,
0.0%st
Mem:  32949816k total, 31333260k used,  1616556k free,   526988k buffers
Swap:  4192956k total,  1989136k used,  2203820k free,  9182092k cached

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
12671 root  25   0 19.8g  19g 1612 R 100.1 62.6   4:31.78 psql


What eats up your memory is psql, which indeed allocates a whopping 
19G physical memory, not the server process.
- Are you sure that is _your_ psql selecting * from customers and 
not somebody else's, doing a cross-join?
- Is there potentially anything that gets TOASTed in your customer 
table? I'm not sure if that would show up in pg_relation_size and 
friends, but it would get sent to psql of course.
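
(Side note: by default psql buffers the entire result set in client memory before printing anything. A sketch of the usual mitigation is to set FETCH_COUNT, which makes psql page through the rows with a cursor:)

```
-- in psql:
\set FETCH_COUNT 1000
SELECT * FROM customer;   -- now fetched in 1000-row chunks
```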


Regards,

--
Gunnar Nick Bluth
RHCE/SCLA

Mobil   +49 172 8853339
Email: gunnar.bl...@pro-open.de
__
In 1984 mainstream users were choosing VMS over UNIX.  Ten years later
they are choosing Windows over UNIX.  What part of that message aren't you
getting? - Tom Payne



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL and a clustered file system

2012-11-12 Thread Gunnar Nick Bluth

On 12.11.2012 11:03, Ivan Voras wrote:

Hello,

Is anyone running PostgreSQL on a clustered file system on Linux? By
clustered I actually mean shared, such that the same storage is
mounted by different servers at the same time (of course, only one
instance of PostgreSQL on only one server can be running on such a
setup, and there are a lot of other precautions that need to be satisfied).


TBH, I don't see the potential benefit. Clustered filesystems have 
benefits for special use cases (e.g. redundant fileservers or 
applications that actually can work in parallel, relying on file 
locking, DB clusters that coordinate writes themselves, ...), but PG 
certainly is not one of these...


I'm not even sure if PG will lock any files at all (I'd guess not, as 
SHM is used for process coordination), so you'd actually have to roll 
your own safety measures. OTOH, you pay a huge price for the clustering 
part (google for DLM), which will slow your I/O down significantly.


If you're just looking for shared storage (e.g. because SAN space is 
fricking expensive and you don't want to allocate it twice to your 
primary and secondary server ;-), it is probably safer & performing 
better to implement the volumes and filesystems as underlying cluster 
resources. That's something your cluster software should be able to 
handle... ;-)


A newly imported filesystem will also not carry any leftover file locks 
around, should your primary server fail. The journal replay should not 
hurt you too much, as the relevant I/O PG does is sync anyway.


Just my 2p,

--
Gunnar Nick Bluth
RHCE/SCLA

Mobil   +49 172 8853339
Email: gunnar.bl...@pro-open.de
__
In 1984 mainstream users were choosing VMS over UNIX.  Ten years later
they are choosing Windows over UNIX.  What part of that message aren't you
getting? - Tom Payne



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] At what point does a big table start becoming too big?

2012-08-23 Thread Nick
I have a table with 40 million rows and haven't had any performance issues yet.

Are there any rules of thumb as to when a table starts getting too big?

For example, maybe if the index size is 6x the amount of ram, if the table is 
10% of total disk space, etc?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] High CPU on read-only streaming replication server

2012-07-20 Thread Nick
On Jul 19, 1:23 pm, Nick nboutel...@gmail.com wrote:
 My read-only streaming replication servers are showing a much higher
 CPU load than I would expect.

 None of my queries are longer than 10ms.

 My application server is connecting to this server via pgbouncer.

 Other than that, it's a pretty standard setup. I haven't toyed with too
 many settings besides the standard optimization settings.

 Any ideas what could be going on?

One more thing to point out, the top process that shows high CPU is...
COMMAND: postgres: dbuser mydatabase X.XX.XXX.XXX() idle
CPU%: 68%

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] High CPU on read-only streaming replication server

2012-07-20 Thread Nick
BTW, it's version 9.1.4 on a 2.2GHz dual processor with 17GB of RAM.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] High CPU on read-only streaming replication server

2012-07-20 Thread Nick
My read-only streaming replication servers are showing a much higher
CPU load than I would expect.

None of my queries are longer than 10ms.

My application server is connecting to this server via pgbouncer.

Other than that, it's a pretty standard setup. I haven't toyed with too
many settings besides the standard optimization settings.

Any ideas what could be going on?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Return equal number of rows with same column value

2012-06-04 Thread Nick
On Jun 4, 5:27 pm, Nick nboutel...@gmail.com wrote:
 For the data...

 INSERT INTO test (id,val) VALUES (1,'a');
 INSERT INTO test (id,val) VALUES (2,'a');
 INSERT INTO test (id,val) VALUES (3,'a');
 INSERT INTO test (id,val) VALUES (4,'a');
 INSERT INTO test (id,val) VALUES (5,'b');
 INSERT INTO test (id,val) VALUES (6,'b');

 How could I return an equal number of rows per val? For example, I would like to
 return this...

 1 | a
 2 | a
 5 | b
 6 | b

 Since the least number of 'b' vals is 2, I'd like to limit the 'a' rows
 to return only 2.

Found something good. Now if I could only compute that least count (2)
dynamically.

SELECT * FROM (
  SELECT ROW_NUMBER() OVER (PARTITION BY val ORDER BY val) AS r, t.*
  FROM test t
) x
WHERE r <= 2
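
To avoid hard-coding the 2, one sketch is to compute the smallest group size in a subquery:

```
SELECT x.id, x.val
FROM (
  SELECT t.*, ROW_NUMBER() OVER (PARTITION BY val ORDER BY id) AS r
  FROM test t
) x
WHERE x.r <= (SELECT MIN(c)
              FROM (SELECT COUNT(*) AS c FROM test GROUP BY val) g);
```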

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Return equal number of rows with same column value

2012-06-04 Thread Nick
For the data...

INSERT INTO test (id,val) VALUES (1,'a');
INSERT INTO test (id,val) VALUES (2,'a');
INSERT INTO test (id,val) VALUES (3,'a');
INSERT INTO test (id,val) VALUES (4,'a');
INSERT INTO test (id,val) VALUES (5,'b');
INSERT INTO test (id,val) VALUES (6,'b');

How could I return an equal number of rows per val? For example, I would like to
return this...

1 | a
2 | a
5 | b
6 | b

Since the least number of 'b' vals is 2, I'd like to limit the 'a' rows
to return only 2.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Chaining inserts ... This would be cool

2012-04-23 Thread Nick Apperson
There are obviously workarounds for this, but I'm wondering why the
following query shouldn't work. It seems like it should. With MVCC already
present on the back-end, I can't see any reason other than additional
parsing routines that this couldn't work:

INSERT INTO old_login_id_to_new_account_id(new_account_id, old_login_id)
INSERT INTO accounts(id, username, password_hash, email) SELECT DEFAULT,
username, password_hash, email FROM logins_old RETURNING id, logins_old.id;

Anyway, I'm sure there are more important features for Postgres (like
upserts, unique indexes on GIN, Gist and hash, fixed arrays, compact
storage of enum arrays as bitfields, etc.). I just thought it was an
interesting idea.


Re: [GENERAL] Chaining inserts ... This would be cool

2012-04-23 Thread Nick Apperson
You can have INSERT ... SELECT  RETURNING ... as a query in Postgres.
The RETURNING applies to the INSERT. See:

CREATE TABLE foo(i INT);
INSERT INTO foo(i) SELECT generate_series(1,5) RETURNING i;

Anyway, CTEs might work, but I can't see any good way to pass the
association through without the syntax or equivalent to what I originally
posted. Thanks for the help anyway.
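
For the record, a sketch of the writable-CTE route (9.1+, table names from the original post); it can only pass the association through if some column, here username, uniquely identifies the source row, since the outer statement cannot see logins_old.id through RETURNING directly:

```
WITH new_accounts AS (
  INSERT INTO accounts (username, password_hash, email)
  SELECT username, password_hash, email FROM logins_old
  RETURNING id, username
)
INSERT INTO old_login_id_to_new_account_id (new_account_id, old_login_id)
SELECT n.id, o.id
FROM new_accounts n
JOIN logins_old o USING (username);
```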

On Mon, Apr 23, 2012 at 4:19 PM, Thom Brown t...@linux.com wrote:

 On 23 April 2012 21:49, Nick Apperson apper...@gmail.com wrote:
  There are obviously workarounds for this, but I'm wondering why the
  following query shouldn't work. It seems like it should. With MVCC
 already
  present on the back-end, I can't see any reason other than additional
  parsing routines that this couldn't work:
 
  INSERT INTO old_login_id_to_new_account_id(new_account_id, old_login_id)
  INSERT INTO accounts(id, username, password_hash, email) SELECT DEFAULT,
  username, password_hash, email FROM logins_old RETURNING id,
 logins_old.id;
 
  Anyway, I'm sure there are more important features for Postgres (like
  upserts, unique indexes on GIN, Gist and hash, fixed arrays, compact
 storage
  of enum arrays as bitfields, etc.) I just thought it was an interesting
  idea.

 You should be able to use writeable common table expressions to
 achieve a linking behaviour.

 http://www.postgresql.org/docs/9.1/static/queries-with.html
 http://www.depesz.com/index.php/2011/03/16/waiting-for-9-1-writable-cte/

 http://thombrown.blogspot.de/2011/11/writeable-common-table-expressions.html

 But I'm not sure the query you posted makes any sense.  Why would a
 SELECT statement have a RETURNING clause?  And where do the values for
 the first INSERT come from?
 --
 Thom



[GENERAL] Error trying to set up streaming replication

2012-03-26 Thread Nick
I followed the instructions from
http://packetcloud.net/2010/12/09/setting-up-streaming-replication-in-postgresql-9-0/

When I start the standby I get this...

LOG:  database system was shut down in recovery at 2012-03-26 17:36:32
PDT
LOG:  entering standby mode
cp: cannot stat `/var/lib/postgresql/9.1/main/pg_wal/
000100760019': No such file or directory
LOG:  consistent recovery state reached at 76/1978
LOG:  record with zero length at 76/1978
LOG:  database system is ready to accept read only connections
cp: cannot stat `/var/lib/postgresql/9.1/main/pg_wal/
000100760019': No such file or directory
LOG:  streaming replication successfully connected to primary
LOG:  incomplete startup packet

And the standby never gets updated by the master.

Any ideas what's going wrong?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Yearly date comparison?

2012-02-28 Thread Nick
What is the best way to find an event with a yearly occurrence?

CREATE TABLE events (
start_date DATE,
end_date DATE,
recurring TEXT
);
INSERT INTO events (start_date, end_date, recurring) VALUES
('2010-02-28','2010-03-01','yearly');

SELECT * FROM events WHERE (start_date+'2 YEARS'::INTERVAL) <= NOW()
AND (end_date+'2 YEARS'::INTERVAL) > NOW();

Since I may not know how many years back the start/end_date is, is
there a way to just ignore the year or make it the current year,
without killing performance?
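
One sketch of a year-agnostic test compares (month, day) pairs with a row comparison; note it does not handle events that span the year boundary, and on a large table an expression index over the extracted values would be needed to keep it fast:

```
SELECT *
FROM events
WHERE recurring = 'yearly'
  AND (EXTRACT(month FROM start_date), EXTRACT(day FROM start_date))
      <= (EXTRACT(month FROM now()), EXTRACT(day FROM now()))
  AND (EXTRACT(month FROM end_date), EXTRACT(day FROM end_date))
      > (EXTRACT(month FROM now()), EXTRACT(day FROM now()));
```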

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Help speeding up a left join aggregate

2012-01-31 Thread Nick
I have a pretty well tuned setup, with appropriate indexes and 16GB of
available RAM. Should this be taking this long? I forced it to not use
a sequential scan and that only knocked a second off the plan.

                                 QUERY PLAN
--
 Hash Right Join  (cost=105882.35..105882.47 rows=3 width=118) (actual
time=3931.567..3931.583 rows=4 loops=1)
   Hash Cond: (songs_downloaded.advertisement_id = a.id)
   ->  HashAggregate  (cost=105881.21..105881.26 rows=4 width=13)
(actual time=3931.484..3931.489 rows=3 loops=1)
         ->  Seq Scan on songs_downloaded  (cost=0.00..95455.96
rows=1042525 width=13) (actual time=0.071..1833.680 rows=1034752
loops=1)
               Filter: (advertiser_id = 6553406)
   ->  Hash  (cost=1.10..1.10 rows=3 width=46) (actual
time=0.050..0.050 rows=4 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 1kB
         ->  Seq Scan on advertisements a  (cost=0.00..1.10 rows=3
width=46) (actual time=0.037..0.041 rows=4 loops=1)
               Filter: (advertiser_id = 6553406)
 Total runtime: 3931.808 ms
(10 rows)

SELECT a.id, sd.price, COALESCE(sd.downloads,0) AS downloads,
COALESCE(sd.download_revenue,0) AS download_revenue
FROM advertisements a
LEFT JOIN (SELECT advertisement_id, AVG(price) AS price, SUM(price) AS
download_revenue, COUNT(1) AS downloads FROM songs_downloaded WHERE
advertiser_id = 6553406 GROUP BY advertisement_id) AS sd ON a.id =
sd.advertisement_id
WHERE advertiser_id = 6553406

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to know if update is from a foreign key cascade in plperl?

2012-01-25 Thread Nick
Is it possible (in a plperl function) to know if an update is from a
foreign key cascade, rather than just a user submitted update
statement?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Whats the most efficient query for this result?

2012-01-17 Thread Nick
I have three tables (users, books, pencils) and would like to get a
list of all users with a count and total price of their books and
pencils for 2012-01-01...

So with this data...

users (user_id)
1
2
3

books (user_id, price, created)
1 | $10 | 2012-01-01
1 | $10 | 2012-01-01
3 | $10 | 2012-01-01

pencils
1 | $.50 | 2012-01-02
3 | $.50 | 2012-01-01
3 | $.50 | 2012-01-02

What is the most efficient way to get this result...

query_result (user_id, book_count, book_price_total, pencil_count,
pencil_price_total)
1 | 2 | $20 | 0 | $0
2 | 0 | $0 | 0 | $0
3 | 1 | $10 | 1 | $.50


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Whats the most efficient query for this result?

2012-01-17 Thread Nick
On Jan 17, 3:33 am, t...@audioboundary.com (Tom Molesworth) wrote:
 Hi Nick,

 On 17/01/12 00:18, Nick wrote:

  I have three tables (users, books, pencils) and would like to get a
  list of all users with a count and total price of their books and
  pencils for 2012-01-01...

  So with this data...

  users (user_id)
  1
  2
  3

  books (user_id, price, created)
  1 | $10 | 2012-01-01
  1 | $10 | 2012-01-01
  3 | $10 | 2012-01-01

  pencils
  1 | $.50 | 2012-01-02
  3 | $.50 | 2012-01-01
  3 | $.50 | 2012-01-02

  What is the most efficient way to get this result...

  query_result (user_id, book_count, book_price_total, pencil_count,
  pencil_price_total)
  1 | 2 | $20 | 0 | $0
  2 | 0 | $0 | 0 | $0
  3 | 1 | $10 | 1 | $.50

 Seems straightforward enough - left join the tables, group the result on
 user_id - so I'd write it as:

 select u.user_id, count(b.user_id) as book_count,
 coalesce(sum(b.price), 0) as book_price_total, count(p.user_id) as
 pencil_count, coalesce(sum(b.price), 0) as pencil_price_total
 from users u
 left join books b on b.user_id = u.user_id and b.created = '2012-01-01'
 left join pencils p on p.user_id = u.user_id and p.created = '2012-01-01'
 group by u.user_id
 order by u.user_id

 If you need something more efficient, summary tables may help - hard to
 say without knowing more about the real data.

 Tom

 --
 Sent via pgsql-general mailing list (pgsql-gene...@postgresql.org)
 To make changes to your 
 subscription:http://www.postgresql.org/mailpref/pgsql-general

I think I figured it out. I have to add a where clause...

select u.user_id, count(b.user_id) as book_count,
coalesce(sum(b.price), 0) as book_price_total, count(p.user_id) as
pencil_count, coalesce(sum(b.price), 0) as pencil_price_total
from users u
left join books b on b.user_id = u.user_id and b.created =
'2012-01-01'
left join pencils p on p.user_id = u.user_id and p.created =
'2012-01-01'
WHERE b.user_id = u.user_id AND p.user_id = u.user_id
group by u.user_id
order by u.user_id

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Whats the most efficient query for this result?

2012-01-17 Thread Nick
On Jan 17, 3:33 am, t...@audioboundary.com (Tom Molesworth) wrote:
 Hi Nick,

 On 17/01/12 00:18, Nick wrote:

  I have three tables (users, books, pencils) and would like to get a
  list of all users with a count and total price of their books and
  pencils for 2012-01-01...

  So with this data...

  users (user_id)
  1
  2
  3

  books (user_id, price, created)
  1 | $10 | 2012-01-01
  1 | $10 | 2012-01-01
  3 | $10 | 2012-01-01

  pencils
  1 | $.50 | 2012-01-02
  3 | $.50 | 2012-01-01
  3 | $.50 | 2012-01-02

  What is the most efficient way to get this result...

  query_result (user_id, book_count, book_price_total, pencil_count,
  pencil_price_total)
  1 | 2 | $20 | 0 | $0
  2 | 0 | $0 | 0 | $0
  3 | 1 | $10 | 1 | $.50

 Seems straightforward enough - left join the tables, group the result on
 user_id - so I'd write it as:

 select u.user_id, count(b.user_id) as book_count,
 coalesce(sum(b.price), 0) as book_price_total, count(p.user_id) as
 pencil_count, coalesce(sum(b.price), 0) as pencil_price_total
 from users u
 left join books b on b.user_id = u.user_id and b.created = '2012-01-01'
 left join pencils p on p.user_id = u.user_id and p.created = '2012-01-01'
 group by u.user_id
 order by u.user_id

 If you need something more efficient, summary tables may help - hard to
 say without knowing more about the real data.

 Tom

 --
 Sent via pgsql-general mailing list (pgsql-gene...@postgresql.org)
 To make changes to your 
 subscription:http://www.postgresql.org/mailpref/pgsql-general

Thanks Tom. That's what I originally thought it would be, but my
results (disregarding the date clause) show that user 1 has 2 pencils
instead of 1, and user 3 has 2 books instead of 1.

I guess the LEFT JOIN is joining the first LEFT JOIN instead of the
users table.

Any other thoughts on how to get books and pencils to individually
LEFT JOIN the users table?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Whats the most efficient query for this result?

2012-01-17 Thread Nick
On Jan 17, 3:33 am, t...@audioboundary.com (Tom Molesworth) wrote:
 Hi Nick,

 On 17/01/12 00:18, Nick wrote:

  I have three tables (users, books, pencils) and would like to get a
  list of all users with a count and total price of their books and
  pencils for 2012-01-01...

  So with this data...

  users (user_id)
  1
  2
  3

  books (user_id, price, created)
  1 | $10 | 2012-01-01
  1 | $10 | 2012-01-01
  3 | $10 | 2012-01-01

  pencils
  1 | $.50 | 2012-01-02
  3 | $.50 | 2012-01-01
  3 | $.50 | 2012-01-02

  What is the most efficient way to get this result...

  query_result (user_id, book_count, book_price_total, pencil_count,
  pencil_price_total)
  1 | 2 | $20 | 0 | $0
  2 | 0 | $0 | 0 | $0
  3 | 1 | $10 | 1 | $.50

 Seems straightforward enough - left join the tables, group the result on
 user_id - so I'd write it as:

 select u.user_id, count(b.user_id) as book_count,
 coalesce(sum(b.price), 0) as book_price_total, count(p.user_id) as
 pencil_count, coalesce(sum(b.price), 0) as pencil_price_total
 from users u
 left join books b on b.user_id = u.user_id and b.created = '2012-01-01'
 left join pencils p on p.user_id = u.user_id and p.created = '2012-01-01'
 group by u.user_id
 order by u.user_id

 If you need something more efficient, summary tables may help - hard to
 say without knowing more about the real data.

 Tom

 --
 Sent via pgsql-general mailing list (pgsql-gene...@postgresql.org)
 To make changes to your 
 subscription:http://www.postgresql.org/mailpref/pgsql-general

I take that back; now I'm incorrectly not getting user 2's results.
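
A sketch of one way out: the two left joins multiply rows (each user's books repeat once per matching pencil and vice versa, and the quoted query also sums b.price in the pencil column), so aggregate each detail table first and join the small per-user summaries:

```
SELECT u.user_id,
       COALESCE(b.book_count, 0)         AS book_count,
       COALESCE(b.book_price_total, 0)   AS book_price_total,
       COALESCE(p.pencil_count, 0)       AS pencil_count,
       COALESCE(p.pencil_price_total, 0) AS pencil_price_total
FROM users u
LEFT JOIN (SELECT user_id, COUNT(*) AS book_count,
                  SUM(price) AS book_price_total
           FROM books WHERE created = '2012-01-01'
           GROUP BY user_id) b ON b.user_id = u.user_id
LEFT JOIN (SELECT user_id, COUNT(*) AS pencil_count,
                  SUM(price) AS pencil_price_total
           FROM pencils WHERE created = '2012-01-01'
           GROUP BY user_id) p ON p.user_id = u.user_id
ORDER BY u.user_id;
```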

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql allow connections without password [SOLVED]

2012-01-13 Thread debian nick
Thanks Raymond, you were right; I never thought that psql was using the .pgpass
file. When I deleted it, psql asked me for the password.
Thanks, I would never have figured that out.

2012/1/13 Raymond O'Donnell r...@iol.ie

 On 12/01/2012 05:16, debian nick wrote:
  I have postgresql 8.4.9 installed now. My problem is that from time to
  time my postgresql lets psql version 8.4.9 access the database without
  asking for a password (psql -d mydatabase -h myhost -U myuser), while
  connection attempts from psql 8.3 are not allowed; no matter what, I get
  timeout exceptions from psql 8.3.
  The first time this happened to me I had postgresql 8.3 installed, so I
  thought that upgrading my system but not postgres (I upgraded from
  debian lenny (5) to squeeze (6)) was the problem, so I installed
  8.4.9 and restored all my databases with pg_dumpall. And the problem
  was solved: postgresql asked for passwords no matter from where I tried
  to connect.
  And now a month later I have the same problem with 8.4.9. After various
  restarts, stops and starts the problem disappeared, and 3 hours later it
  came back.

 Do you have pgAdmin on the same (client) machine? pgAdmin stores
 passwords in the .pgpass file (though it asks you first, and also warns
 about clear-text password) which could possibly explain this
 intermittent behaviour.

 Ray.


 --
 Raymond O'Donnell :: Galway :: Ireland
 r...@iol.ie



[GENERAL] Postgresql allow connections without password

2012-01-12 Thread debian nick
I have postgresql 8.4.9 installed now. My problem is that from time to time
my postgresql lets psql version 8.4.9 access the database without asking for
a password (psql -d mydatabase -h myhost -U myuser), while connection
attempts from psql 8.3 are not allowed; no matter what, I get timeout
exceptions from psql 8.3.
The first time this happened to me I had postgresql 8.3 installed, so I
thought that upgrading my system but not postgres (I upgraded from
debian lenny (5) to squeeze (6)) was the problem, so I installed 8.4.9
and restored all my databases with pg_dumpall. And the problem was solved:
postgresql asked for passwords no matter from where I tried to connect.
And now a month later I have the same problem with 8.4.9. After various
restarts, stops and starts the problem disappeared, and 3 hours later it came
back.
I don't know what to do.
The only strange thing in my logs is "incomplete startup packet" after I
start the server. I googled it but nothing seems to explain it, and I don't
have any cron.

Now I'm going to install postgresql 9 to see if the problem disappears.

Any help will be really appreciated.


[GENERAL] Pl/pgsql function

2011-06-04 Thread Nick Raj
I am implementing some pl/pgsql functions.

Is there any way to change the input?
For example, I got some value via $1 and I want to modify this value (i.e.
split that value). Can we do this, and how?

Second thing,
Suppose I defined a function test as

select test('geom',the_geom,time) from tablename
.
Inside body,
How can I get the table name inside the body (because I haven't passed the
table name to the function)?
..
end
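
A sketch for the first question: in pl/pgsql, input parameters behave like pre-initialized local variables, so they can be reassigned, and functions such as split_part() can do the splitting. For the second question there is no direct answer: a regular function only sees the values it is passed, so the table name must be passed in explicitly (only triggers get it for free, via TG_TABLE_NAME).

```
CREATE OR REPLACE FUNCTION test(val text) RETURNS text AS $$
BEGIN
  -- reassign the parameter: keep only the part before the first comma
  val := split_part(val, ',', 1);
  RETURN val;
END;
$$ LANGUAGE plpgsql;
```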


[GENERAL] Index Size

2011-05-30 Thread Nick Raj
Hi,

The cube code is provided in the postgres contrib folder. It uses the NDBOX
structure. On creating an index, its size increases at a high rate.

Its behaviour on inserting some tuples and creating indexes is shown below.

1. When there is only one tuple
select pg_size_pretty(pg_relation_size('cubtest'));   // Table size without index
 pg_size_pretty

 8192 bytes
(1 row)

select pg_size_pretty(pg_total_relation_size('cubtest')); //Table size with
index
 pg_size_pretty

 16 kB
(1 row)

i.e. Index size in nearly 8kB

2. When tuples are 20,000

Table Size without index - 1.6 MB
Table Size with index - 11 MB
i.e. Index size is nearly 9.4 MB

3. When tuples are 5 lakh (500,000)

Table Size without index - 40 MB
Table Size with index - 2117 MB
i.e. Index size is nearly 2077 MB ~ 2GB
It is taking nearly 20-25 min to create the index for 5 lakh (500,000) tuples.

Can someone tell me why the index is becoming so large?
How can I compress or reduce its size?

Thanks
Nick


Re: [GENERAL] Index Size

2011-05-30 Thread Nick Raj
On Tue, May 31, 2011 at 8:50 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Craig Ringer cr...@postnewspapers.com.au writes:
  On 05/30/2011 08:53 PM, Nick Raj wrote:
  Cube code provided by postgres contrib folder. It uses the NDBOX
 structure.
  On creating index, it's size increase at a high rate.

  [snip]

  Can some one tell me why index is becoming so large?
  How to compress or reduce its size?

  It'd help if you included some more details:

  - Your PostgreSQL version

 In particular, I wonder whether his version contains this fix:

 Author: Robert Haas rh...@postgresql.org
 Branch: master [4fa0a23c7] 2010-11-14 21:27:34 -0500
 Branch: REL9_0_STABLE Release: REL9_0_2 [e6b380251] 2010-11-14 21:27:34
 -0500
 Branch: REL8_4_STABLE Release: REL8_4_6 [2519b8268] 2010-11-14 21:27:34
 -0500
 Branch: REL8_3_STABLE Release: REL8_3_13 [d589e4070] 2010-11-14 21:27:34
 -0500
 Branch: REL8_2_STABLE Release: REL8_2_19 [e642ca767] 2010-11-14 21:27:34
 -0500
 Branch: REL8_1_STABLE Release: REL8_1_23 [0e27a7319] 2010-11-14 21:27:34
 -0500

Fix bug in cube picksplit algorithm.

Alexander Korotkov




Is this bug fixed?
Does postgresql 9.0 have this bug or not?

Thanks


Re: [GENERAL] disable seqscan

2011-05-24 Thread Nick Raj
On Mon, May 23, 2011 at 7:35 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Nick Raj nickrajj...@gmail.com writes:
  Andrew Sullivan a...@crankycanuck.ca wrote:
  It sounds like your index can't actually be used to satisfy your
  query.  Without seeing the table definition, index definition, and
  query, however, it's pretty hard to give you a real answer.

  explain analyze select * from vehicle_stindex where
  ndpoint_overlap('(116.4,39.3,2008/02/11
 11:11:11),(117.8,39.98,2008/02/13
  11:11:11)',stpoint);

  I have defined a datatype called ndpoint. It works same as contrib/cube
  code (cube datatype).

 Indexes can only be used with WHERE conditions that are of the form
indexed_column  operator  some_expression
 where the operator is one of those belonging to the index's operator
 class.  You haven't told us what operators you put into the operator
 class for this new data type, but in any case the function
 ndpoint_overlap is not one of them.

regards, tom lane

CREATE OR REPLACE FUNCTION ndpoint_overlap(ndpoint, ndpoint) RETURNS
bool AS '$libdir/ndpoint','ndpoint_overlap' LANGUAGE C IMMUTABLE STRICT;

CREATE OPERATOR && (
LEFTARG = ndpoint, RIGHTARG = ndpoint, PROCEDURE = ndpoint_overlap,
COMMUTATOR = '&&',
RESTRICT = areasel, JOIN = areajoinsel
);

CREATE OPERATOR CLASS gist_ndpoint_ops
DEFAULT FOR TYPE ndpoint USING gist AS
OPERATOR 3 &&, ..

One thing I am not able to understand: if I use the ndpoint_overlap function
it goes for a seq scan every time, but if I use the && operator it uses an
index scan. Why is that?
Look below for their explain analyze statement

1. explain analyze select * from vehicle_stindex where
ndpoint_overlap('(116.4,39.3,2008/02/11 11:11:11),(117.8,39.98,2008/02/13
11:11:11)',stpoint);
                                 QUERY PLAN
--
 Seq Scan on vehicle_stindex  (cost=100.00..1050870.86
rows=698823 width=66) (actual time=599.300..599.300 rows=0 loops=1)
   Filter: ndpoint_overlap('(116.40,39.30,2008-02-11
11:11:11+05:30),(117.80,39.98,2008-02-13 11:11:11+05:30)'::ndpoint,
stpoint)
 Total runtime: 599.337 ms
(3 rows)

2. explain analyze select * from vehicle_stindex where
'(116.4,39.3,2008/02/11 11:11:11),(117.8,39.98,2008/02/13 11:11:11)' &&
stpoint;
                                 QUERY PLAN
-
 Index Scan using stindex on vehicle_stindex  (cost=0.00..58542.00
rows=10482 width=66) (actual time=0.866..0.866 rows=0 loops=1)
   Index Cond: ('(116.40,39.30,2008-02-11
11:11:11+05:30),(117.80,39.98,2008-02-13 11:11:11+05:30)'::ndpoint && stpoint)
 Total runtime: 0.916 ms
(3 rows)

Why these is happening?


[GENERAL] disable seqscan

2011-05-23 Thread Nick Raj
Hi,
I have built an index. When I execute the query, it gives the result by
sequential scan, not by using my index.
I have already run vacuum analyze to collect some statistics regarding the
table.

Maybe the sequential scan gives a faster execution time than my index, but
I want to know how much time it would take with my index.
For that, I have set enable_seqscan=off in postgresql.conf, but it still
goes through a sequential scan. I even tried to set it for a particular
session, with set enable_seqscan=off in the psql terminal.
It again goes by sequential scan.

Does anyone have an idea how to force postgres to use an index scan?

Thanks
Nick


Re: [GENERAL] disable seqscan

2011-05-23 Thread Nick Raj
On Mon, May 23, 2011 at 5:44 PM, Andreas Kretschmer 
akretsch...@spamfence.net wrote:

 Andrew Sullivan a...@crankycanuck.ca wrote:

  On Mon, May 23, 2011 at 05:31:04PM +0530, Nick Raj wrote:
   Hi,
   I have built an index. When I execute the query, it gives the result
 by sequential scan, not by using my index.
 
   For that, I have set enable_seqscan=off in postgresql.conf, but it
 still goes through a sequential scan.
 
  It sounds like your index can't actually be used to satisfy your
  query.  Without seeing the table definition, index definition, and
  query, however, it's pretty hard to give you a real answer.

 ... and the output produced by

 explain analyse <insert your query>

 Explain analyze of my query
 explain analyze select * from vehicle_stindex where
 ndpoint_overlap('(116.4,39.3,2008/02/11 11:11:11),(117.8,39.98,2008/02/13
 11:11:11)',stpoint);
                                 QUERY PLAN

 --
  Seq Scan on vehicle_stindex  (cost=100.00..1050870.86
 rows=698823 width=66) (actual time=3285.106..3285.106 rows=0 loops=1)
Filter: ndpoint_overlap('(116.40,39.30,2008-02-11
 11:11:11+05:30),(117.80,39.98,2008-02-13 11:11:11+05:30)'::ndpoint,
 stpoint)
  Total runtime: 3285.153 ms
 (3 rows)


 Table Defination

 Table public.vehicle_stindex
  Column  |  Type   | Modifiers
 -+-+---
  regno   | text|
  stpoint | ndpoint |
 Indexes:
 stindex gist (stpoint)

 It has 2099192 tuples.


 Index defination
 create index stindex on vehicle_stindex using gist(stpoint).


 I have defined a datatype called ndpoint. It works the same as the
 contrib/cube code (cube datatype).
 The query is working fine, I mean no error from the query or my datatype;
 all give the right results.

If there is anything more I should mention, tell me.

Nick


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] Toast datum

2011-05-16 Thread Nick Raj
Hi,

#define DatumGetNDBOX(x)    ((NDBOX*)DatumGetPointer(x))
#define PG_GETARG_NDBOX(x)  DatumGetNDBOX( PG_DETOAST_DATUM(PG_GETARG_DATUM(x)) )

Now I have to define
#define NDBOXGetDatum(x)    PointerGetDatum(x)

Is there any need to replace this with some toastable variant, or is it
sufficient?
If something toastable is required, what is that macro?

Thanks


[GENERAL] arguments are not toastable

2011-05-16 Thread Nick Raj
Hi,
I have defined some functions and also used the NDBOX structure, which has
variable length.

typedef struct NDBOX
{
int32vl_len_;/* varlena length */
unsigned int dim;
doublex[1];
} NDBOX;

When I call my function, it gives NDBOX as null.
On debugging, I found out that FunctionCallInvoke invokes the fmgr_oldstyle
function for getting the argument:

if (fnextra->arg_toastable[i])   // this returns false, not able to get arguments
    fcinfo->arg[i] =
        PointerGetDatum(PG_DETOAST_DATUM(fcinfo->arg[i]));
}

Why is it going into fmgr_oldstyle if the arguments are not toasted? How do I
make the arguments toastable? And even my table's pg_class.reltoastrelid entry
is zero.


Thanks


Re: [GENERAL] Debug Contrib/cube code

2011-05-13 Thread Nick Raj
Hi,
I am able to debug postgres with --enable-debug, but a breakpoint in cube is
not working. Can you elaborate more on your second point?
I also tried changing the makefile of contrib/cube to add -g (debug), but it
gives an error.
Any idea is welcome.

ps: sorry for late reply

Thanks




On Thu, May 12, 2011 at 6:42 AM, Joshua Tolley eggyk...@gmail.com wrote:

 On Fri, May 06, 2011 at 10:43:23AM +0530, Nick Raj wrote:
  Hi,
  I am using postgresql-8.4.6. I want to debug the contrib/cube code. Can
 we
  able to debug that cube code?  Because there is no .configure  file to
  enable debug. Is there is any way to change make file to enable debug?

 If your postgres build uses --enable-debug, cube should use it as well. It
 determines this either through pg_config, or by referring to makefiles in
 the
 postgres source directory contrib/cube lives in.

 --
 Joshua Tolley / eggyknap
 End Point Corporation
 http://www.endpoint.com

 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.9 (GNU/Linux)

 iEYEARECAAYFAk3LNAsACgkQRiRfCGf1UMMtUQCeJHmpy5PrHX8UG48sYe3EHlmQ
 SdEAnA9WCw5vFcklpCrJYDVc/yqo9FtH
 =FKXd
 -END PGP SIGNATURE-




[GENERAL] Debug Contrib/cube code

2011-05-05 Thread Nick Raj
Hi,
I am using postgresql-8.4.6. I want to debug the contrib/cube code. Are we
able to debug that cube code? Because there is no ./configure file there to
enable debug. Is there any way to change the makefile to enable debug?

Thanks
Nick


[GENERAL] Defining input function for new datatype

2011-04-21 Thread Nick Raj
Hi,
I am defining a new data type called mpoint
i.e.
typedef struct mpoint
{
Point p;
Timestamp t;
} mpoint;

For defining the input/output functions:

1  Datum mpoint_in(PG_FUNCTION_ARGS)
2  {
3
4      mpoint *result;
5      char *pnt = (char *) malloc(20);
6      char *ts  = (char *) malloc(20);
7      result = (mpoint *) palloc(sizeof(mpoint));
8      char *st = PG_GETARG_CSTRING(0);
9      mpoint_decode(st, pnt, ts);
// st breaks down into pnt that corresponds to Point and ts that corresponds to Timestamp
10
11     result->p = point_in(PointerGetDatum(pnt));    // point_in (input function for point that assigns x, y into point)
12     result->t = timestamp_in(PointerGetDatum(ts)); // similar for timestamp
13
14     PG_RETURN_MPOINT_P(result);
15 }

line no 11 warning: passing argument 1 of ‘point_in’ makes pointer from
integer without a cast
 ../../../include/utils/geo_decls.h:191: note: expected
‘FunctionCallInfo’ but argument is of type ‘unsigned int’
line no 11 error: incompatible types when assigning to type ‘Point’ from
type ‘Datum’
line no 12 warning: passing argument 1 of ‘timestamp_in’ makes pointer from
integer without a cast
 ../../../include/utils/timestamp.h:205: note: expected
‘FunctionCallInfo’ but argument is of type ‘unsigned int’

Can anybody figure out what kind of mistake I am making?
Also, why is it related to 'FunctionCallInfo'?

Thanks
Nick


Re: [GENERAL] Typecast

2011-04-18 Thread Nick Raj
Thanks dude

On Mon, Apr 18, 2011 at 2:25 PM, Chetan Suttraway 
chetan.suttra...@enterprisedb.com wrote:



 On Fri, Apr 15, 2011 at 10:29 PM, Nick Raj nickrajj...@gmail.com wrote:

 Hi,
 Can anybody tell me how to typecast data type Point into Datum?

 Thanks
 Nick


 Assuming you are referring to C code,

 Point somepoint;
 Datum result;

 result = PointPGetDatum(&somepoint);  /* PointPGetDatum expects a Point* */

 You can also check out src/backend/utils/adt/geo_ops.c which has code for
 conversion between
 datum and point and vice-versa.



 --
 Regards,
 Chetan Suttraway
 EnterpriseDB (http://www.enterprisedb.com/), The Enterprise PostgreSQL
 company.






[GENERAL] Typecast

2011-04-15 Thread Nick Raj
Hi,
Can anybody tell me how to typecast data type Point into Datum?

Thanks
Nick


[GENERAL] Global Variables in plpgsql

2011-04-11 Thread Nick Raj
Hi,
Does anyone know how to define a global variable in plpgsql?
Thanks

Regards,
Raj
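
plpgsql has no true global variables; one common workaround, sketched here, is a custom configuration setting read and written via current_setting()/set_config() (on releases before 9.2 the prefix must be declared in custom_variable_classes in postgresql.conf):

```
SELECT set_config('myapp.counter', '42', false);  -- false = lasts for the session
SELECT current_setting('myapp.counter');          -- returns '42'
```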


[GENERAL] Integrating New Data Type

2011-04-05 Thread Nick Raj
Hi all,
I have defined a new data type. I have defined in and out function for that
data type.
But I want to know how to integrate this data type with postgres (how does
postgres compile my code or know about my datatype)?
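
A sketch of the usual recipe, with hypothetical names: compile the C code into a shared library, then declare the I/O functions and tie them together with CREATE TYPE (PostgreSQL creates a "shell" type at the first CREATE FUNCTION, so the forward references are allowed):

```
CREATE FUNCTION mpoint_in(cstring) RETURNS mpoint
  AS '$libdir/mpoint' LANGUAGE C IMMUTABLE STRICT;
CREATE FUNCTION mpoint_out(mpoint) RETURNS cstring
  AS '$libdir/mpoint' LANGUAGE C IMMUTABLE STRICT;
CREATE TYPE mpoint (
  input  = mpoint_in,
  output = mpoint_out,
  internallength = 24   -- assumption: sizeof(Point) + sizeof(Timestamp)
);
```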

Thanks,
Nirmesh


[GENERAL] Understanding Datum

2011-03-23 Thread Nick Raj
Hi,
I am trying to understand the postgres code. In the code, I just want to see
what values are passing through the variables.
Can you please tell me, if the variable is of type Datum, how to print
its value? Because I don't know the variable v's type.

And also, what is the structure of Datum?

Thanks,
Raj


Re: [GENERAL] Understanding Datum

2011-03-23 Thread Nick Raj
Hi,
In postgres, typedef uintptr_t Datum
A Datum gets its value from PG_GETARG_POINTER(1);
but now the problem is: how would I know the type of PG_GETARG_POINTER(1)
(postgres passes this argument internally) to figure out the datum type?

Can you tell me the detailed structure of Datum, so I can print the value?
How do I find out what type of pointer argument PG_GETARG_POINTER(1) is?

Thanks,
Nirmesh

On Wed, Mar 23, 2011 at 11:40 PM, Radosław Smogura m...@smogura.eu wrote:

 Nick Raj nickrajj...@gmail.com Wednesday 23 March 2011 18:45:41
  Hi,
  I am understanding the postgres code. In code, i just want to see what
 are
  values that are passing through the variables?
  Can you please tell me if the variable is of type Datum, then how to
 print
  its value? Because i dont the variable v type.
 
  And also what the structure of Datum?
 
  Thanks,
  Raj

 The structure is explained in one of the headers; generally a Datum is a
 pointer. It points to memory containing first a four-byte integer describing
 the size of the data in the datum (use a macro to extract this), and then
 it's followed by bytes containing the data.

 Actually almost each object is represented by structure like this
 struct something {
int4 size; //Required
 //  Here put what you want
 }
 see headers.

 If you want to present data from datum you need to 1) check what type of
 data
 datum has (datum doesn't contain this) 2) Find datum representation for
 this
 type.

 Regards,
 Radek



Re: [GENERAL] Understanding Datum

2011-03-23 Thread Nick Raj
If a Datum contains only the value (no type-specific info), then suppose I
want to print the value of the Datum V (already defined in Postgres) --
then what goes in printf("%??", V);?

Because V is assigned from PG_GETARG_POINTER(1),
I don't have the Datum's type information.

How do I print the value of a Datum in Postgres?


On Thu, Mar 24, 2011 at 2:35 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Nick Raj nickrajj...@gmail.com writes:
  In Postgres, typedef uintptr_t Datum.
  The Datum gets its value from PG_GETARG_POINTER(1);
  but the problem now is: how would I know the type behind PG_GETARG_POINTER(1)
  (Postgres passes this argument internally) to figure out the datum's type?

 Datum does not carry any type information, only a value.  Functions are
 typically coded to know their input types a priori.  If you want to
 write code that is not type-specific then you'd better be passing around
 type OIDs as well as values.

regards, tom lane




[GENERAL] OT: Oleg Bartunov in Himalaya...

2011-03-16 Thread Nick Rudnick

Dear all,

please forgive me, but I am so impressed that Oleg, besides his splendid 
work on TSearch, gives us such unbelievable pictures:


http://www.dailymail.co.uk/sciencetech/article-1366794/Rainbow-cloud-towers-Mount-Everest.html

I stumbled upon this completely by accident, and can't resist the 
reflex to post this here, wow...


Please excuse, and all the best, Nick



[GENERAL] Composite index structure

2011-03-06 Thread Nick Raj
Hi all,

I want to construct a composite index structure, i.e. a combination of
GiST and B-tree.
What I am thinking is to first create an R-tree structure that points
to another B-tree structure.
For example, suppose I want to find vehicles between 2 and 4 pm on 14/2/2011
on road X.

I am thinking of creating an R-tree structure for the road network and then a
B-tree for time. To reach road X I use the R-tree, and from there the B-tree
begins, i.e. a leaf node of the R-tree contains the pointer to the root node
of the B-tree (in this way I have all times belonging to road X).

My question is: how do I implement this composite index structure in
Postgres?

Suppose I create a mygist index -- do I then have to write my own
operator class?
Or
can I use the GiST index as it is and the B-tree as it is? I mean their
operator classes and their GiST methods -- but how do I establish the linkage
between them?

Any idea ??

Thanks
Raj
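(One alternative to a hand-built composite structure, assuming the btree_gist
contrib module is available, is a single multicolumn GiST index over the
geometry and the timestamp; table and column names below are hypothetical:)

CREATE EXTENSION btree_gist;  -- 9.1+; on earlier releases run the contrib SQL script

CREATE INDEX vehicles_geom_time_idx
    ON vehicles USING gist (geom, observed_at);

-- one index scan can then filter on both the road geometry and the time range
SELECT *
FROM vehicles
WHERE geom && (SELECT geom FROM roads WHERE name = 'X')
  AND observed_at BETWEEN '2011-02-14 14:00' AND '2011-02-14 16:00';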


[GENERAL] data type

2011-03-02 Thread Nick Raj
Hi,
I am writing a function in Postgres PL/pgSQL.

My function is called as St_ABC((select obj_geom from XYZ), (select
boundary_geom from boundary)).
I have a table XYZ with 20,000 tuples, and in boundary I have only one
geometry.

In Postgres, ST_Intersects(obj_geom, boundary_geom) checks each obj_geom
against boundary_geom and returns true/false -- it returns true/false 20,000
times.
I want to write a function that returns only one true/false according to my
calculation.

So: create or replace function ST_ABC(?, geometry) returns boolean

Which data type should be used in the above function (in place of ?) so that
it can collect more than one row (20,000)?

Thanks
Raj
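(One way to do this, sketched with hypothetical names and an example decision
rule, is to aggregate the 20,000 geometries into an array with array_agg and
let the function take geometry[]; FOREACH needs 9.1 -- on older releases
iterate with array subscripts instead:)

CREATE OR REPLACE FUNCTION st_abc(geoms geometry[], boundary geometry)
RETURNS boolean AS $$
DECLARE
  g geometry;
BEGIN
  FOREACH g IN ARRAY geoms LOOP
    IF NOT ST_Intersects(g, boundary) THEN
      RETURN false;  -- one counterexample decides the overall answer
    END IF;
  END LOOP;
  RETURN true;
END;
$$ LANGUAGE plpgsql;

SELECT st_abc(array_agg(obj_geom),
              (SELECT boundary_geom FROM boundary))
FROM xyz;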


[GENERAL] Are there any projects interested in object functionality? (+ rule bases)

2011-01-21 Thread Nick Rudnick

Dear all,

for the sake of academic teaching, a colleague asked me to what extent 
PostgreSQL supports object functionality these days.


I am afraid my web research was not very fruitful for him; the impression 
is that hardly anybody is working on PostgreSQL object 
functionality -- have ORM mappers grown so strong?


The docs report that the SQL/OLB ISO/IEC 9075-10 part of the SQL 
standard has no implementation yet.


So I'd like to place my questions here:

* are there any people / projects known which are interested in ORDBMS / 
OODBMS usage of PostgreSQL? Strict SQL standard conformance is less 
important than the possibility to provide instructive and impressive 
examples to students.


* are there any people / projects known which are interested in 
extending PostgreSQL at a higher level (plpgsql, creating operators, 
etc.) for the sake of ORDBMS / OODBMS functionality?


* are there any people / projects known which are interested in 
extending PostgreSQL on the level of developing C code for the sake of 
ORDBMS / OODBMS functionality?


* to what extent does the backend support such efforts -- would it do fine, 
or is it rather to be expected that ORDBMS / OODBMS driven queries 
would lead to disastrous performance?


* are there any people / projects known which are interested in using 
the rule (?trigger?) system of PostgreSQL (maybe with extensions) to 
achieve some kind of rule-base / datalog-type inference engine? To what 
extent does the backend constrain this with regard to performance?


Thanks a lot in advance,

Nick




Re: [GENERAL] Are there any projects interested in object functionality? (+ rule bases)

2011-01-21 Thread Nick Rudnick

Hi Andy,

to a considerable extent I agree with you -- but the motivation is precisely 
not typical business apps, but *academic teaching* needing a 
demonstration platform for *non-standard* apps -- these guys are a DB 
chair, and I was optimistic there might be some projects here which 
might allow them to use PostgreSQL for that purpose.


So I mean OO in a very broad sense.

All the best,

Nick

On 01/21/2011 04:10 PM, Andy Colson wrote:


Short answer: no.

Here are some counter questions for you:

Have you ever seen any actual real world usage of OORDBMS?

Are there any products (good, useful products, not just academic 
playthings) that support OORDBMS?


Bonus: If there is more than one product, do they share a common query 
language?


You do realize that ORM sucks, right?

Strict SQL standard conformance is less important than the 
possibility to provide instructive and impressive examples to students.


Well!  As long as it's impressive!  Who cares about anything else!


I've seen the buzzword OODBMS for as long as OOP, and while OOP came 
and went, OODBMS never amounted to anything.  Let it go.


If anything, OODBMS transformed into webservices.  There is your 
common query language.  JSON over HTTP!


OOP in code is easily understandable.  OOData?  It doesn't even make 
sense.  OOP in code means a container to hold your common data and 
code together.  In PG you can use a Schema to do the same thing.  OOP 
needs polymorphism.  How does that even make sense with data?  (It's a 
double rainbow) WHAT DOES IT EVEN MEAN?!


Academia saw OOP revolutionize code, and I think they wanted something 
to revolutionize data as well.  We have a set of rules and procedures 
for developing code... and those don't apply to data.  (There is a 
tiny little gray area, however, when you get to stored procedures, 
which are code, but don't let it fool you, it's data).


In fact, what if I told you:  Code is just data.

There, whew!  I spent my existentialism for the month :-)

-Andy






Re: [GENERAL] Are there any projects interested in object functionality? (+ rule bases)

2011-01-21 Thread Nick Rudnick

Hi Jeff,


this is worse news than I expected -- I had hoped that at least somebody 
would be playing around with this...


Of course there is a price to pay for OO functionality -- but is there 
evidence that, given I tweak the frontend in such a direction, 
PostgreSQL, due to its backend architecture, will really be significantly 
slower than OODBMS/ORDBMS systems more specialized for that purpose (the 
same with rules)? Or is it worth giving it a try?


This would be my last resort,

Nick



Although postgres at one time had ORDBMS as a goal, I haven't seen any
real interest or work in that area for a long time. The OO features are
primarily:

1. OIDs -- hidden columns with auto-generated number.

At one time, OIDs were a part of every table,

;-)) The time when I began using PostgreSQL...

now they must be specified
explicitly. They still remain on many of the system catalogs, but not
all. Mostly, they are now just an implementation detail that people only
notice when looking at the catalogs (and I am personally annoyed that
they are hidden, because when looking at the catalogs usually you want
to see the OIDs).

2. Inheritance

This feature is now used almost exclusively for physical partitioning
rather than logical design.

Nick, I think the problem with ORDBMS is that they essentially introduce
pointers in the data, and that adds a lot of complexity.

For instance, the relational algebra has nice closure properties. If you
join two relations, you get another relation.

But what if you introduce pointers? Well, then each table might start
out with OIDs, but then when you join with another table and do a GROUP
BY, you lose any meaningful OIDs. So what do you have? Something else.
Therefore, you've lost closure.

Now, maybe there is a good reason to sacrifice closure and other nice
properties of the relational model. But there is a perception among many
people (like Andy) that the case has not been made.

Regards,
Jeff Davis








[GENERAL] Thoughts on a surrogate key lookup function?

2010-11-06 Thread Nick
Are there any existing trigger functions (preferably C) that could
retrieve a missing value for a compound foreign key on insert or
update? If this overall sounds like a really bad idea, please let me
know as well. This functionality could really speed my project up
though.

For example,

CREATE TABLE email_addresses (id BIGSERIAL, email_address VARCHAR);
ALTER TABLE email_addresses ADD CONSTRAINT _email_addresses_pkey
PRIMARY KEY (id,email_address);
CREATE TABLE users (id BIGSERIAL, email_address_id BIGINT,
email_address VARCHAR);
ALTER TABLE users ADD CONSTRAINT users_fkey_email_address_id FOREIGN
KEY (email_address_id,email_address) REFERENCES
email_addresses(id,email_address) ON UPDATE CASCADE ON DELETE SET
NULL;
CREATE TRIGGER "1-auto_email_address_id" BEFORE INSERT OR UPDATE ON
users FOR EACH ROW EXECUTE PROCEDURE _auto_id('email_address_id');

If I want to insert a new user AND a new email_address, I
assign an email_address_id of NULL or -1.

INSERT INTO users (id, email_address_id, email_address) VALUES
(1,-1,'f...@bar.com') which would do...

SELECT id FROM email_addresses WHERE email_address = 'f...@bar.com'
INTO NEW.email_address_id

If it cannot find a value, it then does...

INSERT INTO email_addresses (email_address) VALUES ('f...@bar.com')
RETURNING id INTO NEW.email_address_id

If I want to insert a new user with an existing email address, I
assign an email_address_id of 0.

INSERT INTO users (id, email_address_id, email_address) VALUES
(2,0,'f...@bar.com') which would...

SELECT id FROM email_addresses WHERE email_address = 'f...@bar.com'
INTO NEW.email_address_id

If it cannot find a value, it will raise an exception.

If I insert or update users and email_address_id is > 0, then it gets
the natural value by id...

INSERT INTO users (id, email_address_id, email_address) VALUES
(3,2,NULL) which will

SELECT email_address FROM email_addresses WHERE id = 2 INTO
NEW.email_address

And if both email_address_id and email_address are NULL then, both
values just get inserted into users as null.

Declaring the surrogate as -1 (select or insert) or 0 (select only) saves
having to look up or create the value before inserting into
users. I've been using a plperl function for this and really like the
results, but I'm wondering if there's a faster way.

The foreign key constraint already scans the email_addresses table for
values, so I'm wondering if there's a way to bootstrap that process -- or
maybe that's too risky? Any thoughts would be greatly appreciated.
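(A rough PL/pgSQL sketch of such a trigger, hard-coded to the email_address
columns for readability -- a generic version would take the column name as an
argument and build these statements with EXECUTE:)

CREATE OR REPLACE FUNCTION _auto_id() RETURNS trigger AS $$
DECLARE
  found_id BIGINT;
BEGIN
  IF NEW.email_address_id IS NULL AND NEW.email_address IS NULL THEN
    RETURN NEW;  -- both null: store the nulls as-is
  END IF;
  IF NEW.email_address_id IS NULL OR NEW.email_address_id <= 0 THEN
    SELECT id INTO found_id
      FROM email_addresses WHERE email_address = NEW.email_address;
    IF NOT FOUND THEN
      IF NEW.email_address_id = 0 THEN
        RAISE EXCEPTION 'no email_addresses row for %', NEW.email_address;
      END IF;
      -- NULL or -1: create the missing row
      INSERT INTO email_addresses (email_address)
        VALUES (NEW.email_address) RETURNING id INTO found_id;
    END IF;
    NEW.email_address_id := found_id;
  ELSE
    -- a real id was supplied: fill in the natural value from it
    SELECT email_address INTO NEW.email_address
      FROM email_addresses WHERE id = NEW.email_address_id;
    IF NOT FOUND THEN
      RAISE EXCEPTION 'no email_addresses row with id %', NEW.email_address_id;
    END IF;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;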



Re: [GENERAL] Thoughts on a surrogate key lookup function?

2010-11-06 Thread Nick
Merlin, thanks for the reply. Yes, using email_addresses was a very
silly example. Maybe the following is a better example...

CREATE TABLE first_names (id INT, first_name VARCHAR);
ALTER TABLE first_names ADD CONSTRAINT first_names_pkey PRIMARY KEY
(id,first_name);
ALTER TABLE first_names ADD CONSTRAINT first_names_unique_first_name
UNIQUE (first_name);

CREATE TABLE last_names (id INT, last_name VARCHAR);
ALTER TABLE last_names ADD CONSTRAINT last_names_pkey PRIMARY KEY
(id,last_name);
ALTER TABLE last_names ADD CONSTRAINT last_names_unique_last_name
UNIQUE (last_name);

CREATE TABLE referrals (id INT, ref_code VARCHAR);
ALTER TABLE referrals ADD CONSTRAINT referrals_pkey PRIMARY KEY
(id,ref_code);
ALTER TABLE referrals ADD CONSTRAINT referrals_unique_ref_code UNIQUE
(ref_code);

CREATE TABLE users (id INT, first_name_id INT, first_name VARCHAR,
last_name_id INT, last_name VARCHAR, ref_code_id INT, ref_code VARCHAR);
ALTER TABLE users ADD CONSTRAINT users_fkey_first_names FOREIGN KEY
(first_name_id,first_name) REFERENCES first_names(id,first_name) ON
UPDATE CASCADE ON DELETE SET NULL;
ALTER TABLE users ADD CONSTRAINT users_fkey_last_names FOREIGN KEY
(last_name_id,last_name) REFERENCES last_names(id,last_name) ON UPDATE
CASCADE ON DELETE SET NULL;
ALTER TABLE users ADD CONSTRAINT users_fkey_referrals FOREIGN KEY
(ref_code_id,ref_code) REFERENCES referrals(id,ref_code) ON UPDATE
CASCADE ON DELETE SET NULL;
CREATE TRIGGER auto_first_name_id BEFORE INSERT OR UPDATE ON users
FOR EACH ROW EXECUTE PROCEDURE _auto_id('first_name_id');
CREATE TRIGGER auto_last_name_id BEFORE INSERT OR UPDATE ON users
FOR EACH ROW EXECUTE PROCEDURE _auto_id('last_name_id');
CREATE TRIGGER auto_ref_code_id BEFORE INSERT OR UPDATE ON users FOR
EACH ROW EXECUTE PROCEDURE _auto_id('ref_code_id');

If I want to insert a new user, first name, and last name, I
give the surrogates a value of NULL or -1. Their referral code must
exist, so I'll give that surrogate a value of 0.
INSERT INTO users (id, first_name_id, first_name, last_name_id,
last_name, ref_code_id, ref_code) VALUES (1,-1,'John',-1,'Doe',
0,'xyz') which would...

SELECT id FROM first_names WHERE first_name = 'John' INTO
NEW.first_name_id
IF NOT FOUND INSERT INTO first_names (first_name) VALUES ('John')
RETURNING id INTO NEW.first_name_id

SELECT id FROM last_names WHERE last_name = 'Doe' INTO
NEW.last_name_id
IF NOT FOUND INSERT INTO last_names (last_name) VALUES ('Doe')
RETURNING id INTO NEW.last_name_id

SELECT id FROM referral_codes WHERE ref_code = 'xyz' INTO
NEW.ref_code_id
IF NOT FOUND raise exception

If I want to insert the new user John Smith, and I already know the
surrogate value for John, and I don't want to add a ref_code, then I can
do...
INSERT INTO users (id, first_name_id, first_name, last_name_id,
last_name, ref_code_id, ref_code) VALUES
(2,1,NULL,-1,'Smith',NULL,NULL) which would...

SELECT first_name FROM first_names WHERE id = 1 INTO NEW.first_name
IF NOT FOUND raise exception

SELECT id FROM last_names WHERE last_name = 'Smith' INTO
NEW.last_name_id
IF NOT FOUND INSERT INTO last_names (last_name) VALUES ('Smith')
RETURNING id INTO NEW.last_name_id

So by adding both the surrogate and natural keys to the users table and
toggling the surrogate on insert with 0 (must exist) or -1 (select or
insert), I can bypass a much more complex insert statement. Is this
frowned upon? I haven't had many issues (and some I've been able to work
around) with this as a plperl trigger, and am pleased with how much
easier it makes my inserts (aside from the execution speed).

-Nick


On Nov 6, 6:28 am, mmonc...@gmail.com (Merlin Moncure) wrote:
 On Fri, Nov 5, 2010 at 5:27 PM, Nick nboutel...@gmail.com wrote:
  Are there any existing trigger functions (preferably C) that could
  retrieve a missing value for a compound foreign key on insert or
  update? If this overall sounds like a really bad idea, please let me
  know as well. This functionality could really speed my project up
  though.

 I think your issues are really SQL issues.  See my comments below:

  For example,

  CREATE TABLE email_addresses (id BIGSERIAL, email_address VARCHAR);
  ALTER TABLE email_addresses ADD CONSTRAINT _email_addresses_pkey
  PRIMARY KEY (id,email_address);

 email addresses should be unique, so this is pretty silly.  You are
 not getting fast lookups on email which is what you need.  You have
 two choices here: keep the surrogate on email_addresses, in which case
 I'd do the tables like this:
 CREATE TABLE email_addresses
 (
   email_address_id BIGSERIAL primary key,
   email_address VARCHAR unique
 );

 create table users
 (
   user_id BIGSERIAL primary key,
   email_address_id BIGINT references email_addresses on delete cascade/set 
 null,
 )

 your insert will look like this (pseudo code):
 select email_address_id from email_addresses where email_address =
 '@foo.com';

 if not found then
   insert into email_addresses(email_address) returning email_address_id;
 else
   insert into users
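(The select-or-insert above can also be collapsed into a single statement on
PostgreSQL 9.5 or later with INSERT ... ON CONFLICT; a sketch against the
table layout above, with a hypothetical address -- the UNION ALL branch picks
up the row when it already exists:)

WITH ins AS (
  INSERT INTO email_addresses (email_address)
  VALUES ('someone@example.com')
  ON CONFLICT (email_address) DO NOTHING
  RETURNING email_address_id
)
SELECT email_address_id FROM ins
UNION ALL
SELECT email_address_id FROM email_addresses
 WHERE email_address = 'someone@example.com'
LIMIT 1;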

Re: [GENERAL] Is it possible to get this result in one query?

2010-10-15 Thread Nick
Thanks Guy, is it possible to get the 3rd column result as an array
instead of a string? -Nick

On Oct 14, 9:27 pm, guyr-...@burntmail.com (Guy Rouillier) wrote:
 Sure:

 select
     t3.id,
     coalesce
        (
        t1.title,
        t2.title,
        t3.title
        ),
     coalesce
        (
        case
           when t1.title is not null
           then 'table_one,'
           else null
        end,
        case
           when t2.title is not null
           then 'table_two,'
           else null
        end,
        ''
        ) || 'table_three'
 from
     table_three t3
     left outer join table_two t2 using (id)
     left outer join table_one t1 using (id)

 On 10/14/2010 8:13 PM, Nick wrote:





  I guess I should mention that I'm basically searching for a way to
  recursively coalesce the title. So I want to search the second table
  and

  table_one (id,title)
  1 | new one

  table_two (id,title)
  2 | new two

  table_three (id,title)
  1 | one
  2 | two
  3 | three

  I'd like an SQL statement that returns...
  1 | new one | [table_one,table_three]
  2 | new two | [table_two,table_three]
  3 | three | [table_three]

  On Oct 14, 4:49 pm, Nick nboutel...@gmail.com wrote:
  Is it possible to get the results of this snip of a function without
  using a function? All tables include an id and title column.

  tables := ARRAY[table_one,table_two,table_three]::VARCHAR;
  CREATE TEMP TABLE final_results (id INTEGER, title VARCHAR, r_types
  VARCHAR[]);
  FOR t IN ARRAY_LOWER(tables,1) .. ARRAY_UPPER(tables,1) LOOP
     FOR r IN EXECUTE 'SELECT id, title FROM ' || tables[t] LOOP
       IF (SELECT TRUE FROM final_results WHERE id = r.id LIMIT 1) THEN
         UPDATE final_results SET r_types =
  array_append(r_types,tables[t]) WHERE id = r.id;
       ELSE
         INSERT INTO final_results (id,title,r_types) VALUES
  (r.id,r.title,ARRAY[tables.t]);
     END LOOP;
  END LOOP;

 --
 Guy Rouillier





Re: [GENERAL] Is it possible to get this result in one query?

2010-10-14 Thread Nick
I guess I should mention that I'm basically searching for a way to
recursively coalesce the title. So I want to search the second table
and

table_one (id,title)
1 | new one

table_two (id,title)
2 | new two

table_three (id,title)
1 | one
2 | two
3 | three

I'd like an SQL statement that returns...
1 | new one | [table_one,table_three]
2 | new two | [table_two,table_three]
3 | three | [table_three]


On Oct 14, 4:49 pm, Nick nboutel...@gmail.com wrote:
 Is it possible to get the results of this snip of a function without
 using a function? All tables include an id and title column.

 tables := ARRAY[table_one,table_two,table_three]::VARCHAR;
 CREATE TEMP TABLE final_results (id INTEGER, title VARCHAR, r_types
 VARCHAR[]);
 FOR t IN ARRAY_LOWER(tables,1) .. ARRAY_UPPER(tables,1) LOOP
   FOR r IN EXECUTE 'SELECT id, title FROM ' || tables[t] LOOP
     IF (SELECT TRUE FROM final_results WHERE id = r.id LIMIT 1) THEN
       UPDATE final_results SET r_types =
 array_append(r_types,tables[t]) WHERE id = r.id;
     ELSE
       INSERT INTO final_results (id,title,r_types) VALUES
 (r.id,r.title,ARRAY[tables.t]);
   END LOOP;
 END LOOP;




[GENERAL] Is it possible to get this result in one query?

2010-10-14 Thread Nick
Is it possible to get the results of this snip of a function without
using a function? All tables include an id and title column.

tables := ARRAY['table_one','table_two','table_three']::VARCHAR[];
CREATE TEMP TABLE final_results (id INTEGER, title VARCHAR, r_types
VARCHAR[]);
FOR t IN ARRAY_LOWER(tables,1) .. ARRAY_UPPER(tables,1) LOOP
  FOR r IN EXECUTE 'SELECT id, title FROM ' || tables[t] LOOP
    IF (SELECT TRUE FROM final_results WHERE id = r.id LIMIT 1) THEN
      UPDATE final_results SET r_types =
        array_append(r_types, tables[t]) WHERE id = r.id;
    ELSE
      INSERT INTO final_results (id,title,r_types) VALUES
        (r.id, r.title, ARRAY[tables[t]]);
    END IF;
  END LOOP;
END LOOP;



Re: [GENERAL] Is it possible to get this result in one query?

2010-10-14 Thread Nick
Found a solution for what I need. Please let me know if you know of
something better/faster. -Nick

CREATE AGGREGATE array_accum (anyelement) (
sfunc = array_append,
stype = anyarray,
initcond = '{}'
);
SELECT id, title, array_accum(t) AS ts FROM (
  SELECT 'table_one' AS t, id, title FROM table_one
  UNION ALL
  SELECT 'table_two' AS t, b.id, COALESCE(a.title,b.title,c.title) AS
title FROM table_two b
  LEFT JOIN table_one a ON a.id = b.id
  LEFT JOIN table_three c ON c.id = b.id
  UNION ALL
  SELECT 'table_three' AS t, c.id, COALESCE(a.title,b.title,c.title)
AS title FROM table_three c
  LEFT JOIN table_one a ON a.id = c.id
  LEFT JOIN table_two b ON b.id = c.id
) x GROUP BY id, title;

On Oct 14, 5:13 pm, Nick nboutel...@gmail.com wrote:
 I guess I should mention that I'm basically searching for a way to
 recursively coalesce the title. So I want to search the second table
 and

 table_one (id,title)
 1 | new one

 table_two (id,title)
 2 | new two

 table_three (id,title)
 1 | one
 2 | two
 3 | three

 I'd like an SQL statement that returns...
 1 | new one | [table_one,table_three]
 2 | new two | [table_two,table_three]
 3 | three | [table_three]

 On Oct 14, 4:49 pm, Nick nboutel...@gmail.com wrote:



  Is it possible to get the results of this snip of a function without
  using a function? All tables include an id and title column.

  tables := ARRAY[table_one,table_two,table_three]::VARCHAR;
  CREATE TEMP TABLE final_results (id INTEGER, title VARCHAR, r_types
  VARCHAR[]);
  FOR t IN ARRAY_LOWER(tables,1) .. ARRAY_UPPER(tables,1) LOOP
    FOR r IN EXECUTE 'SELECT id, title FROM ' || tables[t] LOOP
      IF (SELECT TRUE FROM final_results WHERE id = r.id LIMIT 1) THEN
        UPDATE final_results SET r_types =
  array_append(r_types,tables[t]) WHERE id = r.id;
      ELSE
        INSERT INTO final_results (id,title,r_types) VALUES
  (r.id,r.title,ARRAY[tables.t]);
    END LOOP;
  END LOOP;

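(On 8.4 and later the hand-rolled array_accum aggregate above isn't strictly
needed; the built-in array_agg should give the same result:)

SELECT id, title, array_agg(t) AS ts
FROM ( ... the same UNION ALL subquery as above ... ) x
GROUP BY id, title;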




Re: [GENERAL] Is there a bug in FOR i IN 1..10 LOOP (8.4.4)?

2010-09-12 Thread Nick
Whoops, figured it out. Just needed to declare num_var := '';

On Sep 11, 10:45 pm, Nick nboutel...@gmail.com wrote:
 CREATE FUNCTION test() RETURNS text
     LANGUAGE plpgsql
     AS $$DECLARE
   num_var TEXT;
 BEGIN
   FOR i IN 1..10 LOOP
     num_var := num_var || ',' || i;
   END LOOP;
   RETURN num_var;
 END;$$;

 SELECT test();

 returns NULL
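 (The fix, spelled out: initialize the variable in the DECLARE section,
 since NULL || anything yields NULL, which is why the original returned NULL:)

 CREATE OR REPLACE FUNCTION test() RETURNS text
     LANGUAGE plpgsql
     AS $$DECLARE
   num_var TEXT := '';  -- initialized, so the concatenation never sees NULL
 BEGIN
   FOR i IN 1..10 LOOP
     num_var := num_var || ',' || i;
   END LOOP;
   RETURN num_var;
 END;$$;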




[GENERAL] Is there a bug in FOR i IN 1..10 LOOP (8.4.4)?

2010-09-12 Thread Nick
CREATE FUNCTION test() RETURNS text
LANGUAGE plpgsql
AS $$DECLARE
  num_var TEXT;
BEGIN
  FOR i IN 1..10 LOOP
num_var := num_var || ',' || i;
  END LOOP;
  RETURN num_var;
END;$$;

SELECT test();

returns NULL



[GENERAL] Regular expression that splits CSV string into table

2010-09-10 Thread Nick
What would be the regexp_split_to_table pattern that splits a comma-
separated string into a table? I'm having trouble when a quoted value
contains commas, or when there are commas at the beginning or end.

String
',one,two,''three,four'',five,six,'

Should return
,one
two
three,four
five
six,



Re: [GENERAL] Regular expression that splits CSV string into table

2010-09-10 Thread Nick
Yes, that gets down to the root of my question... what is the
expression that would properly split the values? -Nick

On Sep 10, 4:43 pm, brick...@gmail.com (bricklen) wrote:
 On Fri, Sep 10, 2010 at 3:43 PM, Nick nboutel...@gmail.com wrote:
  What would be the regexp_split_to_table pattern that splits a comma
  separated string into a table? Im having trouble when a string
  contains commas or there are commas at the beginning or end

  String
  ',one,two,''three,four'',five,six,'

  Should return
  ,one
  two
  three,four
  five
  six,

 You can roll your own function, or try regexp_split_to_table, though
 you will probably have to use a different delimiter if you don't want
 it to break on three,four.
 Eg.

 select regexp_split_to_table('one,two,three,four,five',',');
  regexp_split_to_table
 ---
  one
  two
  three
  four
  five





Re: [GENERAL] Regular expression that splits CSV string into table

2010-09-10 Thread Nick
I don't mind if the commas are at the beginning and end; I'm more
concerned about three,four staying in one row because it's surrounded
by quotes. -Nick

On Sep 10, 6:03 pm, alvhe...@commandprompt.com (Alvaro Herrera) wrote:
 Excerpts from Nick's message of Fri Sep 10 20:36:24 -0400 2010:

  Yes, that gets down to the root of my question... what is the
  expression that would properly split the values? -Nick

 The only idea that comes to mind right now is to remove them before
 processing the rest of the string, and put them back to the first and
 last element if they were removed.

 --
 Álvaro Herrera alvhe...@commandprompt.com
 The PostgreSQL Company - Command Prompt, Inc.
 PostgreSQL Replication, Consulting, Custom Development, 24x7 support

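(One expression that appears to handle this case -- splitting only on commas
followed by an even number of remaining single quotes, via an ARE lookahead --
is sketched below; it keeps the quote characters and the empty leading/trailing
fields, so some post-processing is still needed:)

SELECT regexp_split_to_table(
  ',one,two,''three,four'',five,six,',
  ',(?=(?:[^'']*''[^'']*'')*[^'']*$)'
);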




Re: [GENERAL] Dynamically update NEW columns in plpgsql trigger

2010-09-09 Thread Nick
On Sep 9, 2:21 am, dmit...@gmail.com (Dmitriy Igrishin) wrote:
 Hey Nick,

 You may do this more easily in PL/pgSQL with the hstore module.
 Please refer to http://www.postgresql.org/docs/9.0/static/hstore.html
 and look at the hstore(record) and populate_record(record, hstore)
 functions. Hope this helps.

 And I think it will be faster than your solution.

 Regards,
 Dmitriy

 2010/9/9 Nick nboutel...@gmail.com



  I need to dynamically update NEW columns. I've been inserting the NEW
  values into a temp table, updating them, then passing the temp-table
  values back to NEW (is there a better way?). I've had success with this
  method unless there is a null value...

  EXECUTE 'CREATE TEMP TABLE new AS SELECT $1.*' USING NEW;
  EXECUTE 'UPDATE new SET ' || dyn_col_name || ' = 1';
  EXECUTE 'SELECT * FROM new' INTO NEW;
  EXECUTE 'DROP TABLE new';

  This last line...
  EXECUTE 'SELECT $1.' || dyn_col_name INTO val USING NEW;

  gives the ERROR: could not identify column col_one in record data
  type.

  However RAISE EXCEPTION '%',NEW.col_one;
  returns 1 correctly.

  If col_one does not start out as a null value, then everything
  works. Why does the passing from temp table back to NEW lose the USING
  functionality?


Thanks Dmitriy,

I'm guessing that hstore(record) is not compatible with 8.4.4? I get an
error: cannot cast type record to hstore.

-Nick
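(For reference, on 9.0 with hstore installed the temp-table dance can be
dropped entirely; inside the trigger function, something like:)

-- dyn_col_name as in the original trigger; '1' is the new value
NEW := populate_record(NEW, hstore(dyn_col_name, '1'));
RETURN NEW;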



[GENERAL] Dynamically update NEW columns in plpgsql trigger

2010-09-08 Thread Nick
I need to dynamically update NEW columns. I've been inserting the NEW
values into a temp table, updating them, then passing the temp-table
values back to NEW (is there a better way?). I've had success with this
method unless there is a null value...

EXECUTE 'CREATE TEMP TABLE new AS SELECT $1.*' USING NEW;
EXECUTE 'UPDATE new SET ' || dyn_col_name || ' = 1';
EXECUTE 'SELECT * FROM new' INTO NEW;
EXECUTE 'DROP TABLE new';

This last line...
EXECUTE 'SELECT $1.' || dyn_col_name INTO val USING NEW;

gives the ERROR: could not identify column col_one in record data
type.

However RAISE EXCEPTION '%',NEW.col_one;
returns 1 correctly.

If col_one does not start out as a null value, then everything
works. Why does the passing from temp table back to NEW lose the USING
functionality?



Re: [GENERAL] On insert duplicate row, return existing key

2010-08-04 Thread Nick
Anyone? Please

On Jul 31, 12:36 pm, Nick nboutel...@gmail.com wrote:
 If I insert a duplicate row into a table, I'd like to return the
 existing key.

 I tried creating a rule for this...

 CREATE RULE no_duplicates AS ON INSERT TO names WHERE EXISTS (SELECT 1
 FROM names WHERE new.name = name) DO INSTEAD SELECT id, name FROM
 names WHERE name = new.name;

 However, I get an error on...

 INSERT INTO names (name) VALUES ('existing') RETURNING *;

 ERROR:  cannot perform INSERT RETURNING on relation names
 HINT:  You need an unconditional ON INSERT DO INSTEAD rule with a
 RETURNING clause.
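(In the absence of a rule that can do this, a PL/pgSQL function works; this is
essentially the retry pattern from the docs' merge example, assuming a unique
constraint on names.name:)

CREATE OR REPLACE FUNCTION insert_name(p_name TEXT) RETURNS BIGINT AS $$
DECLARE
  v_id BIGINT;
BEGIN
  LOOP
    SELECT id INTO v_id FROM names WHERE name = p_name;
    IF FOUND THEN
      RETURN v_id;  -- duplicate: hand back the existing key
    END IF;
    BEGIN
      INSERT INTO names (name) VALUES (p_name) RETURNING id INTO v_id;
      RETURN v_id;
    EXCEPTION WHEN unique_violation THEN
      -- a concurrent insert won the race; loop and pick up its row
    END;
  END LOOP;
END;
$$ LANGUAGE plpgsql;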




[GENERAL] Find users that have ALL categories

2010-06-30 Thread Nick
Is this the most efficient way to write this query? I'd like to get a
list of users that have the categories 1, 2, and 3.

SELECT user_id FROM user_categories WHERE category_id IN (1,2,3) GROUP
BY user_id HAVING COUNT(*) = 3

user_categories (user_id, category_id)
1 | 1
1 | 2
1 | 3
2 | 1
2 | 2
3 | 1
4 | 1
4 | 2
4 | 3

The result should produce 1 and 4.
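(That query looks right as long as each (user_id, category_id) pair appears at
most once; if duplicates can occur, counting distinct categories is the safer
spelling:)

SELECT user_id
FROM user_categories
WHERE category_id IN (1, 2, 3)
GROUP BY user_id
HAVING COUNT(DISTINCT category_id) = 3;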



Re: [GENERAL] Problem with query using ST_Dwithin

2010-01-28 Thread Nick
On Jan 28, 4:32 pm, Nick nick.uebel...@noaa.gov wrote:
 The following queries all work fine,

 select distinct zoa_metar_xml.stn_id, zoa_metar_xml.metar_txt,
 zoa_metar_xml.time, zoa_metar_xml.flt_cat, zoa_metar_xml.cld_cvr,
 zoa_metar_xml.cld_base, zoa_metar_xml.lonlat, zoa_metar_xml.geom from
 zoa_metar_xml;

 select distinct id, kml, type, min_hgt, max_hgt from
 zoa_sectors_basetop where zoa_sectors_basetop.id='$i' and
 (zoa_sectors_basetop.type = 'LO' or zoa_sectors_basetop.type='HILO');

 select distinct zoa_sectors_basetop.id, west_pirep_xml.raw_text,
 west_pirep_xml.cld_cvr, west_pirep_xml.cld_base,
 west_pirep_xml.cld_top, west_pirep_xml.altitude, west_pirep_xml.time,
 west_pirep_xml.acft_type, zoa_sectors_basetop.max_hgt,
 zoa_sectors_basetop.min_hgt, zoa_sectors_basetop.type,
 west_pirep_xml.lonlat from zoa_sectors_basetop, west_pirep_xml, artcc
 where (ST_DWithin(zoa_sectors_basetop.geom, west_pirep_xml.geom, 0)
 and ST_DWithin(artcc.the_geom, zoa_sectors_basetop.geom, 0)) and
 zoa_sectors_basetop.id='$i' and (west_pirep_xml.cld_base = 5000 or
 west_pirep_xml.cld_top = 1) and west_pirep_xml.cld_cvr != 'SKC'
 and (zoa_sectors_basetop.type='LO' or
 zoa_sectors_basetop.type='HILO');

 So I'm not sure why this query is not working,

 select distinct zoa_metar_xml.stn_id, zoa_metar_xml.metar_txt,
 zoa_metar_xml.time, zoa_metar_xml.flt_cat, zoa_metar_xml.cld_cvr,
 zoa_metar_xml.cld_base, zoa_metar_xml.lonlat from zoa_metar_xml,
 zoa_sectors_basetop, artcc where  (ST_DWithin
 (zoa_sectors_basetop.geom, zoa_metar_xml.geom, 0) and ST_DWithin
 (artcc.the_geom, zoa_sectors_basetop.geom, 0)) and
 zoa_sectors_basetop.id='$i' and (zoa_sectors_basetop.type='LO' or
 zoa_sectors_basetop.type='HILO');

 I believe it has something to do with the ST_DWithin portion of the
 query (though these calls work in other queries)... All of these queries are
 executed from a PHP file, which then creates KML files. The first query
 creates the KML file for METARs; this works and the geometry is there, so
 I'm not sure why I can't match these against zoa_sectors_basetop.geom
 or artcc.the_geom, because the last working query also succeeds when
 matching intersecting polygons with zoa_sectors_basetop.geom. So it
 looks like the problem may lie in the geometry in the table
 zoa_metar_xml. But I can't figure out why. Hope this is enough info -- can
 anyone help? Thanks,

 Nick

Figured it out: I just replaced the 0 with .001 -- 0 must no longer work
after the PostgreSQL update.

(ST_DWithin(zoa_sectors_basetop.geom, zoa_metar_xml.geom, 0) and
ST_DWithin
(artcc.the_geom, zoa_sectors_basetop.geom, 0))
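(For a distance of 0, ST_Intersects is the more usual spelling and may
sidestep whatever the .001 is papering over -- untested against this schema:)

ST_Intersects(zoa_sectors_basetop.geom, zoa_metar_xml.geom)
AND ST_Intersects(artcc.the_geom, zoa_sectors_basetop.geom)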



[GENERAL] Problem with query using ST_Dwithin

2010-01-28 Thread Nick
The following queries all work fine,

select distinct zoa_metar_xml.stn_id, zoa_metar_xml.metar_txt,
zoa_metar_xml.time, zoa_metar_xml.flt_cat, zoa_metar_xml.cld_cvr,
zoa_metar_xml.cld_base, zoa_metar_xml.lonlat, zoa_metar_xml.geom from
zoa_metar_xml;

select distinct id, kml, type, min_hgt, max_hgt from
zoa_sectors_basetop where zoa_sectors_basetop.id='$i' and
(zoa_sectors_basetop.type = 'LO' or zoa_sectors_basetop.type='HILO');

select distinct zoa_sectors_basetop.id, west_pirep_xml.raw_text,
west_pirep_xml.cld_cvr, west_pirep_xml.cld_base,
west_pirep_xml.cld_top, west_pirep_xml.altitude, west_pirep_xml.time,
west_pirep_xml.acft_type, zoa_sectors_basetop.max_hgt,
zoa_sectors_basetop.min_hgt, zoa_sectors_basetop.type,
west_pirep_xml.lonlat from zoa_sectors_basetop, west_pirep_xml, artcc
where (ST_DWithin(zoa_sectors_basetop.geom, west_pirep_xml.geom, 0)
and ST_DWithin(artcc.the_geom, zoa_sectors_basetop.geom, 0)) and
zoa_sectors_basetop.id='$i' and (west_pirep_xml.cld_base = 5000 or
west_pirep_xml.cld_top = 1) and west_pirep_xml.cld_cvr != 'SKC'
and (zoa_sectors_basetop.type='LO' or
zoa_sectors_basetop.type='HILO');

So I'm not sure why this query is not working,

select distinct zoa_metar_xml.stn_id, zoa_metar_xml.metar_txt,
zoa_metar_xml.time, zoa_metar_xml.flt_cat, zoa_metar_xml.cld_cvr,
zoa_metar_xml.cld_base, zoa_metar_xml.lonlat from zoa_metar_xml,
zoa_sectors_basetop, artcc where  (ST_DWithin
(zoa_sectors_basetop.geom, zoa_metar_xml.geom, 0) and ST_DWithin
(artcc.the_geom, zoa_sectors_basetop.geom, 0)) and
zoa_sectors_basetop.id='$i' and (zoa_sectors_basetop.type='LO' or
zoa_sectors_basetop.type='HILO');

I believe it has something to do with the ST_DWithin portion of the
query (though these calls work in other queries)... All of these queries are
executed from a PHP file, which then creates KML files. The first query
creates the KML file for METARs; this works and the geometry is there, so
I'm not sure why I can't match these against zoa_sectors_basetop.geom
or artcc.the_geom, because the last working query also succeeds when
matching intersecting polygons with zoa_sectors_basetop.geom. So it
looks like the problem may lie in the geometry in the table
zoa_metar_xml. But I can't figure out why. Hope this is enough info -- can
anyone help? Thanks,

Nick




[GENERAL] Index not used when using a function

2010-01-13 Thread Nick
SELECT * FROM locations WHERE id = 12345 LIMIT 1

uses the primary key (id) index, but...

SELECT * FROM locations WHERE id = get_location_id_from_ip(641923892)
LIMIT 1

does not, and is very slow. Any ideas why? What's weird is that it
works (uses the index) on a previous db, but when I copied everything over
to a new db it doesn't. I've run VACUUM and ANALYZE; does the planner just
need more time to figure out that it needs to use an index?
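(One likely culprit: if get_location_id_from_ip is VOLATILE -- the default for
user-defined functions -- the planner must re-evaluate it per row and cannot
use its result as an index condition. If the function only reads data, marking
it STABLE lets it be evaluated once per scan; the signature here is assumed:)

ALTER FUNCTION get_location_id_from_ip(BIGINT) STABLE;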


