Re: [GENERAL] How to quote the COALESCE function?

2016-03-28 Thread Jerry Sievers
Roman Scherer  writes:

> Hello,
>
> I'm building a DSL in Clojure for SQL and specifically PostgreSQL
> [1]. When building a SQL statement that contains a function call
> I always quote the function name with \" in case the function
> name contains any special characters. Here's an example:
>
>   (select db ['(upper "x")])
>   ;=> ["SELECT \"upper\"(?)" "x"]
>
> This worked fine so far, but today I found a case that doesn't
> work as expected, the COALESCE function.
>
>   (select db ['(coalesce nil 0)])
>   ;=> ["SELECT \"coalesce\"(NULL, 0)"]
>
> Can someone explain to me what's the difference between quoting
> the `upper` and the `coalesce` function? I can execute the
> following statements via psql, and it works as expected:
>
>   SELECT upper ('x');
>   SELECT "upper"('x');
>   SELECT coalesce(NULL, 1);
>
> But as soon as I try this with `coalesce` I get an error:
>
>   SELECT "coalesce"(NULL, 1);


While not a precise answer to your question, it may be of interest to
note that coalesce is *not* a function.

It is a language construct with a function-like syntax.

select distinct proname from pg_proc where proname in ('coalesce', 'lower');
 proname 
---------
 lower
(1 row)

>
>   ERROR:  function coalesce(unknown, integer) does not exist
>   LINE 1: SELECT "coalesce"(NULL, 1);
>                  ^
>   HINT:  No function matches the given name and argument types. You might 
> need to add explicit type casts.
>
> What I found so far is, that the `upper` function can be found in
> the `pg_proc` table but not `coalesce`.
>
>   SELECT proname FROM pg_proc WHERE proname ILIKE 'upper';
>   SELECT proname FROM pg_proc WHERE proname ILIKE 'coalesce';
>
> Does this mean that `coalesce` isn't a classical function and I
> shouldn't quote it? Is it instead a keyword, as described in
> the "Lexical Structure" section of the docs [2]? How can I find
> out which other functions are not meant to be quoted?
>
> I'm aware that I do not need to quote the `coalesce` and `upper`
> functions and I may change my strategy for quoting functions names.
>
> Thanks for your help, Roman.
>
> [1] https://github.com/r0man/sqlingvo
> [2] http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html
>

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to quote the COALESCE function?

2016-03-28 Thread Tom Lane
Roman Scherer  writes:
> Can someone explain to me what's the difference between quoting
> the `upper` and the `coalesce` function?

COALESCE is a keyword.

> What I found so far is, that the `upper` function can be found in
> the `pg_proc` table but not `coalesce`.

Yup.

> Does this mean that `coalesce` isn't a classical function and I
> shouldn't quote it? Is it instead a keyword, as described in
> the "Lexical Structure" section of the docs [2]? How can I find
> out which other functions are not meant to be quoted?

Yes, yes, and you already found one good way: if it doesn't have
a pg_proc entry then it's a special case of some sort or other.

Have you considered only quoting the function name if it actually
needs it, ie, contains special characters?
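That quote-only-when-needed strategy can be sketched roughly like so (a hypothetical helper, not sqlingvo's actual code; the pattern approximates PostgreSQL's rule that unquoted identifiers match [a-z_][a-z0-9_$]* and are folded to lower case):

```python
import re

# Unquoted PostgreSQL identifiers match this pattern and are folded to
# lower case by the lexer, so a lower-case name matching it needs no quotes.
SAFE_IDENT = re.compile(r'^[a-z_][a-z0-9_$]*$')

def quote_ident(name: str) -> str:
    """Quote an identifier only when it actually needs quoting."""
    if SAFE_IDENT.match(name):
        return name  # plain lower-case name: leave unquoted
    # double any embedded double quotes, then wrap
    return '"%s"' % name.replace('"', '""')

print(quote_ident("upper"))      # upper
print(quote_ident("coalesce"))   # coalesce
print(quote_ident("My Func"))    # "My Func"
```

Leaving lower-case names unquoted is exactly what lets COALESCE keep parsing as the keyword construct; note, though, that a name colliding with a fully reserved word would still need special handling.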

regards, tom lane




Re: [GENERAL] Unique values on multiple tables

2016-03-28 Thread John Turner
On Mon, Mar 28, 2016 at 2:32 AM, Sterpu Victor  wrote:



*table1*
> id
> nr - integer
>
> *table2*
> id
> id_table1 - FK in Table 1
> valid_from - timestamp
>
> There must be unique values for:
> - nr - from table1
> and
> - YEAR(MIN(valid_from)) from table 2
>
Just trying to understand your goal here -

Is '*nr*' supposed to be a count of records from *table2* by 'id_table1'
column?
And every single value for '*nr*' within *table1* must be unique - i.e.,
you need a table of unique counts?

> I think I fixed the problem by executing the function AFTER insert or
> update but I'm not sure.
> Until now the execution was before insert or update.
>

That doesn't sound right; I believe you would want to stick with a BEFORE
trigger on the DML against your presumable 'fact' table (*table2*),
*especially* if it's intended to enforce constraints.

It would be helpful if you could:

   - clarify/elaborate on how these 2 tables are supposed to relate to each
   other
   - post your actual trigger
   - provide us with some example inserts/updates

- John


[GENERAL] How to quote the COALESCE function?

2016-03-28 Thread Roman Scherer
Hello,

I'm building a DSL in Clojure for SQL and specifically PostgreSQL
[1]. When building a SQL statement that contains a function call
I always quote the function name with \" in case the function
name contains any special characters. Here's an example:

  (select db ['(upper "x")])
  ;=> ["SELECT \"upper\"(?)" "x"]

This worked fine so far, but today I found a case that doesn't
work as expected, the COALESCE function.

  (select db ['(coalesce nil 0)])
  ;=> ["SELECT \"coalesce\"(NULL, 0)"]

Can someone explain to me what's the difference between quoting
the `upper` and the `coalesce` function? I can execute the
following statements via psql, and it works as expected:

  SELECT upper ('x');
  SELECT "upper"('x');
  SELECT coalesce(NULL, 1);

But as soon as I try this with `coalesce` I get an error:

  SELECT "coalesce"(NULL, 1);

  ERROR:  function coalesce(unknown, integer) does not exist
  LINE 1: SELECT "coalesce"(NULL, 1);
 ^
  HINT:  No function matches the given name and argument types. You might
need to add explicit type casts.

What I found so far is that the `upper` function can be found in
the `pg_proc` table but not `coalesce`.

  SELECT proname FROM pg_proc WHERE proname ILIKE 'upper';
  SELECT proname FROM pg_proc WHERE proname ILIKE 'coalesce';

Does this mean that `coalesce` isn't a classical function and I
shouldn't quote it? Is it instead a keyword, as described in
the "Lexical Structure" section of the docs [2]? How can I find
out which other functions are not meant to be quoted?

I'm aware that I do not need to quote the `coalesce` and `upper`
functions and I may change my strategy for quoting function names.

Thanks for your help, Roman.

[1] https://github.com/r0man/sqlingvo
[2] http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html


Re: [GENERAL] Way to get at parsed trigger 'WHEN' clause expression?

2016-03-28 Thread Igor Neyman
-Original Message-
From: James Robinson [mailto:jlrob...@socialserve.com] 
Sent: Friday, March 25, 2016 11:29 AM
To: Igor Neyman 
Cc: Melvin Davidson ; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Way to get at parsed trigger 'WHEN' clause expression?


> On Mar 25, 2016, at 11:10 AM, Igor Neyman  wrote:
> 
> This doesn’t answer OP question.
> 
> Besides this query gets you not only “WHEN” clause but also whatever follows 
> it, for instance “EXECUTE PROCEDURE…”

Yeah. I was imagining having to do something semantically equivalent, but 
better, using some more subtle regexes. It sounds like folks don't know of 
something directly exposed at the SQL level as opposed to something available 
within the backend C only. When researching what psql's \d does, it was a 
pleasant surprise to see that the backend offers a single function to produce 
the entire trigger representation. Now alas we ended up wanting just a portion 
of it.

> 
> As for “pg_get_expr(pg_node_tree, relation_oid)” – looks like it doesn’t work 
> with pg_trigger, because as a second parameter (Var) it expects relation_oid, 
> and relation could have multiple triggers, so pg_get_expr() wouldn’t know 
> which trigger’s tgqual you want to decompile.
> 

pg_get_expr() can be fed the pg_catalog.pg_trigger.tgqual value, which looks to 
be the column where the WHERE clause gets persisted. And then also pass in the 
oid of the table the trigger is on. But it seems it gets tripped up on how NEW 
and OLD are represented, 'cause those are not just column references.

I'm content with going down 'use pg_get_triggerdef(), then work to strip out 
all of the string contents which does not appear to be the WHEN clause portion' 
for this use case (an in-house web-based schema browser which just got taught 
how to display triggers). Was primarily interested in seeing if there was a 
known way of doing this short of filthy string parsing.
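The string-parsing fallback described above could be sketched like this (a hypothetical helper; it assumes pg_get_triggerdef() deparses the clause between `WHEN (` and `) EXECUTE PROCEDURE` — newer PostgreSQL releases spell the tail `EXECUTE FUNCTION`):

```python
import re

def when_clause(triggerdef: str):
    """Pull just the WHEN condition out of pg_get_triggerdef() output.

    Returns None when the trigger has no WHEN clause.
    """
    m = re.search(r'\bWHEN \((.*)\) EXECUTE (?:PROCEDURE|FUNCTION) ',
                  triggerdef, re.DOTALL)
    return m.group(1) if m else None

defn = ("CREATE TRIGGER audit_trg BEFORE UPDATE ON accounts FOR EACH ROW "
        "WHEN ((old.balance IS DISTINCT FROM new.balance)) "
        "EXECUTE PROCEDURE audit()")
print(when_clause(defn))  # (old.balance IS DISTINCT FROM new.balance)
```

The trigger name and condition are illustrative; a WHEN expression containing the literal text `) EXECUTE PROCEDURE ` would defeat the greedy regex, which is the usual hazard of this kind of parsing.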

I'll post the soln I end up with, just for mail-archive search-fodder 
completeness.

Thanks folks!

---
James Robinson
ja...@jlr-photo.com

_
James,

Instead of pg_catalog, you could use INFORMATION_SCHEMA to get "WHEN" clause:

select action_condition from information_schema.triggers where trigger_name = 
'your_trigger_name';

Regards,
Igor





Re: [GENERAL] Way to get at parsed trigger 'WHEN' clause expression?

2016-03-28 Thread Igor Neyman
Did you mean something like

SELECT substring(pg_get_triggerdef(tg.oid, TRUE) FROM position('WHEN' in 
pg_get_triggerdef(tg.oid, TRUE))) AS when_cond
  FROM pg_trigger tg WHERE tg.tgname = 'your_trigger_name';

--
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you. 


This doesn’t answer OP question.
Besides this query gets you not only “WHEN” clause but also whatever follows 
it, for instance “EXECUTE PROCEDURE…”

As for “pg_get_expr(pg_node_tree, relation_oid)” – looks like it doesn’t work 
with pg_trigger, because as a second parameter (Var) it expects relation_oid, 
and relation could have multiple triggers, so pg_get_expr() wouldn’t know which 
trigger’s tgqual you want to decompile.

Regards,
Igor Neyman


Re: [GENERAL] More correlated (?) index woes

2016-03-28 Thread Geoff Winkless
On 28 March 2016 at 22:01, rob stone  wrote:

> What does:-
>
> DELETE FROM pa
> WHERE pa.field1 IS NULL
> AND pa.sc_id IN (SELECT legs.sc_id FROM legs
> WHERE legs.scdate BETWEEN 20160220 AND 20160222)
>
> give as a cost when you run ANALYZE over it?
>

​Thanks for the suggestion.

It's approximately the same.

 Delete on pa  (cost=1463.31..493321.89 rows=187833 width=12) (actual
time=41539.174..41539.174 rows=0 loops=1)
   ->  Hash Join  (cost=1463.31..493321.89 rows=187833 width=12) (actual
time=41539.172..41539.172 rows=0 loops=1)
 Hash Cond: (pa.sc_id = legs.sc_id)
 ->  Seq Scan on pa  (cost=0.00..480888.83 rows=2899078 width=10)
(actual time=0.010..40866.049 rows=2591264 loops=1)
   Filter: (field1 IS NULL)
   Rows Removed by Filter: 4931412
 ->  Hash  (cost=1321.48..1321.48 rows=11346 width=10) (actual
time=29.481..29.481 rows=20940 loops=1)
   Buckets: 32768 (originally 16384)  Batches: 1 (originally 1)
 Memory Usage: 1156kB
   ->  HashAggregate  (cost=1208.02..1321.48 rows=11346
width=10) (actual time=20.446..25.028 rows=20940 loops=1)
 Group Key: legs.sc_id
 ->  Index Scan using legs_scdate_idx on legs
 (cost=0.43..1171.88 rows=14458 width=10) (actual time=0.025..13.133
rows=21281 loops=1)
   Index Cond: ((scdate >= 20160220) AND (scdate <=
20160222))

I've tried creating a subquery out of the legs dataset in the hope that
that would help but that made no difference either.

Geoff​


Re: [GENERAL] More correlated (?) index woes

2016-03-28 Thread rob stone
On Mon, 2016-03-28 at 20:23 +0100, Geoff Winkless wrote:
> So I accept that when using MIN(sc_id) against scdate it makes
> statistical sense to use the sc_id index for a reasonable percentage
> of the full range of scdate, unless we know in advance that scdate is
> closely correlated to sc_id (because using MIN means we can stop
> immediately we hit a value).
> 
> However I'm now finding a similar problem when using a multi-table
> DELETE, where the same obviously can't apply.
> 
> This query:
> 
> DELETE FROM pa USING legs WHERE legs.scdate BETWEEN 20160220 AND
> 20160222 AND legs.sc_id=pa.sc_id;
> 
> does what one would hope:
> 
>  Delete on pa  (cost=0.99..705406.24 rows=36570 width=12)
>    ->  Nested Loop  (cost=0.99..705406.24 rows=36570 width=12)
>          ->  Index Scan using legs_scdate_idx on legs
>  (cost=0.43..1171.88 rows=14458 width=10)
>                Index Cond: ((scdate >= 20160220) AND (scdate <=
> 20160222))
>          ->  Index Scan using pa_pkey on pa  (cost=0.56..48.33
> rows=38 width=10)
>                Index Cond: (sc_id = legs.sc_id)
> 
> 
> However as soon as I add an extra test for field1 IS NULL, it
> apparently goes insane:
> 
>  Delete on pa  (cost=577260.90..626296.46 rows=23732 width=12)
> (actual time=41870.770..41870.770 rows=0 loops=1)
>    ->  Hash Join  (cost=577260.90..626296.46 rows=23732 width=12)
> (actual time=37886.396..41315.668 rows=44960 loops=1)
>          Hash Cond: (legs.sc_id = pa.sc_id)
>          ->  Index Scan using legs_scdate_idx on legs
>  (cost=0.43..1171.88 rows=14458 width=10) (actual time=0.030..13.667
> rows=21281 loops=1)
>                Index Cond: ((scdate >= 20160220) AND (scdate <=
> 20160222))
>          ->  Hash  (cost=481691.12..481691.12 rows=5497868 width=10)
> (actual time=37805.756..37805.756 rows=4875870 loops=1)
>                Buckets: 131072  Batches: 64  Memory Usage: 4311kB
>                ->  Seq Scan on pa  (cost=0.00..481691.12 rows=5497868
> width=10) (actual time=0.008..35869.304 rows=4875870 loops=1)
>                      Filter: (field1 IS NULL)
>                      Rows Removed by Filter: 2688634
>  Planning time: 0.447 ms
>  Execution time: 41870.832 ms
> 
> Running ANALYZE makes no difference.
> 
> Table pa has 7522676 rows, 4834042 of which have field1 NULL, so it's
> absolutely not reasonable to expect this to be an optimal strategy.
> 
> Any suggestions as to how I can improve this query?
> 
> Thanks :)
> 
> Geoff



What does:-

DELETE FROM pa 
WHERE pa.field1 IS NULL
AND pa.sc_id IN (SELECT legs.sc_id FROM legs
WHERE legs.scdate BETWEEN 20160220 AND 20160222)

give as a cost when you run ANALYZE over it?

HTH

Rob




Re: [GENERAL] Table size for partitioned setup

2016-03-28 Thread Rob Sargent



On 03/28/2016 02:55 PM, Mat Arye wrote:

This will run on EC2 (or other cloud service) machines and on ssds.
Right now runs on m4.4xlarge with 64GiB of ram.
Willing to pay for beefy instances if it means better performance.


On Mon, Mar 28, 2016 at 4:49 PM, Rob Sargent wrote:




On 03/28/2016 02:41 PM, Mat Arye wrote:

Hi All,

I am writing a program that needs time-series-based insert
mostly workload. I need to make the system scaleable with many
thousand of inserts/s. One of the techniques I plan to use is
time-based table partitioning and I am trying to figure out
how large to make my time tables.

Does anybody have any hints on optimal table sizes either in
terms of rows or in terms of size? Any rule of thumbs I can
use for table size in relation to amount of memory on the
machine? Is the size of the index more important than the size
of the table (if queries mostly use indexes)?

Basically, I am asking for pointers about how to think about
this problem and any experiences people have had.

Thanks,
Mat

P.S. I am aware of limits listed here:
http://www.postgresql.org/about/. I am asking about practical
size limits for performance consideration.

Your current hardware, or hardware budget might play into the answer.





Those who supply real answers on this list um, er, discourage 
top-posting. (Not my fave, but there you go)





Re: [GENERAL] Table size for partitioned setup

2016-03-28 Thread Mat Arye
This will run on EC2 (or other cloud service) machines and on ssds.
Right now runs on m4.4xlarge with 64GiB of ram.
Willing to pay for beefy instances if it means better performance.


On Mon, Mar 28, 2016 at 4:49 PM, Rob Sargent  wrote:

>
>
> On 03/28/2016 02:41 PM, Mat Arye wrote:
>
>> Hi All,
>>
>> I am writing a program that needs time-series-based insert mostly
>> workload. I need to make the system scaleable with many thousand of
>> inserts/s. One of the techniques I plan to use is time-based table
>> partitioning and I am trying to figure out how large to make my time tables.
>>
>> Does anybody have any hints on optimal table sizes either in terms of
>> rows or in terms of size? Any rule of thumbs I can use for table size in
>> relation to amount of memory on the machine? Is the size of the index more
>> important than the size of the table (if queries mostly use indexes)?
>>
>> Basically, I am asking for pointers about how to think about this problem
>> and any experiences people have had.
>>
>> Thanks,
>> Mat
>>
>> P.S. I am aware of limits listed here: http://www.postgresql.org/about/.
>> I am asking about practical size limits for performance consideration.
>>
> Your current hardware, or hardware budget might play into the answer.
>
>
>


Re: [GENERAL] Table size for partitioned setup

2016-03-28 Thread Rob Sargent



On 03/28/2016 02:41 PM, Mat Arye wrote:

Hi All,

I am writing a program that needs time-series-based insert mostly 
workload. I need to make the system scaleable with many thousand of 
inserts/s. One of the techniques I plan to use is time-based table 
partitioning and I am trying to figure out how large to make my time 
tables.


Does anybody have any hints on optimal table sizes either in terms of 
rows or in terms of size? Any rule of thumbs I can use for table size 
in relation to amount of memory on the machine? Is the size of the 
index more important than the size of the table (if queries mostly use 
indexes)?


Basically, I am asking for pointers about how to think about this 
problem and any experiences people have had.


Thanks,
Mat

P.S. I am aware of limits listed here: 
http://www.postgresql.org/about/. I am asking about practical size 
limits for performance consideration.

Your current hardware, or hardware budget might play into the answer.





[GENERAL] Table size for partitioned setup

2016-03-28 Thread Mat Arye
Hi All,

I am writing a program that needs to handle a time-series-based, insert-mostly
workload. I need to make the system scalable with many thousands of inserts/s.
One of the techniques I plan to use is time-based table partitioning, and I am
trying to figure out how large to make my time tables.

Does anybody have any hints on optimal table sizes, either in terms of rows
or in terms of size? Any rules of thumb I can use for table size in relation
to the amount of memory on the machine? Is the size of the index more
important than the size of the table (if queries mostly use indexes)?

Basically, I am asking for pointers about how to think about this problem
and any experiences people have had.

Thanks,
Mat

P.S. I am aware of limits listed here: http://www.postgresql.org/about/. I
am asking about practical size limits for performance consideration.
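For what it's worth, time-based partitioning as described usually comes down to routing each row to a child table named after its time bucket. A minimal sketch of that routing, with a hypothetical monthly naming scheme (the `metrics`/`events` names are illustrative only):

```python
from datetime import datetime

def partition_for(ts: datetime, parent: str = "metrics") -> str:
    """Name of the monthly child table a timestamp routes to.

    The parent name and y/m suffix scheme are assumptions, not prescribed.
    """
    return f"{parent}_y{ts.year:04d}m{ts.month:02d}"

print(partition_for(datetime(2016, 3, 28)))            # metrics_y2016m03
print(partition_for(datetime(2016, 12, 1), "events"))  # events_y2016m12
```

The sizing question then becomes how many rows each such bucket accumulates before its indexes no longer fit comfortably in memory.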


[GENERAL] More correlated (?) index woes

2016-03-28 Thread Geoff Winkless
So I accept that when using MIN(sc_id) against scdate it makes statistical
sense to use the sc_id index for a reasonable percentage of the full range
of scdate, unless we know in advance that scdate is closely correlated to
sc_id (because using MIN means we can stop immediately we hit a value).

However I'm now finding a similar problem when using a multi-table DELETE,
where the same obviously can't apply.

This query:

DELETE FROM pa USING legs WHERE legs.scdate BETWEEN 20160220 AND 20160222
AND legs.sc_id=pa.sc_id;

does what one would hope:

 Delete on pa  (cost=0.99..705406.24 rows=36570 width=12)
   ->  Nested Loop  (cost=0.99..705406.24 rows=36570 width=12)
 ->  Index Scan using legs_scdate_idx on legs  (cost=0.43..1171.88
rows=14458 width=10)
   Index Cond: ((scdate >= 20160220) AND (scdate <= 20160222))
 ->  Index Scan using pa_pkey on pa  (cost=0.56..48.33 rows=38
width=10)
   Index Cond: (sc_id = legs.sc_id)


However as soon as I add an extra test for field1 IS NULL, it apparently
goes insane:

 Delete on pa  (cost=577260.90..626296.46 rows=23732 width=12) (actual
time=41870.770..41870.770 rows=0 loops=1)
   ->  Hash Join  (cost=577260.90..626296.46 rows=23732 width=12) (actual
time=37886.396..41315.668 rows=44960 loops=1)
 Hash Cond: (legs.sc_id = pa.sc_id)
 ->  Index Scan using legs_scdate_idx on legs  (cost=0.43..1171.88
rows=14458 width=10) (actual time=0.030..13.667 rows=21281 loops=1)
   Index Cond: ((scdate >= 20160220) AND (scdate <= 20160222))
 ->  Hash  (cost=481691.12..481691.12 rows=5497868 width=10)
(actual time=37805.756..37805.756 rows=4875870 loops=1)
   Buckets: 131072  Batches: 64  Memory Usage: 4311kB
   ->  Seq Scan on pa  (cost=0.00..481691.12 rows=5497868
width=10) (actual time=0.008..35869.304 rows=4875870 loops=1)
 Filter: (field1 IS NULL)
 Rows Removed by Filter: 2688634
 Planning time: 0.447 ms
 Execution time: 41870.832 ms

Running ANALYZE makes no difference.

Table pa has 7522676 rows, 4834042 of which have field1 NULL, so it's
absolutely not reasonable to expect this to be an optimal strategy.

Any suggestions as to how I can improve this query?

Thanks :)

Geoff


Re: [GENERAL] View deleted records in a table

2016-03-28 Thread Adrian Klaver

On 03/26/2016 12:47 PM, Boyan Botev wrote:

Thanks, Adrian! That worked great for what I needed. I greatly
appreciate your help. Do you know if there is a way to also display
system columns like xmin, xmax with this extension. I can see the need
for that in some future investigation.


You are not the only one:

http://www.depesz.com/2012/04/04/lets-talk-dirty/

"I would definitely add information about xmin/xmax to outputted columns 
– this is required to be able to tell which row is latest, and which was 
previous"



I would suggest filing an issue here:

https://github.com/omniti-labs/pgtreats/tree/master/contrib/pg_dirtyread


Thanks,
Boyan



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Unique values on multiple tables

2016-03-28 Thread Jehan-Guillaume de Rorthais
Le Mon, 28 Mar 2016 12:44:51 +0200,
Emre Hasegeli  a écrit :

> > Could I use lock tables to fix this? Is postgres automaticaly locking a
> > table while running a trigger on that table?
> 
> You can use LOCK TABLE. See the documentation:
> 
> http://www.postgresql.org/docs/current/static/explicit-locking.html

Use advisory locks for better performance. See the following article (at
least the two lasts chapters) for some examples:

http://blog.ioguix.net/postgresql/2015/02/05/Partitionning-and-constraints-part-1.html

Cheers,




Re: [GENERAL] Unique values on multiple tables

2016-03-28 Thread Emre Hasegeli
> Could I use lock tables to fix this? Is postgres automaticaly locking a
> table while running a trigger on that table?

You can use LOCK TABLE. See the documentation:

http://www.postgresql.org/docs/current/static/explicit-locking.html




Re: [GENERAL] Unique values on multiple tables

2016-03-28 Thread Sterpu Victor

-- Original Message --
From: "Emre Hasegeli" 
To: "Sterpu Victor" 
Cc: "PostgreSQL General" 
Sent: 28/3/2016 12:06:23 PM
Subject: Re: [GENERAL] Unique values on multiple tables


 I have 2 tables and I must make asure unique values like this.

 table1
 id
 nr - integer

 table2
 id
 id_table1 - FK in Table 1
 valid_from - timestamp

 There must be unique values for:
 - nr - from table1
 and
 - YEAR(MIN(valid_from)) from table 2


In situations like this, I add the required column to the other table
with a foreign key. Assuming that (id) is the primary key of table1,
you would need another unique key on (nr, id). Then you can add an nr
column to table2 by changing the foreign key to (nr, id_table1)
references table1 (nr, id).

Obviously, it's not an efficient solution. It requires an additional
unique key and more storage on the referencing table. Though, I
believe it is a safe one. It doesn't allow the duplicated column to
be inconsistent. There are many things that can go wrong under
concurrency with a trigger like you posted.


This wouldn't work in my case because I need unique values only for the 
smallest valid_from.

Example:
- table1 has row id=1, nr=100
- table2 has row1 id=5, id_table1=1, valid_from=2015-12-01
  row2 id=6, id_table1=1, valid_from=2016-01-01
Then unique values must be assured only for (100, 2015-12-01), ignoring 
the second valid_from (2016-01-01).
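Restated in code, the intended constraint is: take the year of each table1 row's earliest valid_from in table2, and require (nr, that year) to be unique. A hypothetical sketch of just the check logic, outside the database (the sample rows mirror the example above):

```python
from collections import Counter

# Illustrative rows: table1 maps id -> nr; table2 rows are
# (id_table1, valid_from) with valid_from as a (year, month, day) tuple.
table1 = {1: 100, 2: 100}
table2 = [(1, (2015, 12, 1)), (1, (2016, 1, 1)), (2, (2016, 3, 5))]

def violates_constraint(table1, table2):
    """True if two table1 rows share (nr, year of earliest valid_from)."""
    earliest = {}
    for id_t1, ts in table2:
        if id_t1 not in earliest or ts < earliest[id_t1]:
            earliest[id_t1] = ts
    counts = Counter((table1[i], ts[0]) for i, ts in earliest.items())
    return any(n > 1 for n in counts.values())

# nr=100 maps to earliest years 2015 and 2016, so no conflict:
print(violates_constraint(table1, table2))  # False
```

Enforcing this inside the database under concurrency is the hard part, which is why the thread moves toward locking rather than a plain trigger check.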


I changed the execution time of the function from BEFORE to AFTER insert or 
update, and I hope this will solve the problem.
I don't know how Postgres works behind this code, but I hope it will solve 
the problem.
Could I use lock tables to fix this? Is Postgres automatically locking a 
table while running a trigger on that table?








Re: [GENERAL] Multi Master Replication setup

2016-03-28 Thread Pierre Chevalier
Hello, 

Searching BDR led me to a few surprising results... (trigrams definitely have 
numerous funny meanings ;o))
This URL will probably help:
http://bdr-project.org/docs/stable/index.html

Regards,
Pierre


Andreas Kretschmer  a écrit :

>Sachin Srivastava  wrote:
>
>> Dear Concern,
>> 
>> Kindly inform to me how to setup multi master replication in
>Postgres.
>
>i think, you are looking for BDR. Please use google for more details.
>
>
>Regards, Andreas Kretschmer

 -- 
Envoyé de mon GéolPDA Android avec K-9 Mail. Excusez la brièveté.
Sent from my GeolPDA Android device with K-9 Mail. Please excuse my brevity.




Re: [GENERAL] Unique values on multiple tables

2016-03-28 Thread Emre Hasegeli
> I have 2 tables and I must make asure unique values like this.
>
> table1
> id
> nr - integer
>
> table2
> id
> id_table1 - FK in Table 1
> valid_from - timestamp
>
> There must be unique values for:
> - nr - from table1
> and
> - YEAR(MIN(valid_from)) from table 2

In situations like this, I add the required column to the other table
with a foreign key.  Assuming that (id) is the primary key of table1,
you would need another unique key on (nr, id).  Then you can add an nr
column to table2 by changing the foreign key to (nr, id_table1)
references table1 (nr, id).

Obviously, it's not an efficient solution.  It requires an additional
unique key and more storage on the referencing table.  Though, I
believe it is a safe one.  It doesn't allow the duplicated column to
be inconsistent.  There are many things that can go wrong under
concurrency with a trigger like you posted.




Re: [GENERAL] Multi Master Replication setup

2016-03-28 Thread Andreas Kretschmer
Sachin Srivastava  wrote:

> Dear Concern,
> 
> Kindly inform to me how to setup multi master replication in Postgres.

I think you are looking for BDR. Please use Google for more details.


Regards, Andreas Kretschmer
-- 
Andreas Kretschmer
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[GENERAL] Multi Master Replication setup

2016-03-28 Thread Sachin Srivastava
Dear Concern,

Kindly inform me how to set up multi-master replication in Postgres.

Kindly provide a link to a step-by-step setup guide, or any video or setup
document regarding this.

Regards
Sachin


Re: [GENERAL] Unique values on multiple tables

2016-03-28 Thread Sterpu Victor
I think I fixed the problem by executing the function AFTER insert or 
update, but I'm not sure.

Until now the execution was BEFORE insert or update.

-- Original Message --
From: "Sterpu Victor" 
To: "PostgreSQL General" 
Sent: 28/3/2016 9:32:17 AM
Subject: [GENERAL] Unique values on multiple tables


Hello

I have 2 tables and I must ensure unique values like this.

table1
id
nr - integer

table2
id
id_table1 - FK in Table 1
valid_from - timestamp

There must be unique values for:
- nr - from table1
and
- YEAR(MIN(valid_from)) from table 2

I already made this with a function, but in rare cases the function 
fails: when the insert times of two inserts are very close, the check will 
fail and I will have two numbers in the same year.

How should I fix this?

This is the function:

CREATE OR REPLACE FUNCTION table2_check_uni_func()  RETURNS trigger AS
$BODY$
DECLARE
currenr_nr_fo integer;
current_id integer;
BEGIN
IF( (SELECT id FROM table2 WHERE id_table1=NEW.id_table1 ORDER BY 
valid_from ASC LIMIT 1) = NEW.id )  THEN /*IF the first valid from is 
edited*/
 SELECT INTO currenr_nr_fo f.nrfo FROM table1 f WHERE f.id 
= NEW.id_table1;

 IF( (SELECT count(*) as nr
  FROM table1 f
  JOIN table2 fd1 ON (fd1.id_table1 = f.id AND 
to_char(fd1.valid_from, '')=TO_CHAR(NEW.valid_from, ''))
  LEFT JOIN table2 fd2 ON (fd2.id_table1 = f.id AND 
fd2.valid_from 0 ) THEN  RAISE EXCEPTION 'Nr % already used', 
currenr_nr_fo ;

 ELSE
  RETURN NEW;
 END IF;
 ELSE
  RETURN NEW;
END IF;
END;

Thank you.


DISCLAIMER:
This e-mail message and its attached documents are confidential. Their 
distribution, disclosure, or any other use is prohibited. If you are not 
the intended recipient of this message, you must not act on this 
information. Reading, copying, distributing, disclosing, or otherwise 
using the information contained in this message is a violation of the 
law. If you have received this message in error, please destroy it and 
notify the sender of the mistake. As e-mail cannot be guaranteed to be a 
secure or error-free means of transmitting information, it is your 
responsibility to ensure that the message (including its attachments) is 
validated and authorized for use in your environment.





[GENERAL] Unique values on multiple tables

2016-03-28 Thread Sterpu Victor

Hello

I have 2 tables and I must ensure unique values like this.

table1
id
nr - integer

table2
id
id_table1 - FK in Table 1
valid_from - timestamp

There must be unique values for:
- nr - from table1
and
- YEAR(MIN(valid_from)) from table 2

I already made this with a function, but in rare cases the function 
fails: when the insert times of two inserts are very close, the check will 
fail and I will have two numbers in the same year.

How should I fix this?

This is the function:

CREATE OR REPLACE FUNCTION table2_check_uni_func()  RETURNS trigger AS
$BODY$
DECLARE
currenr_nr_fo integer;
current_id integer;
BEGIN
IF( (SELECT id FROM table2 WHERE id_table1=NEW.id_table1 ORDER BY 
valid_from ASC LIMIT 1) = NEW.id )  THEN /*IF the first valid from is 
edited*/
 SELECT INTO currenr_nr_fo f.nrfo FROM table1 f WHERE f.id = 
NEW.id_table1;

 IF( (SELECT count(*) as nr
  FROM table1 f
  JOIN table2 fd1 ON (fd1.id_table1 = f.id AND 
to_char(fd1.valid_from, '')=TO_CHAR(NEW.valid_from, ''))
  LEFT JOIN table2 fd2 ON (fd2.id_table1 = f.id AND 
fd2.valid_from 0 ) THEN  RAISE EXCEPTION 'Nr % already used', 
currenr_nr_fo ;

 ELSE
  RETURN NEW;
 END IF;
 ELSE
  RETURN NEW;
END IF;
END;

Thank you.