Re: [GENERAL] what database schema version management system to use?

2016-04-06 Thread John R Pierce

On 4/6/2016 3:55 AM, Alexey Bashtanov wrote:
I am searching for a proper database schema version management system. 



At my $job we're old school: our schemas are versioned. There's a 
settings table with (setting TEXT, value TEXT) fields, and a row in that is 
('version', '1.0') or whatever.


Each new release of the schema is published as a .SQL file which builds 
the full schema from scratch, plus a .SQL file which updates the previous 
version to the new version. The full-build and update .sql files are 
kept in our source code control along with the rest of our software. 
We're quite careful about how we modify our schema so it can be done 
online: update the schema on the live database, then update and restart 
the application/middleware.
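
A minimal sketch of how such an update script can guard on the recorded version (the settings table is from the post; the table and column names in the change itself are made up for illustration):

```sql
-- settings table holding the schema version, as described above
CREATE TABLE settings (setting TEXT PRIMARY KEY, value TEXT);
INSERT INTO settings VALUES ('version', '1.0');

-- update_1.0_to_1.1.sql: refuse to run against the wrong schema version
BEGIN;
DO $$
BEGIN
    IF (SELECT value FROM settings WHERE setting = 'version')
       IS DISTINCT FROM '1.0' THEN
        RAISE EXCEPTION 'expected schema version 1.0';
    END IF;
END $$;

ALTER TABLE some_table ADD COLUMN new_col TEXT;  -- the actual change

UPDATE settings SET value = '1.1' WHERE setting = 'version';
COMMIT;
```

Running the whole script in one transaction means a failed version check (or a failed DDL statement) leaves the live schema untouched.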



--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trying to understand page structures in PG

2016-04-06 Thread Jeff Janes
On Wed, Apr 6, 2016 at 8:35 AM, rob stone  wrote:
> On Wed, 2016-04-06 at 10:33 +, Albe Laurenz wrote:
>
> 
>> Every row has two system columns associated with it: xmin and xmax
>>
>> xmin is the transaction ID that created the row, while xmax is
>> the transaction ID that removed the row.
>>
>> So when an update takes place, xmax of the original row and xmin
>> of the new row are set to the transaction ID of the current
>> transaction.

> Out of curiosity:-
> 1) If you only updated a column defined as BOOLEAN, DATE or TIMESTAMP,
> the tuple's physical size should be the same. Is it written back to the
> same page with altered xmin, xmax values?

Being the same size doesn't matter, because it *has* to be copied.

If there is room for the copy to go onto the same page, then that is
done.  If there is not room, then it goes onto a different page.

>
> 2) If you defined a table with a FILLFACTOR of say 70%, and there is
> still space left on its page for the updated tuple, does the same
> apply?
>
> I am curious because of "while xmax is the transaction ID that
> *removed* the row".

"marked for removal" would be more accurate.  If the row were actually
physically removed, it would no longer have a xmax to set.
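
The copy-on-update behaviour is easy to observe with the xmin, xmax and ctid system columns; this is an illustrative session, not from the thread:

```sql
CREATE TABLE t (id int PRIMARY KEY, flag boolean);
INSERT INTO t VALUES (1, false);
SELECT ctid, xmin, xmax FROM t;   -- row at its initial ctid, xmax shows 0

UPDATE t SET flag = true WHERE id = 1;

-- The visible row now has a new ctid (typically still on the same page
-- if there was room); the old version keeps its slot, with xmax set to
-- the updating transaction's ID, until VACUUM reclaims it.
SELECT ctid, xmin, xmax FROM t;
```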

Cheers,

Jeff




Re: [GENERAL] database is not accepting commands to avoid wraparound data loss in database

2016-04-06 Thread Jeff Janes
On Wed, Apr 6, 2016 at 8:13 AM, pedrovlf  wrote:
> Hi,
>
> I'm getting the following error in my DBMS:
>
> < 2016-04-05 17:02:42.053 BRT >ERROR:  database is not accepting commands to
> avoid wraparound data loss in database "zabbix"
> < 2016-04-05 17:02:42.053 BRT >HINT:  Stop the postmaster and vacuum that
> database in single-user mode.
>
>
> I'm running the vacuum in single-user mode, but it is taking too long ... can
> the database be recovered some other way? Perhaps with a truncate on a table ...

Sure.  Find the single-user process with top, and then strace it to
see what it is doing:

strace -p 28465 -y

read(11, "\1\0\0\0hP\352=\0\0\0\0\214\2\220\2\0 \4 \0\0\0\0\320\237`\0\240\237`\0"..., 8192) = 8192
lseek(11, 124485632, SEEK_SET) = 124485632

So the problem is file 18837.

Interrupt the vacuum, and run:

select * from pg_class where relfilenode=18786;

(I see I actually messed up my example: the strace I showed was
actually from a different session, but the principle still applies.
Pretend that 18837 and 18786 are the same thing.)

...
 1: relname = "foo_md5_idx" (typeid = 19, len = 64, typmod = -1, byval = f)

So I see that in my case, it is an index which is holding things up
(and I happen to know this index is a gin index.  This makes sense,
because gin and gist indexes are horribly slow to vacuum.)  So in my
case I can just drop the index, rather than truncate the table.  You
might not be so lucky.

So, still in single user mode, I drop the index, redo the vacuum, and
abracadabra.
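
For anyone hitting the same wraparound shutdown, these standard catalog queries (not part of the reply above) show how close each database and table is to wraparound, and therefore what the anti-wraparound vacuum still has to chew through:

```sql
-- Databases closest to wraparound first; autovacuum normally keeps
-- these ages below autovacuum_freeze_max_age (default 200 million)
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;

-- Within the affected database: the tables with the oldest
-- relfrozenxid, i.e. the ones vacuum must still process
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;
```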

Cheers,

Jeff




Re: [GENERAL] what database schema version management system to use?

2016-04-06 Thread Bill Moran
On Wed, 6 Apr 2016 11:55:40 +0100
Alexey Bashtanov  wrote:

> Hi all,
> 
> I am searching for a proper database schema version management system.
> 
> My criteria are the following:
> 0) Open-source, supports postgresql
> 1) Uses psql to execute changesets (to have no problems with COPY, 
> transaction management or sophisticated DDL commands, and to benefit 
> from scripting)
> 2) Support repeatable migrations (SQL files that get applied every time 
> they are changed, it is useful for functions or views tracking).
> 
> Reasonable?
> 
> But unfortunately it looks like each of Liquibase, Flyway, SqlHawk, 
> MigrateDB, Schema-evolution-manager, Depesz's Versioning, Alembic and 
> Sqitch does not satisfy some of those, right?
> 
> What DB VCS do you use and how does it relate to the criteria listed
> above?
> Do you have any idea what other systems to try?

http://dbsteward.org/

-- 
Bill Moran




Re: [GENERAL] pg_upgrade error regarding hstore operator

2016-04-06 Thread Tom Lane
"Feld, Michael (IMS)"  writes:
> Thanks for the assist Tom. That worked for us. Noticing a different
> issue following the pg_upgrade. If we take a pg_dump of a database on
> this upgraded instance with the hstore extension and try to pg_restore
> it back up to the same instance we get the following errors

Those are the *only* errors you get?  That seems rather odd.  I could
believe something like this happening if, say, you had an "unpackaged"
(that is, pre-extensions) version of hstore lying about.  But then
you'd probably get conflicts on all the hstore-related objects, not
only the opclasses.

In any case, by far the most likely explanation is that you're trying to
restore into a non-empty database, probably because you've put stuff into
template1 and are cloning the new database from there.

regards, tom lane




Re: [GENERAL] pg_upgrade error regarding hstore operator

2016-04-06 Thread Feld, Michael (IMS)
Thanks for the assist Tom. That worked for us. Noticing a different issue 
following the pg_upgrade. If we take a pg_dump of a database on this upgraded 
instance with the hstore extension and try to pg_restore it back up to the same 
instance we get the following errors (ignore the likeness to your name, 
apparently when this cluster was created years ago they chose to set it up in 
your honor):

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3879; 2753 745119 OPERATOR 
FAMILY btree_hstore_ops tomlane
pg_restore: [archiver (db)] could not execute query: ERROR:  operator family 
"btree_hstore_ops" for access method "btree" already exists
Command was: CREATE OPERATOR FAMILY btree_hstore_ops USING btree;

pg_restore: [archiver (db)] Error from TOC entry 3880; 2753 745127 OPERATOR 
FAMILY gin_hstore_ops tomlane
pg_restore: [archiver (db)] could not execute query: ERROR:  operator family 
"gin_hstore_ops" for access method "gin" already exists
Command was: CREATE OPERATOR FAMILY gin_hstore_ops USING gin;

pg_restore: [archiver (db)] Error from TOC entry 3881; 2753 745137 OPERATOR 
FAMILY gist_hstore_ops tomlane
pg_restore: [archiver (db)] could not execute query: ERROR:  operator family 
"gist_hstore_ops" for access method "gist" already exists
Command was: CREATE OPERATOR FAMILY gist_hstore_ops USING gist;

pg_restore: [archiver (db)] Error from TOC entry 3882; 2753 745151 OPERATOR 
FAMILY hash_hstore_ops tomlane
pg_restore: [archiver (db)] could not execute query: ERROR:  operator family 
"hash_hstore_ops" for access method "hash" already exists
Command was: CREATE OPERATOR FAMILY hash_hstore_ops USING hash;

We do not have this issue for any new databases created following the 
pg_upgrade. I noticed that new databases have the hstore 1.3 version while the 
originals still have the 1.1 version. I updated the extension on one of these 
to see if that would resolve the issue and it did not. Thanks for any help you 
can offer.

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Tuesday, March 08, 2016 6:22 PM
To: Feld, Michael (IMS) 
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_upgrade error regarding hstore operator

"Feld, Michael (IMS)"  writes:
> I am attempting to upgrade my organization's database cluster from
> 9.1.19 to 9.5.1 using the pg_upgrade utility.

That's kind of a big jump :-( ... you missed the versions where => was 
deprecated as an operator name.

> I tried dropping the operator before doing the upgrade but it's dependent on 
> the existence of the hstore extension. Ideas?

The clean solution would be to copy share/extension/hstore--1.0--1.1.sql
from the 9.5 installation into the 9.1 installation and then do

ALTER EXTENSION hstore UPDATE TO '1.1';

Under the hood that's just doing

ALTER EXTENSION hstore DROP OPERATOR => (text, text);
DROP OPERATOR => (text, text);

but if you did that manually, you'd have a problem when you want to update 
hstore to current versions later.  If you do what I suggest, the extension will 
properly look like it's 1.1 after pg_upgrade'ing.
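
A quick way to confirm what the catalog actually records after the update (a standard query, not part of the advice above):

```sql
-- should report hstore at version 1.1 once the ALTER EXTENSION has run
SELECT extname, extversion FROM pg_extension WHERE extname = 'hstore';
```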

regards, tom lane







[GENERAL] Deadlock between VACUUM and ALTER TABLE commands

2016-04-06 Thread Kevin Burke
Hello,
Occasionally our test suite gets into a deadlocked state. I was curious why
this happens, and what our options are for dealing with it.

We run ~2900 tests on an Ubuntu machine in sequence against a Postgres
9.4.6 database. There are about 60 tables; each test runs ~3 queries, and
the largest test probably inserts ~1000 rows. To ensure each test starts
with a clean slate, we run the following query in between each test run:

ALTER TABLE table1 DISABLE TRIGGER ALL; DELETE FROM table1; ALTER TABLE
table1 ENABLE TRIGGER ALL;
ALTER TABLE table2 DISABLE TRIGGER ALL; DELETE FROM table2; ALTER TABLE
table2 ENABLE TRIGGER ALL;
...
ALTER TABLE table60 DISABLE TRIGGER ALL; DELETE FROM table60; ALTER TABLE
table60 ENABLE TRIGGER ALL;
ALTER SEQUENCE  RESTART WITH 1;

We also have a check at the end of every test run that the test has
returned all database connections to the pool, so I don't believe there are
any other database queries in flight at this time.

*Why mess around with DISABLE/ENABLE TRIGGER instead of TRUNCATE?* We
observed that TRUNCATE took about 200ms, but this method takes about 13ms.
Over our test suite, TRUNCATE would more or less double the length
of the suite.

We could resolve some kind of foreign key dependency tree, issue the DELETE
FROM's in the right order, and skip DISABLE TRIGGER etc., but haven't
invested the time to do this or figured out how to maintain it/update it as
we add new tables.

We could also run each test in a transaction and issue a ROLLBACK at the
end, but unfortunately our ORM does not support transactions.

We observed that when we pushed tests to a third-party CI service, maybe 1
in 100 test runs fails with a mysterious timeout (set to 18 seconds). It
doesn't seem to correlate with any one test.

I finally figured out how to enable the right logging and export our
database logs from the remote machine. We saw the following results:
https://gist.github.com/kevinburkeshyp/4f5eef58411b1c3c1160a6f6008135b8.
I've also pasted relevant parts of the Postgres configuration at this URL.

Specifically we kick off the ALTER TABLE ... DELETE, a second later (our
deadlock_timeout) a log message appears that the ALTER TABLE is waiting for
an AccessExclusiveLock held by process 16936... which is later revealed to
be an autovacuum process of the "events" table.

(The events table also gets cleared between every test. It has 5 columns
that store less-than-60-byte integers, and one created_at timestamp. There
are indexes on the primary key and on LOWER(object_id), another column. We
need to put indexes on the LOWER() fields because of our ORM.)

Finally, the test timeout causes the application layer to start closing
connections, at which point the ALTER TABLE gains the exclusive lock.

Here are my questions:

- An AccessExclusiveLock seems to indicate that the vacuum is running a
VACUUM FULL, if I am reading the documentation correctly. Yet the
documentation for autovacuum states that "The autovacuum daemon attempts to
work this way, and in fact will never issue VACUUM FULL." The statements
from the logs and from the documentation seem to be in conflict; am I
missing something?

- Why does vacuum take so long/hang? I would expect it to be a fast
operation.

- Should I just turn off autovacuum altogether? If not, how should I tune
it? Our test run is 10k queries, 6 minutes, each of the ~3k tests runs that
DELETE FROM query listed above, and maybe an average of 2 other queries.

- What other information can I/should I log here?

- Why don't we run into this more often?
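
One possible answer to the logging question: while the hang is in progress, a pg_locks query like the following (standard catalogs, not from the thread) shows which backend holds the lock the ALTER TABLE is waiting for:

```sql
-- Ungranted lock requests paired with the sessions holding a granted
-- lock on the same relation (works on 9.4; adjust columns as needed)
SELECT w.pid                 AS waiting_pid,
       w.relation::regclass  AS relation,
       w.mode                AS wanted_mode,
       h.pid                 AS holding_pid,
       h.mode                AS held_mode,
       a.query               AS holder_query
FROM pg_locks w
JOIN pg_locks h ON h.relation = w.relation AND h.granted AND h.pid <> w.pid
JOIN pg_stat_activity a ON a.pid = h.pid
WHERE NOT w.granted;
```

If the holding_pid turns out to be an autovacuum worker, pg_stat_activity's query column will say so.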

-- 
kevin


Re: [GENERAL] PostgreSQL 9.5.1 -> PG_REWIND NOT FOUND

2016-04-06 Thread Adrian Klaver

On 04/06/2016 04:02 AM, sgringer wrote:

Hello,

I have installed PostgreSQL 9.5.1 on x86_64-pc-linux-gnu, compiled by gcc
(Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit on Ubuntu 14.04.04 from
http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main 9.5

Now, after a replication failure, I want to use the new pg_rewind, but it seems
that pg_rewind is not installed.


How did you determine it was not installed?

What packages did you install from the repo?



How can I activate/install pg_rewind??

Thanks!



--
View this message in context: 
http://postgresql.nabble.com/PostgreSQL-9-5-1-PG-REWIND-NOT-FOUND-tp5897145.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Problem after replication switchover

2016-04-06 Thread Alan Hodgson
On Wednesday, April 06, 2016 10:33:16 AM Lars Arvidson wrote:
> > I'd guess it's probably more like option 3 - Glusterfs ate my database.
> 
> Hi, thanks for your reply!
> We do archive logs on a distributed Glusterfs volume in case the streaming
> replication gets too far behind and the transaction logs have been removed.
> Would a restore of a corrupt archived log file give the symptoms we are
> seeing? Would PostgreSQL not detect that the log file was corrupt? Is there
> some way I can analyze archived log files to see if this is the problem?
> 

If it's just storing the logs, I doubt it's the cause of the problem. You can 
ignore my message. I had too much fun fighting with Gluster recently.

I reread your original full post, and the one thing that stuck out for me was 
"the clusters are now replicating from each other". I feel like that could be 
a problem. But someone more intimate with the replication might want to weigh in 
on that.

Other than that, I wonder if you just have a hardware problem with your 
storage.




Re: [GENERAL] Problem after replication switchover

2016-04-06 Thread Lars Arvidson
> I'd guess it's probably more like option 3 - Glusterfs ate my database.

Hi, thanks for your reply!
We do archive logs on a distributed Glusterfs volume in case the streaming 
replication gets too far behind and the transaction logs have been removed.
Would a restore of a corrupt archived log file give the symptoms we are seeing? 
Would PostgreSQL not detect that the log file was corrupt?
Is there some way I can analyze archived log files to see if this is the 
problem?

Regards
Lars Arvidson




Re: [GENERAL] I can't see wal receiver process in one node

2016-04-06 Thread DrakoRod
Hi Sameer

Yesterday I reviewed the configuration on both servers.

Each is configured with restore_command and archive_cleanup_command (to
clean up WAL archives).

But I saw that node B (only this node) has the parameter
recovery_target_timeline = 'latest' active; this was because one production
node failed and node B had to resync its replication with another
production node.

My question: if this parameter is not disabled, does it affect the replication?
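
For reference, a standby's recovery.conf of the kind being discussed might look roughly like this (all values are illustrative, not taken from the post). recovery_target_timeline = 'latest' simply tells the standby to follow a new timeline if its upstream is promoted, which is generally desirable on a standby and should not by itself break replication:

```
# recovery.conf (PostgreSQL < 10) -- illustrative values
standby_mode = 'on'
primary_conninfo = 'host=nodeA port=5432 user=replicator'
restore_command = 'cp /path/to/archive/%f %p'
archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r'
# follow a timeline switch if the upstream is promoted
recovery_target_timeline = 'latest'
```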

Best Regards! 
DrakoRod



-
Give me a little faith, that will be enough for me.
Rozvo Ware Solutions 




[GENERAL] database is not accepting commands to avoid wraparound data loss in database

2016-04-06 Thread pedrovlf
Hi,

I'm getting the following error in my DBMS:

< 2016-04-05 17:02:42.053 BRT >ERROR:  database is not accepting commands to
avoid wraparound data loss in database "zabbix"
< 2016-04-05 17:02:42.053 BRT >HINT:  Stop the postmaster and vacuum that
database in single-user mode.


I'm running the vacuum in single-user mode, but it is taking too long ... can
the database be recovered some other way? Perhaps with a truncate on a table ...

thanks







[GENERAL] PostgreSQL 9.5.1 -> PG_REWIND NOT FOUND

2016-04-06 Thread sgringer
Hello, 

I have installed PostgreSQL 9.5.1 on x86_64-pc-linux-gnu, compiled by gcc
(Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit on Ubuntu 14.04.04 from
http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main 9.5

Now, after a replication failure, I want to use the new pg_rewind, but it seems
that pg_rewind is not installed.
How can I activate/install pg_rewind??

Thanks!







Re: [GENERAL] comparing two JSON objects in 9.3

2016-04-06 Thread yuri.ivane...@progforce.com
There's also another way using hstore instead of json:
https://wiki.postgresql.org/wiki/Audit_trigger_91plus







Re: [GENERAL] Multi-row constraints, how to avoid unnecessary trigger execution?

2016-04-06 Thread Andreas Joseph Krogh
On Wednesday 6 April 2016 at 19:00:05, Tobia Conforto <
tobia.confo...@gruppo4.eu > wrote:
Sándor,
 I'd rather have the application developers use regular DML, which could 
become quite complex, and just perform my check on the database side, at 
transaction commit time.

 Andreas,
 thanks, but I need to avoid duplicate executions on different rows too.

 I just came up with this "hack" which seems to be working:

 create or replace function my_trigger() returns trigger as $$
 begin
 create temporary table my_trigger() on commit drop;

 -- perform expensive test here and raise error if it fails
 if ... then
 raise ...;
 end if;

 return null;
 exception when duplicate_table then
 -- already ran in the current transaction, skip test
 return null;
 end;
 $$ language 'plpgsql';

 create constraint trigger my_trigger after insert or update or delete on 
my_table
 initially deferred for each row execute procedure my_trigger();

 Any improvement is welcome.
 
You are aware that your suggestion will also fire for each row?
 
Your function will also suffer from executing multiple times for the same row 
if it's updated multiple times within the same transaction, which my suggestion 
specifically avoids.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] Multi-row constraints, how to avoid unnecessary trigger execution?

2016-04-06 Thread Tobia Conforto
Sándor,
I'd rather have the application developers use regular DML, which could become 
quite complex, and just perform my check on the database side, at transaction 
commit time.

Andreas,
thanks, but I need to avoid duplicate executions on different rows too.

I just came up with this "hack" which seems to be working:

create or replace function my_trigger() returns trigger as $$
begin
create temporary table my_trigger() on commit drop;

-- perform expensive test here and raise error if it fails
if ... then
raise ...;
end if;

return null;
exception when duplicate_table then
-- already ran in the current transaction, skip test
return null;
end;
$$ language 'plpgsql';

create constraint trigger my_trigger after insert or update or delete 
on my_table
initially deferred for each row execute procedure my_trigger();

Any improvement is welcome.

-Tobia



Re: [GENERAL] Multi-row constraints, how to avoid unnecessary trigger execution?

2016-04-06 Thread Andreas Joseph Krogh
On Wednesday 6 April 2016 at 18:32:50, Tobia Conforto <
tobia.confo...@gruppo4.eu > wrote:
I have a complex data validation requirement that spans many rows and possibly 
more than one table.

 The application must be able to perform several data manipulation statements 
that could invalidate the requirement between one another, and only have the 
database check this requirement at transaction commit time.

 Ideally I would have some sort of after trigger that is deferred to commit 
time, have it search for invalid or missing records and raise errors if any is 
found.

 If I'm reading the manual correctly, the only kind of trigger that can be 
deferred to commit time is a constraint trigger. The problem is that this 
trigger must be declared for each row, but I need to only perform the 
validation once per transaction, because it executes complex queries.

 What is the best way to address this issue?

 Is there any other way, other than a constraint trigger, to defer execution 
of a piece of code at transaction commit time?

 Otherwise, can I check for repeated invocations of my trigger function in the 
same transaction and return early on the second and following ones? I could do 
that by creating a temporary table on commit drop, but it seems overkill. Is 
there a lighter solution?
 
Hi.
 
(note that my answer here only prevents executing the trigger logic more than 
once for each row, so it will fire for each row affected at commit, just not 
multiple times for the same row if it's updated several times in the same 
transaction)
 
The trick is to use constraint-triggers, and to have a condition (column) to 
test for so that it does the actual work only once.
 
Triggers in PG are fired in alphabetical order, so a good naming scheme for such 
triggers is _1, _2, _3.
 
It's the first trigger which does the actual work (in this case 
index_email_1_tf()).
 
I use a special column, t_updated, for checking. This column has no other 
purpose than to help the triggers. Note that trigger _2 is NOT DEFERRED; this 
is important.
 
Here is what I use:
 
-- Trigger function to index email
CREATE OR REPLACE FUNCTION index_email_1_tf() returns TRIGGER AS $$
declare
  v_email message;
BEGIN
  SELECT * FROM message WHERE entity_id = NEW.entity_id INTO v_email;
  perform index_email(v_email);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION index_email_2_tf() returns TRIGGER AS $$
BEGIN
  update message set t_updated = TRUE WHERE entity_id = NEW.entity_id;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION index_email_3_tf() returns TRIGGER AS $$
BEGIN
  update message set t_updated = NULL WHERE entity_id = NEW.entity_id;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER index_email_1_t AFTER INSERT OR UPDATE OF re_index ON message
  DEFERRABLE INITIALLY DEFERRED FOR EACH ROW WHEN (NEW.t_updated IS NULL)
  EXECUTE PROCEDURE index_email_1_tf();

CREATE CONSTRAINT TRIGGER index_email_2_t AFTER INSERT OR UPDATE OF re_index ON message
  -- NOT DEFERRED
  FOR EACH ROW WHEN (NEW.t_updated IS NULL)
  EXECUTE PROCEDURE index_email_2_tf();

CREATE CONSTRAINT TRIGGER index_email_3_t AFTER INSERT OR UPDATE OF t_updated ON message
  DEFERRABLE INITIALLY DEFERRED FOR EACH ROW WHEN (NEW.t_updated)
  EXECUTE PROCEDURE index_email_3_tf();
 
Hope this helps
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] Multi-row constraints, how to avoid unnecessary trigger execution?

2016-04-06 Thread Sándor Daku
On 6 April 2016 at 18:32, Tobia Conforto  wrote:

> I have a complex data validation requirement that spans many rows and
> possibly more than one table.
>
> The application must be able to perform several data manipulation
> statements that could invalidate the requirement between one another, and
> only have the database check this requirement at transaction commit time.
>
> Ideally I would have some sort of after trigger that is deferred to commit
> time, have it search for invalid or missing records and raise errors if any
> is found.
>
> If I'm reading the manual correctly, the only kind of trigger that can be
> deferred to commit time is a constraint trigger. The problem is that this
> trigger must be declared for each row, but I need to only perform the
> validation once per transaction, because it executes complex queries.
>
> What is the best way to address this issue?
>
> Is there any other way, other than a constraint trigger, to defer
> execution of a piece of code at transaction commit time?
>
> Otherwise, can I check for repeated invocations of my trigger function in
> the same transaction and return early on the second and following ones? I
> could do that by creating a temporary table on commit drop, but it seems
> overkill. Is there a lighter solution?
>
> -Tobia
>



Are you continuously feeding data to the db via the app during the
transaction or working on data which is already in the database?
In the second case you can write a db side function which can do the work,
validate and commit or rollback at the end.

Regards,
Sándor


[GENERAL] Multi-row constraints, how to avoid unnecessary trigger execution?

2016-04-06 Thread Tobia Conforto
I have a complex data validation requirement that spans many rows and possibly 
more than one table.

The application must be able to perform several data manipulation statements 
that could invalidate the requirement between one another, and only have the 
database check this requirement at transaction commit time.

Ideally I would have some sort of after trigger that is deferred to commit 
time, have it search for invalid or missing records and raise errors if any is 
found.

If I'm reading the manual correctly, the only kind of trigger that can be 
deferred to commit time is a constraint trigger. The problem is that this 
trigger must be declared for each row, but I need to only perform the 
validation once per transaction, because it executes complex queries.

What is the best way to address this issue?

Is there any other way, other than a constraint trigger, to defer execution of 
a piece of code at transaction commit time?

Otherwise, can I check for repeated invocations of my trigger function in the 
same transaction and return early on the second and following ones? I could do 
that by creating a temporary table on commit drop, but it seems overkill. Is 
there a lighter solution?

-Tobia



Re: [GENERAL] what database schema version management system to use?

2016-04-06 Thread Jeff Janes
On Wed, Apr 6, 2016 at 3:55 AM, Alexey Bashtanov  wrote:
> Hi all,
>
> I am searching for a proper database schema version management system.
>
> My criteria are the following:
> 0) Open-source, supports postgresql
> 1) Uses psql to execute changesets (to have no problems with COPY,
> transaction management or sophisticated DDL commands, and to benefit from
> scripting)
> 2) Support repeatable migrations (SQL files that get applied every time they
> are changed, it is useful for functions or views tracking).

If you are using COPY, then you must be loading data (not just
changing schemas).

But if you are loading data, how can you have it be repeatable?  How
would it know which data needs to be (or not be) loaded again?  Do you
want it to do a three-way comparison (the current patch it is trying to
apply, the most recent patch that had been applied, and the live
database) and resolve conflicts?

Cheers,

Jeff




Re: [GENERAL] Trying to understand page structures in PG

2016-04-06 Thread rob stone
On Wed, 2016-04-06 at 10:33 +, Albe Laurenz wrote:


> Every row has two system columns associated with it: xmin and xmax
> 
> xmin is the transaction ID that created the row, while xmax is
> the transaction ID that removed the row.
> 
> So when an update takes place, xmax of the original row and xmin
> of the new row are set to the transaction ID of the current
> transaction.
> 
> 
> Yours,
> Laurenz Albe
> 


Hello Laurenz,

Out of curiosity:-
1) If you only updated a column defined as BOOLEAN, DATE or TIMESTAMP,
the tuple's physical size should be the same. Is it written back to the
same page with altered xmin, xmax values?

2) If you defined a table with a FILLFACTOR of say 70%, and there is
still space left on its page for the updated tuple, does the same
apply?

I am curious because of "while xmax is the transaction ID that
*removed* the row".

Not urgent.

Cheers,
rob




Re: [GENERAL] what database schema version management system to use?

2016-04-06 Thread Thomas Kellerer
Alexey Bashtanov wrote on 06.04.2016 at 12:55:
> I am searching for a proper database schema version management system.
> 
> My criteria are the following:
> 0) Open-source, supports postgresql
> 1) Uses psql to execute changesets (to have no problems with COPY, 
> transaction management or sophisticated DDL commands, and to benefit from 
> scripting)
> 2) Support repeatable migrations (SQL files that get applied every time they 
> are changed, it is useful for functions or views tracking).
> 
> Reasonable?
> 
> But unfortunately it looks like each of Liquibase, Flyway, SqlHawk, 
> MigrateDB, Schema-evolution-manager, 
> Depesz's Versioning, Alembic and Sqitch does not satisfy some of those, right?
> 
> What DB VCS do you use, and how does it relate to the criteria listed above?
> Do you have any idea what other systems to try?

We are quite happy with Liquibase (using the XML format exclusively). 

COPY FROM stdin isn't supported, that is true, but so far we have not had the
need for that.
We don't do bulk imports in our schema migrations.

DDL commands can easily be run using the <sql> tag, but for e.g. stored
functions or view definitions we usually use changesets with runOnChange="true"
(that's your #2), and we only use a single statement per file so that we can
also use splitStatements="false" to avoid any problems with embedded semicolons
(which in turn means that the included SQL file can easily be tested with psql
or any other SQL client).
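
For illustration, a minimal changeset along those lines might look like this
(the id, author, and file path are made up for the example; runOnChange,
splitStatements, and the sqlFile tag are the Liquibase features being used):

```xml
<!-- Hypothetical changeset: re-applied whenever the referenced file changes -->
<changeSet id="view-v_orders" author="dev" runOnChange="true">
    <sqlFile path="views/v_orders.sql"
             relativeToChangelogFile="true"
             splitStatements="false"/>
</changeSet>
```

Because the file holds a single statement and splitStatements is off, the same
views/v_orders.sql can be fed directly to psql for testing.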

We have written our own implementation of the Liquibase "Database" class (actually
only a slightly modified version of the built-in one) that gets the quoting
right for Liquibase & Postgres. Liquibase automatically quotes identifiers if
they are not completely written in lowercase, which is a bit annoying.

Thomas





Re: [GENERAL] Trying to understand page structures in PG

2016-04-06 Thread Joshua D. Drake

On 04/06/2016 02:39 AM, Rakesh Kumar wrote:

Hello

I understand that when an update of, say, 100,000 rows is made, PG
writes the updated rows as new rows. These new rows are not visible
to any sessions except the one creating them. At commit time PG flips
something internally to make these rows visible to all.

My Q: what happens to those rows which used to contain the values
before the update? Shouldn't something change in those rows to
indicate that they are no longer valid? And what chains those old
rows to the new rows?


They are marked dead, and at a threshold vacuum will come along
automatically and mark their space as reusable.


The vacuum and maintenance docs explain this pretty well.
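
As a rough illustration (the table name is made up, and n_dead_tup comes from
the statistics collector, so it may lag slightly behind reality):

```sql
-- Hypothetical example: dead row versions accumulate, then vacuum reclaims them.
UPDATE big_table SET flag = NOT flag;  -- every old version becomes dead

SELECT n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'big_table';           -- roughly the number of updated rows

VACUUM big_table;                      -- marks the dead space reusable
```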

Sincerely,

JD



thanks.





--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.




Re: [GENERAL] what database schema version management system to use?

2016-04-06 Thread Adrian Klaver

On 04/06/2016 03:55 AM, Alexey Bashtanov wrote:

Hi all,

I am searching for a proper database schema version management system.

My criteria are the following:
0) Open-source, supports postgresql
1) Uses psql to execute changesets (to have no problems with COPY,
transaction management or sophisticated DDL commands, and to benefit
from scripting)
2) Support repeatable migrations (SQL files that get applied every time
they are changed, it is useful for functions or views tracking).

Reasonable?

But unfortunately it looks like each of Liquibase, Flyway, SqlHawk,
MigrateDB, Schema-evolution-manager, Depesz's Versioning, Alembic and
Sqitch does not satisfy some of those, right?


I use Sqitch and it supports the above, AFAICT. The only one I would have a
question about is 2), as I am not quite sure what you are getting at.




What DB VCS do you use, and how does it relate to the criteria listed
above?
Do you have any idea what other systems to try?

Regards,
   Alexey





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Exclude constraint using custom operator

2016-04-06 Thread Tom Lane
Tobia Conforto  writes:
> Stated differently, is there a simple way to extend or "subclass" the builtin 
> range_ops operator family, adding my own operator to it, so that I can use it 
> in an exclude index?

No.  In a GIST opclass, the set of supported operator strategies is
determined by the code in the opclass support functions (primarily,
what the "consistent" function knows how to do).  You'd have to go
fiddle with that C code before you could add a new operator to the
opclass.

regards, tom lane




Re: [GENERAL] what database schema version management system to use?

2016-04-06 Thread Achilleas Mantzios

On 06/04/2016 13:55, Alexey Bashtanov wrote:

Hi all,

I am searching for a proper database schema version management system.

My criteria are the following:
0) Open-source, supports postgresql
1) Uses psql to execute changesets (to have no problems with COPY, transaction 
management or sophisticated DDL commands, and to benefit from scripting)
2) Support repeatable migrations (SQL files that get applied every time they 
are changed, it is useful for functions or views tracking).

Reasonable?

But unfortunately it looks like each of Liquibase, Flyway, SqlHawk, MigrateDB, 
Schema-evolution-manager, Depesz's Versioning, Alembic and Sqitch does not 
satisfy some of those, right?

What DB VCS do you use, and how does it relate to the criteria listed above?
Do you have any idea what other systems to try?


Maybe Git, then interface with something like TeamCity to apply your changes.
Honestly, you are asking for too much. The classic problem is finding a tool
that would translate DDL diffs into ALTER commands, if you want to store pure
DDL CREATE statements. I have watched many presentations by people in the same
boat as you, and they all implemented their own solutions. Good luck with your
solution and keep us posted; many people might benefit from it.




Regards,
  Alexey





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





[GENERAL] what database schema version management system to use?

2016-04-06 Thread Alexey Bashtanov

Hi all,

I am searching for a proper database schema version management system.

My criteria are the following:
0) Open-source, supports postgresql
1) Uses psql to execute changesets (to have no problems with COPY, 
transaction management or sophisticated DDL commands, and to benefit 
from scripting)
2) Support repeatable migrations (SQL files that get applied every time 
they are changed, it is useful for functions or views tracking).


Reasonable?

But unfortunately it looks like each of Liquibase, Flyway, SqlHawk, 
MigrateDB, Schema-evolution-manager, Depesz's Versioning, Alembic and 
Sqitch does not satisfy some of those, right?


What DB VCS do you use, and how does it relate to the criteria listed
above?

Do you have any idea what other systems to try?

Regards,
  Alexey




Re: [GENERAL] Trying to understand page structures in PG

2016-04-06 Thread Albe Laurenz
Rakesh Kumar wrote:
> I understand that when an update of, say, 100,000 rows is made, PG
> writes the updated rows as new rows. These new rows are not visible
> to any sessions except the one creating them. At commit time PG flips
> something internally to make these rows visible to all.
> 
> My Q: what happens to those rows which used to contain the values
> before the update? Shouldn't something change in those rows to
> indicate that they are no longer valid? And what chains those old
> rows to the new rows?

Every row has two system columns associated with it: xmin and xmax

xmin is the transaction ID that created the row, while xmax is
the transaction ID that removed the row.

So when an update takes place, xmax of the original row and xmin
of the new row are set to the transaction ID of the current transaction.

Furthermore, the commit log (CLOG) logs for each transaction whether
it was committed or rolled back.

Now when a backend examines a row, it first checks if the row is
visible, i.e. xmin must be less than or equal to the current transaction ID,
and xmax must be 0, be greater than the current transaction ID,
or belong to a transaction that was rolled back.

To save CLOG lookups, the first reader who consults the CLOG will
save the result in so-called "hint bits" on the row itself.
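
The system columns can be inspected directly; here is a small sketch (the
table name is made up, and the actual transaction IDs you see will differ):

```sql
-- Hypothetical session: watch xmin/xmax change across an update.
CREATE TABLE mvcc_demo (id int);
INSERT INTO mvcc_demo VALUES (1);

SELECT xmin, xmax FROM mvcc_demo;  -- xmax = 0: no transaction has removed it

BEGIN;
UPDATE mvcc_demo SET id = 2;
-- From another session, the old version is still visible at this point,
-- but its xmax now holds this transaction's ID; once we commit, it
-- becomes invisible to new snapshots.
COMMIT;

SELECT xmin, xmax FROM mvcc_demo;  -- the new version: fresh xmin, xmax = 0
```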

Yours,
Laurenz Albe



Re: [GENERAL] Exclude constraint using custom operator

2016-04-06 Thread Tobia Conforto
Stated differently, is there a simple way to extend or "subclass" the builtin 
range_ops operator family, adding my own operator to it, so that I can use it 
in an exclude index?

I couldn't find a SQL definition for the range_ops family, probably because 
it's defined and implemented in C and header files.

-Tobia


> On 5 Apr 2016, at 19:30, Tobia Conforto  wrote:
> 
> Hello
> 
> I'd like to use the Nested Set model[1] to represent a hierarchical data 
> structure in PostgreSQL. I'm considering using a single int4range column 
> instead of the traditional two columns (lft and rgt) because the two values 
> do in fact represent a range of integers. This should allow me to add a 
> constraint to the range column that checks the consistency of the Nested Set 
> model.
> 
> My column is currently:
> 
>   pos int4range not null check (lower(pos) between 1 and upper(pos) - 2)
> 
> This is already performing the traditional check of lft >= 1 and rgt > lft, 
> given the canonical form of an int4range.
> 
> Additionally, I would like to add an exclude constraint to validate the 
> Nested Set model: any two ranges must be disjoint (not a.pos && b.pos) OR 
> strictly contained one within the other, without sharing either bound.
> 
> One possible expression for the *exclusion* is that two bad ranges overlap 
> and each is right- or left-bound by the other. I can easily write that as a 
> commutative operator[2]:
> 
>   create function bad_nested(int4range, int4range) returns boolean as $$
>   begin
>   return $1 && $2 and ($1 &< $2 or $1 &> $2) and ($2 &< $1 or $2 &> $1);
>   end;
>   $$ language 'plpgsql' immutable;
> 
>   create operator &/ (
>   leftarg    = int4range
>   ,   rightarg   = int4range
>   ,   procedure  = bad_nested
>   ,   commutator = &/
>   );
> 
> Unfortunately, trying to use this operator in an exclude constraint makes 
> PostgreSQL complain about operator families:
> 
>   create table test (
>   pos int4range not null check (lower(pos) between 1 and upper(pos) - 2)
>   ,   exclude using gist (pos with &/)
>   );
> 
>   ERROR: operator &/(int4range,int4range) is not a member of operator family "range_ops"
>   Detail: The exclusion operator must be related to the index operator class for the constraint.
> 
> I don't fully understand operator classes / families, and I find the manual
> somewhat cryptic about them.
> 
> Can anybody suggest the right create statement(s) to enable using my operator 
> in a GiST index?
> 
> What other "strategies", functions, or operators should I write? The standard 
> ordering of ranges with "<" should be enough, because it maps to the 
> traditional ordering of nested sets.
> 
> Can I do everything in sql / plpgsql or do I need to write C code?
> 
> 
> -Tobia
> 
> 
> [1] https://en.wikipedia.org/wiki/Nested_set_model
> 
> [2] There may be simpler / faster expressions for the same condition. A 
> custom plpgsql expression on the lower and upper bounds will probably be 
> faster, if I store the four integers as local variables. But that's just an 
> optimization.
> 





[GENERAL] Trying to understand page structures in PG

2016-04-06 Thread Rakesh Kumar
Hello

I understand that when an update of, say, 100,000 rows is made, PG
writes the updated rows as new rows. These new rows are not visible
to any sessions except the one creating them. At commit time PG flips
something internally to make these rows visible to all.

My Q: what happens to those rows which used to contain the values
before the update? Shouldn't something change in those rows to
indicate that they are no longer valid? And what chains those old
rows to the new rows?

thanks.

