Re: [SQL] Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Jan Bakuwel
Hi David,


> Start a "savepoint" before each sub-update and rollback to the savepoint if 
> the update fails, and then try again with different data.  If it succeeds you 
> then release the savepoint and move on.
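
So each statement would have to be wrapped in something like this (a sketch, with a
hypothetical "codes" table that has a unique key on (master_id, code)):

BEGIN;
-- ... other work in the same transaction ...
SAVEPOINT before_code;
INSERT INTO codes (master_id, code) VALUES (1, 'A');  -- suppose this hits the duplicate key
ROLLBACK TO SAVEPOINT before_code;                    -- undoes only the failed statement
INSERT INTO codes (master_id, code) VALUES (1, 'B');  -- try again with different data
RELEASE SAVEPOINT before_code;
COMMIT;                                               -- everything else is preserved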

Yeah... not ideal in my case & will result in "messy" code...

Would be nice to have an option in PostgreSQL something along the lines
of:  'abort-transaction-on-constraint-violation = false'

Jan






Re: [SQL] Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Leif Biberg Kristensen
On Friday 30 December 2011 at 09:43:38, Jan Bakuwel wrote:

> Would be nice to have an option in PostgreSQL something along the lines
> of:  'abort-transaction-on-constraint-violation = false'

That option is called MySQL with MyISAM tables.

Seriously, if the user encounters a constraint violation, that is IMO a 
symptom of bad design. Such conditions should be checked and caught _before_ 
the transaction begins.

regards, Leif



[SQL] Re: Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Thomas Kellerer

Leif Biberg Kristensen wrote on 30.12.2011 10:44:
> On Friday 30 December 2011 at 09:43:38, Jan Bakuwel wrote:
>
>> Would be nice to have an option in PostgreSQL something along the lines
>> of:  'abort-transaction-on-constraint-violation = false'
>
> That option is called MySQL with MyISAM tables.


Not true.

Oracle and others (I believe at least DB2) behave such that you can insert a 
bunch of rows and if one or more throw a constraint violation, the transaction 
can still be committed persisting those that do not violate the constraint.





Re: [SQL] Re: Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Marcin Mirosław
On 30.12.2011 12:03, Thomas Kellerer wrote:
> Leif Biberg Kristensen wrote on 30.12.2011 10:44:
>> On Friday 30 December 2011 at 09:43:38, Jan Bakuwel wrote:
>>
>>> Would be nice to have an option in PostgreSQL something along the lines
>>> of:  'abort-transaction-on-constraint-violation = false'
>>
>> That option is called MySQL with MyISAM tables.
>>
> Not true.
> 
> Oracle and others (I believe at least DB2) behave such that you can
> insert a bunch of rows and if one or more throw a constraint violation,
> the transaction can still be committed persisting those that do not
> violate the constraint.

Hi,
isn't this option:
http://www.postgresql.org/docs/current/static/sql-set-constraints.html ?
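
Roughly like this, I mean (a sketch with hypothetical parent/child tables; it only applies
to constraints declared DEFERRABLE, and the check is merely postponed, not skipped):

CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (
    parent_id integer REFERENCES parent (id) DEFERRABLE INITIALLY IMMEDIATE
);

BEGIN;
SET CONSTRAINTS ALL DEFERRED;               -- postpone the FK check until commit
INSERT INTO child (parent_id) VALUES (42);  -- parent row does not exist yet
INSERT INTO parent (id) VALUES (42);        -- now it does
COMMIT;                                     -- the deferred check runs (and passes) here
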
Regards



[SQL] Re: Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Thomas Kellerer

Marcin Mirosław wrote on 30.12.2011 12:07:
>>>> Would be nice to have an option in PostgreSQL something along the lines
>>>> of:  'abort-transaction-on-constraint-violation = false'
>>>
>>> That option is called MySQL with MyISAM tables.
>>
>> Not true.
>>
>> Oracle and others (I believe at least DB2) behave such that you can
>> insert a bunch of rows and if one or more throw a constraint violation,
>> the transaction can still be committed persisting those that do not
>> violate the constraint.
>
> Hi,
> isn't this option:
> http://www.postgresql.org/docs/current/static/sql-set-constraints.html ?
> Regards


No, that's something different.
It would still prevent committing the transaction if the constraint check fails
at the end.

This strict transaction concept is somewhat irritating when you come from another
DBMS (such as Oracle or DB2).
Using savepoints is the only way to "simulate" that behaviour in PostgreSQL
(and then the constraints need to be immediate).

Thomas





Re: [SQL] Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Leif Biberg Kristensen
On Friday 30 December 2011 at 05:25:22, Jan Bakuwel wrote:
> Of course I can start testing existing values in the database before
> accepting them in the user interface but that's putting the cart before
> the horse. I'd much rather use the constraints at the database level to
> tell me a particular update can't be done and do that without losing
> everything else I happened to have done in that transaction until that
> point.

Here's an example from a plpgsql function I wrote, where a possible violation 
of unique constraint on (parent_id, source_text) is checked within the 
transaction:

SELECT source_id FROM sources
    WHERE parent_id = par_id AND source_text = txt
    INTO x;
IF NOT FOUND THEN
    INSERT INTO sources (parent_id, source_text, sort_order, source_date, part_type)
        VALUES (par_id, txt, srt, true_date_extract(txt), pt)
        RETURNING source_id INTO src_id;
ELSE
    RAISE NOTICE 'Source % has the same parent id and text as you tried to enter.', x;
    RETURN -x; -- abort the transaction and return the offending source id as a negative number.
END IF;

I don't know if it's considered good form to issue a RETURN in the middle of a 
function on an error condition, but the main point is that you can take an 
alternate action when the violation is about to happen. Before I introduced 
this test, the PHP interface just barfed all over the place with "transaction 
aborted" messages.

Here's another test from the same function, where the alternate action is 
basically a no-op:

-- don't violate unique constraint on (source_fk, event_fk) in the event_citations table.
-- if this source-event association already exists, it's rather pointless to repeat it.
PERFORM * FROM event_citations WHERE event_fk = event AND source_fk = src_id;
IF NOT FOUND THEN
    INSERT INTO event_citations (event_fk, source_fk) VALUES (event, src_id);
ELSE
    RAISE NOTICE 'citation exists';
END IF;

regards, Leif



[SQL] avoid the creating the type for setof

2011-12-30 Thread John Fabiani
Hi,
I recall somewhere I saw a simple plpgsql function that returned a table with 
more than one record that did not use a defined type or a temp table ( at 
least I think I did).  Is it possible to create such a function that will 
return more than one record and not require a record type or temp table?

something like

return setof record as
return query select ...

return


Johnf



Re: [SQL] avoid the creating the type for setof

2011-12-30 Thread Thomas Kellerer

John Fabiani wrote on 30.12.2011 15:26:

Hi,
I recall somewhere I saw a simple plpgsql function that returned a table with
more than one record that did not use a defined type or a temp table ( at
least I think I did).  Is it possible to create such a function that will
return more than one record and not require a record type or temp table?

something like

return setof record as
return query select ...

return


Sure:

create or replace function get_data()
  returns table (f1 text, f2 integer)
as
$$
   select col1, col2 from some_table;
$$
language sql;

If you are using PL/pgSQL you need to use "RETURN QUERY SELECT ..." inside the 
function.
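
For example, the same thing as a PL/pgSQL sketch (function, table and column names are made up):

create or replace function get_data_plpgsql()
  returns table (f1 text, f2 integer)
as
$$
begin
   return query select col1, col2 from some_table;
end;
$$
language plpgsql;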

For more details see the examples in the manual:
http://www.postgresql.org/docs/current/static/sql-createfunction.html
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-STATEMENTS-RETURNING

Thomas






Re: [SQL] avoid the creating the type for setof

2011-12-30 Thread Adrian Klaver
On Friday, December 30, 2011 6:26:19 am John Fabiani wrote:
> Hi,
> I recall somewhere I saw a simple plpgsql function that returned a table
> with more than one record that did not use a defined type or a temp table
> ( at least I think I did).  Is it possible to create such a function that
> will return more than one record and not require a record type or temp
> table?
> 
> something like
> 
> return setof record as
> return query select ...

http://www.postgresql.org/docs/9.0/interactive/plpgsql-declarations.html#PLPGSQL-DECLARATION-PARAMETERS

39.3.1. Declaring Function Parameters

Search for RETURNS TABLE

> 
> return
> 
> 
> Johnf

-- 
Adrian Klaver
[email protected]



Re: [SQL] avoid the creating the type for setof

2011-12-30 Thread John Fabiani
On Friday, December 30, 2011 06:26:19 AM John Fabiani wrote:
> Hi,
> I recall somewhere I saw a simple plpgsql function that returned a table
> with more than one record that did not use a defined type or a temp table (
> at least I think I did).  Is it possible to create such a function that
> will return more than one record and not require a record type or temp
> table?
> 
> something like
> 
> return setof record as
> return query select ...
> 
> return
> 
> 
> Johnf

Thanks all - I knew I had seen it in the past.

Johnf



Re: [SQL] Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Jan Bakuwel
Hi Leif,

On 30/12/11 22:44, Leif Biberg Kristensen wrote:
> On Friday 30 December 2011 at 09:43:38, Jan Bakuwel wrote:
>
>> Would be nice to have an option in PostgreSQL something along the lines
>> of:  'abort-transaction-on-constraint-violation = false'
> That option is called MySQL with MyISAM tables.
>
> Seriously, if the user encounters a constraint violation, that is IMO a 
> symptom of bad design. Such conditions should be checked and caught _before_ 
> the transaction begins.

Really?

One of my detail tables here is a list of codes. The design currently is
such that you are not allowed to add two identical codes in that table for
a particular related master record, i.e. if you try, it raises a
constraint violation (duplicate key). Users try anyway (you know those
pesky users doing things they're not supposed to do).

Why would that be bad design?

I simply want to tell the user: sorry you can't do this because it
violates a constraint (duplicate key).

Sometimes they try to delete something that has other records referring
to it and the database design is such that it won't cascade delete (for
various reasons). In that case I want to tell them: sorry you can't do
this because there are related records.

In a well designed system, you'd have those constraints at the database
level, not the application level, and use exception handling to deal with
these, not write tests to find out the possible error conditions
beforehand. Of course it's possible to write code around all of this
(and I'm starting to realise that is what I might have to do) but I
consider that bad design.
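
Something like this is what I have in mind (just a sketch, with hypothetical table and
parameter names; in PL/pgSQL an EXCEPTION clause does exactly this, at the cost of an
implicit savepoint around the block):

create or replace function add_code(p_master_id integer, p_code text)
returns void as
$$
begin
    insert into codes (master_id, code) values (p_master_id, p_code);
exception
    when unique_violation then
        raise notice 'code % already exists for master %', p_code, p_master_id;
end;
$$ language plpgsql;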

I don't claim to know all other RDBMS but I think PostgreSQL might be
one of the few (or the only one) that considers a constraint violation
something really, really serious... so serious that the
transaction will have to be aborted. Quite a few other RDBMS will give
you the error but will also allow you to continue on your merry way (and
not lose everything you've done up to that point).

Why-o-why have the PostgreSQL developers decided to do it this way...?

regards,
Jan






Re: [SQL] Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Tom Lane
Jan Bakuwel  writes:
> Why-o-why have the PostgreSQL developers decided to do it this way...?

Because starting and cleaning up a subtransaction is an expensive thing.
If we had auto-rollback at the statement level, you would be paying that
overhead for every statement in every transaction, whether you need it
or not (since obviously there's no way to forecast in advance whether a
statement will fail).  Making it depend on explicit savepoints allows
the user/application to control whether that overhead is expended or
not.

If you want to pay that price all the time, there are client-side
frameworks that will do it for you, or you can roll your own easily
enough.  So we do not see it as a big deal that the database server
itself doesn't act that way.
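
psql's ON_ERROR_ROLLBACK variable is one such client-side mechanism, for what it's worth:
when it is set to "on" (or "interactive"), psql issues an implicit savepoint before each
statement inside a transaction block and rolls back to it on error.  A sketch, with "t"
standing in for some table that has a unique column:

\set ON_ERROR_ROLLBACK on
BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (1);   -- duplicate key error, quietly rolled back to the savepoint
INSERT INTO t VALUES (2);   -- still accepted
COMMIT;                     -- rows 1 and 2 are committed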

regards, tom lane



Re: [SQL] Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Scott Marlowe
On Fri, Dec 30, 2011 at 4:19 PM, Tom Lane  wrote:
> Jan Bakuwel  writes:
>> Why-o-why have the PostgreSQL developers decided to do it this way...?
>
> Because starting and cleaning up a subtransaction is an expensive thing.
> If we had auto-rollback at the statement level, you would be paying that
> overhead for every statement in every transaction, whether you need it
> or not (since obviously there's no way to forecast in advance whether a
> statement will fail).  Making it depend on explicit savepoints allows
> the user/application to control whether that overhead is expended or
> not.
>
> If you want to pay that price all the time, there are client-side
> frameworks that will do it for you, or you can roll your own easily
> enough.  So we do not see it as a big deal that the database server
> itself doesn't act that way.

Having used PostgreSQL a LOT, I find that being able to throw an
entire update at the db and having it fail / be rolled back / CTRL-C
out of and fix the problem is actually much less work than the
frameworks for other databases.  Once you've chased down bad data in a
load file a few times, it's really pretty easy to spot and fix these
issues and just run the whole transaction again.  Since PostgreSQL
doesn't have a very big penalty for rolling back a whole transaction
it's not that bad.  Some dbs, like MySQL with innodb table handler
have a 10:1 or greater penalty for rollbacks.  Insert a million rows
in innodb then issue a rollback and go get a sandwich.  In PostgreSQL
a rollback is generally instantaneous, with the only real cost being
bloat in the tables or indexes.



Re: [SQL] Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Samuel Gendler
On Fri, Dec 30, 2011 at 4:03 PM, Scott Marlowe wrote:

> On Fri, Dec 30, 2011 at 4:19 PM, Tom Lane  wrote:
> > Jan Bakuwel  writes:
> >> Why-o-why have the PostgreSQL developers decided to do it this way...?
> >
> > Because starting and cleaning up a subtransaction is an expensive thing.
> > If we had auto-rollback at the statement level, you would be paying that
> > overhead for every statement in every transaction, whether you need it
> > or not (since obviously there's no way to forecast in advance whether a
> > statement will fail).  Making it depend on explicit savepoints allows
> > the user/application to control whether that overhead is expended or
> > not.
> >
> > If you want to pay that price all the time, there are client-side
> > frameworks that will do it for you, or you can roll your own easily
> > enough.  So we do not see it as a big deal that the database server
> > itself doesn't act that way.
>
> Having used PostgreSQL a LOT, I find that being able to throw an
> entire update at the db and having it fail / be rolled back / CTRL-C
> out of and fix the problem is actually much less work than the
> frameworks for other databases.  Once you've chased down bad data in a
> load file a few times, it's really pretty easy to spot and fix these
> issues and just run the whole transaction again.  Since PostgreSQL
> doesn't have a very big penalty for rolling back a whole transaction
> it's not that bad.  Some dbs, like MySQL with innodb table handler
> have a 10:1 or greater penalty for rollbacks.  Insert a million rows
> in innodb then issue a rollback and go get a sandwich.  In PostgreSQL
> a rollback is generally instantaneous, with the only real cost being
> bloat in the tables or indexes.
>

More to the point - if a statement is truly independent of all the other
statements in a transaction, it would seem that the transaction itself is
poorly defined.  The whole point of a transaction is to define an atomic
unit of work. If you don't care about atomicity, enable auto commit and
just catch the constraint violation exception and continue on your merry
way.  Yes, on occasion, working around the way postgresql functions causes
extra work for a developer (I don't think anyone is suggesting that it
should change the end user experience, as was sort-of implied by one
response on this thread), but so too can code which is not atomic cause
extra work for a developer - and transactions are intended to be atomic, so
it makes far more sense to me to implement it the postgres way and incur
the modicum of extra developer overhead in the few cases where I may want
to deal with acceptable constraint violations rather than in the many cases
where I want a transaction to be atomic.

In the example of users adding a new value to an enumerated list in the
same unit of work in which other rows are inserted, it is likely not too much
work to use a trigger to check the insert prior to executing it - assuming
that list is in another table with just a foreign key going into the table
the majority of your inserts are going to.  Alternatively, if you aren't
doing a bulk insert via a copy, it probably isn't too much work to
construct the set of inserts needed for the joined table separately and
issue those in separate transactions before doing the main transaction.
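
A rough sketch of that trigger idea, with hypothetical table names (a "categories" lookup
table referenced by an "items" table); the BEFORE INSERT trigger adds a missing lookup row
before the foreign key on the new row is checked:

create table categories (id integer primary key, name text not null);
create table items (id serial primary key, category_id integer references categories (id));

create or replace function ensure_category() returns trigger as
$$
begin
    -- create the referenced category on the fly if it does not exist yet
    if not exists (select 1 from categories where id = new.category_id) then
        insert into categories (id, name) values (new.category_id, 'auto-added');
    end if;
    return new;
end;
$$ language plpgsql;

create trigger items_ensure_category
    before insert on items
    for each row execute procedure ensure_category();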


Re: [SQL] Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Jan Bakuwel
Hi,

Thanks for all having responded to my mail.

I understand there's no way around it at the moment so I'll have to
start writing some code to deal with this behaviour.

cheers!
Jan








[SQL] Nested custom types: array - unable to insert

2011-12-30 Thread John Poole
I am trying to create a custom data type for phone numbers where
I have a primary phone number and then an array of additional
phone numbers qualified by certain types.

Below is a set of SQL commands I used to set up my custom
types.  I am unable to insert into the array field and wonder
if I have found a bug, or I'm just missing some basic technique.

Here is the error I receive when trying to insert into the array field:

ERROR:  cannot cast type record[] to phonenumber_type
LINE 2: ...2,'office'), (333,'cell'),(444,'eve')]::phonenum...

Can someone provide me an example of how to insert one or more
records into the secondary array component of the type or provide
some light on creating a custom type that would hold two fields:
1) a single type
2) an array of types  (this field may be null)


Thank you.

John Poole

=
Here is the SQL to recreate my attempt:
--
-- Creating a custom data type and inserting/updating example
--
-- create database demo_typestudy1;

create type phoneattribute_type as ENUM
('home','office','day','eve','mobile','fax');


create type phonenumber_type as
(numbers int,
phone_type phoneattribute_type
);

create type contactphone_type as
(primarynumber phonenumber_type,
othernumbers phonenumber_type ARRAY
);

create table people
(id integer PRIMARY KEY,
username text NOT NULL UNIQUE,
phone contactphone_type
);

--
-- create a record w/o phone
--
insert into people
(id, username)
VALUES
(1,'bob');
--
-- add the custom data type: contactphone, just the primary and no other
-- we'll try adding array values later
--
update people
set phone = ((1234567899,'home'),NULL)
where id = 1;
--
-- also more qualified
--
update people
set phone = ROW(ROW(1234567899,'home'),NULL)
where id = 1;
--
-- most qualified (with casting)
--
update people
set phone = CAST(ROW(CAST(ROW(1234567899,'home') as phonenumber_type),NULL)
as contactphone_type)
where id = 1;
--
-- view it
--
select phone
from people where id = 1;
--
-- try adding to the array field
-- replace the primary and add additional phone numbers
-- This is where things go awry.
--
update people
set phone = CAST(ROW(CAST(ROW(111,'home') as
phonenumber_type),ARRAY[(222,'office'),
(333,'cell'),(444,'eve')]::phonenumber_type)
as contactphone_type)
where id = 1;



Re: [SQL] Nested custom types: array - unable to insert

2011-12-30 Thread David Johnston

On Dec 30, 2011, at 21:53, John Poole  wrote:

> I am trying to create a custom data type for phone numbers where
> I have a primary phone number and then an array of additional
> phone numbers qualified by certain types.
> 
> Below is a set of SQL commands I used to set up my custom
> types.  I am unable to insert into the array field and wonder
> if I have found a bug, or I'm just missing some basic technique.
> 
> Here is the error I receive when trying to insert into the array field:
> 
> ERROR:  cannot cast type record[] to phonenumber_type
> LINE 2: ...2,'office'), (333,'cell'),(444,'eve')]::phonenum...
> 
> Can someone provide me an example of how to insert one or more
> records into the secondary array component of the type or provide
> some light on creating a custom type that would hold two fields:
> 1) a single type
> 2) an array of types  (this field may be null)
> 
> 
> Thank you.
> 
> John Poole
> 
> =
> Here is the SQL to recreate my attempt:
> --
> -- Creating a custom data type and inserting/updating example
> --
> -- create database demo_typestudy1;
> 
> create type phoneattribute_type as ENUM
> ('home','office','day','eve','mobile','fax');
> 
> 
> create type phonenumber_type as
> (numbers int,
> phone_type phoneattribute_type
> );
> 
> create type contactphone_type as
> (primarynumber phonenumber_type,
> othernumbers phonenumber_type ARRAY
> );
> 
> create table people
> (id integer PRIMARY KEY,
> username text NOT NULL UNIQUE,
> phone contactphone_type
> );
> 
> --
> -- create a record w/o phone
> --
> insert into people
> (id, username)
> VALUES
> (1,'bob');
> --
> -- add the custom data type: contactphone, just the primary and no other
> -- we'll try adding array values later
> --
> update people
> set phone = ((1234567899,'home'),NULL)
> where id = 1;
> --
> -- also more qualified
> --
> update people
> set phone = ROW(ROW(1234567899,'home'),NULL)
> where id = 1;
> --
> -- most qualified (with casting)
> --
> update people
> set phone = CAST(ROW(CAST(ROW(1234567899,'home') as phonenumber_type),NULL)
>   as contactphone_type)
> where id = 1;
> --
> -- view it
> --
> select phone
> from people where id = 1;
> --
> -- try adding to the array field
> -- replace the primary and add additional phone numbers
> -- This is where things go awry.
> --
> update people
> set phone = CAST(ROW(CAST(ROW(111,'home') as
> phonenumber_type),ARRAY[(222,'office'),
> (333,'cell'),(444,'eve')]::phonenumber_type)
>   as contactphone_type)
> where id = 1;
> 
> 

... ,ARRAY[ ... ]::phonenumber_type[]

You need to cast to an array of the type which is done by adding the trailing 
brackets to the type.

Also, your CREATE TYPE syntax (the "othernumbers phonenumber_type ARRAY" part) 
is something I have not yet seen.  Where did you see this? I would have 
expected it to read "othernumbers phonenumber_type[]"; without the word ARRAY.

David J.



Re: [SQL] Nested custom types: array - unable to insert [SOLVED]

2011-12-30 Thread John L. Poole



On 12/30/2011 7:31 PM, David Johnston wrote:

> [...]
>
> ... ,ARRAY[ ... ]::phonenumber_type[]
>
> You need to cast to an array of the type which is done by adding the trailing
> brackets to the type.
>
> Also, your CREATE TYPE syntax (the "othernumbers phonenumber_type ARRAY" part) is
> something I have not yet seen.  Where did you see this? I would have expected it to read
> "othernumbers phonenumber_type[]"; without the word ARRAY.
>
> David J.

David et al.,
That works.

demo_typestudy1=# update people
demo_typestudy1-# set phone = CAST(ROW(CAST(ROW(111,'home') as 
phonenumber_type),ARRAY[(222,'office'), 
(333,'mobile'),(444,'eve')]::phonenumber_type[])

demo_typestudy1(# as contactphone_type)
demo_typestudy1-# where id = 1;
UPDATE 1
demo_typestudy1=#

To answer your question, I did originally try

 "othernumbers phonenumber_type[]"

but as I kept running into a wall, I figured I had not tried
a correct syntax and somewhere found the "ARRAY" syntax
as an acceptable form; I figured the more explicit, the better.
It may have been from a railroad diagram.
The type definition was accepted so I ran with it.

Thank you, I spent several hours gated by this issue, so your
insight has been a tremendous help.

John



Re: [SQL] Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Scott Marlowe
On Fri, Dec 30, 2011 at 6:19 PM, Samuel Gendler wrote:
>
>
> On Fri, Dec 30, 2011 at 4:03 PM, Scott Marlowe wrote:
>>
>> On Fri, Dec 30, 2011 at 4:19 PM, Tom Lane  wrote:
>> > Jan Bakuwel  writes:
>> >> Why-o-why have the PostgreSQL developers decided to do it this way...?
>> >
>> > Because starting and cleaning up a subtransaction is an expensive thing.
>> > If we had auto-rollback at the statement level, you would be paying that
>> > overhead for every statement in every transaction, whether you need it
>> > or not (since obviously there's no way to forecast in advance whether a
>> > statement will fail).  Making it depend on explicit savepoints allows
>> > the user/application to control whether that overhead is expended or
>> > not.
>> >
>> > If you want to pay that price all the time, there are client-side
>> > frameworks that will do it for you, or you can roll your own easily
>> > enough.  So we do not see it as a big deal that the database server
>> > itself doesn't act that way.
>>
>> Having used PostgreSQL a LOT, I find that being able to throw an
>> entire update at the db and having it fail / be rolled back / CTRL-C
>> out of and fix the problem is actually much less work than the
>> frameworks for other databases.  Once you've chased down bad data in a
>> load file a few times, it's really pretty easy to spot and fix these
>> issues and just run the whole transaction again.  Since PostgreSQL
>> doesn't have a very big penalty for rolling back a whole transaction
>> it's not that bad.  Some dbs, like MySQL with innodb table handler
>> have a 10:1 or greater penalty for rollbacks.  Insert a million rows
>> in innodb then issue a rollback and go get a sandwich.  In PostgreSQL
>> a rollback is generally instantaneous, with the only real cost being
>> bloat in the tables or indexes.
>
>
> More to the point - if a statement is truly independent of all the other
> statements in a transaction, it would seem that the transaction itself is
> poorly defined.  The whole point of a transaction is to define an atomic
> unit of work. If you don't care about atomicity, enable auto commit and just
> catch the constraint violation exception and continue on your merry way.

But the performance penalty for autocommit is huge.  It's still almost
always faster to run a single big transaction and fix errors than to
do single commits when you're doing a large import.
