On Thu, Apr 03, 2008 at 03:57:38PM -0400, Tom Lane wrote:
> Gregory Stark <[EMAIL PROTECTED]> writes:
> > AFAIK the state of the art is actually to load the data into a table which
> > closely matches the source material, sometimes just columns of text. Then
> > copy it all to another table doing transformations. Not impressed.
> 
> I liked the idea of allowing COPY FROM to act as a table source in a
> larger SELECT or INSERT...SELECT.  Not at all sure what would be
> involved to implement that, but it seems a lot more flexible than
> any other approach.

I'm not sure why new syntax is needed; what's wrong with having a simple
set of procedures like:

 readtsv(filename TEXT) RETURNS SETOF RECORD
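
To make that concrete, here's a hypothetical declaration. Returning an
anonymous SETOF RECORD means the caller supplies a column definition list
at call time, which in practice points at a C-language implementation:

  -- Hypothetical sketch; '$libdir/readtsv' stands in for wherever the
  -- shared library implementing the function would actually live.
  CREATE FUNCTION readtsv(filename TEXT) RETURNS SETOF RECORD
      AS '$libdir/readtsv', 'readtsv'
      LANGUAGE C STRICT;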

You'd then be free to do whatever "transformations" you wanted:

  INSERT INTO target (i, j)
    SELECT i, MIN(j::INTEGER)
    FROM readtsv('file.dat') x(i INTEGER, j TEXT)
    WHERE j ~ '^[0-9]+$'
    GROUP BY i;

You could even have a readlines(filename) procedure that just gives you
back a SETOF TEXT, letting you do the parsing yourself.  An associated
regexp split to RECORD would be nice then; a sketch of both follows.
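
A readlines() could be little more than a wrapper around pg_read_file(),
assuming you can live with its restrictions (superuser-only, paths
relative to the data directory), with 8.3's regexp_split_to_table()
doing the line splitting:

  -- Sketch only: pg_read_file() is superuser-only and confined to the
  -- data directory, so this is not a general-purpose file reader.
  CREATE OR REPLACE FUNCTION readlines(filename TEXT)
  RETURNS SETOF TEXT AS $$
      SELECT regexp_split_to_table(
                 pg_read_file($1, 0, (pg_stat_file($1)).size),
                 E'\n');
  $$ LANGUAGE SQL;

The regexp-split-to-RECORD part then mostly falls out of
regexp_split_to_array():

  SELECT x[1] AS i, x[2] AS j
    FROM (SELECT regexp_split_to_array(line, E'\t') AS x
            FROM readlines('file.dat') AS r(line)) AS t;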


  Sam
