On Tuesday 08 April 2008, Tom Lane wrote:
Dimitri Fontaine [EMAIL PROTECTED] writes:
And my main concern would still be left as-is, COPY wouldn't have any
facility to cope with data representation not matching what datatype
input functions want to read.
That's sufficiently covered by the proposal to allow a COPY FROM as a
table source within SELECT, no?
Dimitri Fontaine [EMAIL PROTECTED] writes:
On Tuesday 08 April 2008, Tom Lane wrote:
That's sufficiently covered by the proposal to allow a COPY FROM as a
table source within SELECT, no?
Well, yes, the table source has text as datatypes and the select expression
on the column will call
Tom Lane wrote:
Dimitri Fontaine [EMAIL PROTECTED] writes:
On Tuesday 08 April 2008, Tom Lane wrote:
That's sufficiently covered by the proposal to allow a COPY FROM as a
table source within SELECT, no?
Well, yes, the table source has text as datatypes and the select
Andrew Dunstan [EMAIL PROTECTED] writes:
Tom Lane wrote:
(One of the issues that'd have to be addressed to allow a table source
syntax is whether it's sane to allow multiple COPY FROM STDIN in a
single query. If so, how does it work; if not, how do we prevent it?)
I don't see why it
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Could we make each COPY target
behave like an SRF, stashing its data in a tuplestore?
The first question is what is the wire-protocol definition. In
particular, how would the client know what order to send the COPY
datasets
Andrew Dunstan [EMAIL PROTECTED] writes:
Is there a big demand for multiple datasets on the wire in a situation
like this? How about if we allow multiple COPY targets but at most one
from STDIN, at least for one go round?
That's exactly what I was saying (or at least trying to imply) as the
On Apr 3, 2008, at 4:51 PM, Andrew Dunstan wrote:
Several years ago Bruce and I discussed the then theoretical use of
a SELECT query as the source for COPY TO, and we agreed that the
sane analog would be to have an INSERT query as the target of COPY
FROM.
This idea seems to take that
Decibel! wrote:
On Apr 3, 2008, at 4:51 PM, Andrew Dunstan wrote:
Several years ago Bruce and I discussed the then theoretical use of a
SELECT query as the source for COPY TO, and we agreed that the sane
analog would be to have an INSERT query as the target of COPY FROM.
This idea seems to
On Monday 07 April 2008 21:04:26, Andrew Dunstan wrote:
Quite apart from any other reason why not, this would be a horrid hack
and is just the sort of feature we rightly eschew, IMNSHO. COPY is
designed as a bulk load/unload facility. It's fragile enough in that role.
And my main
Dimitri Fontaine [EMAIL PROTECTED] writes:
And my main concern would still be left as-is, COPY wouldn't have any facility
to cope with data representation not matching what datatype input functions
want to read.
That's sufficiently covered by the proposal to allow a COPY FROM as a
table source within SELECT, no?
On Thu, Apr 03, 2008 at 09:38:42PM -0400, Tom Lane wrote:
Sam Mason [EMAIL PROTECTED] writes:
On Thu, Apr 03, 2008 at 03:57:38PM -0400, Tom Lane wrote:
I liked the idea of allowing COPY FROM to act as a table source in a
larger SELECT or INSERT...SELECT. Not at all sure what would be
Hi,
On Thu, Apr 3, 2008 at 6:47 PM, Dimitri Fontaine [EMAIL PROTECTED]
wrote:
Here's a proposal for COPY to support the T part of an ETL, that is adding the
capability for COPY FROM to Transform the data it gets.
The idea is quite simple: adding to COPY FROM the option to run a function
Data transformation while doing a data load is a requirement now and
then.
That users will have to do mass updates *after* the load
completes to mend the data to their liking should be reason enough to do
this while the loading is happening. I think to go about it the right
way
Dimitri Fontaine [EMAIL PROTECTED] writes:
Here's a proposal for COPY to support the T part of an ETL, that is adding the
capability for COPY FROM to Transform the data it gets.
The idea is quite simple: adding to COPY FROM the option to run a function on
the data before calling
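A per-column transform of the kind the proposal describes can already be written as an ordinary SQL function; the following is a hypothetical sketch (the function name and the zero-date convention are illustrative, not part of the proposal):

```sql
-- Hypothetical transform: mend the textual representation in SQL
-- before the date input function ever has to parse it.
CREATE FUNCTION pretreat_date(raw text) RETURNS date AS $$
    SELECT CASE
        WHEN raw = '0000-00-00' THEN NULL  -- MySQL-style zero date
        ELSE raw::date
    END;
$$ LANGUAGE sql;
```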
On Thursday 03 April 2008, PFC wrote:
CREATE FLATFILE READER mydump (
    id   INTEGER,
    date TEXT,
    ...
) FROM file 'dump.txt'
(followed by delimiter specification syntax identical to COPY, etc)
;
[...]
INSERT INTO mytable (id, date, ...) SELECT id, NULLIF(
On Thu, 2008-04-03 at 16:44 +0200, PFC wrote:
CREATE FLATFILE READER mydump (
    id   INTEGER,
    date TEXT,
    ...
) FROM file 'dump.txt'
(followed by delimiter specification syntax identical to COPY, etc)
;
Very cool idea, but why would you need to create a reader object
INSERT INTO mytable (id, date, ...) SELECT id, NULLIF( date,
'0000-00-00' ), ... FROM mydump WHERE (FKs check and drop the broken
records);
What do we gain against current way of doing it, which is:
COPY loadtable FROM 'dump.txt' WITH ...
INSERT INTO destination_table(...) SELECT
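Spelled out, the current two-step way Csaba refers to looks roughly like this (table and column names are hypothetical; the NULLIF/zero-date detail is borrowed from PFC's example upthread):

```sql
-- Step 1: bulk-load the raw dump into a staging table of plain text
-- columns, so no datatype input function can reject a row.
CREATE TABLE loadtable (id text, date text);
COPY loadtable FROM 'dump.txt' WITH CSV;

-- Step 2: transform while copying into the real table, dropping bad rows.
INSERT INTO destination_table (id, date)
SELECT id::integer,
       NULLIF(date, '0000-00-00')::date
FROM loadtable
WHERE id ~ '^[0-9]+$';
```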
On Thu, 03 Apr 2008 16:57:53 +0200, Csaba Nagy [EMAIL PROTECTED] wrote:
On Thu, 2008-04-03 at 16:44 +0200, PFC wrote:
CREATE FLATFILE READER mydump (
    id   INTEGER,
    date TEXT,
    ...
) FROM file 'dump.txt'
(followed by delimiter specification syntax identical to COPY,
On Thursday 03 April 2008, Tom Lane wrote:
The major concern I have about this is to ensure that no detectable
overhead is added to COPY when the feature isn't being used.
Well, when COLUMN x CONVERT USING or whatever syntax we choose is not used,
we default to the current code path, that is we do
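For context, the syntax being sketched in the thread would read something like the following. This is the proposed (never-committed) form, and pretreat_date stands for an assumed user-defined text-to-date function:

```sql
-- Hypothetical proposed syntax: attach a transform function to one column,
-- leaving the fast path untouched for columns without CONVERT USING.
COPY mytable (id, date) FROM 'dump.txt'
    COLUMN date CONVERT USING pretreat_date;
```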
Tom Lane [EMAIL PROTECTED] writes:
Dimitri Fontaine [EMAIL PROTECTED] writes:
Here's a proposal for COPY to support the T part of an ETL, that is adding the
capability for COPY FROM to Transform the data it gets.
The idea is quite simple: adding to COPY FROM the option to run a function
Gregory Stark [EMAIL PROTECTED] writes:
AFAIK the state of the art is actually to load the data into a table which
closely matches the source material, sometimes just columns of text. Then copy
it all to another table doing transformations. Not impressed.
I liked the idea of allowing COPY FROM
Tom Lane wrote:
Gregory Stark [EMAIL PROTECTED] writes:
AFAIK the state of the art is actually to load the data into a table which
closely matches the source material, sometimes just columns of text. Then copy
it all to another table doing transformations. Not impressed.
I liked the
On Thu, Apr 03, 2008 at 03:57:38PM -0400, Tom Lane wrote:
Gregory Stark [EMAIL PROTECTED] writes:
AFAIK the state of the art is actually to load the data into a table which
closely matches the source material, sometimes just columns of text. Then copy
it all to another table doing transformations. Not impressed.
Sam Mason [EMAIL PROTECTED] writes:
On Thu, Apr 03, 2008 at 03:57:38PM -0400, Tom Lane wrote:
I liked the idea of allowing COPY FROM to act as a table source in a
larger SELECT or INSERT...SELECT. Not at all sure what would be
involved to implement that, but it seems a lot more flexible than
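What COPY-as-table-source might look like inside an INSERT...SELECT, purely as a hypothetical illustration of the idea being discussed (no such syntax exists in PostgreSQL):

```sql
-- Hypothetical: COPY as a row source, so the SELECT list carries the
-- transformations instead of a separate staging table.
INSERT INTO mytable (id, date)
SELECT t.id::integer, NULLIF(t.date, '0000-00-00')::date
FROM (COPY (id text, date text) FROM 'dump.txt') AS t;
```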