hi NikhilS.
2008/3/14, NikhilS <[EMAIL PROTECTED]>:
>
> Hi Longlong,
>
> > i think this is a better idea.
> > from *NikhilS *
> > http://archives.postgresql.org/pgsql-hackers/2007-12/msg00584.php
>
> But instead of using a per insert or a batch insert subtransaction, I am
> thinking that we can start off a subtransaction and continue it till we
> encounter a failure.
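The batch-subtransaction idea above can be sketched roughly as follows. This is an illustrative Python model only, using SQLite savepoints to stand in for PostgreSQL subtransactions; the table, column names, and `bulk_load` helper are all made up for the example, not part of any proposed patch:

```python
import sqlite3

def bulk_load(conn, rows, batch_size=1000):
    """Insert rows in batches, each batch inside a savepoint.

    A failure aborts the whole subtransaction, so the batch is
    rolled back and replayed row by row to isolate the bad rows.
    """
    conn.isolation_level = None  # autocommit; we manage savepoints ourselves
    cur = conn.cursor()
    bad = []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        cur.execute("SAVEPOINT load_batch")
        try:
            # Keep inserting under one subtransaction until something fails.
            cur.executemany("INSERT INTO t(id, val) VALUES (?, ?)", batch)
            cur.execute("RELEASE SAVEPOINT load_batch")
        except sqlite3.Error:
            # The failure poisons the whole batch: undo it, then replay
            # one row at a time so only the offending rows are skipped.
            cur.execute("ROLLBACK TO SAVEPOINT load_batch")
            cur.execute("RELEASE SAVEPOINT load_batch")
            for row in batch:
                cur.execute("SAVEPOINT load_row")
                try:
                    cur.execute("INSERT INTO t(id, val) VALUES (?, ?)", row)
                    cur.execute("RELEASE SAVEPOINT load_row")
                except sqlite3.Error:
                    cur.execute("ROLLBACK TO SAVEPOINT load_row")
                    cur.execute("RELEASE SAVEPOINT load_row")
                    bad.append(row)
    return bad
```

The trade-off being discussed is visible here: larger batches mean fewer subtransactions on clean data, but a more expensive replay when a batch does contain a bad row.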
2008/3/12, Neil Conway <[EMAIL PROTECTED]>:
>
> I don't see why creating index entries in bulk has anything to do with
> COPY vs. INSERT: if a lot of rows are being loaded into the table in a
> single command, it would be a win to create the index entries in bulk,
> regardless of whether COPY or INSERT is used.
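The win Neil describes comes from sorting the new keys once and merging them into the index, rather than descending the tree once per row. A toy illustration (Python, with a sorted list standing in for a btree; both function names are hypothetical):

```python
import bisect

def insert_one_by_one(index, keys):
    # One search per key, the way per-row index maintenance works.
    for k in keys:
        bisect.insort(index, k)
    return index

def insert_in_bulk(index, keys):
    # Sort the incoming keys once, then do a single linear merge --
    # the strategy a bulk index build uses.
    keys = sorted(keys)
    merged, i, j = [], 0, 0
    while i < len(index) and j < len(keys):
        if index[i] <= keys[j]:
            merged.append(index[i]); i += 1
        else:
            merged.append(keys[j]); j += 1
    merged.extend(index[i:])
    merged.extend(keys[j:])
    return merged
```

Both produce the same index; the bulk path just replaces n independent descents with one sort plus one sequential pass, which is where the speedup for large loads would come from whichever command feeds the rows in.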
On Tue, 2008-03-11 at 15:18 -0700, Neil Conway wrote:
> Note also that pg_bulkload currently does something analogous to this
> outside of the DBMS proper:
>
> http://pgbulkload.projects.postgresql.org/
Sorry, wrong project. I mean pgloader:
http://pgfoundry.org/projects/pgloader/
-Neil
--
On Tue, 2008-03-11 at 20:56 +0800, longlong wrote:
> This would be a nice feature. Right now there are often applications
> where there is a data loading or staging table that ends up being
> merged with a larger table after some cleanup. Moving that data from
> the preparation area into the final table.
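The staging-table pattern described above can be sketched like this (Python with SQLite for brevity; the `staging`/`final` tables and their columns are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Loose staging table: accepts anything the raw load throws at it.
    CREATE TABLE staging(id INTEGER, val TEXT);
    -- Final table with the real constraints.
    CREATE TABLE final(id INTEGER PRIMARY KEY, val TEXT NOT NULL);
""")

# Raw load into staging, bad rows and all.
conn.executemany("INSERT INTO staging VALUES (?, ?)",
                 [(1, "a"), (2, None), (3, "c"), (3, "c")])

# Cleanup pass: drop rows that would violate the final table's constraints.
conn.execute("DELETE FROM staging WHERE val IS NULL")

# Merge the cleaned rows into the final table in a single statement.
conn.execute("INSERT INTO final(id, val) SELECT DISTINCT id, val FROM staging")
conn.commit()
```

The point of the thread is that better error handling in COPY itself could make this two-step dance unnecessary for the common cases.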