Alex Goncharov wrote:
>>> How do I decide, before starting a COPY data load, whether such a load
>>> protection ("complexity") makes sense ("i
Thank you, Kevin -- this is helpful.
But it still leaves questions for me.
Kevin Grittner wrote:
> Alex Goncharov wrote:
> > The whole thing is aborted then, and the good 99 records are not
> > making it into the target table.
>
> Right. This is one reason people often
On the COPY's atomicity -- looking for a definitive answer from a core
developer, not a user's guess, please.
Suppose I COPY a huge amount of data, e.g. 100 records.
My 99 records are fine for the target, and the 100th is not -- it
comes with a wrong record format or a target constraint violation.
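
For what it's worth, the all-or-nothing behavior Kevin confirms above is
easy to see from a client. A minimal sketch, assuming Python with psycopg2
and a pre-created table target(id int, val text) -- the table, column
names and connection string are illustration only, not from this thread:

    import io
    import psycopg2

    conn = psycopg2.connect("dbname=test")   # connection details assumed
    buf = io.StringIO(
        "".join(f"{i}\tgood\n" for i in range(1, 100))   # 99 well-formed rows
        + "not_an_int\tbad\n"                            # row 100: wrong format
    )
    cur = conn.cursor()
    try:
        cur.copy_from(buf, "target", columns=("id", "val"))
        conn.commit()
    except psycopg2.DataError:
        conn.rollback()   # the whole load is discarded, good rows included
    cur.execute("SELECT count(*) FROM target")
    print(cur.fetchone()[0])   # 0, assuming target started empty

The 99 good rows never become visible: COPY runs as a single statement, so
one bad record aborts the entire load.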
,--- You/Divakar (Wed, 8 Dec 2010 21:17:22 -0800 (PST)) *
| So it means there will be visible impact if the nature of DB interaction is
| DB insert/select. We do that mostly in my app.
You can't call an impact "visible" unless you can measure it in your
specific application.
Let's say ODBC ta
,--- You/Divakar (Wed, 8 Dec 2010 20:31:30 -0800 (PST)) *
| Is there any performance penalty when I use ODBC library vs using libpq?
In general, yes.
In degenerate cases when most of the work happens in the server, no.
You need to measure in the context of your specific application.
-- Alex
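
The only way to settle it is to time the same workload through both
drivers. A rough sketch, assuming Python with psycopg2 (a libpq wrapper)
and pyodbc against an ODBC DSN named "pg" for the same database -- the
DSN, connection string, and row count are all assumptions:

    import time
    import psycopg2
    import pyodbc

    def bench(conn, placeholder, n=10000):
        # Time n single-row INSERTs through whichever driver owns conn.
        # psycopg2 and pyodbc use different parameter markers (%s vs ?).
        cur = conn.cursor()
        cur.execute("CREATE TEMP TABLE t (i int, s text)")
        sql = "INSERT INTO t VALUES ({0}, {0})".format(placeholder)
        start = time.perf_counter()
        for i in range(n):
            cur.execute(sql, (i, "x"))
        conn.commit()
        return time.perf_counter() - start

    print("libpq:", bench(psycopg2.connect("dbname=test"), "%s"))
    print("ODBC :", bench(pyodbc.connect("DSN=pg"), "?"))

If the two timings differ by less than the noise in your application, the
ODBC layer is not your bottleneck.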
,--- You/Suvankar (Wed, 15 Jul 2009 18:32:12 +0530) *
| Yes, I have got 2 segments and a master host. So, in a way processing
| should be faster in Greenplum.
No, it should not: it all depends on your data, SQL statements and
setup.
In my own experiments, with small amounts of stored data, P
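
Whether both segments actually share the work shows up in the query plan,
not in the host count. A small sketch, assuming psycopg2 connected to the
master and the OBSERVATION_ALL table quoted below; in Greenplum, segment
participation appears as Motion nodes (e.g. "Gather Motion 2:1"):

    import psycopg2

    conn = psycopg2.connect("dbname=test")   # master host; details assumed
    cur = conn.cursor()
    cur.execute("EXPLAIN SELECT count(*) FROM observation_all")
    for (line,) in cur.fetchall():
        print(line)   # look for Motion nodes spanning the segments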
,--- You/Suvankar (Mon, 13 Jul 2009 16:53:41 +0530) *
| I have some 99,000 records in a table (OBSERVATION_ALL) in a Postgres DB
| as well as a Greenplum DB.
|
| The Primary key is a composite one comprising 2 columns (so_no,
| serial_no).
|
| The execution of the following query takes 8
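
The query text is truncated above, so only a hedged illustration: with a
composite primary key (so_no, serial_no), the first thing to check is
whether the lookup can use the PK index at all. Assuming psycopg2 and
made-up key values:

    import psycopg2

    conn = psycopg2.connect("dbname=test")   # details assumed
    cur = conn.cursor()
    cur.execute(
        "EXPLAIN ANALYZE SELECT * FROM observation_all "
        "WHERE so_no = %s AND serial_no = %s",
        ("SO123", 42),   # hypothetical key values
    )
    for (line,) in cur.fetchall():
        print(line)

A B-tree index on (so_no, serial_no) serves predicates that constrain
so_no; a predicate on serial_no alone generally cannot use it efficiently.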