Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2017-05-10 Thread Alvaro Herrera
FWIW I ended up reverting the whole thing, even from master. A more complete solution would have to be researched. -- Álvaro Herrera https://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2017-01-10 Thread Daniel Verite
Alvaro Herrera wrote: > With that, pushed and I hope this is closed for good. Great! I understand the fix couldn't be backpatched because we are not allowed to change the StringInfo struct in existing releases. But it occurs to me that the new "long_ok" flag might not be necessary after

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2017-01-10 Thread Alvaro Herrera
Alvaro Herrera wrote: > Daniel Verite wrote: > > > My tests are OK too but I see an issue with the code in > > enlargeStringInfo(), regarding integer overflow. > > The bit of comment that says: > > > > Note we are assuming here that limit <= INT_MAX/2, else the above > > loop could overflow.

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2017-01-09 Thread Alvaro Herrera
Daniel Verite wrote: > My tests are OK too but I see an issue with the code in > enlargeStringInfo(), regarding integer overflow. > The bit of comment that says: > > Note we are assuming here that limit <= INT_MAX/2, else the above > loop could overflow. > > is obsolete, it's now INT_MAX
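A minimal standalone sketch of the issue being discussed (illustrative only, not the actual stringinfo.c code): once the enforced limit may approach INT_MAX, each doubling has to be guarded explicitly, otherwise the computed size can overflow before the clamp is applied.

/*
 * Illustrative sketch only, not the PostgreSQL implementation.  "current"
 * is the existing buffer size (StringInfo buffers start at 1024, so it is
 * assumed positive), "needed" the required size, "limit" the enforced
 * maximum.  Checking before each doubling keeps newlen from overflowing
 * int even when limit is as large as INT_MAX.
 */
static int
grow_target(int current, int needed, int limit)
{
    int newlen = current;

    while (needed > newlen)
    {
        if (newlen > limit / 2)
        {
            /* doubling again would exceed (or wrap past) the limit */
            newlen = limit;
            break;
        }
        newlen = 2 * newlen;
    }
    return newlen;
}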

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-12-09 Thread Alvaro Herrera
Daniel Verite wrote: > My tests are OK too but I see an issue with the code in > enlargeStringInfo(), regarding integer overflow. > The bit of comment that says: > > Note we are assuming here that limit <= INT_MAX/2, else the above > loop could overflow. > > is obsolete, it's now INT_MAX

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-12-09 Thread Alvaro Herrera
Alvaro Herrera wrote: > > I have now pushed this to 9.5, 9.6 and master. It could be backpatched > to 9.4 with ease (just a small change in heap_form_tuple); anything > further back would require much more effort. I had to revert this on 9.5 and 9.6 -- it is obvious (in hindsight) that changing

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-12-05 Thread Alvaro Herrera
Daniel Verite wrote: > My tests are OK too but I see an issue with the code in > enlargeStringInfo(), regarding integer overflow. Rats. I'll have a look later. You're probably right. In the meantime I added this problem to the Open Items wiki page for pg10, which I just created:

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-12-05 Thread Daniel Verite
Alvaro Herrera wrote: > I have now pushed this to 9.5, 9.6 and master. It could be backpatched > to 9.4 with ease (just a small change in heap_form_tuple); anything > further back would require much more effort. > > I used a 32-bit limit using sizeof(int32). I tested and all the >

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-12-01 Thread Alvaro Herrera
I have now pushed this to 9.5, 9.6 and master. It could be backpatched to 9.4 with ease (just a small change in heap_form_tuple); anything further back would require much more effort. I used a 32-bit limit using sizeof(int32). I tested and all the mentioned cases seem to work sanely; if you

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-11-29 Thread Alvaro Herrera
Daniel Verite wrote: > If we consider what would happen with the latest patch on a platform > with sizeof(int)=8 and a \copy invocation like this: > > \copy (select big,big,big,big,big,big,big,big,.. FROM > (select lpad('', 1024*1024*200) as big) s) TO /dev/null > > if we put enough

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-11-29 Thread Daniel Verite
Alvaro Herrera wrote: > But I realized that doing it this way is simple enough; > and furthermore, in any platforms where int is 8 bytes (ILP64), this > would automatically allow for 63-bit-sized stringinfos On such platforms there's the next problem that we can't send COPY rows through

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-11-28 Thread Alvaro Herrera
I just wrote: > The big advantage of your v3 patch is that it can be backpatched without > fear of breaking ABI, so I've struggled to maintain that property in my > changes. We can further tweak in HEAD-only; for example change the API > to use Size instead of int. I think that would be

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-11-28 Thread Alvaro Herrera
Daniel Verite wrote: > And in enlargeStringInfo() the patch adds this: > /* > * Maximum size for strings with allow_long=true. > * It must not exceed INT_MAX, as the StringInfo routines > * expect offsets into the buffer to fit into an int. > */ > const int
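A rough sketch of the kind of limit selection being quoted (names and structure are illustrative, not the committed patch):

/*
 * Sketch only, not the committed patch.  A "long" StringInfo may grow past
 * the usual MaxAllocSize (1GB - 1, per memutils.h), but never past INT_MAX,
 * because StringInfo lengths and offsets are plain ints.
 */
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

#define MaxAllocSize ((size_t) 0x3fffffff)      /* value from memutils.h */

static size_t
stringinfo_size_limit(bool allow_long)
{
    return allow_long ? (size_t) INT_MAX : MaxAllocSize;
}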

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-11-28 Thread Daniel Verite
Alvaro Herrera wrote: > I propose to rename allow_long to huge_ok. "Huge" is the terminology > used by palloc anyway. I'd keep makeLongStringInfo() and > initLongStringInfo() though as interface, because using Huge instead of > Long there looks strange. Not wedded to that, though

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-11-24 Thread Alvaro Herrera
I propose to rename allow_long to huge_ok. "Huge" is the terminology used by palloc anyway. I'd keep makeLongStringInfo() and initLongStringInfo() though as interface, because using Huge instead of Long there looks strange. Not wedded to that, though (particularly as it's a bit inconsistent).

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-11-24 Thread Alvaro Herrera
Daniel Verite wrote: > Here's an updated patch. Compared to the previous version: > > - removed CopyStartSend (per comment #1 in review) > > - renamed flag to allow_long (comment #2) > > - resetStringInfo no longer resets the flag (comment #3). > > - allowLongStringInfo() is removed (comment

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-11-16 Thread Haribabu Kommi
Hi Tomas and Gerdan, this is a gentle reminder: you are both assigned as reviewers of the current patch in the 11-2016 commitfest, but you haven't shared any reviews yet. Could you please share your views on the patch? This will help the commitfest run more smoothly. Please ignore

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-10-02 Thread Michael Paquier
On Fri, Sep 30, 2016 at 10:12 PM, Daniel Verite wrote: > Tomas Vondra wrote: > >> A few minor comments regarding the patch: >> >> 1) CopyStartSend seems pretty pointless - it only has one function call >> in it, and is called in exactly one place (and all other

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-09-30 Thread Daniel Verite
Tomas Vondra wrote: > A few minor comments regarding the patch: > > 1) CopyStartSend seems pretty pointless - it only has one function call > in it, and is called in exactly one place (and all other places simply > call allowLongStringInfo directly). I'd get rid of this function and >

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-09-28 Thread Daniel Verite
Tomas Vondra wrote: > 4) HandleParallelMessage needs a tweak, as it uses msg->len in a log > message, but with '%d' and not '%ld' (as needed after changing the type > to Size). This could be %zu for the Size type. It's supported by elog(). However, it occurs to me that if we don't
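A small standalone illustration of the conversion-specifier point (elog() uses the same printf-style conversions):

/*
 * Illustrative only.  Once msg->len becomes a Size (size_t), printing it
 * with %d is wrong wherever int and size_t differ in width; %zu matches the
 * type portably, and elog() accepts it as well.
 */
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    size_t len = (size_t) 3 * 1024 * 1024 * 1024;   /* e.g. a 3GB message */

    printf("message length: %zu bytes\n", len);     /* %d would misreport this */
    return 0;
}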

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-09-02 Thread Tomas Vondra
On 09/03/2016 02:21 AM, Tomas Vondra wrote: A few minor comments regarding the patch: 1) CopyStartSend seems pretty pointless - it only has one function call in it, and is called in exactly one place (and all other places simply call allowLongStringInfo directly). I'd get rid of this function

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-09-02 Thread Tomas Vondra
Hi, I've subscribed to review this patch in the current CF, so let me start with a brief summary of the thread as it started more than a year ago. In general the thread is about hitting the 1GB allocation size limit (and 32-bit length in FE/BE protocol) in various ways: 1) attributes are

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-04-25 Thread Craig Ringer
On 24 March 2016 at 01:14, Daniel Verite wrote: > > It provides a useful mitigation to dump/reload databases having > rows in the 1GB-2GB range, but it works under these limitations: > > - no single field has a text representation exceeding 1GB. > - no row as text

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-04-25 Thread Michael Paquier
On Sat, Apr 23, 2016 at 9:58 AM, Bruce Momjian wrote: > On Thu, Mar 3, 2016 at 10:31:26AM +0900, Michael Paquier wrote: >> On Thu, Mar 3, 2016 at 12:47 AM, Alvaro Herrera >> wrote: >> > Well, the CopyData message has an Int32 field for the message

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-04-22 Thread Bruce Momjian
On Thu, Mar 3, 2016 at 10:31:26AM +0900, Michael Paquier wrote: > On Thu, Mar 3, 2016 at 12:47 AM, Alvaro Herrera > wrote: > > Well, the CopyData message has an Int32 field for the message length. > > I don't know the FE/BE protocol very well but I suppose each row > >

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-23 Thread Daniel Verite
Alvaro Herrera wrote: > > tuple = (HeapTuple) palloc0(HEAPTUPLESIZE + len); > > > > which fails because (HEAPTUPLESIZE + len) is again considered > > an invalid size, the size being 1468006476 in my test. > > Um, it seems reasonable to make this one be a huge-zero-alloc: > >
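A sketch of what the suggested huge-zero-alloc could look like at that call site, assuming the palloc_extended() API available since 9.5 (not necessarily the committed change):

/*
 * Sketch of the "huge-zero-alloc" idea for heap_form_tuple(); not
 * necessarily what was committed.  palloc0() rejects requests above
 * MaxAllocSize (~1GB), while palloc_extended() with MCXT_ALLOC_HUGE accepts
 * larger ones and MCXT_ALLOC_ZERO preserves the zero-filled behavior.
 */
tuple = (HeapTuple) palloc_extended(HEAPTUPLESIZE + len,
                                    MCXT_ALLOC_HUGE | MCXT_ALLOC_ZERO);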

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-19 Thread Daniel Verite
Daniel Verite wrote: > # \copy bigtext2 from '/var/tmp/bigtext.sql' > ERROR: 54000: out of memory > DETAIL: Cannot enlarge string buffer containing 1073741808 bytes by 8191 > more bytes. > CONTEXT: COPY bigtext2, line 1 > LOCATION: enlargeStringInfo, stringinfo.c:278 To go past that

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-18 Thread Alvaro Herrera
Daniel Verite wrote: > To go past that problem, I've tried tweaking the StringInfoData > used for COPY FROM, like the original patch does in CopyOneRowTo. > > It turns out that it fails a bit later when trying to make a tuple > from the big line, in heap_form_tuple(): > > tuple = (HeapTuple)

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-03 Thread Daniel Verite
Tom Lane wrote: > BTW, is anyone checking the other side of this, ie "COPY IN" with equally > wide rows? There doesn't seem to be a lot of value in supporting dump > if you can't reload ... Indeed, the lines bigger than 1GB can't be reloaded :( My test: with a stock 9.5.1 plus Alvaro's

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-02 Thread Michael Paquier
On Thu, Mar 3, 2016 at 12:47 AM, Alvaro Herrera wrote: > Well, the CopyData message has an Int32 field for the message length. > I don't know the FE/BE protocol very well but I suppose each row > corresponds to one CopyData message, or perhaps each column corresponds >

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-02 Thread Daniel Verite
Tomas Vondra wrote: > My guess is this is a problem at the protocol level - the 'd' message is > CopyData, and all the messages use int32 to define length. So if there's > a 2GB row, it's likely to overflow. Yes. Besides, the full message includes a negative length: > postgres=# \copy
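A standalone illustration of how such a length goes negative on the wire; the -1568669676 reported elsewhere in the thread is a roughly 2.5GB byte count reinterpreted as a signed 32-bit value:

/*
 * Illustrative only: the v3 protocol CopyData ('d') message carries its
 * length in a signed Int32, so a row whose serialized size exceeds INT_MAX
 * arrives at the client with a negative length field.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t msg_bytes = 2726297620LL;        /* ~2.5GB of row data */
    int32_t wire_len  = (int32_t) msg_bytes; /* narrowed into the Int32 length field */

    printf("actual bytes: %lld, length seen by psql: %d\n",
           (long long) msg_bytes, (int) wire_len);   /* prints -1568669676 */
    return 0;
}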

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-02 Thread Alvaro Herrera
Daniel Verite wrote: > The cause of the crash turns out to be, in enlargeStringInfo(): > > needed += str->len + 1; /* total space required now */ > > needed is an int and str->len is an int64, so it overflows when the > size has to grow beyond 2^31 bytes, fails to enlarge the buffer and >
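A minimal standalone demonstration of that overflow pattern (illustrative only, not the server code):

/*
 * Mixing a 32-bit "needed" with a 64-bit length: once the buffer already
 * holds about 2^31 bytes, the truncated sum turns negative, so the
 * enlargement request looks invalid and fails instead of growing the buffer.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int     needed = 8192;                  /* extra space being requested */
    int64_t len    = (int64_t) 1 << 31;     /* bytes already in the buffer */

    needed = (int) (needed + len + 1);      /* 64-bit sum truncated back to int */
    printf("needed = %d\n", needed);        /* negative instead of ~2GB + 8KB */
    return 0;
}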

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-02 Thread Tomas Vondra
On 03/02/2016 04:23 PM, Tom Lane wrote: Tomas Vondra writes: On 03/02/2016 03:18 PM, Daniel Verite wrote: However, getting it to the client with \copy big2 to 'file' still produces the error in psql: lost synchronization with server: got message type "d" and

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-02 Thread Tom Lane
Tomas Vondra writes: > On 03/02/2016 03:18 PM, Daniel Verite wrote: >> However, getting it to the client with \copy big2 to 'file' >> still produces the error in psql: >> lost synchronization with server: got message type "d" >> and leaves an empty file, so there are

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-02 Thread Tomas Vondra
Hi, On 03/02/2016 03:18 PM, Daniel Verite wrote: I wrote: postgres=# \copy big2 to /dev/null lost synchronization with server: got message type "d", length -1568669676 The backend aborts with the following backtrace: Program terminated with signal 6, Aborted. #0 0x7f82ee68e165

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-02 Thread Daniel Verite
I wrote: > postgres=# \copy big2 to /dev/null > lost synchronization with server: got message type "d", length -1568669676 > > The backend aborts with the following backtrace: > > Program terminated with signal 6, Aborted. > #0 0x7f82ee68e165 in raise () from

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-01 Thread Tom Lane
"Daniel Verite" writes: > I've tried adding another large field to see what happens if the whole row > exceeds 2GB, and data goes to the client rather than to a file. > My idea was to check if the client side was OK with that much data on > a single COPY row, but it turns

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-01 Thread Daniel Verite
I wrote: > If splitting the table into 3 fields, each smaller than 512MB: > > postgres=# create table big2 as select > substring(binarycol from 1 for 300*1024*1024) as b1, > substring(binarycol from 1+300*1024*1024 for 300*1024*1024) as b2 , > substring(binarycol from 1+600*1024*1024

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-01 Thread Tom Lane
"Daniel Verite" writes: > Alvaro Herrera wrote: >> If others can try this patch to ensure it enables pg_dump to work on >> their databases, it would be great. > It doesn't seem to help if one field exceeds 1Gb, for instance when > inflated by a bin->hex

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-03-01 Thread Daniel Verite
Alvaro Herrera wrote: > If others can try this patch to ensure it enables pg_dump to work on > their databases, it would be great. It doesn't seem to help if one field exceeds 1GB, for instance when inflated by a bin->hex translation. postgres=# create table big as select
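Back-of-the-envelope arithmetic for the inflation: in text-mode COPY a bytea is rendered in hex form, "\x" plus two hex digits per byte, so anything above roughly 512MB of binary data exceeds the ~1GB limit once converted. An illustrative calculation:

/*
 * Illustrative arithmetic only.  With bytea_output = 'hex', COPY TO in text
 * mode renders a bytea as "\x" followed by two hex digits per byte, so the
 * text form is slightly more than twice the binary size.
 */
#include <stdio.h>

int main(void)
{
    long long binary_bytes = 600LL * 1024 * 1024;    /* a 600MB bytea value */
    long long text_bytes   = 2 + 2 * binary_bytes;   /* "\x" prefix + hex digits */

    printf("text form: %lld bytes (MaxAllocSize is 1073741823)\n", text_bytes);
    return 0;
}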

Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-02-29 Thread Alvaro Herrera
A customer of ours was hit by this recently and I'd like to get it fixed for good. Robert Haas wrote: > On Mon, Apr 6, 2015 at 1:51 PM, Jim Nasby wrote: > > In any case, I don't think it would be terribly difficult to allow a bit > > more than 1GB in a StringInfo. Might

Re: [HACKERS] pg_dump / copy bugs with big lines ?

2015-04-07 Thread Robert Haas
On Mon, Apr 6, 2015 at 1:51 PM, Jim Nasby jim.na...@bluetreble.com wrote: In any case, I don't think it would be terribly difficult to allow a bit more than 1GB in a StringInfo. Might need to tweak palloc too; ISTR there's some 1GB limits there too. The point is, those limits are there on
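For reference, the palloc-side limits being recalled here are the allocation caps in src/include/utils/memutils.h; paraphrased below (check the header for the exact, version-specific definitions):

/*
 * Paraphrased from src/include/utils/memutils.h.  Ordinary palloc/palloc0
 * requests are capped just under 1GB; "huge" requests made through
 * MemoryContextAllocHuge() and related calls may go up to about half the
 * address space.
 */
#define MaxAllocSize        ((Size) 0x3fffffff)          /* 1 gigabyte - 1 */
#define AllocSizeIsValid(size)      ((Size) (size) <= MaxAllocSize)

#define MaxAllocHugeSize    ((Size) -1 >> 1)             /* roughly SIZE_MAX / 2 */
#define AllocHugeSizeIsValid(size)  ((Size) (size) <= MaxAllocHugeSize)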

Re: [HACKERS] pg_dump / copy bugs with big lines ?

2015-04-07 Thread Michael Paquier
On Wed, Apr 8, 2015 at 11:53 AM, Robert Haas robertmh...@gmail.com wrote: On Mon, Apr 6, 2015 at 1:51 PM, Jim Nasby jim.na...@bluetreble.com wrote: In any case, I don't think it would be terribly difficult to allow a bit more than 1GB in a StringInfo. Might need to tweak palloc too; ISTR

Re: [HACKERS] pg_dump / copy bugs with big lines ?

2015-04-06 Thread Jim Nasby
On 3/31/15 3:46 AM, Ronan Dunklau wrote: StringInfo uses int's to store length, so it could possibly be changed, but then you'd just error out due to MaxAllocSize. Now perhaps those could both be relaxed, but certainly not to the extent that you can shove an entire 1.6TB row into an output

Re: [HACKERS] pg_dump / copy bugs with big lines ?

2015-03-31 Thread Ronan Dunklau
On Monday, March 30, 2015 at 18:45:41, Jim Nasby wrote: On 3/30/15 5:46 AM, Ronan Dunklau wrote: Hello hackers, I've tried my luck on pgsql-bugs before, with no success, so I report this problem here. The documentation mentions the following limits for sizes: Maximum Field Size

[HACKERS] pg_dump / copy bugs with big lines ?

2015-03-30 Thread Ronan Dunklau
Hello hackers, I've tried my luck on pgsql-bugs before, with no success, so I report this problem here. The documentation mentions the following limits for sizes: Maximum Field Size: 1 GB; Maximum Row Size: 1.6 TB. However, it seems like rows bigger than 1GB can't be COPYed out: