FWIW I ended up reverting the whole thing, even from master. A more
complete solution would have to be researched.
--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Alvaro Herrera wrote:
> With that, pushed and I hope this is closed for good.
Great!
I understand the fix couldn't be backpatched because
we are not allowed to change the StringInfo struct in
existing releases. But it occurs to me that the new "long_ok"
flag might not be necessary after a
Alvaro Herrera wrote:
> Daniel Verite wrote:
>
> > My tests are OK too but I see an issue with the code in
> > enlargeStringInfo(), regarding integer overflow.
> > The bit of comment that says:
> >
> > Note we are assuming here that limit <= INT_MAX/2, else the above
> > loop could overflow.
Daniel Verite wrote:
> My tests are OK too but I see an issue with the code in
> enlargeStringInfo(), regarding integer overflow.
> The bit of comment that says:
>
> Note we are assuming here that limit <= INT_MAX/2, else the above
> loop could overflow.
>
> is obsolete, it's now INT_MAX ins
Alvaro Herrera wrote:
>
> I have now pushed this to 9.5, 9.6 and master. It could be backpatched
> to 9.4 with ease (just a small change in heap_form_tuple); anything
> further back would require much more effort.
I had to revert this on 9.5 and 9.6 -- it is obvious (in hindsight) that
changing
Daniel Verite wrote:
> My tests are OK too but I see an issue with the code in
> enlargeStringInfo(), regarding integer overflow.
Rats. I'll have a look later. You're probably right.
In the meantime I added this problem to the Open Items wiki page for
pg10, which I just created: https://wiki.
Alvaro Herrera wrote:
> I have now pushed this to 9.5, 9.6 and master. It could be backpatched
> to 9.4 with ease (just a small change in heap_form_tuple); anything
> further back would require much more effort.
>
> I used a 32-bit limit using sizeof(int32). I tested and all the
> menti
I have now pushed this to 9.5, 9.6 and master. It could be backpatched
to 9.4 with ease (just a small change in heap_form_tuple); anything
further back would require much more effort.
I used a 32-bit limit using sizeof(int32). I tested and all the
mentioned cases seem to work sanely; if you can
Daniel Verite wrote:
> If we consider what would happen with the latest patch on a platform
> with sizeof(int)=8 and a \copy invocation like this:
>
> \copy (select big,big,big,big,big,big,big,big,.. FROM
> (select lpad('', 1024*1024*200) as big) s) TO /dev/null
>
> if we put enough copi
Alvaro Herrera wrote:
> But I realized that doing it this way is simple enough;
> and furthermore, in any platforms where int is 8 bytes (ILP64), this
> would automatically allow for 63-bit-sized stringinfos
On such platforms there's the next problem that we can't
send COPY rows through t
I just wrote:
> The big advantage of your v3 patch is that it can be backpatched without
> fear of breaking ABI, so I've struggled to maintain that property in my
> changes. We can further tweak in HEAD-only; for example change the API
> to use Size instead of int. I think that would be desirabl
Daniel Verite wrote:
> And in enlargeStringInfo() the patch adds this:
> /*
>  * Maximum size for strings with allow_long=true.
>  * It must not exceed INT_MAX, as the StringInfo routines
>  * expect offsets into the buffer to fit into an int.
>  */
> const int
Alvaro Herrera wrote:
> I propose to rename allow_long to huge_ok. "Huge" is the terminology
> used by palloc anyway. I'd keep makeLongStringInfo() and
> initLongStringInfo() though as interface, because using Huge instead of
> Long there looks strange. Not wedded to that, though (parti
I propose to rename allow_long to huge_ok. "Huge" is the terminology
used by palloc anyway. I'd keep makeLongStringInfo() and
initLongStringInfo() though as interface, because using Huge instead of
Long there looks strange. Not wedded to that, though (particularly as
it's a bit inconsistent).
Daniel Verite wrote:
> Here's an updated patch. Compared to the previous version:
>
> - removed CopyStartSend (per comment #1 in review)
>
> - renamed flag to allow_long (comment #2)
>
> - resetStringInfo no longer resets the flag (comment #3).
>
> - allowLongStringInfo() is removed (comment #
Hi Tomas and Gerdan,
This is a gentle reminder: you are both assigned as reviewers to the
current patch in the 11-2016 commitfest, but you haven't shared any
reviews yet. Can you please share your views about the patch? This will
help keep the commitfest running smoothly.
Please Ignore
On Fri, Sep 30, 2016 at 10:12 PM, Daniel Verite wrote:
> Tomas Vondra wrote:
>
>> A few minor comments regarding the patch:
>>
>> 1) CopyStartSend seems pretty pointless - It only has one function call
>> in it, and is called on exactly one place (and all other places simply
>> call allowL
Tomas Vondra wrote:
> A few minor comments regarding the patch:
>
> 1) CopyStartSend seems pretty pointless - It only has one function call
> in it, and is called on exactly one place (and all other places simply
> call allowLongStringInfo directly). I'd get rid of this function and
>
Tomas Vondra wrote:
> 4) HandleParallelMessage needs a tweak, as it uses msg->len in a log
> message, but with '%d' and not '%ld' (as needed after changing the type
> to Size).
This could be %zu for the Size type. It's supported by elog().
However, it occurs to me that if we don't clai
On 09/03/2016 02:21 AM, Tomas Vondra wrote:
A few minor comments regarding the patch:
1) CopyStartSend seems pretty pointless - It only has one function call
in it, and is called on exactly one place (and all other places simply
call allowLongStringInfo directly). I'd get rid of this function a
Hi,
I've subscribed to review this patch in the current CF, so let me start
with a brief summary of the thread as it started more than a year ago.
In general the thread is about hitting the 1GB allocation size limit
(and 32-bit length in FE/BE protocol) in various ways:
1) attributes are s
On 24 March 2016 at 01:14, Daniel Verite wrote:
>
> It provides a useful mitigation to dump/reload databases having
> rows in the 1GB-2GB range, but it works under these limitations:
>
> - no single field has a text representation exceeding 1GB.
> - no row as text exceeds 2GB (\copy from fails b
On Sat, Apr 23, 2016 at 9:58 AM, Bruce Momjian wrote:
> On Thu, Mar 3, 2016 at 10:31:26AM +0900, Michael Paquier wrote:
>> On Thu, Mar 3, 2016 at 12:47 AM, Alvaro Herrera
>> wrote:
>> > Well, the CopyData message has an Int32 field for the message length.
>> > I don't know the FE/BE protocol ver
On Thu, Mar 3, 2016 at 10:31:26AM +0900, Michael Paquier wrote:
> On Thu, Mar 3, 2016 at 12:47 AM, Alvaro Herrera
> wrote:
> > Well, the CopyData message has an Int32 field for the message length.
> > I don't know the FE/BE protocol very well but I suppose each row
> > corresponds to one CopyData
Alvaro Herrera wrote:
> > tuple = (HeapTuple) palloc0(HEAPTUPLESIZE + len);
> >
> > which fails because (HEAPTUPLESIZE + len) is again considered
> > an invalid size, the size being 1468006476 in my test.
>
> Um, it seems reasonable to make this one be a huge-zero-alloc:
>
> Me
Daniel Verite wrote:
> # \copy bigtext2 from '/var/tmp/bigtext.sql'
> ERROR: 54000: out of memory
> DETAIL: Cannot enlarge string buffer containing 1073741808 bytes by 8191
> more bytes.
> CONTEXT: COPY bigtext2, line 1
> LOCATION: enlargeStringInfo, stringinfo.c:278
To go past that p
Daniel Verite wrote:
> To go past that problem, I've tried tweaking the StringInfoData
> used for COPY FROM, like the original patch does in CopyOneRowTo.
>
> It turns out that it fails a bit later when trying to make a tuple
> from the big line, in heap_form_tuple():
>
> tuple = (HeapTuple)
Tom Lane wrote:
> BTW, is anyone checking the other side of this, ie "COPY IN" with equally
> wide rows? There doesn't seem to be a lot of value in supporting dump
> if you can't reload ...
Indeed, the lines bigger than 1GB can't be reloaded :(
My test: with a stock 9.5.1 plus Alvaro's
On Thu, Mar 3, 2016 at 12:47 AM, Alvaro Herrera
wrote:
> Well, the CopyData message has an Int32 field for the message length.
> I don't know the FE/BE protocol very well but I suppose each row
> corresponds to one CopyData message, or perhaps each column corresponds
> to one CopyData message. In
Tomas Vondra wrote:
> My guess is this is a problem at the protocol level - the 'd' message is
> CopyData, and all the messages use int32 to define length. So if there's
> a 2GB row, it's likely to overflow.
Yes. Besides, the full message includes a negative length:
> postgres=# \copy
Daniel Verite wrote:
> The cause of the crash turns out to be, in enlargeStringInfo():
>
> needed += str->len + 1; /* total space required now */
>
> needed is an int and str->len is an int64, so it overflows when the
> size has to grow beyond 2^31 bytes, fails to enlarge the buffer and
>
On 03/02/2016 04:23 PM, Tom Lane wrote:
Tomas Vondra writes:
On 03/02/2016 03:18 PM, Daniel Verite wrote:
However, getting it to the client with \copy big2 to 'file'
still produces the error in psql:
lost synchronization with server: got message type "d"
and leaves an empty file, so there ar
Tomas Vondra writes:
> On 03/02/2016 03:18 PM, Daniel Verite wrote:
>> However, getting it to the client with \copy big2 to 'file'
>> still produces the error in psql:
>> lost synchronization with server: got message type "d"
>> and leaves an empty file, so there are more problems to solve to
>> g
Hi,
On 03/02/2016 03:18 PM, Daniel Verite wrote:
I wrote:
postgres=# \copy big2 to /dev/null
lost synchronization with server: got message type "d", length -1568669676
The backend aborts with the following backtrace:
Program terminated with signal 6, Aborted.
#0 0x7f82ee68e165 i
I wrote:
> postgres=# \copy big2 to /dev/null
> lost synchronization with server: got message type "d", length -1568669676
>
> The backend aborts with the following backtrace:
>
> Program terminated with signal 6, Aborted.
> #0 0x7f82ee68e165 in raise () from /lib/x86_64-linux-gnu/l
"Daniel Verite" writes:
> I've tried adding another large field to see what happens if the whole row
> exceeds 2GB, and data goes to the client rather than to a file.
> My idea was to check if the client side was OK with that much data on
> a single COPY row, but it turns out that the server is no
I wrote:
> If splitting the table into 3 fields, each smaller than 512MB:
>
> postgres=# create table big2 as select
> substring(binarycol from 1 for 300*1024*1024) as b1,
> substring(binarycol from 1+300*1024*1024 for 300*1024*1024) as b2 ,
> substring(binarycol from 1+600*1024*1024 f
"Daniel Verite" writes:
> Alvaro Herrera wrote:
>> If others can try this patch to ensure it enables pg_dump to work on
>> their databases, it would be great.
> It doesn't seem to help if one field exceeds 1GB, for instance when
> inflated by a bin->hex translation.
It's not going to be po
Alvaro Herrera wrote:
> If others can try this patch to ensure it enables pg_dump to work on
> their databases, it would be great.
It doesn't seem to help if one field exceeds 1GB, for instance when
inflated by a bin->hex translation.
postgres=# create table big as
select pg_read_bin
A customer of ours was hit by this recently and I'd like to get it fixed
for good.
Robert Haas wrote:
> On Mon, Apr 6, 2015 at 1:51 PM, Jim Nasby wrote:
> > In any case, I don't think it would be terribly difficult to allow a bit
> > more than 1GB in a StringInfo. Might need to tweak palloc too;
On 4/7/15 10:29 PM, Michael Paquier wrote:
On Wed, Apr 8, 2015 at 11:53 AM, Robert Haas wrote:
On Mon, Apr 6, 2015 at 1:51 PM, Jim Nasby wrote:
In any case, I don't think it would be terribly difficult to allow a bit
more than 1GB in a StringInfo. Might need to tweak palloc too; ISTR there's
On Wed, Apr 8, 2015 at 11:53 AM, Robert Haas wrote:
> On Mon, Apr 6, 2015 at 1:51 PM, Jim Nasby wrote:
>> In any case, I don't think it would be terribly difficult to allow a bit
>> more than 1GB in a StringInfo. Might need to tweak palloc too; ISTR there's
>> some 1GB limits there too.
>
> The p
On Mon, Apr 6, 2015 at 1:51 PM, Jim Nasby wrote:
> In any case, I don't think it would be terribly difficult to allow a bit
> more than 1GB in a StringInfo. Might need to tweak palloc too; ISTR there's
> some 1GB limits there too.
The point is, those limits are there on purpose. Changing things
On 3/31/15 3:46 AM, Ronan Dunklau wrote:
>StringInfo uses int's to store length, so it could possibly be changed,
>but then you'd just error out due to MaxAllocSize.
>
>Now perhaps those could both be relaxed, but certainly not to the extent
>that you can shove an entire 1.6TB row into an output
Le lundi 30 mars 2015 18:45:41 Jim Nasby a écrit :
> On 3/30/15 5:46 AM, Ronan Dunklau wrote:
> > Hello hackers,
> >
> > I've tried my luck on pgsql-bugs before, with no success, so I report
> > these
> > problem here.
> >
> > The documentation mentions the following limits for sizes:
> >
> > Ma
On 3/30/15 5:46 AM, Ronan Dunklau wrote:
Hello hackers,
I've tried my luck on pgsql-bugs before, with no success, so I report this
problem here.
The documentation mentions the following limits for sizes:
Maximum Field Size 1 GB
Maximum Row Size 1.6 TB
However, it seems like rows
Hello hackers,
I've tried my luck on pgsql-bugs before, with no success, so I report this
problem here.
The documentation mentions the following limits for sizes:
Maximum Field Size 1 GB
Maximum Row Size 1.6 TB
However, it seems like rows bigger than 1GB can't be COPYed out:
ro=#