[jira] [Updated] (HAWQ-1077) Table insert hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-12-21 Thread Ed Espino (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ed Espino updated HAWQ-1077:

Labels:   (was: ToBeClosed)

> Table insert hangs due to stack overwrite which is caused by a bug in ao 
> snappy code
> 
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.1.0.0-incubating
>
>
> It hangs when trying to insert large data into an append-only table with 
> compression. To be specific, the QE process spins in the compression phase. Per 
> RCA, there is a stack overwrite in the append-only table with snappy compression.
> The call stack is as follows:
> {noformat}
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentLen=3842464, executorBlockKind=2, rowCount=1) at 
> cdbappendonlystoragewrite.c:1868
> #3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
> instup=0x33e7a70, tupleOid=0x7fffa7805094,
> aoTupleId=0x7fffa7805080) at appendonlyam.c:2030
> {noformat}
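The argument values in the trace above look corrupted: sourceLen=0 even though frame #2 passes contentLen=3842464, and compressedLen=0xf8c800a29636 is not a plausible pointer. That pattern is consistent with the compressor writing past its destination buffer and clobbering the caller's stack frame. As a rough illustration of the sizing rule involved, here is a minimal sketch against the public snappy-c API (build with -lsnappy); the helper name and buffer sizes are invented, and this is not the actual HAWQ code or fix:

{noformat}
/*
 * Sketch only: the destination handed to snappy must be able to hold
 * snappy_max_compressed_length(source_len) bytes; a raw/unchecked
 * compressor given a smaller scratch buffer can write past it and
 * corrupt adjacent stack data, as the frames above appear to show.
 */
#include <stdio.h>
#include <string.h>
#include <snappy-c.h>

/* Compress src into dst; fall back to an uncompressed copy when dst
 * cannot hold snappy's worst-case output. Returns bytes written, 0 on error. */
static size_t
compress_block(const char *src, size_t src_len, char *dst, size_t dst_cap)
{
    size_t worst_case = snappy_max_compressed_length(src_len);

    if (dst_cap >= worst_case)
    {
        size_t out_len = dst_cap;   /* in: capacity, out: actual length */

        if (snappy_compress(src, src_len, dst, &out_len) == SNAPPY_OK)
            return out_len;
        return 0;
    }

    /* Too small for the worst case: store the block uncompressed
     * rather than risk an overrun. */
    if (dst_cap < src_len)
        return 0;
    memcpy(dst, src, src_len);
    return src_len;
}

int
main(void)
{
    const char src[] = "large data value for text data type";
    char       dst[128];
    size_t     n = compress_block(src, sizeof(src) - 1, dst, sizeof(dst));

    printf("wrote %zu bytes\n", n);
    return n == 0;
}
{noformat}

Note that this checked snappy-c entry point refuses to compress when the declared capacity is below the worst case (SNAPPY_BUFFER_TOO_SMALL), so corruption like the one in the trace usually points at either an unchecked call path or a capacity value that does not match the real buffer.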



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1077) Table insert hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1077:
--
Description: 
It hangs when trying to insert large data into an append-only table with 
compression. To be specific, the QE process spins in the compression phase. Per 
RCA, there is a stack overwrite in the append-only table with snappy compression.

The call stack is as follows:
{noformat}
#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030
{noformat}
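The compressedBufferWithOverrrunLen=38253 versus maxCompressedLen=32767 split in frame #0 suggests the write path already reserves extra tail space behind the advertised buffer. One way to turn that tail into a tripwire while debugging (a sketch with made-up names and sizes, not how HAWQ does it) is to fill it with a sentinel and verify it after every compression call, so the overrun is caught at the call site instead of surfacing later as a corrupted stack frame:

{noformat}
/*
 * Debugging sketch (hypothetical names/sizes, not HAWQ code): keep a
 * sentinel-filled guard region after the compression scratch buffer and
 * check it after each call into the compressor.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define GUARD_LEN  64       /* made-up guard size */
#define GUARD_BYTE 0x7F

typedef struct GuardedBuffer
{
    size_t   capacity;      /* bytes the compressor may legally use     */
    uint8_t *data;          /* capacity + GUARD_LEN bytes are allocated */
} GuardedBuffer;

static GuardedBuffer
guarded_alloc(size_t capacity)
{
    GuardedBuffer gb;

    gb.capacity = capacity;
    gb.data = malloc(capacity + GUARD_LEN);
    if (gb.data == NULL)
    {
        fprintf(stderr, "out of memory\n");
        exit(1);
    }
    memset(gb.data + capacity, GUARD_BYTE, GUARD_LEN);
    return gb;
}

/* Returns 1 if the compressor stayed within gb->capacity, 0 otherwise. */
static int
guard_intact(const GuardedBuffer *gb)
{
    for (size_t i = 0; i < GUARD_LEN; i++)
        if (gb->data[gb->capacity + i] != GUARD_BYTE)
            return 0;
    return 1;
}

int
main(void)
{
    GuardedBuffer gb = guarded_alloc(32767);    /* mirrors maxCompressedLen */

    /* ... compress into (gb.data, gb.capacity) here ... */

    if (!guard_intact(&gb))
    {
        fprintf(stderr, "compressor wrote past its buffer\n");
        abort();
    }
    free(gb.data);
    return 0;
}
{noformat}

Tools like AddressSanitizer catch the same class of overrun automatically for heap allocations; a manual sentinel is mainly useful when the scratch space lives inside a larger pre-allocated buffer, which the WithOverrrunLen argument above suggests is the case here.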

  was:
It hangs when trying to insert large data into an append-only table with 
compression. To be specific, the QE process spins in the compression phase. Per 
RCA, there is a stack overwrite in append-only with snappy compression.

The call stack is as follows:
{noformat}
#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030
{noformat}


> Table insert hangs due to stack overwrite which is caused by a bug in ao 
> snappy code
> 
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.0.1.0-incubating
>
>
> It hangs when trying to insert large data into an append-only table with 
> compression. To be specific, the QE process spins in the compression phase. Per 
> RCA, there is a stack overwrite in the append-only table with snappy compression.
> The call stack is as follows:
> {noformat}
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentLen=3842464, executorBlockKind=2, rowCount=1) at 
> cdbappendonlystoragewrite.c:1868
> #3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
> instup=0x33e7a70, tupleOid=0x7fffa7805094,
> aoTupleId=0x7fffa7805080) at appendonlyam.c:2030
> {noformat}

[jira] [Updated] (HAWQ-1077) Table insert hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1077:
--
Description: 
It hangs when trying to insert large data into an append-only table with 
compression. To be specific, the QE process spins in the compression phase. Per 
RCA, there is a stack overwrite in append-only with snappy compression.

The call stack is as follows:
{noformat}
#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030
{noformat}

  was:
Found this issue during testing. The test tries to insert some large data into 
a table with AO compression, however it seems to never end. After a quick check, 
we found that the QE process spins, and further gdb debugging shows that it is 
caused by a bug in the AO snappy code that leads to a stack overwrite.

The stack is similar to this:

#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030


> Table insert hangs due to stack overwrite which is caused by a bug in ao 
> snappy code
> 
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.0.1.0-incubating
>
>
> It hangs when trying to insert large data into an append-only table with 
> compression. To be specific, the QE process spins in the compression phase. Per 
> RCA, there is a stack overwrite in append-only with snappy compression.
> The call stack is as follows:
> {noformat}
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentLen=3842464, executorBlockKind=2, rowCount=1) at 
> cdbappendonlystoragewrite.c:1868
> #3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
> instup=0x33e7a70, tupleOid=0x7fffa7805094,
> aoTupleId=0x7fffa7805080) at appendonlyam.c:2030
> {noformat}

[jira] [Updated] (HAWQ-1077) Table insert hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1077:

Fix Version/s: 2.0.1.0-incubating

> Table insert hangs due to stack overwrite which is caused by a bug in ao 
> snappy code
> 
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.0.1.0-incubating
>
>
> Found this issue during testing. The test tries to insert some large data 
> into a table with AO compression, however it seems to never end. After a quick 
> check, we found that the QE process spins, and further gdb debugging shows that 
> it is caused by a bug in the AO snappy code that leads to a stack overwrite.
> The stack is similar to this:
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentLen=3842464, executorBlockKind=2, rowCount=1) at 
> cdbappendonlystoragewrite.c:1868
> #3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
> instup=0x33e7a70, tupleOid=0x7fffa7805094,
> aoTupleId=0x7fffa7805080) at appendonlyam.c:2030



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1077) Table insert hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1077:
---
Description: 
Found this issue during testing. The test tries to insert some large data into 
a table with AO compression, however it seems to never end. After a quick check, 
we found that the QE process spins, and further gdb debugging shows that it is 
caused by a bug in the AO snappy code that leads to a stack overwrite.

The stack is similar to this:

#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030

  was:
Found this issue during testing. The query never finished during the tests. After 
a quick check, we found that the QE process spins, and further gdb debugging shows 
that it is caused by a bug in the AO snappy code that leads to a stack overwrite.

The stack is similar to this:

#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030


> Table insert hangs due to stack overwrite which is caused by a bug in ao 
> snappy code
> 
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> Found this issue during testing. The test tries to insert some large data 
> into a table with AO compression, however it seems to never end. After a quick 
> check, we found that the QE process spins, and further gdb debugging shows that 
> it is caused by a bug in the AO snappy code that leads to a stack overwrite.
> The stack is similar to this:
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentLen=3842464, executorBlockKind=2, rowCount=1) at 
> cdbappendonlystoragewrite.c:1868
> #3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
> instup=0x33e7a70, tupleOid=0x7fffa7805094,
> aoTupleId=0x7fffa7805080) at appendonlyam.c:2030

[jira] [Updated] (HAWQ-1077) Table insert hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1077:
---
Summary: Table insert hangs due to stack overwrite which is caused by a bug 
in ao snappy code  (was: Query hangs due to stack overwrite which is caused by 
a bug in ao snappy code)

> Table insert hangs due to stack overwrite which is caused by a bug in ao 
> snappy code
> 
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> Found this issue during testing. The query never finished during the tests. 
> After a quick check, we found that the QE process spins, and further gdb 
> debugging shows that it is caused by a bug in the AO snappy code that leads to 
> a stack overwrite.
> The stack is similar to this:
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentLen=3842464, executorBlockKind=2, rowCount=1) at 
> cdbappendonlystoragewrite.c:1868
> #3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
> instup=0x33e7a70, tupleOid=0x7fffa7805094,
> aoTupleId=0x7fffa7805080) at appendonlyam.c:2030



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)