[GitHub] incubator-hawq pull request #:

2016-09-26 Thread sansanichfb
Github user sansanichfb commented on the pull request:


https://github.com/apache/incubator-hawq/commit/9b7f90b744850ff83769c3a46e6d4daeb109cc68#commitcomment-19184685
  
@GodenYao I moved the pre- and post-install scripts to the virtual rpm section, 
so the script paths changed accordingly. 




[GitHub] incubator-hawq pull request #:

2016-09-26 Thread GodenYao
Github user GodenYao commented on the pull request:


https://github.com/apache/incubator-hawq/commit/9b7f90b744850ff83769c3a46e6d4daeb109cc68#commitcomment-19184677
  
why change the folder path? 




[jira] [Resolved] (HAWQ-771) Table and function can not be found by non-superuser in specified schema

2016-09-26 Thread Ming LI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming LI resolved HAWQ-771.
--
Resolution: Invalid

By default, users cannot access any objects in schemas they do not own. To 
allow that, the owner of the schema must grant the USAGE privilege on the 
schema. 

So please run the SQL below before selecting that table as testrole:
grant usage on schema testschema to testrole;

> Table and function can not be found by non-superuser in specified schema
> 
>
> Key: HAWQ-771
> URL: https://issues.apache.org/jira/browse/HAWQ-771
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: backlog
>
> Attachments: function.out.bug, function.out.expected, function.sql, 
> table.out.bug, table.out.expected, table.sql
>
>
> With a non-superuser, tables and functions cannot be found in a specified schema, 
> while:
> 1) they can be found in the default schema, i.e., "$user", public
> 2) they can be found by a superuser.
> This issue occurs in hawq 2.0 and postgres 9.x. See the attached sql file for 
> reproduction steps and the out file for the expected/actual error.





[GitHub] incubator-hawq pull request #939: HAWQ-991. Fix bug of yaml configuration fi...

2016-09-26 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/939#discussion_r80613855
  
--- Diff: tools/bin/hawqregister ---
@@ -729,6 +728,23 @@ class HawqRegister(object):
 for k, eof in enumerate(eofs[1:]):
 query += ',(%d, %d, %d, %d, %d)' % (self.firstsegno + 
k + 1, eof, -1, -1, -1)
--- End diff --

Thanks @kdunn-pivotal for the good advice. But I think upgrading from 
Python 2.x to Python 3.x is a big deal, and since we are not familiar with 
Python 3.x, the current code is OK. For the syntax you mentioned here, you are 
right. TIL




[GitHub] incubator-hawq pull request #939: HAWQ-991. Fix bug of yaml configuration fi...

2016-09-26 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/939#discussion_r80612789
  
--- Diff: tools/bin/hawqregister ---
@@ -603,9 +605,6 @@ class HawqRegister(object):
 
 self._check_files_and_table_in_same_hdfs_cluster(self.filepath, 
self.tabledir)
 
-if not self.yml:
-check_no_regex_filepath([self.filepath])
-self.files, self.sizes = self._get_files_in_hdfs(self.filepath)
 print 'New file(s) to be registered: ', self.files
--- End diff --

Should this print `self.newfiles` instead?




[GitHub] incubator-hawq issue #937: HAWQ-1077. Query hangs due to stack overwrite whi...

2016-09-26 Thread liming01
Github user liming01 commented on the issue:

https://github.com/apache/incubator-hawq/pull/937
  
+1.




[jira] [Updated] (HAWQ-1077) Table insert hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1077:
--
Description: 
It hangs when trying to insert large data into an append-only table with compression. 
To be specific, the QE process spins at the compression phase. Per RCA, there is 
a stack overwrite in append-only tables with snappy compression.

The call stack is as follows:
{noformat}
#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030
{noformat}

  was:
It hangs when trying to insert large data into an append-only table with compression. 
To be specific, the QE process spins at the compression phase. Per RCA, there is 
a stack overwrite in append-only with snappy compression.

The call stack is as follows:
{noformat}
#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030
{noformat}


> Table insert hangs due to stack overwrite which is caused by a bug in ao 
> snappy code
> 
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.0.1.0-incubating
>
>
> It hangs when trying to insert large data into an append-only table with 
> compression. To be specific, the QE process spins at the compression phase. Per 
> RCA, there is a stack overwrite in append-only tables with snappy compression.
> The call stack is as follows:
> {noformat}
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentLen=3842464, executorBlockKind=2, rowCount=1) a

[jira] [Updated] (HAWQ-1077) Table insert hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1077:
--
Description: 
It hangs when trying to insert large data into an append-only table with compression. 
To be specific, the QE process spins at the compression phase. Per RCA, there is 
a stack overwrite in append-only with snappy compression.

The call stack is as follows:
{noformat}
#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030
{noformat}

  was:
Found this issue during testing. The test tries to insert some large data into 
a table with ao compression; however, it seems to never end. After a quick check, 
we found that the QE process spins, and further gdb debugging shows that it is 
caused by an ao snappy code bug which leads to a stack overwrite.

The stack is similar to this:

#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030


> Table insert hangs due to stack overwrite which is caused by a bug in ao 
> snappy code
> 
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.0.1.0-incubating
>
>
> It hangs when trying to insert large data into an append-only table with 
> compression. To be specific, the QE process spins at the compression phase. Per 
> RCA, there is a stack overwrite in append-only with snappy compression.
> The call stack is as follows:
> {noformat}
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentL

[GitHub] incubator-hawq pull request #941: HAWQ-1079. Fixed issue with symlink.

2016-09-26 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/941

HAWQ-1079. Fixed issue with symlink.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sansanichfb/incubator-hawq HAWQ-1079

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/941.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #941


commit e10b7594ed8be733c9cae1f28d37cfba0738
Author: Oleksandr Diachenko 
Date:   2016-09-27T00:05:07Z

HAWQ-1079. Fixed issue with symlink.

commit 2652e5401052a52b343447309ecb16ece8a00c55
Author: Oleksandr Diachenko 
Date:   2016-09-27T00:07:21Z

HAWQ-1079. Fixed issue with symlink.

commit 26c59750db50812911399d7c34ab171e3031ec66
Author: Oleksandr Diachenko 
Date:   2016-09-27T00:09:11Z

HAWQ-1079. Fixed issue with symlink.






[jira] [Updated] (HAWQ-1077) Table insert hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1077:

Fix Version/s: 2.0.1.0-incubating

> Table insert hangs due to stack overwrite which is caused by a bug in ao 
> snappy code
> 
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.0.1.0-incubating
>
>
> Found this issue during testing. The test tries to insert some large data 
> into a table with ao compression; however, it seems to never end. After a quick 
> check, we found that the QE process spins, and further gdb debugging shows that it 
> is caused by an ao snappy code bug which leads to a stack overwrite.
> The stack is similar to this:
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentLen=3842464, executorBlockKind=2, rowCount=1) at 
> cdbappendonlystoragewrite.c:1868
> #3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
> instup=0x33e7a70, tupleOid=0x7fffa7805094,
> aoTupleId=0x7fffa7805080) at appendonlyam.c:2030





[jira] [Updated] (HAWQ-1075) Restore default behavior of client side(PXF) checksum validation when reading blocks from HDFS

2016-09-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1075:

Fix Version/s: 2.0.1.0-incubating

> Restore default behavior of client side(PXF) checksum validation when reading 
> blocks from HDFS
> --
>
> Key: HAWQ-1075
> URL: https://issues.apache.org/jira/browse/HAWQ-1075
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.0.1.0-incubating
>
>
> Currently HdfsTextSimple profile which is the optimized PXF profile to read 
> Text/CSV uses ChunkRecordReader to read chunks of records (as opposed to 
> individual records). Here dfs.client.read.shortcircuit.skip.checksum is 
> explicitly set to true to avoid incurring any delays with checksum check 
> while opening/reading the file/block. 
> Background Information:
> PXF uses a 2 stage process to access HDFS data. 
> Stage 1, it fetches all the target blocks for the given file (along with 
> replica information). 
> Stage 2 (after HAWQ prepares an optimized access plan based on locality), PXF 
> agents reads the blocks in parallel.
> In almost all scenarios hadoop internally catches block corruption issues and 
> such blocks are never returned to any client requesting for block locations 
> (Stage 1). In certain scenarios such as a block corruption without change in 
> size, Stage1 can still return the location of the corrupted block as well, 
> and hence Stage 2 will need to perform an additional checksum check.
> With client side checksum check on read (default behavior), we are resilient 
> to such checksum errors on read as well.





[jira] [Updated] (HAWQ-1079) PXF service fails to start

2016-09-26 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-1079:
--
Fix Version/s: backlog

> PXF service fails to start
> --
>
> Key: HAWQ-1079
> URL: https://issues.apache.org/jira/browse/HAWQ-1079
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: backlog
>
>
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 132, in 
> Pxf().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 54, in start
> self.__execute_service_command("restart")
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 76, in __execute_service_command
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 273, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 71, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 93, in checked_call
> tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 141, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 294, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'service pxf-service 
> restart' returned 1. /var/pxf /
> $CATALINA_PID was set but the specified file does not exist. Is Tomcat 
> running? Stop aborted.
> /var/pxf /var/pxf /
> touch: cannot touch `/var/log/pxf/catalina.out': No such file or directory
> /var/pxf/pxf-service/bin/catalina.sh: line 387: /var/log/pxf/catalina.out: No 
> such file or directory
> {code}





[jira] [Created] (HAWQ-1079) PXF service fails to start

2016-09-26 Thread Oleksandr Diachenko (JIRA)
Oleksandr Diachenko created HAWQ-1079:
-

 Summary: PXF service fails to start
 Key: HAWQ-1079
 URL: https://issues.apache.org/jira/browse/HAWQ-1079
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Oleksandr Diachenko
Assignee: Lei Chang


{code}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py", 
line 132, in 
Pxf().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 280, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py", 
line 54, in start
self.__execute_service_command("restart")
  File 
"/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py", 
line 76, in __execute_service_command
logoutput=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 124, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 273, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 71, in inner
result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 93, in checked_call
tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 141, in _call_wrapper
result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 294, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'service pxf-service 
restart' returned 1. /var/pxf /
$CATALINA_PID was set but the specified file does not exist. Is Tomcat running? 
Stop aborted.
/var/pxf /var/pxf /
touch: cannot touch `/var/log/pxf/catalina.out': No such file or directory
/var/pxf/pxf-service/bin/catalina.sh: line 387: /var/log/pxf/catalina.out: No 
such file or directory
{code}





[jira] [Assigned] (HAWQ-1079) PXF service fails to start

2016-09-26 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko reassigned HAWQ-1079:
-

Assignee: Oleksandr Diachenko  (was: Lei Chang)

> PXF service fails to start
> --
>
> Key: HAWQ-1079
> URL: https://issues.apache.org/jira/browse/HAWQ-1079
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 132, in 
> Pxf().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 54, in start
> self.__execute_service_command("restart")
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 76, in __execute_service_command
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 273, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 71, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 93, in checked_call
> tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 141, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 294, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'service pxf-service 
> restart' returned 1. /var/pxf /
> $CATALINA_PID was set but the specified file does not exist. Is Tomcat 
> running? Stop aborted.
> /var/pxf /var/pxf /
> touch: cannot touch `/var/log/pxf/catalina.out': No such file or directory
> /var/pxf/pxf-service/bin/catalina.sh: line 387: /var/log/pxf/catalina.out: No 
> such file or directory
> {code}





[jira] [Updated] (HAWQ-1075) Restore default behavior of client side(PXF) checksum validation when reading blocks from HDFS

2016-09-26 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1075:
---
Summary: Restore default behavior of client side(PXF) checksum validation 
when reading blocks from HDFS  (was: Make checksum verification configurable in 
PXF HdfsTextSimple profile)

> Restore default behavior of client side(PXF) checksum validation when reading 
> blocks from HDFS
> --
>
> Key: HAWQ-1075
> URL: https://issues.apache.org/jira/browse/HAWQ-1075
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Currently HdfsTextSimple profile which is the optimized profile to read 
> Text/CSV uses ChunkRecordReader to read chunks of records (as opposed to 
> individual records). Here dfs.client.read.shortcircuit.skip.checksum is 
> explicitly set to true to avoid incurring any delays with checksum check 
> while opening/reading the file/block. 
> This configuration needs to be exposed as an option and by default client 
> side checksum check must occur in order to be resilient to any data 
> corruption issues which aren't caught internally by the datanode block 
> reporting mechanism (even fsck doesn't catch certain block corruption issues).





[jira] [Commented] (HAWQ-1075) Restore default behavior of client side(PXF) checksum validation when reading blocks from HDFS

2016-09-26 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15524520#comment-15524520
 ] 

Shivram Mani commented on HAWQ-1075:


The patch has been pushed to both the master branch and the HAWQ-1075 branch.
The HAWQ-1075 branch is for users who have consumed HDB 2.0.0.0.

> Restore default behavior of client side(PXF) checksum validation when reading 
> blocks from HDFS
> --
>
> Key: HAWQ-1075
> URL: https://issues.apache.org/jira/browse/HAWQ-1075
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Currently HdfsTextSimple profile which is the optimized PXF profile to read 
> Text/CSV uses ChunkRecordReader to read chunks of records (as opposed to 
> individual records). Here dfs.client.read.shortcircuit.skip.checksum is 
> explicitly set to true to avoid incurring any delays with checksum check 
> while opening/reading the file/block. 
> Background Information:
> PXF uses a 2 stage process to access HDFS data. 
> Stage 1, it fetches all the target blocks for the given file (along with 
> replica information). 
> Stage 2 (after HAWQ prepares an optimized access plan based on locality), PXF 
> agents reads the blocks in parallel.
> In almost all scenarios hadoop internally catches block corruption issues and 
> such blocks are never returned to any client requesting for block locations 
> (Stage 1). In certain scenarios such as a block corruption without change in 
> size, Stage1 can still return the location of the corrupted block as well, 
> and hence Stage 2 will need to perform an additional checksum check.
> With client side checksum check on read (default behavior), we are resilient 
> to such checksum errors on read as well.





[jira] [Comment Edited] (HAWQ-1075) Restore default behavior of client side(PXF) checksum validation when reading blocks from HDFS

2016-09-26 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15521104#comment-15521104
 ] 

Shivram Mani edited comment on HAWQ-1075 at 9/26/16 11:40 PM:
--

Checksum verification is done implicitly when using dfs.open / dfs.read. 
Typically, when clients read hdfs data, checksum verification is always 
implicit. This will not be a PXF configuration, as this is an HDFS property 
(the default is false, meaning the checksum is always checked on read). PXF currently 
overrides this to true explicitly, which needs to change.
Disabling the client side checksum check is the user's choice, based on the frequency of 
CRC issues and the data resiliency of hdfs blocks.
The performance impact will be very much in line with the performance difference 
of using any hadoop client to read blocks with and without checksum 
validation.
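
For reference, a minimal sketch (a hypothetical standalone helper, not part of PXF) that reads a local hdfs-site.xml and reports whether dfs.client.read.shortcircuit.skip.checksum has been flipped to true; when the property is absent, the HDFS default of false applies and checksums are verified on read:
{code}
import xml.etree.ElementTree as ET

def checksum_skip_enabled(hdfs_site_path="/etc/hadoop/conf/hdfs-site.xml"):
    # Scan the <property> entries of hdfs-site.xml for the checksum-skip flag.
    root = ET.parse(hdfs_site_path).getroot()
    for prop in root.findall("property"):
        if prop.findtext("name") == "dfs.client.read.shortcircuit.skip.checksum":
            return (prop.findtext("value") or "false").strip().lower() == "true"
    return False  # property absent -> HDFS default: verify checksums on read

if __name__ == "__main__":
    print("skip client side checksum on read: " + str(checksum_skip_enabled()))
{code}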


was (Author: shivram):
Checksum verification is done implicitly when using dfs.open / dfs.read. 
Typically when clients read hdfs data, checksum verification is always 
implicit. This will not be a PXF configuration as this is a HDFS property 
(default is false, meaning always check checksum on read). PXF currently 
explicitly overrides this to true which needs to change.
Disabling client side checksum check is the user's choice based on frequency of 
CRC issues/data resiliency of hdfs blocks.
Performance impact needs to be evaluated.

> Restore default behavior of client side(PXF) checksum validation when reading 
> blocks from HDFS
> --
>
> Key: HAWQ-1075
> URL: https://issues.apache.org/jira/browse/HAWQ-1075
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Currently HdfsTextSimple profile which is the optimized PXF profile to read 
> Text/CSV uses ChunkRecordReader to read chunks of records (as opposed to 
> individual records). Here dfs.client.read.shortcircuit.skip.checksum is 
> explicitly set to true to avoid incurring any delays with checksum check 
> while opening/reading the file/block. 
> Background Information:
> PXF uses a 2 stage process to access HDFS data. 
> Stage 1, it fetches all the target blocks for the given file (along with 
> replica information). 
> Stage 2 (after HAWQ prepares an optimized access plan based on locality), PXF 
> agents reads the blocks in parallel.
> In almost all scenarios hadoop internally catches block corruption issues and 
> such blocks are never returned to any client requesting for block locations 
> (Stage 1). In certain scenarios such as a block corruption without change in 
> size, Stage1 can still return the location of the corrupted block as well, 
> and hence Stage 2 will need to perform an additional checksum check.
> With client side checksum check on read (default behavior), we are resilient 
> to such checksum errors on read as well.





[jira] [Updated] (HAWQ-1075) Restore default behavior of client side(PXF) checksum validation when reading blocks from HDFS

2016-09-26 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1075:
---
Description: 
Currently the HdfsTextSimple profile, which is the optimized PXF profile for reading 
Text/CSV, uses ChunkRecordReader to read chunks of records (as opposed to 
individual records). Here dfs.client.read.shortcircuit.skip.checksum is 
explicitly set to true to avoid incurring any delays from checksum checks while 
opening/reading the file/block. 

Background Information:
PXF uses a 2-stage process to access HDFS data. 
In Stage 1, it fetches all the target blocks for the given file (along with 
replica information). 
In Stage 2 (after HAWQ prepares an optimized access plan based on locality), PXF 
agents read the blocks in parallel.

In almost all scenarios hadoop internally catches block corruption issues, and 
such blocks are never returned to any client requesting block locations 
(Stage 1). In certain scenarios, such as a block corruption without a change in 
size, Stage 1 can still return the location of the corrupted block as well, and 
hence Stage 2 will need to perform an additional checksum check.

With client side checksum check on read (default behavior), we are resilient to 
such checksum errors on read as well.

  was:
Currently HdfsTextSimple profile which is the optimized profile to read 
Text/CSV uses ChunkRecordReader to read chunks of records (as opposed to 
individual records). Here dfs.client.read.shortcircuit.skip.checksum is 
explicitly set to true to avoid incurring any delays with checksum check while 
opening/reading the file/block. 
This configuration needs to be exposed as an option and by default client side 
checksum check must occur in order to be resilient to any data corruption 
issues which aren't caught internally by the datanode block reporting mechanism 
(even fsck doesn't catch certain block corruption issues).


> Restore default behavior of client side(PXF) checksum validation when reading 
> blocks from HDFS
> --
>
> Key: HAWQ-1075
> URL: https://issues.apache.org/jira/browse/HAWQ-1075
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Currently HdfsTextSimple profile which is the optimized PXF profile to read 
> Text/CSV uses ChunkRecordReader to read chunks of records (as opposed to 
> individual records). Here dfs.client.read.shortcircuit.skip.checksum is 
> explicitly set to true to avoid incurring any delays with checksum check 
> while opening/reading the file/block. 
> Background Information:
> PXF uses a 2 stage process to access HDFS data. 
> Stage 1, it fetches all the target blocks for the given file (along with 
> replica information). 
> Stage 2 (after HAWQ prepares an optimized access plan based on locality), PXF 
> agents reads the blocks in parallel.
> In almost all scenarios hadoop internally catches block corruption issues and 
> such blocks are never returned to any client requesting for block locations 
> (Stage 1). In certain scenarios such as a block corruption without change in 
> size, Stage1 can still return the location of the corrupted block as well, 
> and hence Stage 2 will need to perform an additional checksum check.
> With client side checksum check on read (default behavior), we are resilient 
> to such checksum errors on read as well.





[GitHub] incubator-hawq pull request #939: HAWQ-991. Fix bug of yaml configuration fi...

2016-09-26 Thread kdunn-pivotal
Github user kdunn-pivotal commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/939#discussion_r80594744
  
--- Diff: tools/bin/hawqregister ---
@@ -729,6 +728,23 @@ class HawqRegister(object):
 for k, eof in enumerate(eofs[1:]):
 query += ',(%d, %d, %d, %d, %d)' % (self.firstsegno + 
k + 1, eof, -1, -1, -1)
--- End diff --

This is minor, but I believe the `%` syntax is being 
[deprecated](https://www.python.org/dev/peps/pep-3101/) in Python 3.x. Maybe 
consider replacing it with the new `.format` syntax.
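
A minimal side-by-side sketch of the two styles (placeholder values, not hawqregister state); `%` formatting still works today, but `.format` is the direction PEP 3101 points to:

```python
# Placeholder values standing in for the real firstsegno/k/eof variables.
firstsegno, k, eof = 10, 0, 4096

old_style = ',(%d, %d, %d, %d, %d)' % (firstsegno + k + 1, eof, -1, -1, -1)
new_style = ',({0}, {1}, {2}, {3}, {4})'.format(firstsegno + k + 1, eof, -1, -1, -1)

assert old_style == new_style == ',(11, 4096, -1, -1, -1)'
```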




[jira] [Updated] (HAWQ-1078) Implement hawqsync-falcon DR utility.

2016-09-26 Thread Kyle R Dunn (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kyle R Dunn updated HAWQ-1078:
--
Attachment: hawq-dr-design.pdf

WIP design overview.

> Implement hawqsync-falcon DR utility.
> -
>
> Key: HAWQ-1078
> URL: https://issues.apache.org/jira/browse/HAWQ-1078
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Kyle R Dunn
>Assignee: Lei Chang
> Fix For: backlog
>
> Attachments: hawq-dr-design.pdf
>
>
> HAWQ currently offers no DR functionality. This JIRA is for tracking the 
> design and development of a hawqsync-falcon utility, which uses a combination 
> of Falcon-based HDFS replication and custom automation in Python for allowing 
> both the HAWQ master catalog and corresponding HDFS data to be replicated to 
> a remote cluster for DR functionality.





[GitHub] incubator-hawq pull request #940: HAWQ 1078. Implement hawqsync-falcon DR ut...

2016-09-26 Thread kdunn926
GitHub user kdunn926 opened a pull request:

https://github.com/apache/incubator-hawq/pull/940

HAWQ 1078. Implement hawqsync-falcon DR utility.

This is the initial commit for a Python utility to orchestrate a DR 
synchronization for HAWQ, based on Falcon HDFS replication and a cold backup of 
the active HAWQ master's MASTER_DATA_DIRECTORY.

A code review would be greatly appreciated, when someone has cycles. Active 
testing is currently underway in a production deployment.
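
A rough outline of the flow described above (hypothetical paths and simplified steps, not the code in this PR): quiesce the cluster, cold-backup the master catalog, then hand off to the Falcon mirror job before restarting.

```python
# Hypothetical sketch of the sync steps; paths are placeholders and the
# Falcon entity submission is site-specific, so it is left as a comment.
import os
import subprocess
import tarfile

def archive_master_catalog(mdd, dest):
    # Cold backup: tar up MASTER_DATA_DIRECTORY while the cluster is stopped.
    with tarfile.open(dest, "w:gz") as tar:
        tar.add(mdd, arcname=os.path.basename(mdd))

def hawqsync(mdd="/data/hawq/master", backup="/backup/hawq-mdd.tar.gz"):
    subprocess.check_call(["hawq", "stop", "cluster", "-a"])   # quiesce catalog writes
    try:
        archive_master_catalog(mdd, backup)
        # Trigger the Falcon-based HDFS replication of the HAWQ data here.
    finally:
        subprocess.check_call(["hawq", "start", "cluster", "-a"])
```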

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kdunn926/incubator-hawq dr

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/940.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #940


commit 1ca0c75b8310b7aaad5a016d8d59c03bab865b8f
Author: Kyle Dunn 
Date:   2016-09-26T21:09:13Z

Initial commit






[jira] [Commented] (HAWQ-1078) Implement hawqsync-falcon DR utility.

2016-09-26 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15524164#comment-15524164
 ] 

Kyle R Dunn commented on HAWQ-1078:
---

This is dependent on HAWQ-991. 

> Implement hawqsync-falcon DR utility.
> -
>
> Key: HAWQ-1078
> URL: https://issues.apache.org/jira/browse/HAWQ-1078
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Kyle R Dunn
>Assignee: Lei Chang
> Fix For: backlog
>
>
> HAWQ currently offers no DR functionality. This JIRA is for tracking the 
> design and development of a hawqsync-falcon utility, which uses a combination 
> of Falcon-based HDFS replication and custom automation in Python for allowing 
> both the HAWQ master catalog and corresponding HDFS data to be replicated to 
> a remote cluster for DR functionality.





[jira] [Created] (HAWQ-1078) Implement hawqsync-falcon DR utility.

2016-09-26 Thread Kyle R Dunn (JIRA)
Kyle R Dunn created HAWQ-1078:
-

 Summary: Implement hawqsync-falcon DR utility.
 Key: HAWQ-1078
 URL: https://issues.apache.org/jira/browse/HAWQ-1078
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: Command Line Tools
Reporter: Kyle R Dunn
Assignee: Lei Chang
 Fix For: backlog


HAWQ currently offers no DR functionality. This JIRA is for tracking the design 
and development of a hawqsync-falcon utility, which uses a combination of 
Falcon-based HDFS replication and custom automation in Python to allow both 
the HAWQ master catalog and the corresponding HDFS data to be replicated to a 
remote cluster for DR.





[GitHub] incubator-hawq pull request #938: HAWQ-1067. Append hawq version number to p...

2016-09-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq/pull/938




[GitHub] incubator-hawq issue #938: HAWQ-1067. Append hawq version number to plr-hawq...

2016-09-26 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/938
  
+1




[GitHub] incubator-hawq pull request #939: HAWQ-991. Fix bug of yaml configuration fi...

2016-09-26 Thread zhangh43
GitHub user zhangh43 opened a pull request:

https://github.com/apache/incubator-hawq/pull/939

HAWQ-991. Fix bug of yaml configuration file contains only files unde…

…r table directory in --force mode.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangh43/incubator-hawq d07

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/939.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #939


commit cde0a79828df8fdea3874de1fdd6d5fcc286d044
Author: hzhang2 
Date:   2016-09-26T09:45:18Z

HAWQ-991. Fix bug of yaml configuration file contains only files under 
table directory in --force mode.






[GitHub] incubator-hawq pull request #938: HAWQ-1067. Append hawq version number to p...

2016-09-26 Thread radarwave
GitHub user radarwave opened a pull request:

https://github.com/apache/incubator-hawq/pull/938

HAWQ-1067. Append hawq version number to plr-hawq rpm package name.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/radarwave/incubator-hawq vplr

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/938.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #938


commit ab90f3b0d5474fd17329557f8f229262ff2272dc
Author: rlei 
Date:   2016-09-22T06:55:09Z

HAWQ-1067. Append hawq version number to plr-hawq rpm package name.






[jira] [Updated] (HAWQ-1077) Table insert hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1077:
---
Description: 
Found this issue during testing. The test tries to insert some large data into 
a table with ao compression; however, it seems to never end. After a quick check, 
we found that the QE process spins, and further gdb debugging shows that it is 
caused by an ao snappy code bug which leads to a stack overwrite.

The stack is similar to this:

#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030

  was:
Found this issue during testing. The query never ended during tests. After 
a quick check, we found that the QE process spins, and further gdb debugging shows 
that it is caused by an ao snappy code bug which leads to a stack overwrite.

The stack is similar to this:

#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030


> Table insert hangs due to stack overwrite which is caused by a bug in ao 
> snappy code
> 
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> Found this issue during testing. The test tries to insert some large data 
> into a table with ao compression; however, it seems to never end. After a quick 
> check, we found that the QE process spins, and further gdb debugging shows that it 
> is caused by an ao snappy code bug which leads to a stack overwrite.
> The stack is similar to this:
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> co

[jira] [Updated] (HAWQ-1077) Table insert hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1077:
---
Summary: Table insert hangs due to stack overwrite which is caused by a bug 
in ao snappy code  (was: Query hangs due to stack overwrite which is caused by 
a bug in ao snappy code)

> Table insert hangs due to stack overwrite which is caused by a bug in ao 
> snappy code
> 
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> Found this issue during testing. The query never ended during tests. 
> After a quick check, we found that the QE process spins, and further gdb debugging 
> shows that it is caused by an ao snappy code bug which leads to a stack overwrite.
> The stack is similar to this:
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentLen=3842464, executorBlockKind=2, rowCount=1) at 
> cdbappendonlystoragewrite.c:1868
> #3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
> instup=0x33e7a70, tupleOid=0x7fffa7805094,
> aoTupleId=0x7fffa7805080) at appendonlyam.c:2030





[GitHub] incubator-hawq pull request #937: HAWQ-1077. Query hangs due to stack overwr...

2016-09-26 Thread paul-guo-
GitHub user paul-guo- opened a pull request:

https://github.com/apache/incubator-hawq/pull/937

HAWQ-1077. Query hangs due to stack overwrite which is caused by a bu…

…g in ao snappy code

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-guo-/incubator-hawq compress

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/937.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #937


commit 75d13534cb91553fa734eeb3f83de09224584a54
Author: Paul Guo 
Date:   2016-09-26T08:42:14Z

HAWQ-1077. Query hangs due to stack overwrite which is caused by a bug in 
ao snappy code






[jira] [Assigned] (HAWQ-1077) Query hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1077:
--

Assignee: Paul Guo  (was: Lei Chang)

> Query hangs due to stack overwrite which is caused by a bug in ao snappy code
> -
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> Found this issue during testing. The query never ended during tests. 
> After a quick check, we found that the QE process spins, and further gdb debugging 
> shows that it is caused by an ao snappy code bug which leads to a stack overwrite.
> The stack is similar to this:
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentLen=3842464, executorBlockKind=2, rowCount=1) at 
> cdbappendonlystoragewrite.c:1868
> #3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
> instup=0x33e7a70, tupleOid=0x7fffa7805094,
> aoTupleId=0x7fffa7805080) at appendonlyam.c:2030



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1077) Query hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1077:
---
Summary: Query hangs due to stack overwrite which is caused by a bug in ao 
snappy code  (was: QE process spins due to a bug in ao snappy code)

> Query hangs due to stack overwrite which is caused by a bug in ao snappy code
> -
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
>
> Found this issue during testing. The query never ended during tests. After a 
> quick check, we found that the QE process spins, and further gdb debugging 
> shows that a bug in the AO snappy code leads to a stack overwrite.
> The stack is similar to this:
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentLen=3842464, executorBlockKind=2, rowCount=1) at 
> cdbappendonlystoragewrite.c:1868
> #3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
> instup=0x33e7a70, tupleOid=0x7fffa7805094,
> aoTupleId=0x7fffa7805080) at appendonlyam.c:2030



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-1077) QE process spins due to a bug in ao snappy code

2016-09-26 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1077:
--

 Summary: QE process spins due to a bug in ao snappy code
 Key: HAWQ-1077
 URL: https://issues.apache.org/jira/browse/HAWQ-1077
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Lei Chang


Found this issue during testing. The query never ended during tests. After a 
quick check, we found that the QE process spins, and further gdb debugging 
shows that a bug in the AO snappy code leads to a stack overwrite.

The stack is similar to this:

#0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
"\340\352\356\002", sourceLen=0,
compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
compressedBufferWithOverrrunLen=38253,
maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
compressor=0x5d9cdf ,
compressionState=0x2eee690) at gp_compress.c:56
#1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
(storageWrite=0x2eec558,
sourceData=0x33efa68 "ata value for text data typelarge data value for text 
data typelarge data value for text data typelarge data value for text data 
typelarge data value for text data typelarge data value for text data t"..., 
sourceLen=0,
executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
#2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
(storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
contentLen=3842464, executorBlockKind=2, rowCount=1) at 
cdbappendonlystoragewrite.c:1868
#3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
instup=0x33e7a70, tupleOid=0x7fffa7805094,
aoTupleId=0x7fffa7805080) at appendonlyam.c:2030
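
One plausible reading of the frames above (not confirmed in this report) is 
that the compressed-length out-parameter handed down to the compressor does 
not point at a valid caller-owned variable, so the compressor's write lands on 
unrelated stack memory. Below is a minimal, self-contained C sketch of the 
calling contract such an out-parameter requires; fake_compress() and every 
name in it are hypothetical stand-ins, not the actual gp_trycompress_new() 
code or its fix.

{code}
/* Hypothetical sketch only -- not HAWQ's gp_trycompress_new() or its fix. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A compressor-style callback reports the compressed size through an
 * out-parameter.  If the caller hands it a stale or mistyped pointer,
 * the callee's write clobbers unrelated stack memory, which is the
 * class of bug this issue describes. */
static void fake_compress(const uint8_t *src, size_t src_len,
                          uint8_t *dst, size_t dst_cap,
                          int32_t *compressed_len)
{
    size_t n = src_len < dst_cap ? src_len : dst_cap;
    memcpy(dst, src, n);              /* stand-in for snappy compression */
    *compressed_len = (int32_t) n;    /* the out-parameter write */
}

int main(void)
{
    const uint8_t src[] = "large data value for text data type";
    uint8_t dst[64];
    int32_t compressed_len = 0;       /* correctly typed, owned by this frame */

    /* Passing &compressed_len keeps the write inside a variable the caller
     * actually owns; a garbage pointer here would corrupt the stack
     * instead of returning a length. */
    fake_compress(src, sizeof src - 1, dst, sizeof dst, &compressed_len);
    printf("compressed %d bytes\n", (int) compressed_len);
    return 0;
}
{code}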



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] incubator-hawq issue #927: HAWQ-1068. Fixed crash at calling get_ao_compress...

2016-09-26 Thread liming01
Github user liming01 commented on the issue:

https://github.com/apache/incubator-hawq/pull/927
  
@karthijrk, I just fixed get_ao_compression_ratio_name() and 
get_ao_compression_ratio_oid(); if other functions need to be fixed, please 
open defects for them. Thanks.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #936: HAWQ-1076. Fixed privilege check for sequence func...

2016-09-26 Thread ictmalili
Github user ictmalili commented on the issue:

https://github.com/apache/incubator-hawq/pull/936
  
LGTM. +1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (HAWQ-1076) permission denied for using sequence with SELECT/USAGE privilege

2016-09-26 Thread Ming LI (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522283#comment-15522283
 ] 

Ming LI edited comment on HAWQ-1076 at 9/26/16 7:05 AM:


Two problems are fixed here:
1) The column DEFAULT expression is only needed on INSERT.
2) setval() needs the UPDATE privilege, while nextval() needs either the USAGE 
or the UPDATE privilege.

{code}
[ro_user] postgres=> select * from t1;
ERROR: permission denied for relation t1
[gpadmin] postgres=# grant SELECT on table t1 to role1;
GRANT
[ro_user] postgres=> select * from t1;
c1 | c2
---+---
1 | 1
1 | 2
(2 rows)
[ro_user] postgres=> insert into t1 (c1) values(11);
ERROR: permission denied for relation t1
[gpadmin] postgres=# grant INSERT on table t1 to role1;
GRANT
[ro_user] postgres=> insert into t1 (c1) values(11);
ERROR: permission denied for sequence seq1
[gpadmin] postgres=# grant USAGE on sequence seq1 to role1;
GRANT
[ro_user] postgres=> insert into t1 (c1) values(11);
INSERT 0 1
[ro_user] postgres=> select setval('seq1', 1, true) ;
ERROR: permission denied for sequence seq1
[gpadmin] postgres=# grant UPDATE on sequence seq1 to role1;
GRANT
[ro_user] postgres=> select setval('seq1', 1, true) ;
setval

1
(1 row)
{code}


was (Author: mli):
Two problems are fixed here:
1) The column DEFAULT expression is only needed on INSERT.
2) setval() needs the UPDATE privilege, while nextval() needs either the USAGE 
or the UPDATE privilege.

{quotes}
[ro_user] postgres=> select * from t1;
ERROR: permission denied for relation t1
[gpadmin] postgres=# grant SELECT on table t1 to role1;
GRANT
[ro_user] postgres=> select * from t1;
c1 | c2
---+---
1 | 1
1 | 2
(2 rows)
[ro_user] postgres=> insert into t1 (c1) values(11);
ERROR: permission denied for relation t1
[gpadmin] postgres=# grant INSERT on table t1 to role1;
GRANT
[ro_user] postgres=> insert into t1 (c1) values(11);
ERROR: permission denied for sequence seq1
[gpadmin] postgres=# grant USAGE on sequence seq1 to role1;
GRANT
[ro_user] postgres=> insert into t1 (c1) values(11);
INSERT 0 1
[ro_user] postgres=> select setval('seq1', 1, true) ;
ERROR: permission denied for sequence seq1
[gpadmin] postgres=# grant UPDATE on sequence seq1 to role1;
GRANT
[ro_user] postgres=> select setval('seq1', 1, true) ;
setval

1
(1 row)
{/quotes}

> permission denied for using sequence with SELECT/USAGE privilege
> -
>
> Key: HAWQ-1076
> URL: https://issues.apache.org/jira/browse/HAWQ-1076
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Ming LI
>Assignee: Lei Chang
> Fix For: backlog
>
>
> A customer had a table with a column taking its default value from a sequence, 
> and they want a role to have read-only access to both the table and the 
> sequence. However, they have to grant the ALL privilege on the sequence to the 
> user just to run a SELECT query; otherwise it fails with "ERROR:  permission 
> denied for sequence xxx".
> The following are the steps to reproduce the issue in-house.
> 1. Create a table with a column taking its default value from a sequence, and 
> grant the SELECT/USAGE privilege on the sequence to a user
> {code:java}
> [gpadmin@hdm1 ~]$ psql
> psql (8.2.15)
> Type "help" for help.
> gpadmin=# \d ns1.t1
>Append-Only Table "ns1.t1"
>  Column |  Type   |  Modifiers  
> +-+-
>  c1 | text| 
>  c2 | integer | not null default nextval('ns1.t1_c2_seq'::regclass)
> Compression Type: None
> Compression Level: 0
> Block Size: 32768
> Checksum: f
> Distributed randomly
> gpadmin=# grant SELECT,usage on sequence ns1.t1_c2_seq to ro_user;
> GRANT
> gpadmin=# select * from pg_class where relname='t1_c2_seq';
>   relname  | relnamespace | reltype | relowner | relam | relfilenode | 
> reltablespace | relpages | reltuples | reltoast
> relid | reltoastidxid | relaosegrelid | relaosegidxid | relhasindex | 
> relisshared | relkind | relstorage | relnatts | 
> relchecks | reltriggers | relukeys | relfkeys | relrefs | relhasoids | 
> relhaspkey | relhasrules | relhassubclass | rel
> frozenxid |  relacl  | reloptions 
> ---+--+-+--+---+-+---+--+---+-
> --+---+---+---+-+-+-++--+-
> --+-+--+--+-+++-++
> --+--+
>  t1_c2_seq |17638 |   17650 |   10 | 0 |   17649 |
>  0 |1 | 1 | 
> 0 | 0 | 0 | 0 | 

[jira] [Comment Edited] (HAWQ-1076) permission denied for using sequence with SELECT/USAGE privilege

2016-09-26 Thread Ming LI (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522283#comment-15522283
 ] 

Ming LI edited comment on HAWQ-1076 at 9/26/16 7:04 AM:


Two problems are fixed here:
1) The column DEFAULT expression is only needed on INSERT.
2) setval() needs the UPDATE privilege, while nextval() needs either the USAGE 
or the UPDATE privilege.

{quotes}
[ro_user] postgres=> select * from t1;
ERROR: permission denied for relation t1
[gpadmin] postgres=# grant SELECT on table t1 to role1;
GRANT
[ro_user] postgres=> select * from t1;
c1 | c2
---+---
1 | 1
1 | 2
(2 rows)
[ro_user] postgres=> insert into t1 (c1) values(11);
ERROR: permission denied for relation t1
[gpadmin] postgres=# grant INSERT on table t1 to role1;
GRANT
[ro_user] postgres=> insert into t1 (c1) values(11);
ERROR: permission denied for sequence seq1
[gpadmin] postgres=# grant USAGE on sequence seq1 to role1;
GRANT
[ro_user] postgres=> insert into t1 (c1) values(11);
INSERT 0 1
[ro_user] postgres=> select setval('seq1', 1, true) ;
ERROR: permission denied for sequence seq1
[gpadmin] postgres=# grant UPDATE on sequence seq1 to role1;
GRANT
[ro_user] postgres=> select setval('seq1', 1, true) ;
setval

1
(1 row)
{/quotes}


was (Author: mli):
Two problems are fixed here:
1) The column DEFAULT expression is only needed on INSERT.
2) setval() needs the UPDATE privilege, while nextval() needs either the USAGE 
or the UPDATE privilege.

```
[ro_user] postgres=> select * from t1;
ERROR: permission denied for relation t1
[gpadmin] postgres=# grant SELECT on table t1 to role1;
GRANT
[ro_user] postgres=> select * from t1;
c1 | c2
---+---
1 | 1
1 | 2
(2 rows)
[ro_user] postgres=> insert into t1 (c1) values(11);
ERROR: permission denied for relation t1
[gpadmin] postgres=# grant INSERT on table t1 to role1;
GRANT
[ro_user] postgres=> insert into t1 (c1) values(11);
ERROR: permission denied for sequence seq1
[gpadmin] postgres=# grant USAGE on sequence seq1 to role1;
GRANT
[ro_user] postgres=> insert into t1 (c1) values(11);
INSERT 0 1
[ro_user] postgres=> select setval('seq1', 1, true) ;
ERROR: permission denied for sequence seq1
[gpadmin] postgres=# grant UPDATE on sequence seq1 to role1;
GRANT
[ro_user] postgres=> select setval('seq1', 1, true) ;
setval

1
(1 row)
```

> permission denied for using sequence with SELECT/USAGE privilege
> -
>
> Key: HAWQ-1076
> URL: https://issues.apache.org/jira/browse/HAWQ-1076
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Ming LI
>Assignee: Lei Chang
> Fix For: backlog
>
>
> A customer had a table with a column taking its default value from a sequence, 
> and they want a role to have read-only access to both the table and the 
> sequence. However, they have to grant the ALL privilege on the sequence to the 
> user just to run a SELECT query; otherwise it fails with "ERROR:  permission 
> denied for sequence xxx".
> The following are the steps to reproduce the issue in-house.
> 1. Create a table with a column taking its default value from a sequence, and 
> grant the SELECT/USAGE privilege on the sequence to a user
> {code:java}
> [gpadmin@hdm1 ~]$ psql
> psql (8.2.15)
> Type "help" for help.
> gpadmin=# \d ns1.t1
>Append-Only Table "ns1.t1"
>  Column |  Type   |  Modifiers  
> +-+-
>  c1 | text| 
>  c2 | integer | not null default nextval('ns1.t1_c2_seq'::regclass)
> Compression Type: None
> Compression Level: 0
> Block Size: 32768
> Checksum: f
> Distributed randomly
> gpadmin=# grant SELECT,usage on sequence ns1.t1_c2_seq to ro_user;
> GRANT
> gpadmin=# select * from pg_class where relname='t1_c2_seq';
>   relname  | relnamespace | reltype | relowner | relam | relfilenode | 
> reltablespace | relpages | reltuples | reltoast
> relid | reltoastidxid | relaosegrelid | relaosegidxid | relhasindex | 
> relisshared | relkind | relstorage | relnatts | 
> relchecks | reltriggers | relukeys | relfkeys | relrefs | relhasoids | 
> relhaspkey | relhasrules | relhassubclass | rel
> frozenxid |  relacl  | reloptions 
> ---+--+-+--+---+-+---+--+---+-
> --+---+---+---+-+-+-++--+-
> --+-+--+--+-+++-++
> --+--+
>  t1_c2_seq |17638 |   17650 |   10 | 0 |   17649 |
>  0 |1 | 1 | 
> 0 | 0 | 0 | 0 | f 

[jira] [Commented] (HAWQ-1076) permission denied for using sequence with SELECT/USAGE privilege

2016-09-26 Thread Ming LI (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522283#comment-15522283
 ] 

Ming LI commented on HAWQ-1076:
---

Two problems are fixed here:
1) The column DEFAULT expression is only needed on INSERT.
2) setval() needs the UPDATE privilege, while nextval() needs either the USAGE 
or the UPDATE privilege.

```
[ro_user] postgres=> select * from t1;
ERROR: permission denied for relation t1
[gpadmin] postgres=# grant SELECT on table t1 to role1;
GRANT
[ro_user] postgres=> select * from t1;
c1 | c2
---+---
1 | 1
1 | 2
(2 rows)
[ro_user] postgres=> insert into t1 (c1) values(11);
ERROR: permission denied for relation t1
[gpadmin] postgres=# grant INSERT on table t1 to role1;
GRANT
[ro_user] postgres=> insert into t1 (c1) values(11);
ERROR: permission denied for sequence seq1
[gpadmin] postgres=# grant USAGE on sequence seq1 to role1;
GRANT
[ro_user] postgres=> insert into t1 (c1) values(11);
INSERT 0 1
[ro_user] postgres=> select setval('seq1', 1, true) ;
ERROR: permission denied for sequence seq1
[gpadmin] postgres=# grant UPDATE on sequence seq1 to role1;
GRANT
[ro_user] postgres=> select setval('seq1', 1, true) ;
setval

1
(1 row)
```

> permission denied for using sequence with SELECT/USAGE privilege
> -
>
> Key: HAWQ-1076
> URL: https://issues.apache.org/jira/browse/HAWQ-1076
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Ming LI
>Assignee: Lei Chang
> Fix For: backlog
>
>
> A customer had a table with a column taking its default value from a sequence, 
> and they want a role to have read-only access to both the table and the 
> sequence. However, they have to grant the ALL privilege on the sequence to the 
> user just to run a SELECT query; otherwise it fails with "ERROR:  permission 
> denied for sequence xxx".
> The following are the steps to reproduce the issue in-house.
> 1. Create a table with a column taking its default value from a sequence, and 
> grant the SELECT/USAGE privilege on the sequence to a user
> {code:java}
> [gpadmin@hdm1 ~]$ psql
> psql (8.2.15)
> Type "help" for help.
> gpadmin=# \d ns1.t1
>Append-Only Table "ns1.t1"
>  Column |  Type   |  Modifiers  
> +-+-
>  c1 | text| 
>  c2 | integer | not null default nextval('ns1.t1_c2_seq'::regclass)
> Compression Type: None
> Compression Level: 0
> Block Size: 32768
> Checksum: f
> Distributed randomly
> gpadmin=# grant SELECT,usage on sequence ns1.t1_c2_seq to ro_user;
> GRANT
> gpadmin=# select * from pg_class where relname='t1_c2_seq';
>   relname  | relnamespace | reltype | relowner | relam | relfilenode | 
> reltablespace | relpages | reltuples | reltoast
> relid | reltoastidxid | relaosegrelid | relaosegidxid | relhasindex | 
> relisshared | relkind | relstorage | relnatts | 
> relchecks | reltriggers | relukeys | relfkeys | relrefs | relhasoids | 
> relhaspkey | relhasrules | relhassubclass | rel
> frozenxid |  relacl  | reloptions 
> ---+--+-+--+---+-+---+--+---+-
> --+---+---+---+-+-+-++--+-
> --+-+--+--+-+++-++
> --+--+
>  t1_c2_seq |17638 |   17650 |   10 | 0 |   17649 |
>  0 |1 | 1 | 
> 0 | 0 | 0 | 0 | f   | f   
> | S   | h  |9 | 
> 0 |   0 |0 |0 |   0 | f  | f  
> | f   | f  |
> 0 | {gpadmin=rwU/gpadmin,ro_user=rU/gpadmin} | 
> (1 row)
> gpadmin=# insert into ns1.t1(c1) values('abc');
> INSERT 0 1
> gpadmin=# select * from ns1.t1;
>  c1  | c2 
> -+
>  abc |  3
> (1 row)
> {code}
> 2. Connect to the database as the user with read-only access and run a SELECT 
> query against the table. It fails with a "permission denied" error
> {code:java}
> [gpadmin@hdm1 ~]$ psql -U ro_user -d gpadmin
> psql (8.2.15)
> Type "help" for help.
> gpadmin=> select * from ns1.t1;
> ERROR:  permission denied for sequence t1_c2_seq
> {code}
> 3. Grant the ALL privilege on the sequence to that user, which makes it able 
> to SELECT data from the table
> {code:java}
> [gpadmin@hdm1 ~]$ psql
> gpadmin-# psql (8.2.15)
> gpadmin-# Type "help" for help.
> gpadmin-# 
> gpadmin=# grant update on sequence ns1.t1_c2_seq to ro_user;
> GRANT
> gpadmin=# select * from pg_class where relnam

[GitHub] incubator-hawq pull request #936: HAWQ-1076. Fixed privilege check for sequen...

2016-09-26 Thread liming01
GitHub user liming01 opened a pull request:

https://github.com/apache/incubator-hawq/pull/936

HAWQ-1076. Fixed privilege check for sequence function in column DEFAU…

…LT statement

Two problems are fixed here:
1) The column DEFAULT expression is only needed on INSERT.
2) setval() needs the UPDATE privilege, while nextval() needs either the USAGE 
or the UPDATE privilege.
--
[ro_user] postgres=> select * from t1;
ERROR: permission denied for relation t1
[gpadmin] postgres=# grant SELECT on table t1 to role1;
GRANT
[ro_user] postgres=> select * from t1;
c1 | c2
---+---
1 | 1
1 | 2
(2 rows)
[ro_user] postgres=> insert into t1 (c1) values(11);
ERROR: permission denied for relation t1
[gpadmin] postgres=# grant INSERT on table t1 to role1;
GRANT
[ro_user] postgres=> insert into t1 (c1) values(11);
ERROR: permission denied for sequence seq1
[gpadmin] postgres=# grant USAGE on sequence seq1 to role1;
GRANT
[ro_user] postgres=> insert into t1 (c1) values(11);
INSERT 0 1
[ro_user] postgres=> select setval('seq1', 1, true) ;
ERROR: permission denied for sequence seq1
[gpadmin] postgres=# grant UPDATE on sequence seq1 to role1;
GRANT
[ro_user] postgres=> select setval('seq1', 1, true) ;
setval

1
(1 row)
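
To restate the rule the transcript demonstrates: because the column DEFAULT is 
only evaluated on INSERT, a plain SELECT on t1 no longer touches seq1, and the 
sequence checks reduce to the small model below. This is an illustrative, 
self-contained sketch; the ACL_* flags and helper names are simplified 
stand-ins, not the backend code actually changed by this patch.

```
/* Illustrative model only -- simplified flags and names, not the actual
 * privilege-check code changed by this pull request. */
#include <stdbool.h>
#include <stdio.h>

#define ACL_USAGE  (1 << 0)   /* GRANT USAGE ON SEQUENCE ...  */
#define ACL_UPDATE (1 << 1)   /* GRANT UPDATE ON SEQUENCE ... */

/* nextval(): either USAGE or UPDATE on the sequence is enough. */
static bool nextval_allowed(int granted)
{
    return (granted & (ACL_USAGE | ACL_UPDATE)) != 0;
}

/* setval(): only UPDATE on the sequence will do. */
static bool setval_allowed(int granted)
{
    return (granted & ACL_UPDATE) != 0;
}

int main(void)
{
    int role1 = ACL_USAGE;    /* the USAGE grant issued for role1 above */

    printf("nextval allowed: %s\n", nextval_allowed(role1) ? "yes" : "no"); /* yes */
    printf("setval allowed:  %s\n",  setval_allowed(role1) ? "yes" : "no"); /* no  */
    return 0;
}
```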

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/liming01/incubator-hawq mli/HAWQ-1076

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/936.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #936


commit 2d4d52a1d6698a311e7b57f49f57a90b9e2706bf
Author: Ming LI 
Date:   2016-09-26T06:58:23Z

HAWQ-1076. Fixed privilege check for sequence function in column DEFAULT 
statement




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---