[jira] [Closed] (HAWQ-1509) Support TDE read function

2017-08-07 Thread Amy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amy closed HAWQ-1509.
-

> Support TDE read function
> -
>
> Key: HAWQ-1509
> URL: https://issues.apache.org/jira/browse/HAWQ-1509
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Amy
> Fix For: 2.3.0.0-incubating
>
>
> Currently, we already support TDE write.
> TDE read will be supported in this JIRA.
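
For orientation, below is a minimal, hypothetical sketch of what TDE read looks like from a client's point of view through the libhdfs3 C interface exercised by the tests in PR #1274. The include path, the file path, and the assumption that the file already lives in an encryption zone are illustrative only; the decryption itself is transparent to the caller.

{code:cpp}
// Sketch only: reading a file that sits inside an HDFS encryption zone.
// The TDE machinery (fetching the EDEK, decrypting the stream) happens
// inside libhdfs3; the caller just sees plaintext bytes.
#include "hdfs/hdfs.h"   // include path assumed; may differ per build layout
#include <fcntl.h>
#include <cassert>
#include <cstdio>

int main() {
    struct hdfsBuilder *bld = hdfsNewBuilder();
    assert(bld != NULL);
    hdfsBuilderSetNameNode(bld, "default");
    hdfsFS fs = hdfsBuilderConnect(bld);
    assert(fs != NULL);

    // "/TDEZone/file" is a hypothetical path inside an encryption zone.
    hdfsFile fin = hdfsOpenFile(fs, "/TDEZone/file", O_RDONLY, 0, 0, 0);
    assert(fin != NULL);

    char buf[1024];
    tSize rc = hdfsRead(fs, fin, buf, sizeof(buf));  // returns decrypted bytes
    printf("read %d plaintext bytes\n", (int) rc);

    hdfsCloseFile(fs, fin);
    hdfsDisconnect(fs);
    return 0;
}
{code}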



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] incubator-hawq pull request #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread amyrazz44
Github user amyrazz44 closed the pull request at:

https://github.com/apache/incubator-hawq/pull/1274


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (HAWQ-1310) Reformat resource_negotiator()

2017-08-07 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117827#comment-16117827
 ] 

Yi Jin commented on HAWQ-1310:
--

Recently assigned to Amy, per her request to take it on. Thanks, Amy.

> Reformat resource_negotiator()
> --
>
> Key: HAWQ-1310
> URL: https://issues.apache.org/jira/browse/HAWQ-1310
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Yi Jin
>Assignee: Amy
> Fix For: 2.3.0.0-incubating
>
>
> The indentation in function resource_negotiator() is not aligned.





[jira] [Assigned] (HAWQ-1310) Reformat resource_negotiator()

2017-08-07 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1310:


Assignee: Amy  (was: Yi Jin)

> Reformat resource_negotiator()
> --
>
> Key: HAWQ-1310
> URL: https://issues.apache.org/jira/browse/HAWQ-1310
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Yi Jin
>Assignee: Amy
> Fix For: 2.3.0.0-incubating
>
>
> The indentation in function resource_negotiator() is not aligned.





[jira] [Closed] (HAWQ-1400) Add a small sleeping period in feature test utility before dropping test database

2017-08-07 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin closed HAWQ-1400.


> Add a small sleeping period in feature test utility before dropping test 
> database
> -
>
> Key: HAWQ-1400
> URL: https://issues.apache.org/jira/browse/HAWQ-1400
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> This improvement raises the stability of the feature tests.





[jira] [Resolved] (HAWQ-1400) Add a small sleeping period in feature test utility before dropping test database

2017-08-07 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin resolved HAWQ-1400.
--
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

This fix has been delivered.

> Add a small sleeping period in feature test utility before dropping test 
> database
> -
>
> Key: HAWQ-1400
> URL: https://issues.apache.org/jira/browse/HAWQ-1400
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> This improvement raises the stability of the feature tests.





[jira] [Assigned] (HAWQ-1511) Add TDE-related properties into hdfs-client.xml

2017-08-07 Thread Hongxu Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongxu Ma reassigned HAWQ-1511:
---

Assignee: Amy  (was: Hongxu Ma)

> Add TDE-related properties into hdfs-client.xml
> ---
>
> Key: HAWQ-1511
> URL: https://issues.apache.org/jira/browse/HAWQ-1511
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Amy
> Fix For: 2.3.0.0-incubating
>
>
> include:
> * dfs.encryption.key.provider.uri
> * hadoop.security.crypto.buffer.size
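
As a hedged illustration of where these settings take effect, the sketch below supplies them programmatically through the builder API instead of hdfs-client.xml. It assumes libhdfs3 exposes the standard hdfsBuilderConfSetStr() call; the KMS address and buffer size are placeholder values, not recommendations from this JIRA.

{code:cpp}
// Hypothetical sketch: setting the TDE-related client properties in code.
#include "hdfs/hdfs.h"   // include path assumed

hdfsFS connectWithTde() {
    struct hdfsBuilder *bld = hdfsNewBuilder();
    hdfsBuilderSetNameNode(bld, "default");

    // Where the client fetches encryption keys (EDEKs) from -- placeholder URI.
    hdfsBuilderConfSetStr(bld, "dfs.encryption.key.provider.uri",
                          "kms://http@kms-host:16000/kms");
    // Buffer size used by the crypto streams; 8192 is the common default.
    hdfsBuilderConfSetStr(bld, "hadoop.security.crypto.buffer.size", "8192");

    return hdfsBuilderConnect(bld);
}
{code}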





[GitHub] incubator-hawq issue #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread interma
Github user interma commented on the issue:

https://github.com/apache/incubator-hawq/pull/1274
  
+1




[GitHub] incubator-hawq pull request #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1274#discussion_r131811002
  
--- Diff: depends/libhdfs3/test/function/TestCInterface.cpp ---
@@ -2107,3 +2108,225 @@ TEST_F(TestCInterface, TestConcurrentWrite_Failure) {
 int retval = hdfsCloseFile(fs, fout1);
 ASSERT_TRUE(retval == 0);
 }
+
+/*all TDE read cases*/
+
+//helper function
+static void generate_file(const char *file_path, int file_size) {
+char buffer[1024];
+Hdfs::FillBuffer(buffer, sizeof(buffer), 0);
+
+int todo = file_size;
+FILE *f = fopen(file_path, "w");
+assert(f != NULL);
+while (todo > 0) {
+int batch = file_size;
+if (batch > sizeof(buffer))
+batch = sizeof(buffer);
+int rc = fwrite(buffer, 1, batch, f);
+//assert(rc == batch);
+todo -= rc;
+}
+fclose(f);
+}
+
+int diff_buf2filecontents(const char *file_path, const char *buf, int offset,
+int len) {
+char *local_buf = (char *) malloc(len);
+
+FILE *f = fopen(file_path, "r");
+assert(f != NULL);
+fseek(f, offset, SEEK_SET);
+
+int todo = len;
+int off = 0;
+while (todo > 0) {
+int rc = fread(local_buf + off, 1, todo, f);
+todo -= rc;
+off += rc;
+}
+fclose(f);
+
+int ret = strncmp(buf, local_buf, len);
+free(local_buf);
+return ret;
+}
+
+TEST(TestCInterfaceTDE, TestReadWithTDE_Basic_Success) {
+hdfsFS fs = NULL;
+setenv("LIBHDFS3_CONF", "function-test.xml", 1);
+struct hdfsBuilder * bld = hdfsNewBuilder();
+assert(bld != NULL);
+hdfsBuilderSetNameNode(bld, "default");
+fs = hdfsBuilderConnect(bld);
+ASSERT_TRUE(fs != NULL);
+
+//create a normal file
+char cmd[128];
+const char *file_name = "tde_read_file";
+int file_size = 1024;
+generate_file(file_name, file_size);
+
+//put file to TDE encryption zone
+system("hadoop fs -rmr /TDEBasicRead");
+system("hadoop key create keytde4basicread");
+system("hadoop fs -mkdir /TDEBasicRead");
+ASSERT_EQ(0,
+hdfsCreateEncryptionZone(fs, "/TDEBasicRead", "keytde4basicread"));
+sprintf(cmd, "hdfs dfs -put `pwd`/%s /TDEBasicRead/", file_name);
+system(cmd);
+
+int offset = 0;
+int rc = 0;
+char buf[1024];
+int to_read = 5;
+char file_path[128];
+sprintf(file_path, "/TDEBasicRead/%s", file_name);
+hdfsFile fin = hdfsOpenFile(fs, file_path, O_RDONLY, 0, 0, 0);
+
+//case1: read from beginning
+offset = 0;
+rc = hdfsRead(fs, fin, buf, to_read);
+ASSERT_GT(rc, 0);
+ASSERT_TRUE(diff_buf2filecontents(file_name, buf, offset, rc) == 0);
+
+//case2: read after seek
+offset = 123;
+hdfsSeek(fs, fin, offset);
+rc = hdfsRead(fs, fin, buf, to_read);
+ASSERT_GT(rc, 0);
+ASSERT_TRUE(diff_buf2filecontents(file_name, buf, offset, rc) == 0);
+
+//case3: multi read
+offset = 456;
+hdfsSeek(fs, fin, offset);
+rc = hdfsRead(fs, fin, buf, to_read);
+ASSERT_GT(rc, 0);
+int rc2 = hdfsRead(fs, fin, buf + rc, to_read);
+ASSERT_GT(rc2, 0);
+ASSERT_TRUE(diff_buf2filecontents(file_name, buf, offset, rc + rc2) == 0);
+//clean up
+int retval = hdfsCloseFile(fs, fin);
+ASSERT_TRUE(retval == 0);
+system("hadoop fs -rmr /TDEBasicRead");
+system("hadoop key delete keytde4basicread -f");
+}
+
+TEST(TestCInterfaceTDE, TestReadWithTDE_Advanced_Success) {
+hdfsFS fs = NULL;
+setenv("LIBHDFS3_CONF", "function-test.xml", 1);
+struct hdfsBuilder * bld = hdfsNewBuilder();
+assert(bld != NULL);
+hdfsBuilderSetNameNode(bld, "default");
+fs = hdfsBuilderConnect(bld);
+ASSERT_TRUE(fs != NULL);
+
+//create a big file
+char cmd[128];
+const char *file_name = "tde_read_bigfile";
+int file_size = 150 * 1024 * 1024; //150M
+generate_file(file_name, file_size);
+
+//put file to TDE encryption zone
+system("hadoop fs -rmr /TDEAdvancedRead");
+system("hadoop key create keytde4advancedread");
+system("hadoop fs -mkdir /TDEAdvancedRead");
+ASSERT_EQ(0,
+hdfsCreateEncryptionZone(fs, "/TDEAdvancedRead",
+"keytde4advancedread"));
+sprintf(cmd, "hdfs dfs -put `pwd`/%s /TDEAdvancedRead/", file_name);
+system(cmd);
+
+int offset = 0;
+int rc = 0;
+char *buf = (char *) malloc(8 * 1024 * 1024); //8M
+int 

[jira] [Comment Edited] (HAWQ-1511) Add TDE-related properties into hdfs-client.xml

2017-08-07 Thread Hongxu Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117759#comment-16117759
 ] 

Hongxu Ma edited comment on HAWQ-1511 at 8/8/17 2:37 AM:
-

Currently, I think the default values of all properties (except dfs.encryption.key.provider.uri) are sufficient for users.


was (Author: hongxu ma):
Currently, I think the default values of all properties (except 
{code:java}
dfs.encryption.key.provider.uri
{code}
) are sufficient for users.

> Add TDE-related properties into hdfs-client.xml
> ---
>
> Key: HAWQ-1511
> URL: https://issues.apache.org/jira/browse/HAWQ-1511
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: 2.3.0.0-incubating
>
>
> include:
> * dfs.encryption.key.provider.uri
> * hadoop.security.crypto.buffer.size





[jira] [Commented] (HAWQ-1511) Add TDE-related properties into hdfs-client.xml

2017-08-07 Thread Hongxu Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117759#comment-16117759
 ] 

Hongxu Ma commented on HAWQ-1511:
-

Currently, I think the default values of all properties (except 
{code:java}
dfs.encryption.key.provider.uri
{code}
) are sufficient for users.

> Add TDE-related properties into hdfs-client.xml
> ---
>
> Key: HAWQ-1511
> URL: https://issues.apache.org/jira/browse/HAWQ-1511
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: 2.3.0.0-incubating
>
>
> include:
> * dfs.encryption.key.provider.uri
> * hadoop.security.crypto.buffer.size





[GitHub] incubator-hawq pull request #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1274#discussion_r131807343
  
--- Diff: depends/libhdfs3/src/client/CryptoCodec.cpp ---
@@ -119,16 +119,20 @@ namespace Hdfs {
return -1;
}
 
+   is_init = true;
// Calculate iv and counter in order to init cipher context with cipher method. Default value is 0.
-   resetStreamOffset(crypto_method, stream_offset);
+   if ((resetStreamOffset(crypto_method, stream_offset)) < 0)
+   return -1;
--- End diff --

set `is_init = false` here.
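
A hedged sketch of how this suggestion could be applied to the init() tail shown in the diff above; everything except the lines visible in the diff is reconstructed from context and may not match the actual source.

{code:cpp}
// Sketch only: roll back is_init when resetStreamOffset() fails, so later
// calls cannot mistake a half-initialized codec for a usable one.
is_init = true;
// Calculate iv and counter in order to init cipher context with cipher method.
if (resetStreamOffset(crypto_method, stream_offset) < 0) {
    is_init = false;
    return -1;
}
{code}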




[GitHub] incubator-hawq pull request #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1274#discussion_r131807637
  
--- Diff: depends/libhdfs3/test/function/TestCInterface.cpp ---
@@ -2107,3 +2108,225 @@ TEST_F(TestCInterface, TestConcurrentWrite_Failure) {
 int retval = hdfsCloseFile(fs, fout1);
 ASSERT_TRUE(retval == 0);
 }
+
+/*all TDE read cases*/
+
+//helper function
+static void generate_file(const char *file_path, int file_size) {
+char buffer[1024];
+Hdfs::FillBuffer(buffer, sizeof(buffer), 0);
+
+int todo = file_size;
+FILE *f = fopen(file_path, "w");
+assert(f != NULL);
+while (todo > 0) {
+int batch = file_size;
+if (batch > sizeof(buffer))
+batch = sizeof(buffer);
+int rc = fwrite(buffer, 1, batch, f);
+//assert(rc == batch);
+todo -= rc;
+}
+fclose(f);
+}
+
+int diff_buf2filecontents(const char *file_path, const char *buf, int offset,
+int len) {
+char *local_buf = (char *) malloc(len);
+
+FILE *f = fopen(file_path, "r");
+assert(f != NULL);
+fseek(f, offset, SEEK_SET);
+
+int todo = len;
+int off = 0;
+while (todo > 0) {
+int rc = fread(local_buf + off, 1, todo, f);
+todo -= rc;
+off += rc;
+}
+fclose(f);
+
+int ret = strncmp(buf, local_buf, len);
+free(local_buf);
+return ret;
+}
+
+TEST(TestCInterfaceTDE, TestReadWithTDE_Basic_Success) {
+hdfsFS fs = NULL;
+setenv("LIBHDFS3_CONF", "function-test.xml", 1);
+struct hdfsBuilder * bld = hdfsNewBuilder();
+assert(bld != NULL);
+hdfsBuilderSetNameNode(bld, "default");
+fs = hdfsBuilderConnect(bld);
+ASSERT_TRUE(fs != NULL);
+
+//create a normal file
+char cmd[128];
+const char *file_name = "tde_read_file";
+int file_size = 1024;
+generate_file(file_name, file_size);
+
+//put file to TDE encryption zone
+system("hadoop fs -rmr /TDEBasicRead");
+system("hadoop key create keytde4basicread");
+system("hadoop fs -mkdir /TDEBasicRead");
+ASSERT_EQ(0,
+hdfsCreateEncryptionZone(fs, "/TDEBasicRead", "keytde4basicread"));
+sprintf(cmd, "hdfs dfs -put `pwd`/%s /TDEBasicRead/", file_name);
+system(cmd);
+
+int offset = 0;
+int rc = 0;
+char buf[1024];
+int to_read = 5;
+char file_path[128];
+sprintf(file_path, "/TDEBasicRead/%s", file_name);
+hdfsFile fin = hdfsOpenFile(fs, file_path, O_RDONLY, 0, 0, 0);
+
+//case1: read from beginning
+offset = 0;
+rc = hdfsRead(fs, fin, buf, to_read);
+ASSERT_GT(rc, 0);
+ASSERT_TRUE(diff_buf2filecontents(file_name, buf, offset, rc) == 0);
+
+//case2: read after seek
+offset = 123;
+hdfsSeek(fs, fin, offset);
+rc = hdfsRead(fs, fin, buf, to_read);
+ASSERT_GT(rc, 0);
+ASSERT_TRUE(diff_buf2filecontents(file_name, buf, offset, rc) == 0);
+
+//case3: multi read
+offset = 456;
+hdfsSeek(fs, fin, offset);
+rc = hdfsRead(fs, fin, buf, to_read);
+ASSERT_GT(rc, 0);
+int rc2 = hdfsRead(fs, fin, buf + rc, to_read);
+ASSERT_GT(rc2, 0);
+ASSERT_TRUE(diff_buf2filecontents(file_name, buf, offset, rc + rc2) == 0);
+//clean up
+int retval = hdfsCloseFile(fs, fin);
+ASSERT_TRUE(retval == 0);
+system("hadoop fs -rmr /TDEBasicRead");
+system("hadoop key delete keytde4basicread -f");
+}
+
+TEST(TestCInterfaceTDE, TestReadWithTDE_Advanced_Success) {
+hdfsFS fs = NULL;
+setenv("LIBHDFS3_CONF", "function-test.xml", 1);
+struct hdfsBuilder * bld = hdfsNewBuilder();
+assert(bld != NULL);
+hdfsBuilderSetNameNode(bld, "default");
+fs = hdfsBuilderConnect(bld);
+ASSERT_TRUE(fs != NULL);
+
+//create a big file
+char cmd[128];
+const char *file_name = "tde_read_bigfile";
+int file_size = 150 * 1024 * 1024; //150M
+generate_file(file_name, file_size);
+
+//put file to TDE encryption zone
+system("hadoop fs -rmr /TDEAdvancedRead");
+system("hadoop key create keytde4advancedread");
+system("hadoop fs -mkdir /TDEAdvancedRead");
+ASSERT_EQ(0,
+hdfsCreateEncryptionZone(fs, "/TDEAdvancedRead",
+"keytde4advancedread"));
+sprintf(cmd, "hdfs dfs -put `pwd`/%s /TDEAdvancedRead/", file_name);
+system(cmd);
+
+int offset = 0;
+int rc = 0;
+char *buf = (char *) malloc(8 * 1024 * 1024); //8M
+int 

[jira] [Closed] (HAWQ-1166) More friendly error message after hawqregister with original table

2017-08-07 Thread Xiang Sheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Sheng closed HAWQ-1166.
-
Resolution: Won't Fix

> More friendly error message after hawqregister with original table
> --
>
> Key: HAWQ-1166
> URL: https://issues.apache.org/jira/browse/HAWQ-1166
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Command Line Tools
>Reporter: hongwu
>Assignee: Xiang Sheng
> Fix For: 2.3.0.0-incubating
>
>
> After hawq register, if users access the original table, they will get the 
> message below. Although this is undefined behavior, should we write a record 
> into the catalog table so that an instructive error message is returned in this case?
> {code}
> ERROR:  file open error in file 
> 'hdfs://localhost:8020/hawq_default/16385/16387/16733/1' for relation 't': No 
> such file or directory  (seg0 localhost:4 pid=43107)
> DETAIL:  
> File does not exist: /hawq_default/16385/16387/16733/1
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1828)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1712)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:587)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
> {code}





[GitHub] incubator-hawq issue #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread linwen
Github user linwen commented on the issue:

https://github.com/apache/incubator-hawq/pull/1274
  
+1




[jira] [Commented] (HAWQ-1333) Change access mode of source files for HAWQ

2017-08-07 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117561#comment-16117561
 ] 

Yi Jin commented on HAWQ-1333:
--

Thanks Amy, it would be nice to have this delivered in August. I hope this 
expectation is not putting pressure on you.

> Change access mode of source files for HAWQ  
> -
>
> Key: HAWQ-1333
> URL: https://issues.apache.org/jira/browse/HAWQ-1333
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Reporter: Amy
>Assignee: Amy
> Fix For: 2.3.0.0-incubating
>
>
> Several source files in HAWQ have access mode 755, e.g. *.c, *.cpp, and *.h 
> files. To improve security, the access mode of these source files will be 
> changed to 644.





[GitHub] incubator-hawq pull request #1275: HAWQ-1333. Change access mode of source f...

2017-08-07 Thread amyrazz44
GitHub user amyrazz44 opened a pull request:

https://github.com/apache/incubator-hawq/pull/1275

HAWQ-1333. Change access mode of source files for HAWQ.

@radarwave  @linwen feel free to review this pr, thank you so much.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/amyrazz44/incubator-hawq fixAccessMode

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1275.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1275


commit 685d58af6dbaeb024840c6d3ffa75a2af00a85b9
Author: amyrazz44 
Date:   2017-08-07T10:13:57Z

HAWQ-1333. Change access mode of source files for HAWQ.






[jira] [Commented] (HAWQ-1333) Change access mode of source files for HAWQ

2017-08-07 Thread Amy (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116260#comment-16116260
 ] 

Amy commented on HAWQ-1333:
---

Will fix this ASAP, thank you.

> Change access mode of source files for HAWQ  
> -
>
> Key: HAWQ-1333
> URL: https://issues.apache.org/jira/browse/HAWQ-1333
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Reporter: Amy
>Assignee: Amy
> Fix For: 2.3.0.0-incubating
>
>
> Several source files in HAWQ have access mode 755, e.g. *.c, *.cpp, and *.h 
> files. To improve security, the access mode of these source files will be 
> changed to 644.





[jira] [Updated] (HAWQ-1510) Add TDE-related functionality into hawq command line tools

2017-08-07 Thread Hongxu Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongxu Ma updated HAWQ-1510:

Description: 
1, hawq init
the only way to enable TDE in HAWQ:
the user should pass a key name (already created with the hadoop key command) 
as a parameter when executing the init command; this makes the whole 
hawq_default directory an encryption zone.

note:
converting an existing (and non-empty) hawq_default directory into an 
encryption zone is not supported.

-2, hawq state-
show the encryption zone info if the user has enabled TDE in HAWQ.

3, hawq register 
cannot register files across different encryption zones / unencrypted zones.

4, hawq extract
warn the user that the table data is stored in an encryption zone if TDE is 
enabled in HAWQ.


  was:
1, hawq init
the only way to enable TDE in HAWQ:
the user should pass a key name (already created with the hadoop key command) 
as a parameter when executing the init command; this makes the whole 
hawq_default directory an encryption zone.

note:
converting an existing (and non-empty) hawq_default directory into an 
encryption zone is not supported.

-2, hawq state
show the encryption zone info if the user has enabled TDE in HAWQ.-

3, hawq register 
cannot register files across different encryption zones / unencrypted zones.

4, hawq extract
warn the user that the table data is stored in an encryption zone if TDE is 
enabled in HAWQ.



> Add TDE-related functionality into hawq command line tools
> --
>
> Key: HAWQ-1510
> URL: https://issues.apache.org/jira/browse/HAWQ-1510
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Command Line Tools
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: 2.3.0.0-incubating
>
>
> 1, hawq init
> the only way to enable TDE in HAWQ:
> the user should pass a key name (already created with the hadoop key command) 
> as a parameter when executing the init command; this makes the whole 
> hawq_default directory an encryption zone.
> note:
> converting an existing (and non-empty) hawq_default directory into an 
> encryption zone is not supported.
> -2, hawq state-
> show the encryption zone info if the user has enabled TDE in HAWQ.
> 3, hawq register 
> cannot register files across different encryption zones / unencrypted zones.
> 4, hawq extract
> warn the user that the table data is stored in an encryption zone if TDE is 
> enabled in HAWQ.
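
Conceptually, the HDFS-level effect of enabling TDE at init time can be pictured with the libhdfs3 calls already used elsewhere in this thread. The sketch below is illustrative only: the key name and namenode are placeholders, and the real hawq init command performs this as part of cluster initialization rather than as a standalone program.

{code:cpp}
// Conceptual sketch only: what "hawq init" with TDE enabled boils down to
// at the HDFS level. Key name and namenode are placeholders.
#include "hdfs/hdfs.h"   // include path assumed
#include <cassert>
#include <cstdlib>

int main() {
    // The key must already exist, e.g. created with "hadoop key create".
    system("hadoop key create keytde4hawq");

    struct hdfsBuilder *bld = hdfsNewBuilder();
    hdfsBuilderSetNameNode(bld, "default");
    hdfsFS fs = hdfsBuilderConnect(bld);
    assert(fs != NULL);

    // hawq_default must be empty: an existing, non-empty directory cannot
    // be converted into an encryption zone.
    assert(hdfsCreateEncryptionZone(fs, "/hawq_default", "keytde4hawq") == 0);

    hdfsDisconnect(fs);
    return 0;
}
{code}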





[jira] [Updated] (HAWQ-1510) Add TDE-related functionality into hawq command line tools

2017-08-07 Thread Hongxu Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongxu Ma updated HAWQ-1510:

Description: 
1, hawq init
the only way to enable TDE in HAWQ:
the user should pass a key name (already created with the hadoop key command) 
as a parameter when executing the init command; this makes the whole 
hawq_default directory an encryption zone.

note:
converting an existing (and non-empty) hawq_default directory into an 
encryption zone is not supported.

-2, hawq state-
-show the encryption zone info if the user has enabled TDE in HAWQ.-

3, hawq register 
cannot register files across different encryption zones / unencrypted zones.

4, hawq extract
warn the user that the table data is stored in an encryption zone if TDE is 
enabled in HAWQ.

  was:
1, hawq init
the only way to enable TDE in HAWQ:
the user should pass a key name (already created with the hadoop key command) 
as a parameter when executing the init command; this makes the whole 
hawq_default directory an encryption zone.

note:
converting an existing (and non-empty) hawq_default directory into an 
encryption zone is not supported.

-2, hawq state-
show the encryption zone info if the user has enabled TDE in HAWQ.

3, hawq register 
cannot register files across different encryption zones / unencrypted zones.

4, hawq extract
warn the user that the table data is stored in an encryption zone if TDE is 
enabled in HAWQ.



> Add TDE-related functionality into hawq command line tools
> --
>
> Key: HAWQ-1510
> URL: https://issues.apache.org/jira/browse/HAWQ-1510
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Command Line Tools
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: 2.3.0.0-incubating
>
>
> 1, hawq init
> the only way to enable TDE in HAWQ:
> the user should pass a key name (already created with the hadoop key command) 
> as a parameter when executing the init command; this makes the whole 
> hawq_default directory an encryption zone.
> note:
> converting an existing (and non-empty) hawq_default directory into an 
> encryption zone is not supported.
> -2, hawq state-
> -show the encryption zone info if the user has enabled TDE in HAWQ.-
> 3, hawq register 
> cannot register files across different encryption zones / unencrypted zones.
> 4, hawq extract
> warn the user that the table data is stored in an encryption zone if TDE is 
> enabled in HAWQ.





[GitHub] incubator-hawq pull request #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1274#discussion_r131587069
  
--- Diff: depends/libhdfs3/src/client/InputStreamImpl.cpp ---
@@ -734,9 +759,17 @@ void InputStreamImpl::seekInternal(int64_t pos) {
 }
 
 try {
-if (blockReader && pos > cursor && pos < endOfCurBlock) {
+if (blockReader && pos > cursor && pos < endOfCurBlock && (pos - cursor) < blockReader->available()) {
 blockReader->skip(pos - cursor);
 cursor = pos;
+if (cryptoCodec) {
+int ret = cryptoCodec->resetStreamOffset(CryptoMethod::DECRYPT,
+cursor);
+if (ret < 0) {
+THROW(HdfsIOException, "init CryptoCodec failed, file:%s",
--- End diff --

warning message: "**reset** ... failed"




[GitHub] incubator-hawq pull request #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1274#discussion_r131585776
  
--- Diff: depends/libhdfs3/src/client/InputStreamImpl.cpp ---
@@ -626,6 +645,12 @@ int32_t InputStreamImpl::readInternal(char * buf, int32_t size) {
 
 continue;
 }
+std::string bufDecode;
+if (fileStatus.isFileEncrypted()) {
+/* Decrypt buffer if the file is encrypted. */
+bufDecode = cryptoCodec->cipher_wrap(buf, size);
+memcpy(buf, bufDecode.c_str(), size);
--- End diff --

must use `retval` instead of `size` here, because `readOneBlock()` doesn't 
always give "size" bytes.
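
A hedged sketch of the fix being requested, reusing names from the diff context above; retval is assumed to hold the byte count readOneBlock() actually returned, which may be smaller than size.

{code:cpp}
// Sketch only: decrypt exactly the bytes that were read, not the requested size.
if (retval > 0 && fileStatus.isFileEncrypted()) {
    /* Decrypt buffer if the file is encrypted. */
    std::string bufDecode = cryptoCodec->cipher_wrap(buf, retval);
    memcpy(buf, bufDecode.c_str(), retval);
}
{code}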




[GitHub] incubator-hawq pull request #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1274#discussion_r131583088
  
--- Diff: depends/libhdfs3/src/client/CryptoCodec.cpp ---
@@ -119,33 +119,38 @@ namespace Hdfs {
return -1;
}
 
-   //calculate new IV when appending a existed file
+   // Calculate iv and counter in order to init cipher context with cipher method. Default value is 0.
+   resetStreamOffset(crypto_method, stream_offset);
--- End diff --

check return value: if `resetStreamOffset()` failed, return -1 here.




[GitHub] incubator-hawq pull request #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1274#discussion_r131583257
  
--- Diff: depends/libhdfs3/src/client/CryptoCodec.cpp ---
@@ -119,33 +119,38 @@ namespace Hdfs {
return -1;
}
 
-   //calculate new IV when appending a existed file
+   // Calculate iv and counter in order to init cipher context with cipher method. Default value is 0.
+   resetStreamOffset(crypto_method, stream_offset);
+
+   LOG(DEBUG3, "CryptoCodec init success, length of the decrypted key is : %llu, crypto method is : %d", AlgorithmBlockSize, crypto_method);
+   is_init = true;
+   return 1;
+
+   }
+
+   int CryptoCodec::resetStreamOffset(CryptoMethod crypto_method, int64_t stream_offset) {
+   // Calculate new IV when appending an existed file.
--- End diff --

check `is_init`: if not initialized yet, return -1
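
A hedged sketch of the guard being asked for, using the resetStreamOffset() signature visible in the diff; the body of the offset calculation is omitted.

{code:cpp}
int CryptoCodec::resetStreamOffset(CryptoMethod crypto_method, int64_t stream_offset) {
    // Guard suggested in review: do not recalculate the IV/counter before
    // init() has successfully set up the cipher context.
    if (!is_init) {
        return -1;
    }
    // ... calculate the new IV and counter for stream_offset ...
    return 1;
}
{code}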




[GitHub] incubator-hawq issue #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread wengyanqing
Github user wengyanqing commented on the issue:

https://github.com/apache/incubator-hawq/pull/1274
  
LGTM




[GitHub] incubator-hawq pull request #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread wengyanqing
Github user wengyanqing commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1274#discussion_r131581898
  
--- Diff: depends/libhdfs3/src/client/CryptoCodec.cpp ---
@@ -119,33 +119,38 @@ namespace Hdfs {
return -1;
}
 
-   //calculate new IV when appending a existed file
+   // Calculate iv and counter in order to init cipher context with cipher method. Default value is 0.
+   resetStreamOffset(crypto_method, stream_offset);
+
+   LOG(DEBUG3, "CryptoCodec init success, length of the decrypted key is : %llu, crypto method is : %d", AlgorithmBlockSize, crypto_method);
+   is_init = true;
+   return 1;
+
+   }
+
+   int CryptoCodec::resetStreamOffset(CryptoMethod crypto_method, int64_t stream_offset) {
--- End diff --

The function prototype defines that it could return 1, 0, or -1, but there is 
no code path that actually returns 0.




[GitHub] incubator-hawq issue #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread amyrazz44
Github user amyrazz44 commented on the issue:

https://github.com/apache/incubator-hawq/pull/1274
  
@wengyanqing @linwen feel free to review this pr, thank you.




[GitHub] incubator-hawq pull request #1274: HAWQ-1509. Support TDE read function.

2017-08-07 Thread amyrazz44
GitHub user amyrazz44 opened a pull request:

https://github.com/apache/incubator-hawq/pull/1274

HAWQ-1509. Support TDE read function.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/amyrazz44/incubator-hawq TDEReadPart

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1274.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1274


commit 90f1c4ded4321adfa98a691a0b1b47f727316738
Author: amyrazz44 
Date:   2017-08-07T04:53:36Z

HAWQ-1509. Support TDE read function.

commit 45ddd1f471bceae25a9f16d2f38442e4898344c9
Author: interma 
Date:   2017-08-07T05:44:45Z

HAWQ-1509. Add TDE read test cases.



