[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-11 Thread wcl14
Github user wcl14 commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126858617
  
--- Diff: depends/libhdfs3/src/client/CryptoCodec.cpp ---
@@ -0,0 +1,163 @@
+/*
+ * 2014 -
+ * open source under Apache License Version 2.0
+ */
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "CryptoCodec.h"
+#include "Logger.h"
+
+using namespace Hdfs::Internal;
+
+namespace Hdfs {
+
+/**
+ * Construct a CryptoCodec instance.
+ * @param encryptionInfo the encryption info of file.
+ * @param kcp a KmsClientProvider instance to get key from kms server.
+ * @param bufSize crypto buffer size.
+ */
+CryptoCodec::CryptoCodec(FileEncryptionInfo *encryptionInfo,
+        std::shared_ptr<KmsClientProvider> kcp, int32_t bufSize) :
+        encryptionInfo(encryptionInfo), kcp(kcp), bufSize(bufSize)
+{
+   
+   /* Init global status. */
+   ERR_load_crypto_strings();
+   OpenSSL_add_all_algorithms();
+   OPENSSL_config(NULL);
+
+   /* Create cipher context. */
+   encryptCtx = EVP_CIPHER_CTX_new();  
+   cipher = NULL;  
+
+}
+
+/**
+ * Destroy a CryptoCodec instance.
+ */
+CryptoCodec::~CryptoCodec()
+{
+   if (encryptCtx) 
+   EVP_CIPHER_CTX_free(encryptCtx);
+}
+
+/**
+ * Get decrypted key from kms.
+ */
+std::string CryptoCodec::getDecryptedKeyFromKms()
+{
+   ptree map = kcp->decryptEncryptedKey(*encryptionInfo);
+   std::string key = map.get<std::string>("material");
+
+   int rem = key.length() % 4;
+   if (rem) {
+   rem = 4 - rem;
+   while (rem != 0) {
+   key = key + "=";
+   rem--;
+   }
+   }
--- End diff --

Why is the key still padded to a multiple of 4 bytes? Shouldn't it already be base64-encoded?
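
For context, the KMS REST API returns the key material base64url-encoded (RFC 4648), typically without trailing '=' padding, while standard base64 decoders expect input whose length is a multiple of 4. A minimal sketch of the padding step, written here as a hypothetical standalone helper rather than the patch's exact code:

    #include <string>

    // Hypothetical helper: restore the '=' padding that base64url omits.
    // Standard base64 maps every 3 input bytes to 4 output characters, so
    // decoder-ready input is always a multiple of 4 characters long.
    static std::string padBase64(std::string key) {
        size_t rem = key.length() % 4;
        if (rem != 0) {
            key.append(4 - rem, '=');
        }
        return key;
    }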


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-11 Thread wcl14
Github user wcl14 commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126856385
  
--- Diff: depends/libhdfs3/src/client/CryptoCodec.cpp ---
@@ -0,0 +1,163 @@
+/*
+ * 2014 -
+ * open source under Apache License Version 2.0
+ */
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "CryptoCodec.h"
+#include "Logger.h"
+
+using namespace Hdfs::Internal;
+
+namespace Hdfs {
+
+/**
+ * Construct a CryptoCodec instance.
+ * @param encryptionInfo the encryption info of file.
+ * @param kcp a KmsClientProvider instance to get key from kms server.
+ * @param bufSize crypto buffer size.
+ */
+CryptoCodec::CryptoCodec(FileEncryptionInfo *encryptionInfo,
+        std::shared_ptr<KmsClientProvider> kcp, int32_t bufSize) :
+        encryptionInfo(encryptionInfo), kcp(kcp), bufSize(bufSize)
+{
+   
+   /* Init global status. */
+   ERR_load_crypto_strings();
+   OpenSSL_add_all_algorithms();
+   OPENSSL_config(NULL);
+
+   /* Create cipher context. */
+   encryptCtx = EVP_CIPHER_CTX_new();  
+   cipher = NULL;  
+
+}
+
+/**
+ * Destroy a CryptoCodec instance.
+ */
+CryptoCodec::~CryptoCodec()
+{
+   if (encryptCtx) 
+   EVP_CIPHER_CTX_free(encryptCtx);
+}
+
+/**
+ * Get decrypted key from kms.
+ */
+std::string CryptoCodec::getDecryptedKeyFromKms()
+{
+   ptree map = kcp->decryptEncryptedKey(*encryptionInfo);
+   std::string key = map.get<std::string>("material");
+
+   int rem = key.length() % 4;
+   if (rem) {
+   rem = 4 - rem;
+   while (rem != 0) {
+   key = key + "=";
+   rem--;
+   }
+   }
+
+   std::replace(key.begin(), key.end(), '-', '+');
+   std::replace(key.begin(), key.end(), '_', '/');
--- End diff --

Can you add a comment explaining why these characters are replaced?
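
For context, the two std::replace calls convert from the URL-safe base64 alphabet (RFC 4648, which substitutes '-' and '_') back to the standard alphabet ('+' and '/') that common decoders such as gsasl expect. A minimal sketch of the mapping, assuming the KMS response is base64url-encoded (hypothetical helper name):

    #include <algorithm>
    #include <string>

    // Hypothetical helper: undo the URL-safe base64 substitutions before
    // decoding. base64url uses '-' in place of '+' and '_' in place of '/'.
    static std::string base64UrlToStandard(std::string key) {
        std::replace(key.begin(), key.end(), '-', '+');
        std::replace(key.begin(), key.end(), '_', '/');
        return key;
    }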




[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-11 Thread wcl14
Github user wcl14 commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126857880
  
--- Diff: depends/libhdfs3/src/client/HttpClient.cpp ---
@@ -0,0 +1,337 @@
+/*
+ * 2014 -
+ * open source under Apache License Version 2.0
+ */
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "HttpClient.h"
+#include "Logger.h"
+
+using namespace Hdfs::Internal;
+
+namespace Hdfs {
+
+#define CURL_SETOPT(handle, option, optarg, fmt, ...) \
+res = curl_easy_setopt(handle, option, optarg); \
+if (res != CURLE_OK) { \
+THROW(HdfsIOException, fmt, ##__VA_ARGS__); \
+}
+
+#define CURL_SETOPT_ERROR1(handle, option, optarg, fmt) \
+CURL_SETOPT(handle, option, optarg, fmt, curl_easy_strerror(res));
+
+#define CURL_SETOPT_ERROR2(handle, option, optarg, fmt) \
+CURL_SETOPT(handle, option, optarg, fmt, curl_easy_strerror(res), \
+errorString().c_str())
+
+#define CURL_PERFORM(handle, fmt) \
+res = curl_easy_perform(handle); \
+if (res != CURLE_OK) { \
+THROW(HdfsIOException, fmt, curl_easy_strerror(res), 
errorString().c_str()); \
+}
+
+
+#define CURL_GETOPT_ERROR2(handle, option, optarg, fmt) \
+res = curl_easy_getinfo(handle, option, optarg); \
+if (res != CURLE_OK) { \
+THROW(HdfsIOException, fmt, curl_easy_strerror(res), 
errorString().c_str()); \
+}
+
+#define CURL_GET_RESPONSE(handle, code, fmt) \
+CURL_GETOPT_ERROR2(handle, CURLINFO_RESPONSE_CODE, code, fmt);
+
+HttpClient::HttpClient() : curl(NULL), list(NULL) {
+
+}
+
+/**
+ * Construct a HttpClient instance.
+ * @param url a url which is the address to send the request to the 
corresponding http server.
+ */
+HttpClient::HttpClient(const std::string &url) {
+   curl = NULL;
+   list = NULL;
+   this->url = url;
+}
+
+/**
+ * Destroy a HttpClient instance.
+ */
+HttpClient::~HttpClient()
+{
+   destroy();
+}
+
+/**
+ * Receive error string from curl.
+ */
+std::string HttpClient::errorString() {
+   if (strlen(errbuf) == 0)
+   return "";
+   return errbuf;
+}
+
+/**
+ * Curl callback function to receive the response messages.
+ * @return the size of the response messages.
+ */
+size_t HttpClient::CurlWriteMemoryCallback(void *contents, size_t size, 
size_t nmemb, void *userp)
+{
+  size_t realsize = size * nmemb;
+ if (userp == NULL || contents == NULL) {
+ return 0;
+ }
+  ((std::string *)userp)->append((const char *)contents, realsize);
+ LOG(DEBUG2, "HttpClient : Http response is : %s", ((std::string 
*)userp)->c_str());
+  return realsize;
+}
+
+/**
+ * Init curl handler and set curl options.
+ */
+void HttpClient::init() {
+   if (!initialized)
+   {
+   initialized = true;
+   if (curl_global_init(CURL_GLOBAL_ALL)) {
+   THROW(HdfsIOException, "Cannot initialize curl client 
for KMS");
+   }
+   }
+
+   curl = curl_easy_init();
+   if (!curl) {
+   THROW(HdfsIOException, "Cannot initialize curl handle for KMS");
+   }
+   
+CURL_SETOPT_ERROR1(curl, CURLOPT_ERRORBUFFER, errbuf,
+"Cannot initialize curl error buffer for KMS: %s");
+
+errbuf[0] = 0;
+
+CURL_SETOPT_ERROR2(curl, CURLOPT_NOPROGRESS, 1,
+"Cannot initialize no progress in HttpClient: %s: %s");
+
+CURL_SETOPT_ERROR2(curl, CURLOPT_VERBOSE, 0,
+"Cannot initialize no verbose in HttpClient: %s: %s");
+
+CURL_SETOPT_ERROR2(curl, CURLOPT_COOKIEFILE, "",
+"Cannot initialize cookie behavior in HttpClient: 

[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-11 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126864478
  
--- Diff: depends/libhdfs3/src/client/KmsClientProvider.cpp ---
@@ -0,0 +1,318 @@
+/*
+ * 2014 -
+ * open source under Apache License Version 2.0
+ */
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "KmsClientProvider.h"
+#include "Logger.h"
+#include 
+#include 
+#include 
+using namespace Hdfs::Internal;
+using boost::property_tree::read_json;
+using boost::property_tree::write_json;
+
+namespace Hdfs {
+
+/**
+ * Convert ptree format to json format
+ */
+std::string KmsClientProvider::toJson(const ptree &data)
+{
+   std::ostringstream buf;
+   try {
+   write_json(buf, data, false);
+   std::string json = buf.str();
+   return json;
+   } catch (...) {
+   THROW(HdfsIOException, "KmsClientProvider : Write json 
failed.");
+   }   
+}
+
+/**
+ * Convert json format to ptree format
+ */
+ptree KmsClientProvider::fromJson(const std::string &data)
+{
+   ptree pt2;
+   try {
+   std::istringstream is(data);
+   read_json(is, pt2);
+   return pt2;
+   } catch (...) {
+   THROW(HdfsIOException, "KmsClientProvider : Read json failed.");
+   }
+}
+
+/**
+ * Encode string to base64. 
+ */
+std::string KmsClientProvider::base64Encode(const std::string &data)
+{
+   char * buffer = NULL;
+   size_t len = 0;
+   int rc = 0;
+   std::string result;
+
+   LOG(DEBUG1, "KmsClientProvider : Encode data is %s", data.c_str());
+
+   if (GSASL_OK != (rc = gsasl_base64_to(data.c_str(), data.size(), &buffer, &len))) {
--- End diff --

Must use `data.data()` instead of `data.c_str()`; please check whether similar 
issues exist elsewhere.




[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-11 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126863117
  
--- Diff: depends/libhdfs3/src/client/HttpClient.h ---
@@ -0,0 +1,155 @@
+/*
+ * 2014 -
+ * open source under Apache License Version 2.0
+ */
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef _HDFS_LIBHDFS3_CLIENT_HTTPCLIENT_H_
+#define _HDFS_LIBHDFS3_CLIENT_HTTPCLIENT_H_
+
+#include 
+#include 
+#include 
+#include "Exception.h"
+#include "ExceptionInternal.h"
+
+typedef enum httpMethod {
+   HTTP_GET = 0,
+   HTTP_POST = 1,
+   HTTP_DELETE = 2,
+   HTTP_PUT = 3
+} httpMethod;
+
+namespace Hdfs {
+
+class HttpClient {
+public:
--- End diff --

I think we should add a unit test for HttpClient, because libcurl has many 
tricky behaviors and we may have missed some exceptional paths.
Just a reminder; let's do it in the future.




[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-11 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126862923
  
--- Diff: depends/libhdfs3/mock/MockHttpClient.h ---
@@ -0,0 +1,53 @@
+/*
+ * 2014 -
+ * open source under Apache License Version 2.0
+ */
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef _HDFS_LIBHDFS3_MOCK_HTTPCLIENT_H_
+#define _HDFS_LIBHDFS3_MOCK_HTTPCLIENT_H_
+
+#include "gmock/gmock.h"
+
+#include "client/HttpClient.h"
+#include "client/KmsClientProvider.h"
+#include 
+
+using boost::property_tree::ptree;
+
+class MockHttpClient: public Hdfs::HttpClient {
--- End diff --

I think we should add a unit test for HttpClient, because libcurl has many 
tricky behaviors and we may have missed some exceptional paths.
Just a reminder; let's do it in the future.
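
A minimal sketch of what such a test could look like with gmock, assuming HttpClient exposes a virtual method for performing the request; the method name post() below is illustrative, not the actual interface:

    #include "gmock/gmock.h"
    #include "gtest/gtest.h"
    #include "client/HttpClient.h"

    // Illustrative mock: override whichever virtual request method
    // HttpClient actually exposes and return a canned KMS-style response,
    // so exceptional paths can be exercised without a live libcurl call.
    class MockHttpClientForTest : public Hdfs::HttpClient {
    public:
        MOCK_METHOD0(post, std::string());
    };

    TEST(HttpClientTest, ReturnsCannedKmsResponse) {
        MockHttpClientForTest client;
        EXPECT_CALL(client, post())
            .WillOnce(testing::Return("{\"material\":\"YWJjZA\"}"));
        EXPECT_EQ("{\"material\":\"YWJjZA\"}", client.post());
    }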




[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-11 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126856856
  
--- Diff: depends/libhdfs3/CMake/FindSSL.cmake ---
@@ -0,0 +1,26 @@
+# - Try to find the OpenSSL library (ssl)
+#
+# Once done this will define
+#
+#  SSL_FOUND - System has OpenSSL
+#  SSL_INCLUDE_DIR - The OpenSSL include directory
+#  SSL_LIBRARIES - The libraries needed to use OpenSSL
+#  SSL_DEFINITIONS - Compiler switches required for using OpenSSL
+
+
+IF (SSL_INCLUDE_DIR AND SSL_LIBRARIES)
+   # in cache already
+   SET(SSL_FIND_QUIETLY TRUE)
+ENDIF (SSL_INCLUDE_DIR AND SSL_LIBRARIES)
+
+FIND_PATH(SSL_INCLUDE_DIR openssl/opensslv.h)
+
+FIND_LIBRARY(SSL_LIBRARIES crypto)
+
+INCLUDE(FindPackageHandleStandardArgs)
+
+# handle the QUIETLY and REQUIRED arguments and set SSL_FOUND to TRUE if
+# all listed variables are TRUE
+FIND_PACKAGE_HANDLE_STANDARD_ARGS(SSL DEFAULT_MSG SSL_LIBRARIES 
SSL_INCLUDE_DIR)
+
+MARK_AS_ADVANCED(SSL_INCLUDE_DIR SSL_LIBRARIES)
--- End diff --

Can we leverage the existing configure code for ssl and curl?
By the way, we should now auto-enable with-openssl if with-libhdfs3 is 
enabled.




[jira] [Closed] (HAWQ-1493) Integrate Ranger lookup JAAS configuration in ranger-admin plugin jar

2017-07-11 Thread Hongxu Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongxu Ma closed HAWQ-1493.
---
Resolution: Fixed

fixed

> Integrate Ranger lookup JAAS configuration in ranger-admin plugin jar
> -
>
> Key: HAWQ-1493
> URL: https://issues.apache.org/jira/browse/HAWQ-1493
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: 2.3.0.0-incubating
>
>
> To support Ranger lookup against a kerberized HAWQ, we currently modify the 
> java.security file and add a .java.login.config file to the ranger account.
> But both files are global and affect other programs, so we need to 
> integrate the JAAS configuration in a private scope.
> After investigation, `setProperty("java.security.auth.login.config")` in the 
> ranger-admin plugin code is a good solution.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] incubator-hawq issue #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support TDE wri...

2017-07-11 Thread linwen
Github user linwen commented on the issue:

https://github.com/apache/incubator-hawq/pull/1265
  
Please unify the indentation. We should avoid using both spaces and tabs for 
indentation in one source file. 




[GitHub] incubator-hawq issue #1261: HAWQ-1490. Added new dummy plugin for offline px...

2017-07-11 Thread shivzone
Github user shivzone commented on the issue:

https://github.com/apache/incubator-hawq/pull/1261
  
@denalex I've added the pxf-api jar to the pxf-private.classpath file. I 
haven't modified the hdp or bigtop versions of the classpath file




[GitHub] incubator-hawq pull request #1261: HAWQ-1490. Added new dummy plugin for off...

2017-07-11 Thread shivzone
Github user shivzone commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1261#discussion_r126821382
  
--- Diff: 
pxf/pxf-api/src/main/java/org/apache/hawq/pxf/api/examples/DemoResolver.java ---
@@ -0,0 +1,59 @@
+package org.apache.hawq.pxf.api.examples;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import org.apache.hawq.pxf.api.OneField;
+import org.apache.hawq.pxf.api.OneRow;
+import org.apache.hawq.pxf.api.ReadResolver;
+import org.apache.hawq.pxf.api.utilities.InputData;
+import org.apache.hawq.pxf.api.utilities.Plugin;
+import java.util.LinkedList;
+import java.util.List;
+import static org.apache.hawq.pxf.api.io.DataType.INTEGER;
+import static org.apache.hawq.pxf.api.io.DataType.VARCHAR;
+
+/**
+ * Class that defines the deserialization of one record brought from the
+ * external input data.
+ *
+ * Dummy implementation
+ */
+public class DemoResolver extends Plugin implements ReadResolver {
+/**
+ * Constructs the DemoResolver
+ *
+ * @param metaData
+ */
+public DemoResolver(InputData metaData) {
+super(metaData);
+}
+
+@Override
+public List<OneField> getFields(OneRow row) throws Exception {
+List<OneField> output = new LinkedList<OneField>();
+Object data = row.getData();
+
+/* break up the row into fields */
+String[] fields = ((String) data).split(",");
--- End diff --

Yes, it has now changed to make it flexible




[GitHub] incubator-hawq pull request #1261: HAWQ-1490. Added new dummy plugin for off...

2017-07-11 Thread shivzone
Github user shivzone commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1261#discussion_r126821163
  
--- Diff: 
pxf/pxf-api/src/main/java/org/apache/hawq/pxf/api/examples/DemoTextResolver.java
 ---
@@ -0,0 +1,55 @@
+package org.apache.hawq.pxf.api.examples;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import org.apache.hawq.pxf.api.OneField;
+import org.apache.hawq.pxf.api.OneRow;
+import org.apache.hawq.pxf.api.ReadResolver;
+import org.apache.hawq.pxf.api.utilities.InputData;
+import org.apache.hawq.pxf.api.utilities.Plugin;
+
+import java.util.LinkedList;
+import java.util.List;
+
+import static org.apache.hawq.pxf.api.io.DataType.INTEGER;
+import static org.apache.hawq.pxf.api.io.DataType.VARCHAR;
+
+/**
+ * Class that defines the deserialization of one record brought from the
+ * external input data.
+ *
+ * Dummy implementation
+ */
+public class DemoTextResolver extends Plugin implements ReadResolver {
+/**
+ * Constructs the DemoResolver
+ *
+ * @param metaData
+ */
+public DemoTextResolver(InputData metaData) {
+super(metaData);
+}
+@Override
+public List<OneField> getFields(OneRow row) throws Exception {
+List<OneField> output = new LinkedList<OneField>();
+Object data = row.getData();
+output.add(new OneField(VARCHAR.getOID(), data));
--- End diff --

Text resolver only sends data as one field.




[jira] [Commented] (HAWQ-1494) The bug can appear every time when I execute a specific sql: Unexpect internal error (setref.c:298), server closed the connection unexpectedly

2017-07-11 Thread fangpei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082438#comment-16082438
 ] 

fangpei commented on HAWQ-1494:
---

To Ed Espino: I see, thanks!

> The bug can appear every time when I execute a specific sql:  Unexpect 
> internal error (setref.c:298), server closed the connection unexpectedly
> ---
>
> Key: HAWQ-1494
> URL: https://issues.apache.org/jira/browse/HAWQ-1494
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: fangpei
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> When I execute a specific sql, a serious bug can happen every time. (Hawq 
> version is 2.2.0.0)
> BUG information:
> FATAL: Unexpect internal error (setref.c:298)
> DETAIL: AssertImply failed("!(!var->varattno >= 0) || (var->varattno <= 
> list_length(colNames) + list_length(rte- >pseudocols)))", File: "setrefs.c", 
> Line: 298)
> HINT:  Process 239600 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released  until then.
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
>  before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> I use GDB to debug, the GDB information is the same every time. The 
> information is: 
> Loaded symbols for /lib64/libnss_files.so.2
> 0x0032dd40eb5c in recv 0 from /lib64/libpthread.so.0
> (gdb) b setrefs.c:298
> Breakpoint 1 at 0x846063: file setrefs.c, line 298.
> (gdb) c 
> Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe8e930adb8) at setrefs.c:298
> 298 setrefs.c: No such file or directory.
> (gdb) c 1923
> Will ignore next 1922 crossings of breakpoint 1. Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe869c70340) at setrefs.c:298
> 298 in setrefs.c
> (gdb) p list_length(allVars) 
> $1 = 1422
> (gdb) p var->varno 
> $2 = 65001
> (gdb) p list_length(glob->finalrtable) 
> $3 = 66515
> (gdb) p var->varattno 
> $4 = 31
> (gdb) p list_length(colNames) 
> $5 = 30
> (gdb) p list_length(rte->pseudocols) 
> $6 = 0
> the SQL sentence is just like :
> SELECT *
> FROM (select t.*,1001 as ttt from AAA t where  ( aaa = '3201066235'  
> or aaa = '3201066236'  or aaa = '3201026292'  or aaa = 
> '3201066293'  or aaa = '3201060006393' ) and (  bbb between 
> '20170601065900' and '20170601175100'  and (ccc = '2017-06-01' ))  union all  
> select t.*,1002 as ttt from AAA t where  ( aaa = '3201066007'  or aaa 
> = '3201066006' ) and (  bbb between '20170601072900' and 
> '20170601210100'  and ( ccc = '2017-06-01' ))  union all  select t.*,1003 as 
> ttt from AAA t where  ( aaa = '3201062772' ) and (  bbb between 
> '20170601072900' and '20170601170100'  and ( ccc = '2017-06-01' ))  union all 
>  select t.*,1004 as ttt from AAA t where  (aaa = '3201066115'  or aaa 
> = '3201066116'  or aaa = '3201066318'  or aaa = 
> '3201066319' ) and (  bbb between '20170601085900' and 
> '20170601163100' and ( ccc = '2017-06-01' ))  union all  select t.*,1005 as 
> ttt from AAA t where  ( aaa = '3201066180' or aaa = 
> '3201046385' ) and (  bbb between '20170601205900' and 
> '20170601230100'  and ( ccc = '2017-06-01' )) union all  select t.*,1006 as 
> ttt from AAA t where  ( aaa = '3201026423'  or aaa = 
> '3201026255'  or aaa = '3201066258'  or aaa = 
> '3201066259' ) and (  bbb between '20170601215900' and 
> '20170602004900'  and ( ccc = '2017-06-01'  or ccc = '2017-06-02' ))  union 
> all select t.*,1007 as ttt from AAA t where  ( aaa = '3201066175' or 
> aaa = '3201066004' ) and (  bbb between '20170602074900' and 
> '20170602182100'  and ( ccc = '2017-06-02'  )) union all select t.*,1008 as 
> ttt from AAA t where  ( aaa = '3201026648' ) and (  bbb between 
> '20170602132900' and '20170602134600'  and ( ccc = '2017-06-02' ))  union all 
>  select t.*,1009 as ttt from AAA t where  ( aaa = '3201062765'  or 
> aaa = '3201006282' ) and (  bbb between '20170602142900' and 
> '20170603175100'  and ( ccc = '2017-06-02'  or ccc = '2017-06-03' ))  union 
> all  select t.*,1010 as ttt from AAA t where  (aaa = '3201066060' ) 
> and (  bbb between '20170602165900' and '20170603034100'  and ( ccc = 
> '2017-06-02'  or ccc = '2017-06-03' ))  union all select t.*,1011 as ttt from 
> AAA t where  ( aaa = '32010662229'  or aaa = '3201066230'  or 
> aaa = '3201022783'  or aaa = '3201026304' 

[jira] [Updated] (HAWQ-1498) Segments keep open file descriptors for deleted files

2017-07-11 Thread Ed Espino (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ed Espino updated HAWQ-1498:

Fix Version/s: (was: 2.2.0.0-incubating)
   2.3.0.0-incubating

> Segments keep open file descriptors for deleted files
> -
>
> Key: HAWQ-1498
> URL: https://issues.apache.org/jira/browse/HAWQ-1498
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Harald Bögeholz
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> I have been running some large computations in HAWQ using psql on the master. 
> These computations created temporary tables and dropped them again. 
> Nevertheless free disk space in HDFS decreased by much more than it should. 
> While the psql session on the master was still open I investigated on one of 
> the slave machines.
> HDFS is stored on /mds:
> {noformat}
> [root@mds-hdp-04 ~]# ls -l /mds
> total 36
> drwxr-xr-x. 3 root  root4096 Jun 14 04:23 falcon
> drwxr-xr-x. 3 root  root4096 Jun 14 04:42 hdfs
> drwx--. 2 root  root   16384 Jun  8 02:48 lost+found
> drwxr-xr-x. 5 storm hadoop  4096 Jun 14 04:45 storm
> drwxr-xr-x. 4 root  root4096 Jun 14 04:43 yarn
> drwxr-xr-x. 2 zookeeper hadoop  4096 Jun 14 04:39 zookeeper
> [root@mds-hdp-04 ~]# df /mds
> Filesystem 1K-blocks  Used Available Use% Mounted on
> /dev/vdc   515928320 314560220 175137316  65% /mds
> [root@mds-hdp-04 ~]# du -s /mds
> 89918952  /mds
> {noformat}
> Note that there is a more than 200 GB difference between the disk space used 
> according to df and the sum of all files on that file system according to du.
> I have found the culprit to be several postgres processes running as gpadmin 
> and holding open file descriptors to deleted files. Here are the first few:
> {noformat}
> [root@mds-hdp-04 ~]# lsof +L1 | grep /mds/hdfs | head -10
> postgres 665334 gpadmin   18r   REG 253,32 134217728 0  9438234 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922482
>  (deleted)
> postgres 665334 gpadmin   34r   REG 253,32 24488 0  9438114 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922398
>  (deleted)
> postgres 665334 gpadmin   35r   REG 253,32   199 0  9438115 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922398_187044.meta
>  (deleted)
> postgres 665334 gpadmin   37r   REG 253,32 134217728 0  9438208 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922446
>  (deleted)
> postgres 665334 gpadmin   38r   REG 253,32   1048583 0  9438209 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922446_187092.meta
>  (deleted)
> postgres 665334 gpadmin   39r   REG 253,32   1048583 0  9438235 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922482_187128.meta
>  (deleted)
> postgres 665334 gpadmin   40r   REG 253,32 134217728 0  9438262 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922555
>  (deleted)
> postgres 665334 gpadmin   41r   REG 253,32   1048583 0  9438263 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922555_187201.meta
>  (deleted)
> postgres 665334 gpadmin   42r   REG 253,32 134217728 0  9438285 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir194/blk_1073922602
>  (deleted)
> postgres 665334 gpadmin   43r   REG 253,32   1048583 0  9438286 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir194/blk_1073922602_187248.meta
>  (deleted)
> {noformat}
> As soon I close the psql session on the master the disk space is freed on the 
> slaves:
> {noformat}
> [root@mds-hdp-04 ~]# df /mds
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/vdc   515928320 89992720 399704816  19% /mds
> [root@mds-hdp-04 ~]# du -s /mds
> 89918952  /mds
> [root@mds-hdp-04 ~]# lsof +L1 | grep /mds/hdfs | head -10
> {noformat}
> I believe this to be a bug. At least for me it looks like a very undesirable 
> behavior.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1494) The bug can appear every time when I execute a specific sql: Unexpect internal error (setref.c:298), server closed the connection unexpectedly

2017-07-11 Thread Ed Espino (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082326#comment-16082326
 ] 

Ed Espino commented on HAWQ-1494:
-

Fangpei,

This issue was moved to the 2.3.0.0-incubating release for consideration 
because the 2.2.0.0-incubating release has already been voted on by the dev 
community. It will be reviewed by the dev community for inclusion in the next 
release (2.3.0.0-incubating).

Regards,
-=ed espino

> The bug can appear every time when I execute a specific sql:  Unexpect 
> internal error (setref.c:298), server closed the connection unexpectedly
> ---
>
> Key: HAWQ-1494
> URL: https://issues.apache.org/jira/browse/HAWQ-1494
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: fangpei
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> When I execute a specific sql, a serious bug can happen every time. (Hawq 
> version is 2.2.0.0)
> BUG information:
> FATAL: Unexpect internal error (setref.c:298)
> DETAIL: AssertImply failed("!(!var->varattno >= 0) || (var->varattno <= 
> list_length(colNames) + list_length(rte- >pseudocols)))", File: "setrefs.c", 
> Line: 298)
> HINT:  Process 239600 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released  until then.
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
>  before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> I use GDB to debug, the GDB information is the same every time. The 
> information is: 
> Loaded symbols for /lib64/libnss_files.so.2
> 0x0032dd40eb5c in recv 0 from /lib64/libpthread.so.0
> (gdb) b setrefs.c:298
> Breakpoint 1 at 0x846063: file setrefs.c, line 298.
> (gdb) c 
> Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe8e930adb8) at setrefs.c:298
> 298 setrefs.c: No such file or directory.
> (gdb) c 1923
> Will ignore next 1922 crossings of breakpoint 1. Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe869c70340) at setrefs.c:298
> 298 in setrefs.c
> (gdb) p list_length(allVars) 
> $1 = 1422
> (gdb) p var->varno 
> $2 = 65001
> (gdb) p list_length(glob->finalrtable) 
> $3 = 66515
> (gdb) p var->varattno 
> $4 = 31
> (gdb) p list_length(colNames) 
> $5 = 30
> (gdb) p list_length(rte->pseudocols) 
> $6 = 0
> the SQL sentence is just like :
> SELECT *
> FROM (select t.*,1001 as ttt from AAA t where  ( aaa = '3201066235'  
> or aaa = '3201066236'  or aaa = '3201026292'  or aaa = 
> '3201066293'  or aaa = '3201060006393' ) and (  bbb between 
> '20170601065900' and '20170601175100'  and (ccc = '2017-06-01' ))  union all  
> select t.*,1002 as ttt from AAA t where  ( aaa = '3201066007'  or aaa 
> = '3201066006' ) and (  bbb between '20170601072900' and 
> '20170601210100'  and ( ccc = '2017-06-01' ))  union all  select t.*,1003 as 
> ttt from AAA t where  ( aaa = '3201062772' ) and (  bbb between 
> '20170601072900' and '20170601170100'  and ( ccc = '2017-06-01' ))  union all 
>  select t.*,1004 as ttt from AAA t where  (aaa = '3201066115'  or aaa 
> = '3201066116'  or aaa = '3201066318'  or aaa = 
> '3201066319' ) and (  bbb between '20170601085900' and 
> '20170601163100' and ( ccc = '2017-06-01' ))  union all  select t.*,1005 as 
> ttt from AAA t where  ( aaa = '3201066180' or aaa = 
> '3201046385' ) and (  bbb between '20170601205900' and 
> '20170601230100'  and ( ccc = '2017-06-01' )) union all  select t.*,1006 as 
> ttt from AAA t where  ( aaa = '3201026423'  or aaa = 
> '3201026255'  or aaa = '3201066258'  or aaa = 
> '3201066259' ) and (  bbb between '20170601215900' and 
> '20170602004900'  and ( ccc = '2017-06-01'  or ccc = '2017-06-02' ))  union 
> all select t.*,1007 as ttt from AAA t where  ( aaa = '3201066175' or 
> aaa = '3201066004' ) and (  bbb between '20170602074900' and 
> '20170602182100'  and ( ccc = '2017-06-02'  )) union all select t.*,1008 as 
> ttt from AAA t where  ( aaa = '3201026648' ) and (  bbb between 
> '20170602132900' and '20170602134600'  and ( ccc = '2017-06-02' ))  union all 
>  select t.*,1009 as ttt from AAA t where  ( aaa = '3201062765'  or 
> aaa = '3201006282' ) and (  bbb between '20170602142900' and 
> '20170603175100'  and ( ccc = '2017-06-02'  or ccc = '2017-06-03' ))  union 
> all  select t.*,1010 as ttt from AAA t where  (aaa = 

[GitHub] incubator-hawq issue #1227: HAWQ-1448. Fixed postmaster process hung at recv...

2017-07-11 Thread wengyanqing
Github user wengyanqing commented on the issue:

https://github.com/apache/incubator-hawq/pull/1227
  
Agree with Radar's comment. LGTM.




[GitHub] incubator-hawq issue #1227: HAWQ-1448. Fixed postmaster process hung at recv...

2017-07-11 Thread radarwave
Github user radarwave commented on the issue:

https://github.com/apache/incubator-hawq/pull/1227
  
When the master is down, it doesn't matter whether segments are stopped in 
'smart' or 'fast' mode. And I see this fix only affects 'hawq stop cluster', 
which means the master is already down before stopping segments. So I think 
this fix is good enough for me.

+1




[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-11 Thread wengyanqing
Github user wengyanqing commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126638288
  
--- Diff: depends/libhdfs3/src/client/HttpClient.cpp ---
@@ -0,0 +1,337 @@
+/*
+ * 2014 -
+ * open source under Apache License Version 2.0
+ */
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "HttpClient.h"
+#include "Logger.h"
+
+using namespace Hdfs::Internal;
+
+namespace Hdfs {
+
+#define CURL_SETOPT(handle, option, optarg, fmt, ...) \
+res = curl_easy_setopt(handle, option, optarg); \
+if (res != CURLE_OK) { \
+THROW(HdfsIOException, fmt, ##__VA_ARGS__); \
+}
+
+#define CURL_SETOPT_ERROR1(handle, option, optarg, fmt) \
+CURL_SETOPT(handle, option, optarg, fmt, curl_easy_strerror(res));
+
+#define CURL_SETOPT_ERROR2(handle, option, optarg, fmt) \
+CURL_SETOPT(handle, option, optarg, fmt, curl_easy_strerror(res), \
+errorString().c_str())
+
+#define CURL_PERFORM(handle, fmt) \
+res = curl_easy_perform(handle); \
+if (res != CURLE_OK) { \
+THROW(HdfsIOException, fmt, curl_easy_strerror(res), 
errorString().c_str()); \
+}
+
+
+#define CURL_GETOPT_ERROR2(handle, option, optarg, fmt) \
+res = curl_easy_getinfo(handle, option, optarg); \
+if (res != CURLE_OK) { \
+THROW(HdfsIOException, fmt, curl_easy_strerror(res), 
errorString().c_str()); \
+}
+
+#define CURL_GET_RESPONSE(handle, code, fmt) \
+CURL_GETOPT_ERROR2(handle, CURLINFO_RESPONSE_CODE, code, fmt);
+
+HttpClient::HttpClient() : curl(NULL), list(NULL) {
+
+}
+
+/**
+ * Construct a HttpClient instance.
+ * @param url a url which is the address to send the request to the 
corresponding http server.
+ */
+HttpClient::HttpClient(const std::string &url) {
+   curl = NULL;
+   list = NULL;
+   this->url = url;
+}
+
+/**
+ * Destroy a HttpClient instance.
+ */
+HttpClient::~HttpClient()
+{
+   destroy();
+}
+
+/**
+ * Receive error string from curl.
+ */
+std::string HttpClient::errorString() {
+   if (strlen(errbuf) == 0)
+   return "";
+   return errbuf;
+}
+
+/**
+ * Curl callback function to receive the response messages.
+ * @return the size of the response messages.
+ */
+size_t HttpClient::CurlWriteMemoryCallback(void *contents, size_t size, 
size_t nmemb, void *userp)
+{
+  size_t realsize = size * nmemb;
+ if (userp == NULL || contents == NULL) {
+ return 0;
+ }
+  ((std::string *)userp)->append((const char *)contents, realsize);
+ LOG(DEBUG2, "HttpClient : Http response is : %s", ((std::string 
*)userp)->c_str());
+  return realsize;
+}
+
+/**
+ * Init curl handler and set curl options.
+ */
+void HttpClient::init() {
+   if (!initialized)
+   {
+   initialized = true;
+   if (curl_global_init(CURL_GLOBAL_ALL)) {
+   THROW(HdfsIOException, "Cannot initialize curl client 
for KMS");
+   }
+   }
+
+   curl = curl_easy_init();
+   if (!curl) {
+   THROW(HdfsIOException, "Cannot initialize curl handle for KMS");
+   }
+   
+CURL_SETOPT_ERROR1(curl, CURLOPT_ERRORBUFFER, errbuf,
+"Cannot initialize curl error buffer for KMS: %s");
+
+errbuf[0] = 0;
+
+CURL_SETOPT_ERROR2(curl, CURLOPT_NOPROGRESS, 1,
+"Cannot initialize no progress in HttpClient: %s: %s");
+
+CURL_SETOPT_ERROR2(curl, CURLOPT_VERBOSE, 0,
+"Cannot initialize no verbose in HttpClient: %s: %s");
+
+CURL_SETOPT_ERROR2(curl, CURLOPT_COOKIEFILE, "",
+"Cannot initialize cookie behavior in 

[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-11 Thread wengyanqing
Github user wengyanqing commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126637139
  
--- Diff: depends/libhdfs3/src/client/HttpClient.cpp ---
@@ -0,0 +1,337 @@
+/*
+ * 2014 -
+ * open source under Apache License Version 2.0
+ */
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "HttpClient.h"
+#include "Logger.h"
+
+using namespace Hdfs::Internal;
+
+namespace Hdfs {
+
+#define CURL_SETOPT(handle, option, optarg, fmt, ...) \
+res = curl_easy_setopt(handle, option, optarg); \
+if (res != CURLE_OK) { \
+THROW(HdfsIOException, fmt, ##__VA_ARGS__); \
+}
+
+#define CURL_SETOPT_ERROR1(handle, option, optarg, fmt) \
+CURL_SETOPT(handle, option, optarg, fmt, curl_easy_strerror(res));
+
+#define CURL_SETOPT_ERROR2(handle, option, optarg, fmt) \
+CURL_SETOPT(handle, option, optarg, fmt, curl_easy_strerror(res), \
+errorString().c_str())
+
+#define CURL_PERFORM(handle, fmt) \
+res = curl_easy_perform(handle); \
+if (res != CURLE_OK) { \
+THROW(HdfsIOException, fmt, curl_easy_strerror(res), 
errorString().c_str()); \
+}
+
+
+#define CURL_GETOPT_ERROR2(handle, option, optarg, fmt) \
+res = curl_easy_getinfo(handle, option, optarg); \
+if (res != CURLE_OK) { \
+THROW(HdfsIOException, fmt, curl_easy_strerror(res), 
errorString().c_str()); \
+}
+
+#define CURL_GET_RESPONSE(handle, code, fmt) \
+CURL_GETOPT_ERROR2(handle, CURLINFO_RESPONSE_CODE, code, fmt);
+
+HttpClient::HttpClient() : curl(NULL), list(NULL) {
+
+}
+
+/**
+ * Construct a HttpClient instance.
+ * @param url a url which is the address to send the request to the 
corresponding http server.
+ */
+HttpClient::HttpClient(const std::string &url) {
+   curl = NULL;
+   list = NULL;
+   this->url = url;
+}
+
+/**
+ * Destroy a HttpClient instance.
+ */
+HttpClient::~HttpClient()
+{
+   destroy();
+}
+
+/**
+ * Receive error string from curl.
+ */
+std::string HttpClient::errorString() {
+   if (strlen(errbuf) == 0)
+   return "";
+   return errbuf;
+}
+
+/**
+ * Curl callback function to receive the response messages.
+ * @return the size of the response messages.
+ */
+size_t HttpClient::CurlWriteMemoryCallback(void *contents, size_t size, 
size_t nmemb, void *userp)
+{
+  size_t realsize = size * nmemb;
+ if (userp == NULL || contents == NULL) {
+ return 0;
+ }
+  ((std::string *)userp)->append((const char *)contents, realsize);
+ LOG(DEBUG2, "HttpClient : Http response is : %s", ((std::string 
*)userp)->c_str());
+  return realsize;
+}
+
+/**
+ * Init curl handler and set curl options.
+ */
+void HttpClient::init() {
+   if (!initialized)
+   {
+   initialized = true;
+   if (curl_global_init(CURL_GLOBAL_ALL)) {
+   THROW(HdfsIOException, "Cannot initialize curl client 
for KMS");
+   }
+   }
+
+   curl = curl_easy_init();
+   if (!curl) {
+   THROW(HdfsIOException, "Cannot initialize curl handle for KMS");
+   }
+   
+CURL_SETOPT_ERROR1(curl, CURLOPT_ERRORBUFFER, errbuf,
+"Cannot initialize curl error buffer for KMS: %s");
+
+errbuf[0] = 0;
+
+CURL_SETOPT_ERROR2(curl, CURLOPT_NOPROGRESS, 1,
+"Cannot initialize no progress in HttpClient: %s: %s");
+
+CURL_SETOPT_ERROR2(curl, CURLOPT_VERBOSE, 0,
+"Cannot initialize no verbose in HttpClient: %s: %s");
+
+CURL_SETOPT_ERROR2(curl, CURLOPT_COOKIEFILE, "",
+"Cannot initialize cookie behavior in 

[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-11 Thread amyrazz44
GitHub user amyrazz44 opened a pull request:

https://github.com/apache/incubator-hawq/pull/1265

HAWQ-1500. HAWQ-1501. HAWQ-1502. Support TDE write function.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/amyrazz44/incubator-hawq TDEWrite

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1265.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1265


commit 1c41023612b799afa95f4408f0b20c516c19a791
Author: amyrazz44 
Date:   2017-07-11T07:43:18Z

HAWQ-1500. Support TDE by adding a common class HttpClient to get response 
from KMS.

commit 5d3b0ad6e974ccb693735342181c4c6939b13603
Author: amyrazz44 
Date:   2017-07-11T07:51:32Z

HAWQ-1501. Support TDE by adding KmsClientProvider class to interact with 
KMS server.

commit a2c62c5c11b9e05b8c37360d23965d5d18c34e73
Author: amyrazz44 
Date:   2017-07-11T07:57:26Z

HAWQ-1502. Support TDE write function.






[GitHub] incubator-hawq issue #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support TDE wri...

2017-07-11 Thread amyrazz44
Github user amyrazz44 commented on the issue:

https://github.com/apache/incubator-hawq/pull/1265
  
@linwen @interma  Please feel free to review this pr. Thank you.




[GitHub] incubator-hawq issue #1254: HAWQ-1373 - Added feature to reload GUC values u...

2017-07-11 Thread radarwave
Github user radarwave commented on the issue:

https://github.com/apache/incubator-hawq/pull/1254
  
Thanks for @outofmem0ry's contribution. I have squashed and merged this 
commit; please close this PR.

You're welcome to make more contributions.




[GitHub] incubator-hawq issue #1254: HAWQ-1373 - Added feature to reload GUC values u...

2017-07-11 Thread linwen
Github user linwen commented on the issue:

https://github.com/apache/incubator-hawq/pull/1254
  
+1 

