[jira] [Resolved] (HAWQ-1273) incorrect references in gplogfilter command line help

2017-07-26 Thread Radar Lei (JIRA)

 [ https://issues.apache.org/jira/browse/HAWQ-1273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Radar Lei resolved HAWQ-1273.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

Fixed by @outofmem0ry. Closing.

> incorrect references in gplogfilter command line help
> -
>
> Key: HAWQ-1273
> URL: https://issues.apache.org/jira/browse/HAWQ-1273
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Lisa Owen
>Assignee: Radar Lei
>Priority: Minor
> Fix For: 2.3.0.0-incubating
>
>
> The gplogfilter command-line help text and examples include incorrect references to:
> - the MASTER_DATA_DIRECTORY environment variable
> - the hawq ssh command (referenced as gpssh)
> - the location of the greenplum_path.sh file (/usr/local/greenplum-db)
> - segment log files (/gpdata/*/gp*log)
> There may be others.





[jira] [Closed] (HAWQ-1273) incorrect references in gplogfilter command line help

2017-07-26 Thread Radar Lei (JIRA)

 [ https://issues.apache.org/jira/browse/HAWQ-1273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Radar Lei closed HAWQ-1273.
---

> incorrect references in gplogfilter command line help
> -
>
> Key: HAWQ-1273
> URL: https://issues.apache.org/jira/browse/HAWQ-1273
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Lisa Owen
>Assignee: Radar Lei
>Priority: Minor
> Fix For: 2.3.0.0-incubating
>
>
> The gplogfilter command-line help text and examples include incorrect references to:
> - the MASTER_DATA_DIRECTORY environment variable
> - the hawq ssh command (referenced as gpssh)
> - the location of the greenplum_path.sh file (/usr/local/greenplum-db)
> - segment log files (/gpdata/*/gp*log)
> There may be others.





[GitHub] incubator-hawq pull request #1269: HAWQ-1506. Support multi-append a file within encryption zone

2017-07-26 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1269#discussion_r129487114
  
--- Diff: depends/libhdfs3/src/client/CryptoCodec.cpp ---
@@ -25,154 +25,188 @@
 
 using namespace Hdfs::Internal;
 
-namespace Hdfs {
-
-/**
- * Construct a CryptoCodec instance.
- * @param encryptionInfo the encryption info of file.
- * @param kcp a KmsClientProvider instance to get key from kms server.
- * @param bufSize crypto buffer size.
- */
-CryptoCodec::CryptoCodec(FileEncryptionInfo *encryptionInfo, shared_ptr<KmsClientProvider> kcp, int32_t bufSize) : encryptionInfo(encryptionInfo), kcp(kcp), bufSize(bufSize)
-{
-
-    /* Init global status. */
-    ERR_load_crypto_strings();
-    OpenSSL_add_all_algorithms();
-    OPENSSL_config(NULL);
-
-    /* Create cipher context. */
-    encryptCtx = EVP_CIPHER_CTX_new();
-    cipher = NULL;
-
-}
-
-/**
- * Destroy a CryptoCodec instance.
- */
-CryptoCodec::~CryptoCodec()
-{
-    if (encryptCtx)
-        EVP_CIPHER_CTX_free(encryptCtx);
-}
-
-/**
- * Get decrypted key from kms.
- */
-std::string CryptoCodec::getDecryptedKeyFromKms()
-{
-    ptree map = kcp->decryptEncryptedKey(*encryptionInfo);
-    std::string key;
-    try {
-        key = map.get < std::string > ("material");
-    } catch (...) {
-        THROW(HdfsIOException, "CryptoCodec : Can not get key from kms.");
-    }
-
-    int rem = key.length() % 4;
-    if (rem) {
-        rem = 4 - rem;
-        while (rem != 0) {
-            key = key + "=";
-            rem--;
-        }
-    }
-
-    std::replace(key.begin(), key.end(), '-', '+');
-    std::replace(key.begin(), key.end(), '_', '/');
-
-    LOG(INFO, "CryptoCodec : getDecryptedKeyFromKms material is :%s", key.c_str());
-
-    key = KmsClientProvider::base64Decode(key);
-    return key;
-
-
-}
-
-/**
- * Common encode/decode buffer method.
- * @param buffer the buffer to be encode/decode.
- * @param size the size of buffer.
- * @param enc true is for encode, false is for decode.
- * @return return the encode/decode buffer.
- */
-std::string CryptoCodec::endecInternal(const char * buffer, int64_t size, bool enc)
-{
-    std::string key = encryptionInfo->getKey();
-    std::string iv = encryptionInfo->getIv();
-    LOG(INFO,
-            "CryptoCodec : endecInternal info. key:%s, iv:%s, buffer:%s, size:%d, is_encode:%d.",
-            key.c_str(), iv.c_str(), buffer, size, enc);
-
-    /* Get decrypted key from KMS */
-    key = getDecryptedKeyFromKms();
-
-    /* Select cipher method based on the key length. */
-    if (key.length() == KEY_LENGTH_256) {
-        cipher = EVP_aes_256_ctr();
-    } else if (key.length() == KEY_LENGTH_128) {
-        cipher = EVP_aes_128_ctr();
-    } else {
-        THROW(InvalidParameter, "CryptoCodec : Invalid key length.");
-    }
-
-    /* Init cipher context with cipher method, encrypted key and IV from KMS. */
-    int encode = enc ? 1 : 0;
-    if (!EVP_CipherInit_ex(encryptCtx, cipher, NULL,
-            (const unsigned char *) key.c_str(),
-            (const unsigned char *) iv.c_str(), encode)) {
-        LOG(WARNING, "EVP_CipherInit_ex failed");
-    }
-    LOG(DEBUG3, "EVP_CipherInit_ex successfully");
-    EVP_CIPHER_CTX_set_padding(encryptCtx, 0);
-
-    /* Encode/decode buffer within cipher context. */
-    std::string result;
-    result.resize(size);
-    int offset = 0;
-    int remaining = size;
-    int len = 0;
-    /* If the encode/decode buffer size larger than crypto buffer size, encode/decode buffer one by one. */
-    while (remaining > bufSize) {
-        if (!EVP_CipherUpdate(encryptCtx, (unsigned char *) &result[offset],
-                &len, (const unsigned char *) buffer + offset, bufSize)) {
-            std::string err = ERR_lib_error_string(ERR_get_error());
-            THROW(HdfsIOException, "CryptoCodec : Cannot encrypt AES data %s",
-                    err.c_str());
-        }
-        offset += len;
-        remaining -= len;
-        LOG(DEBUG3,
-                "CryptoCodec : EVP_CipherUpdate successfully, result:%s, len:%d",
-                result.c_str(), len);
-    }
-    if (remaining) {
-        if (!EVP_CipherUpdate(encryptCtx, (unsigned char *) &result[offset],
-                &len, (const unsigned char *) buffer + offset, remaining)) {
-            std::string err = ERR_lib_error_string(ERR_get_error());
-            THROW(HdfsIOException, "CryptoCodec : Cannot encrypt AES data %s",
-                    err.c_str());
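
The removed code above always initializes the AES-CTR context with the IV exactly as stored in the file's FileEncryptionInfo and never accounts for the position in the file being written, which is consistent with the HAWQ-1506 report that appended data reads back incorrectly. The sketch below is not the patch's code; it is a generic illustration, assuming 16-byte AES blocks, of how a CTR counter block is normally advanced to the block containing a given stream offset before an append resumes (the helper name advanceCtrIv is invented here).

    // Illustrative sketch only (not from the HAWQ patch): advance a 16-byte
    // AES-CTR counter block so that encryption can resume at streamOffset.
    #include <cstdint>
    #include <string>

    static std::string advanceCtrIv(const std::string& iv, uint64_t streamOffset) {
        // CTR mode encrypts block N with counter = IV + N, so an append that
        // starts at streamOffset must use counter = IV + streamOffset / 16.
        uint64_t blocks = streamOffset / 16;
        std::string counter(iv);                        // expected to be 16 bytes
        for (int i = static_cast<int>(counter.size()) - 1; i >= 0 && blocks > 0; --i) {
            uint64_t sum = static_cast<uint8_t>(counter[i]) + (blocks & 0xFF);
            counter[i] = static_cast<char>(sum & 0xFF); // big-endian add with carry
            blocks = (blocks >> 8) + (sum >> 8);
        }
        return counter;
    }
    // Note: the intra-block remainder (streamOffset % 16) must additionally be
    // handled by discarding that many keystream bytes before the first appended
    // byte is encrypted.

With such a counter in hand, EVP_CipherInit_ex() can be called with the adjusted IV instead of the stored one when a file is reopened for append.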

[jira] [Updated] (HAWQ-1506) Support multi-append a file within encryption zone

2017-07-26 Thread Hongxu Ma (JIRA)

 [ https://issues.apache.org/jira/browse/HAWQ-1506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hongxu Ma updated HAWQ-1506:

Description: 
Currently, multi-append (*serializable*) can write incorrect file content.
Reproduction method:
# Open a file with O_APPEND flag in encryption zone directory.
# Call hdfsWrite() twice.
# Then read all file contents => only the first write content is correct.

So we need to fix it.

  was:
Currently, multi-append can cause write 
Reproduction method:
# Open a file with O_APPEND flag in encryption zone directory.
# hdfsWrite() it multi-times.
# Then read all file contents => only the first write content is correct.

So we need to fix it.


> Support multi-append a file within encryption zone
> --
>
> Key: HAWQ-1506
> URL: https://issues.apache.org/jira/browse/HAWQ-1506
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: 2.3.0.0-incubating
>
>
> Currently, multi-append (*serializable*) can write incorrect file content.
> Reproduction method:
> # Open a file with O_APPEND flag in encryption zone directory.
> # Call hdfsWrite() twice.
> # Then read all file contents => only the first write content is correct.
> So we need to fix it.
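
A minimal reproduction sketch of the steps above, written against the libhdfs3 C client API. This is not from the project's test suite: the NameNode endpoint, the path /ez/test.txt (assumed to already exist inside an HDFS encryption zone), and the hdfs/hdfs.h header location are placeholders/assumptions.

    // Hedged reproduction sketch for HAWQ-1506 (placeholders: NameNode
    // endpoint and file path; /ez is assumed to be an encryption zone and
    // /ez/test.txt is assumed to already exist there).
    #include <fcntl.h>      // O_WRONLY, O_RDONLY, O_APPEND
    #include <cstdio>
    #include <cstring>
    #include "hdfs/hdfs.h"  // libhdfs3 C client header (assumed install path)

    int main() {
        hdfsFS fs = hdfsConnect("localhost", 8020);
        if (!fs) return 1;

        // Step 1: open the file with O_APPEND inside the encryption zone.
        hdfsFile out = hdfsOpenFile(fs, "/ez/test.txt", O_WRONLY | O_APPEND, 0, 0, 0);
        if (!out) return 1;

        // Step 2: call hdfsWrite() twice.
        const char *first = "first-write|", *second = "second-write";
        hdfsWrite(fs, out, first, (tSize) strlen(first));
        hdfsWrite(fs, out, second, (tSize) strlen(second));
        hdfsCloseFile(fs, out);

        // Step 3: read the whole file back; per the report, only the bytes
        // from the first write decrypt correctly before the fix.
        hdfsFile in = hdfsOpenFile(fs, "/ez/test.txt", O_RDONLY, 0, 0, 0);
        char buf[256] = {0};
        tSize n = hdfsRead(fs, in, buf, (tSize)(sizeof(buf) - 1));
        printf("read %d bytes: %s\n", (int) n, buf);

        hdfsCloseFile(fs, in);
        hdfsDisconnect(fs);
        return 0;
    }

If the bytes from the second hdfsWrite() read back incorrectly while the first write's bytes are intact, the behaviour matches the report above.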


