[jira] [Commented] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-10-03 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944244#comment-16944244
 ] 

Surendra Singh Lilhore commented on HDFS-14754:
---

[~weichiu], I ran it while committing, without the fix, and it was failing. 

Maybe some other Jira fix is making it pass consistently now. I will check and 
tell you.

> Erasure Coding :  The number of Under-Replicated Blocks never reduced
> -
>
> Key: HDFS-14754
> URL: https://issues.apache.org/jira/browse/HDFS-14754
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Critical
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14754-addendum.001.patch, HDFS-14754.001.patch, 
> HDFS-14754.002.patch, HDFS-14754.003.patch, HDFS-14754.004.patch, 
> HDFS-14754.005.patch, HDFS-14754.006.patch, HDFS-14754.007.patch, 
> HDFS-14754.008.patch, HDFS-14754.branch-3.1.patch
>
>
> Using EC RS-3-2, 6 DN 
> We came across a scenario where, of the 5 blocks in an EC block group, the 
> same block was replicated thrice and two blocks went missing.
> The extra replicas were not being deleted, and the missing blocks could not be reconstructed.
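The reported state can be sketched with a toy index count over the block group. This is illustrative code only, not the NameNode's implementation; the class and method names are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy model of the reported state: with RS-3-2 a block group has 5 internal
// blocks (indices 0..4). Counting the indices the DataNodes report shows how
// "same block replicated thrice" and "two blocks missing" coexist -- the
// NameNode must delete the extras and reconstruct the missing indices.
public class EcBlockGroupSketch {

  // Count how many replicas were reported for each internal block index.
  static Map<Integer, Integer> indexCounts(int[] reportedIndices) {
    Map<Integer, Integer> counts = new TreeMap<>();
    for (int i : reportedIndices) {
      counts.merge(i, 1, Integer::sum);
    }
    return counts;
  }

  // Indices 0..totalIndices-1 that no DataNode reported at all.
  static List<Integer> missing(Map<Integer, Integer> counts, int totalIndices) {
    List<Integer> result = new ArrayList<>();
    for (int i = 0; i < totalIndices; i++) {
      if (!counts.containsKey(i)) {
        result.add(i);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    int[] reported = {0, 0, 0, 1, 2}; // index 0 reported thrice; 3 and 4 missing
    Map<Integer, Integer> counts = indexCounts(reported);
    System.out.println("index counts: " + counts);       // {0=3, 1=1, 2=1}
    System.out.println("missing: " + missing(counts, 5)); // [3, 4]
  }
}
```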



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=323201&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323201
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 04/Oct/19 05:49
Start Date: 04/Oct/19 05:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1589: HDDS-2244. Use 
new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-538247315
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 18 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 921 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1009 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 32 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | -1 | mvninstall | 31 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 26 | hadoop-ozone: The patch generated 6 new + 0 
unchanged - 0 fixed = 6 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 774 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 23 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 2431 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1589 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 74a8dbbb1c89 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 844b766 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 

[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=323200&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323200
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 04/Oct/19 05:45
Start Date: 04/Oct/19 05:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1589: HDDS-2244. Use 
new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-538246119
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | -1 | mvninstall | 29 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 18 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 54 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 942 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 17 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1030 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 16 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | -1 | mvninstall | 31 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 28 | hadoop-ozone: The patch generated 6 new + 0 
unchanged - 0 fixed = 6 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 801 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 25 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 2522 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1589 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fb4933b9414e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 844b766 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 

[jira] [Updated] (HDDS-2247) Delete FileEncryptionInfo from KeyInfo when a Key is deleted

2019-10-03 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2247:

Description: 
As part of HDDS-2174 we delete the GDPR encryption key on the delete-file 
operation.
However, if KMS is enabled, we skip the GDPR encryption key approach when 
writing a file in a GDPR-enforced bucket.

{code:java}
final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
if (feInfo != null) {
  KeyProvider.KeyVersion decrypted = getDEK(feInfo);
  final CryptoOutputStream cryptoOut =
  new CryptoOutputStream(keyOutputStream,
  OzoneKMSUtil.getCryptoCodec(conf, feInfo),
  decrypted.getMaterial(), feInfo.getIV());
  return new OzoneOutputStream(cryptoOut);
} else {
  try{
GDPRSymmetricKey gk;
Map<String, String> openKeyMetadata =
openKey.getKeyInfo().getMetadata();
if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
  gk = new GDPRSymmetricKey(
  openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
  openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
  );
  gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
  return new OzoneOutputStream(
  new CipherOutputStream(keyOutputStream, gk.getCipher()));
}
  }catch (Exception ex){
throw new IOException(ex);
  }
{code}

In such a scenario, when KMS is enabled and GDPR is enforced on a bucket, if a 
user deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo 
before moving it to deletedTable; otherwise we cannot guarantee the Right to Erasure.
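The proposed fix can be sketched with stand-in types. These are hypothetical names, not the real Ozone classes; the actual change would touch Ozone's key-manager code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the proposed behavior, using stand-in types rather than
// the real Ozone classes: on delete, strip the encryption info from the key
// record before it is moved to deletedTable, so no per-key encryption
// material survives that could defeat the GDPR Right to Erasure.
public class GdprDeleteSketch {

  // Stand-in for a key record; the String field stands in for FileEncryptionInfo.
  static class KeyInfo {
    final String name;
    String fileEncryptionInfo;

    KeyInfo(String name, String fileEncryptionInfo) {
      this.name = name;
      this.fileEncryptionInfo = fileEncryptionInfo;
    }
  }

  static final Map<String, KeyInfo> keyTable = new HashMap<>();
  static final Map<String, KeyInfo> deletedTable = new HashMap<>();

  static void deleteKey(String name) {
    KeyInfo info = keyTable.remove(name);
    if (info == null) {
      return; // nothing to delete
    }
    // The point of HDDS-2247: clear the encryption info BEFORE the move,
    // so deletedTable never holds it.
    info.fileEncryptionInfo = null;
    deletedTable.put(name, info);
  }

  public static void main(String[] args) {
    keyTable.put("vol/bucket/key1", new KeyInfo("vol/bucket/key1", "edek+iv"));
    deleteKey("vol/bucket/key1");
    System.out.println(deletedTable.get("vol/bucket/key1").fileEncryptionInfo); // null
  }
}
```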

  was:
As part of HDDS-2174 we are deleting GDPR Encryption Key on delete file 
operation.
However, if KMS is enabled, we are skipping GDPR Encryption Key approach when 
writing file in a GDPR enforced Bucket.

{code:java}
final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
if (feInfo != null) {
  KeyProvider.KeyVersion decrypted = getDEK(feInfo);
  final CryptoOutputStream cryptoOut =
  new CryptoOutputStream(keyOutputStream,
  OzoneKMSUtil.getCryptoCodec(conf, feInfo),
  decrypted.getMaterial(), feInfo.getIV());
  return new OzoneOutputStream(cryptoOut);
} else {
  try{
GDPRSymmetricKey gk;
Map<String, String> openKeyMetadata =
openKey.getKeyInfo().getMetadata();
if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
  gk = new GDPRSymmetricKey(
  openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
  openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
  );
  gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
  return new OzoneOutputStream(
  new CipherOutputStream(keyOutputStream, gk.getCipher()));
}
  }catch (Exception ex){
throw new IOException(ex);
  }
{code}

In such a scenario, when KMS is enabled and GDPR is enforced on a bucket, if a 
user deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo 
before moving it to deletedTable; otherwise we cannot guarantee the Right to Erasure.


> Delete FileEncryptionInfo from KeyInfo when a Key is deleted
> 
>
> Key: HDDS-2247
> URL: https://issues.apache.org/jira/browse/HDDS-2247
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> As part of HDDS-2174 we are deleting GDPR Encryption Key on delete file 
> operation.
> However, if KMS is enabled, we are skipping GDPR Encryption Key approach when 
> writing file in a GDPR enforced Bucket.
> {code:java}
> final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
> if (feInfo != null) {
>   KeyProvider.KeyVersion decrypted = getDEK(feInfo);
>   final CryptoOutputStream cryptoOut =
>   new CryptoOutputStream(keyOutputStream,
>   OzoneKMSUtil.getCryptoCodec(conf, feInfo),
>   decrypted.getMaterial(), feInfo.getIV());
>   return new OzoneOutputStream(cryptoOut);
> } else {
>   try{
> GDPRSymmetricKey gk;
> Map<String, String> openKeyMetadata =
> openKey.getKeyInfo().getMetadata();
> if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
>   gk = new GDPRSymmetricKey(
>   openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
>   openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
>   );
>   gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
>   return new OzoneOutputStream(
>   new CipherOutputStream(keyOutputStream, gk.getCipher()));
> }
>   }catch (Exception ex){
> throw new IOException(ex);
>   }

[jira] [Updated] (HDDS-2247) Delete FileEncryptionInfo from KeyInfo when a Key is deleted

2019-10-03 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2247:

Description: 
As part of HDDS-2174 we delete the GDPR encryption key on the delete-file 
operation.
However, if KMS is enabled, we skip the GDPR encryption key approach when 
writing a file in a GDPR-enforced bucket.

{code:java}
final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
if (feInfo != null) {
  KeyProvider.KeyVersion decrypted = getDEK(feInfo);
  final CryptoOutputStream cryptoOut =
  new CryptoOutputStream(keyOutputStream,
  OzoneKMSUtil.getCryptoCodec(conf, feInfo),
  decrypted.getMaterial(), feInfo.getIV());
  return new OzoneOutputStream(cryptoOut);
} else {
  try{
GDPRSymmetricKey gk;
Map<String, String> openKeyMetadata =
openKey.getKeyInfo().getMetadata();
if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
  gk = new GDPRSymmetricKey(
  openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
  openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
  );
  gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
  return new OzoneOutputStream(
  new CipherOutputStream(keyOutputStream, gk.getCipher()));
}
  }catch (Exception ex){
throw new IOException(ex);
  }
{code}

In such a scenario, when KMS is enabled and GDPR is enforced on a bucket, if a 
user deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo 
before moving it to deletedTable; otherwise we cannot guarantee the Right to Erasure.

  was:
As part of HDDS-2174 we are deleting Encryption Key on delete file operation.
However, if KMS is enabled, we are skipping GDPR Encryption Key approach when 
writing file in a GDPR enforced Bucket.

{code:java}
final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
if (feInfo != null) {
  KeyProvider.KeyVersion decrypted = getDEK(feInfo);
  final CryptoOutputStream cryptoOut =
  new CryptoOutputStream(keyOutputStream,
  OzoneKMSUtil.getCryptoCodec(conf, feInfo),
  decrypted.getMaterial(), feInfo.getIV());
  return new OzoneOutputStream(cryptoOut);
} else {
  try{
GDPRSymmetricKey gk;
Map<String, String> openKeyMetadata =
openKey.getKeyInfo().getMetadata();
if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
  gk = new GDPRSymmetricKey(
  openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
  openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
  );
  gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
  return new OzoneOutputStream(
  new CipherOutputStream(keyOutputStream, gk.getCipher()));
}
  }catch (Exception ex){
throw new IOException(ex);
  }
{code}

In such a scenario, when KMS is enabled and GDPR is enforced on a bucket, if a 
user deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo 
before moving it to deletedTable; otherwise we cannot guarantee the Right to Erasure.


> Delete FileEncryptionInfo from KeyInfo when a Key is deleted
> 
>
> Key: HDDS-2247
> URL: https://issues.apache.org/jira/browse/HDDS-2247
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> As part of HDDS-2174 we are deleting GDPR Encryption Key on delete file 
> operation.
> However, if KMS is enabled, we are skipping GDPR Encryption Key approach when 
> writing file in a GDPR enforced Bucket.
> {code:java}
> final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
> if (feInfo != null) {
>   KeyProvider.KeyVersion decrypted = getDEK(feInfo);
>   final CryptoOutputStream cryptoOut =
>   new CryptoOutputStream(keyOutputStream,
>   OzoneKMSUtil.getCryptoCodec(conf, feInfo),
>   decrypted.getMaterial(), feInfo.getIV());
>   return new OzoneOutputStream(cryptoOut);
> } else {
>   try{
> GDPRSymmetricKey gk;
> Map<String, String> openKeyMetadata =
> openKey.getKeyInfo().getMetadata();
> if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
>   gk = new GDPRSymmetricKey(
>   openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
>   openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
>   );
>   gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
>   return new OzoneOutputStream(
>   new CipherOutputStream(keyOutputStream, gk.getCipher()));
> }
>   }catch (Exception ex){
> throw new IOException(ex);
>   }
> 

[jira] [Updated] (HDDS-2247) Delete FileEncryptionInfo from KeyInfo when a Key is deleted

2019-10-03 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2247:

Description: 
As part of HDDS-2174 we delete the encryption key on the delete-file operation.
However, if KMS is enabled, we skip the GDPR encryption key approach when 
writing a file in a GDPR-enforced bucket.

{code:java}
final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
if (feInfo != null) {
  KeyProvider.KeyVersion decrypted = getDEK(feInfo);
  final CryptoOutputStream cryptoOut =
  new CryptoOutputStream(keyOutputStream,
  OzoneKMSUtil.getCryptoCodec(conf, feInfo),
  decrypted.getMaterial(), feInfo.getIV());
  return new OzoneOutputStream(cryptoOut);
} else {
  try{
GDPRSymmetricKey gk;
Map<String, String> openKeyMetadata =
openKey.getKeyInfo().getMetadata();
if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
  gk = new GDPRSymmetricKey(
  openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
  openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
  );
  gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
  return new OzoneOutputStream(
  new CipherOutputStream(keyOutputStream, gk.getCipher()));
}
  }catch (Exception ex){
throw new IOException(ex);
  }
{code}

In such a scenario, when KMS is enabled and GDPR is enforced on a bucket, if a 
user deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo 
before moving it to deletedTable; otherwise we cannot guarantee the Right to Erasure.

  was:
[~aengineer] - As part of HDDS-2174 we are deleting Encryption Key on delete 
file operation.
However, if KMS is enabled, we are skipping GDPR Encryption Key approach when 
writing file in a GDPR enforced Bucket.

{code:java}
final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
if (feInfo != null) {
  KeyProvider.KeyVersion decrypted = getDEK(feInfo);
  final CryptoOutputStream cryptoOut =
  new CryptoOutputStream(keyOutputStream,
  OzoneKMSUtil.getCryptoCodec(conf, feInfo),
  decrypted.getMaterial(), feInfo.getIV());
  return new OzoneOutputStream(cryptoOut);
} else {
  try{
GDPRSymmetricKey gk;
Map<String, String> openKeyMetadata =
openKey.getKeyInfo().getMetadata();
if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
  gk = new GDPRSymmetricKey(
  openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
  openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
  );
  gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
  return new OzoneOutputStream(
  new CipherOutputStream(keyOutputStream, gk.getCipher()));
}
  }catch (Exception ex){
throw new IOException(ex);
  }
{code}

In such a scenario, when KMS is enabled and GDPR is enforced on a bucket, if a 
user deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo 
before moving it to deletedTable; otherwise we cannot guarantee the Right to Erasure.


> Delete FileEncryptionInfo from KeyInfo when a Key is deleted
> 
>
> Key: HDDS-2247
> URL: https://issues.apache.org/jira/browse/HDDS-2247
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> As part of HDDS-2174 we are deleting Encryption Key on delete file operation.
> However, if KMS is enabled, we are skipping GDPR Encryption Key approach when 
> writing file in a GDPR enforced Bucket.
> {code:java}
> final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
> if (feInfo != null) {
>   KeyProvider.KeyVersion decrypted = getDEK(feInfo);
>   final CryptoOutputStream cryptoOut =
>   new CryptoOutputStream(keyOutputStream,
>   OzoneKMSUtil.getCryptoCodec(conf, feInfo),
>   decrypted.getMaterial(), feInfo.getIV());
>   return new OzoneOutputStream(cryptoOut);
> } else {
>   try{
> GDPRSymmetricKey gk;
> Map<String, String> openKeyMetadata =
> openKey.getKeyInfo().getMetadata();
> if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
>   gk = new GDPRSymmetricKey(
>   openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
>   openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
>   );
>   gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
>   return new OzoneOutputStream(
>   new CipherOutputStream(keyOutputStream, gk.getCipher()));
> }
>   }catch (Exception ex){
> throw new IOException(ex);
>   }
> 

[jira] [Commented] (HDDS-2247) Delete FileEncryptionInfo from KeyInfo when a Key is deleted

2019-10-03 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944241#comment-16944241
 ] 

Dinesh Chitlangia commented on HDDS-2247:
-

FYI [~aengineer]

> Delete FileEncryptionInfo from KeyInfo when a Key is deleted
> 
>
> Key: HDDS-2247
> URL: https://issues.apache.org/jira/browse/HDDS-2247
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> As part of HDDS-2174 we are deleting Encryption Key on delete file operation.
> However, if KMS is enabled, we are skipping GDPR Encryption Key approach when 
> writing file in a GDPR enforced Bucket.
> {code:java}
> final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
> if (feInfo != null) {
>   KeyProvider.KeyVersion decrypted = getDEK(feInfo);
>   final CryptoOutputStream cryptoOut =
>   new CryptoOutputStream(keyOutputStream,
>   OzoneKMSUtil.getCryptoCodec(conf, feInfo),
>   decrypted.getMaterial(), feInfo.getIV());
>   return new OzoneOutputStream(cryptoOut);
> } else {
>   try{
> GDPRSymmetricKey gk;
> Map<String, String> openKeyMetadata =
> openKey.getKeyInfo().getMetadata();
> if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
>   gk = new GDPRSymmetricKey(
>   openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
>   openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
>   );
>   gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
>   return new OzoneOutputStream(
>   new CipherOutputStream(keyOutputStream, gk.getCipher()));
> }
>   }catch (Exception ex){
> throw new IOException(ex);
>   }
> {code}
> In such a scenario, when KMS is enabled and GDPR is enforced on a bucket, if a 
> user deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo 
> before moving it to deletedTable; otherwise we cannot guarantee the Right to Erasure.






[jira] [Created] (HDDS-2247) Delete FileEncryptionInfo from KeyInfo when a Key is deleted

2019-10-03 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2247:
---

 Summary: Delete FileEncryptionInfo from KeyInfo when a Key is 
deleted
 Key: HDDS-2247
 URL: https://issues.apache.org/jira/browse/HDDS-2247
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


[~aengineer] - As part of HDDS-2174 we delete the encryption key on the 
delete-file operation.
However, if KMS is enabled, we skip the GDPR encryption key approach when 
writing a file in a GDPR-enforced bucket.

{code:java}
final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
if (feInfo != null) {
  KeyProvider.KeyVersion decrypted = getDEK(feInfo);
  final CryptoOutputStream cryptoOut =
  new CryptoOutputStream(keyOutputStream,
  OzoneKMSUtil.getCryptoCodec(conf, feInfo),
  decrypted.getMaterial(), feInfo.getIV());
  return new OzoneOutputStream(cryptoOut);
} else {
  try{
GDPRSymmetricKey gk;
Map<String, String> openKeyMetadata =
openKey.getKeyInfo().getMetadata();
if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
  gk = new GDPRSymmetricKey(
  openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
  openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
  );
  gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
  return new OzoneOutputStream(
  new CipherOutputStream(keyOutputStream, gk.getCipher()));
}
  }catch (Exception ex){
throw new IOException(ex);
  }
{code}

In such a scenario, when KMS is enabled and GDPR is enforced on a bucket, if a 
user deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo 
before moving it to deletedTable; otherwise we cannot guarantee the Right to Erasure.






[jira] [Assigned] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-03 Thread kevin su (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDDS-2245:
--

Assignee: kevin su

> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
>
> {{TestSecureOzoneCluster}} uses the default SCM ports; we should use dynamic 
> ports instead.
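One common way to pick a dynamic port in a test is the standard JDK trick below. This is a sketch of the general technique, not the actual TestSecureOzoneCluster change:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Bind a ServerSocket to port 0 so the OS assigns a free ephemeral port,
// record that port, then close the socket and hand the port to the service
// under test. There is a small race window between close() and reuse, which
// is usually acceptable in tests.
public class DynamicPortSketch {

  static int pickFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      socket.setReuseAddress(true);
      return socket.getLocalPort();
    }
  }

  public static void main(String[] args) throws IOException {
    int port = pickFreePort();
    System.out.println("SCM test port: " + port);
  }
}
```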






[jira] [Updated] (HDFS-14637) Namenode may not replicate blocks to meet the policy after enabling upgradeDomain

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14637:
---
Fix Version/s: 3.2.2
   3.1.4
   3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed to trunk and cherry-picked the commit to branch-3.2 and branch-3.1, with 
trivial conflicts in test code.

Thanks [~sodonnell] and [~ayushtkn]!

> Namenode may not replicate blocks to meet the policy after enabling 
> upgradeDomain
> -
>
> Key: HDFS-14637
> URL: https://issues.apache.org/jira/browse/HDFS-14637
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14637.001.patch, HDFS-14637.002.patch, 
> HDFS-14637.003.patch, HDFS-14637.004.patch, HDFS-14637.005.patch, 
> HDFS-14637.branch-3.1.patch, HDFS-14637.branch-3.2.patch
>
>
> After changing the network topology or placement policy on a cluster and 
> restarting the namenode, the namenode will scan all blocks on the cluster at 
> startup, and check if they meet the current placement policy. If they do not, 
> they are added to the replication queue and the namenode will arrange for 
> them to be replicated to ensure the placement policy is used.
> If you start with a cluster with no UpgradeDomain, and then enable 
> UpgradeDomain, then on restart the NN does notice all the blocks violate the 
> placement policy and it adds them to the replication queue. I believe there 
> are some issues in the logic that prevents the blocks from replicating 
> depending on the setup:
> With UD enabled, but no racks configured (and possibly on a 2-rack cluster), 
> the queued replication work never makes any progress: in 
> blockManager.validateReconstructionWork(), it checks whether the new 
> replica increases the number of racks, and if it does not, it skips the block and 
> tries again later.
> {code:java}
> DatanodeStorageInfo[] targets = rw.getTargets();
> if ((numReplicas.liveReplicas() >= requiredRedundancy) &&
> (!isPlacementPolicySatisfied(block)) ) {
>   if (!isInNewRack(rw.getSrcNodes(), targets[0].getDatanodeDescriptor())) {
> // No use continuing, unless a new rack in this case
> return false;
>   }
>   // mark that the reconstruction work is to replicate internal block to a
>   // new rack.
>   rw.setNotEnoughRack();
> }
> {code}
> Additionally, in blockManager.scheduleReconstruction() there is some logic 
> that sets the number of new replicas required to one, if the live replicas >= 
> requiredRedundancy:
> {code:java}
> int additionalReplRequired;
> if (numReplicas.liveReplicas() < requiredRedundancy) {
>   additionalReplRequired = requiredRedundancy - numReplicas.liveReplicas()
>   - pendingNum;
> } else {
>   additionalReplRequired = 1; // Needed on a new rack
> }{code}
> With UD, it is possible for 2 new replicas to be needed to meet the block 
> placement policy, if all existing replicas are on nodes with the same domain. 
> For traditional '2 rack redundancy', only 1 new replica would ever have been 
> needed in this scenario.
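The shortfall described above can be illustrated with a standalone sketch. This is not the BlockManager code; the upgrade-domain policy is approximated here as requiring min(replication, 3) distinct domains, and the class and method names are hypothetical:

```java
import java.util.Set;

// Standalone illustration, not HDFS code: how many extra replicas a block
// needs to satisfy an upgrade-domain policy of min(replication, 3) domains.
public class UpgradeDomainShortfall {

  static int additionalReplicasNeeded(int replication, Set<String> domains) {
    int requiredDomains = Math.min(replication, 3);
    return Math.max(0, requiredDomains - domains.size());
  }

  public static void main(String[] args) {
    // 3 live replicas, all on nodes in the same upgrade domain: two more
    // replicas (on two new domains) are needed, so the hard-coded
    // additionalReplRequired = 1 leaves the block under-placed after one
    // round of reconstruction work.
    System.out.println(additionalReplicasNeeded(3, Set.of("ud1"))); // prints "2"
    // Replicas already spread over three domains need nothing extra.
    System.out.println(additionalReplicasNeeded(3, Set.of("ud1", "ud2", "ud3"))); // prints "0"
  }
}
```

This is why, unlike traditional 2-rack redundancy, a single scheduled replica can be insufficient under upgrade domains.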



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14637) Namenode may not replicate blocks to meet the policy after enabling upgradeDomain

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14637:
---
Attachment: HDFS-14637.branch-3.2.patch
HDFS-14637.branch-3.1.patch

> Namenode may not replicate blocks to meet the policy after enabling 
> upgradeDomain
> -
>
> Key: HDFS-14637
> URL: https://issues.apache.org/jira/browse/HDFS-14637
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14637.001.patch, HDFS-14637.002.patch, 
> HDFS-14637.003.patch, HDFS-14637.004.patch, HDFS-14637.005.patch, 
> HDFS-14637.branch-3.1.patch, HDFS-14637.branch-3.2.patch
>
>
> After changing the network topology or placement policy on a cluster and 
> restarting the namenode, the namenode will scan all blocks on the cluster at 
> startup, and check if they meet the current placement policy. If they do not, 
> they are added to the replication queue and the namenode will arrange for 
> them to be replicated to ensure the placement policy is used.
> If you start with a cluster with no UpgradeDomain, and then enable 
> UpgradeDomain, then on restart the NN does notice all the blocks violate the 
> placement policy and it adds them to the replication queue. I believe there 
> are some issues in the logic that prevent the blocks from replicating, 
> depending on the setup:
> With UD enabled, but no racks configured, and possibly on a 2-rack cluster, 
> the queued replication work never makes any progress, as in 
> blockManager.validateReconstructionWork(), it checks to see if the new 
> replica increases the number of racks, and if it does not, it skips it and 
> tries again later.
> {code:java}
> DatanodeStorageInfo[] targets = rw.getTargets();
> if ((numReplicas.liveReplicas() >= requiredRedundancy) &&
> (!isPlacementPolicySatisfied(block)) ) {
>   if (!isInNewRack(rw.getSrcNodes(), targets[0].getDatanodeDescriptor())) {
> // No use continuing, unless a new rack in this case
> return false;
>   }
>   // mark that the reconstruction work is to replicate internal block to a
>   // new rack.
>   rw.setNotEnoughRack();
> }
> {code}
> Additionally, in blockManager.scheduleReconstruction() there is some logic 
> that sets the number of new replicas required to one, if the live replicas >= 
> requiredRedundancy:
> {code:java}
> int additionalReplRequired;
> if (numReplicas.liveReplicas() < requiredRedundancy) {
>   additionalReplRequired = requiredRedundancy - numReplicas.liveReplicas()
>   - pendingNum;
> } else {
>   additionalReplRequired = 1; // Needed on a new rack
> }{code}
> With UD, it is possible for 2 new replicas to be needed to meet the block 
> placement policy, if all existing replicas are on nodes with the same domain. 
> For traditional '2 rack redundancy', only 1 new replica would ever have been 
> needed in this scenario.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14637) Namenode may not replicate blocks to meet the policy after enabling upgradeDomain

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944232#comment-16944232
 ] 

Hudson commented on HDFS-14637:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17464 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17464/])
HDFS-14637. Namenode may not replicate blocks to meet the policy after 
(weichiu: rev c99a12167ff9566012ef32104a3964887d62c899)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusDefault.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusDefault.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithNodeGroup.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatusWithUpgradeDomain.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementStatusWithUpgradeDomain.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementStatus.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/BlockPlacementPolicyAlwaysSatisfied.java


> Namenode may not replicate blocks to meet the policy after enabling 
> upgradeDomain
> -
>
> Key: HDFS-14637
> URL: https://issues.apache.org/jira/browse/HDFS-14637
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14637.001.patch, HDFS-14637.002.patch, 
> HDFS-14637.003.patch, HDFS-14637.004.patch, HDFS-14637.005.patch
>
>
> After changing the network topology or placement policy on a cluster and 
> restarting the namenode, the namenode will scan all blocks on the cluster at 
> startup, and check if they meet the current placement policy. If they do not, 
> they are added to the replication queue and the namenode will arrange for 
> them to be replicated to ensure the placement policy is used.
> If you start with a cluster with no UpgradeDomain, and then enable 
> UpgradeDomain, then on restart the NN does notice all the blocks violate the 
> placement policy and it adds them to the replication queue. I believe there 
> are some issues in the logic that prevent the blocks from replicating, 
> depending on the setup:
> With UD enabled, but no racks configured, and possibly on a 2-rack cluster, 
> the queued replication work never makes any progress, as in 
> blockManager.validateReconstructionWork(), it checks to see if the new 
> replica increases the number of racks, and if it does not, it skips it and 
> tries again later.
> {code:java}
> DatanodeStorageInfo[] targets = rw.getTargets();
> if ((numReplicas.liveReplicas() >= requiredRedundancy) &&
> (!isPlacementPolicySatisfied(block)) ) {
>   if (!isInNewRack(rw.getSrcNodes(), targets[0].getDatanodeDescriptor())) {
> // No use continuing, unless a new rack in this case
> return false;
>   }
>   // mark that the reconstruction work is to replicate internal block to a
>   // new rack.
>   rw.setNotEnoughRack();
> }
> {code}
> Additionally, in blockManager.scheduleReconstruction() there is some logic 
> that sets the number of new replicas required to one, if the live replicas >= 
> requiredRedundancy:
> {code:java}
> int additionalReplRequired;
> if (numReplicas.liveReplicas() < requiredRedundancy) {
>   additionalReplRequired = requiredRedundancy - numReplicas.liveReplicas()
>   - pendingNum;
> } else {
>   additionalReplRequired = 1; // Needed on a new rack
> }{code}
> With UD, it is possible for 2 new replicas to be needed to meet the block 
> placement policy, if all existing replicas are on nodes with the same domain. 
> For traditional '2 rack redundancy', only 1 new replica would ever have been 
> needed in this scenario.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: 

[jira] [Work logged] (HDDS-2140) Add robot test for GDPR feature

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2140?focusedWorklogId=323190=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323190
 ]

ASF GitHub Bot logged work on HDDS-2140:


Author: ASF GitHub Bot
Created on: 04/Oct/19 05:27
Start Date: 04/Oct/19 05:27
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1542: 
HDDS-2140. Add robot test for GDPR feature
URL: https://github.com/apache/hadoop/pull/1542#discussion_r331345076
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/gdpr/gdpr.robot
 ##
 @@ -0,0 +1,68 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation   Smoketest Ozone GDPR Feature
+Library OperatingSystem
+Library BuiltIn
+Resource../commonlib.robot
+
+*** Variables ***
+${volume}   testvol
+
+*** Test Cases ***
+Test GDPR(disabled) without explicit options
+Execute ozone sh volume create /${volume} 
--quota 100TB
 
 Review comment:
   Introduced random volume name in recent commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323190)
Time Spent: 1h 20m  (was: 1h 10m)

> Add robot test for GDPR feature
> ---
>
> Key: HDDS-2140
> URL: https://issues.apache.org/jira/browse/HDDS-2140
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Add robot test for GDPR feature so it can be run during smoke tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-2244:


Assignee: Bharat Viswanadham  (was: Nanda kumar)

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2246) Reduce runtime of TestBlockOutputStreamWithFailures

2019-10-03 Thread Nanda kumar (Jira)
Nanda kumar created HDDS-2246:
-

 Summary: Reduce runtime of TestBlockOutputStreamWithFailures
 Key: HDDS-2246
 URL: https://issues.apache.org/jira/browse/HDDS-2246
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar


{{TestBlockOutputStreamWithFailures}} is taking 10 minutes to run; we should 
reduce the runtime.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2222) Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-?focusedWorklogId=323188=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323188
 ]

ASF GitHub Bot logged work on HDDS-:


Author: ASF GitHub Bot
Created on: 04/Oct/19 05:21
Start Date: 04/Oct/19 05:21
Worklog Time Spent: 10m 
  Work Description: szetszwo commented on issue #1578: HDDS- Add a 
method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
URL: https://github.com/apache/hadoop/pull/1578#issuecomment-538239632
 
 
   The checkstyle warnings in ChecksumByteBuffer are absurd, so we will 
ignore them.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323188)
Time Spent: 40m  (was: 0.5h)

> Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
> -
>
> Key: HDDS-
> URL: https://issues.apache.org/jira/browse/HDDS-
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Attachments: o_20191001.patch, o_20191002.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> PureJavaCrc32 and PureJavaCrc32C implement java.util.zip.Checksum which 
> provides only methods to update byte and byte[].  We propose to add a method 
> to update ByteBuffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-03 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2245:
--
Labels: newbie  (was: )

> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Priority: Major
>  Labels: newbie
>
> {{TestSecureOzoneCluster}} is using the default SCM ports; we should use 
> dynamic ports.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-03 Thread Nanda kumar (Jira)
Nanda kumar created HDDS-2245:
-

 Summary: Use dynamic ports for SCM in TestSecureOzoneCluster
 Key: HDDS-2245
 URL: https://issues.apache.org/jira/browse/HDDS-2245
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar


{{TestSecureOzoneCluster}} is using the default SCM ports; we should use 
dynamic ports.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2222) Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C

2019-10-03 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944220#comment-16944220
 ] 

Tsz-wo Sze commented on HDDS-:
--

> o_20191001.patch: adds PureJavaCrc32ByteBuffer and 
> PureJavaCrc32CByteBuffer.

I should have mentioned that the crc code and tables are copied from 
org.apache.hadoop.util.PureJavaCrc32.
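As a hedged illustration of the proposal (the class and helper names here are hypothetical, not the actual patch): a ByteBuffer update can drain the buffer into any java.util.zip.Checksum, using the backing array for heap buffers and a byte-at-a-time loop for direct buffers. (Java 9 added Checksum.update(ByteBuffer) as a default method; a hand-rolled variant is presumably needed for a Java 8 baseline.)

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

// Hypothetical helper sketching the proposed ByteBuffer update method.
public class ChecksumByteBufferExample {

  // Feed the remaining bytes of buf into sum, consuming the buffer.
  static void update(Checksum sum, ByteBuffer buf) {
    if (buf.hasArray()) {
      // Heap buffers: hand the backing array to update(byte[], off, len).
      sum.update(buf.array(), buf.arrayOffset() + buf.position(),
          buf.remaining());
      buf.position(buf.limit());
    } else {
      // Direct buffers: fall back to a byte-at-a-time loop.
      while (buf.hasRemaining()) {
        sum.update(buf.get());
      }
    }
  }

  public static void main(String[] args) {
    byte[] data = "hello world".getBytes();
    CRC32 viaArray = new CRC32();
    viaArray.update(data, 0, data.length);
    CRC32 viaBuffer = new CRC32();
    update(viaBuffer, ByteBuffer.wrap(data));
    // Both paths must produce the same CRC.
    System.out.println(viaArray.getValue() == viaBuffer.getValue()); // prints "true"
  }
}
```

An optimized implementation would additionally process direct buffers in word-sized chunks, as the table-driven PureJavaCrc32 loop does for arrays.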

> Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
> -
>
> Key: HDDS-
> URL: https://issues.apache.org/jira/browse/HDDS-
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Attachments: o_20191001.patch, o_20191002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> PureJavaCrc32 and PureJavaCrc32C implement java.util.zip.Checksum which 
> provides only methods to update byte and byte[].  We propose to add a method 
> to update ByteBuffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2222) Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-?focusedWorklogId=323186=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323186
 ]

ASF GitHub Bot logged work on HDDS-:


Author: ASF GitHub Bot
Created on: 04/Oct/19 05:07
Start Date: 04/Oct/19 05:07
Worklog Time Spent: 10m 
  Work Description: jnp commented on issue #1578: HDDS- Add a method to 
update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
URL: https://github.com/apache/hadoop/pull/1578#issuecomment-538236213
 
 
   +1 for the patch
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323186)
Time Spent: 0.5h  (was: 20m)

> Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
> -
>
> Key: HDDS-
> URL: https://issues.apache.org/jira/browse/HDDS-
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Attachments: o_20191001.patch, o_20191002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> PureJavaCrc32 and PureJavaCrc32C implement java.util.zip.Checksum which 
> provides only methods to update byte and byte[].  We propose to add a method 
> to update ByteBuffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=323184=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323184
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 04/Oct/19 05:00
Start Date: 04/Oct/19 05:00
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1589: 
HDDS-2244. Use new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589
 
 
   Use new ReadWriteLock added in HDDS-2223.
   
   Existing tests should cover this.
   
   Ran a few Integration tests.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323184)
Remaining Estimate: 0h
Time Spent: 10m

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2244:
-
Description: Use new ReadWriteLock added in HDDS-2223.  (was: Currently 
{{LockManager}} is using exclusive lock, instead we should support 
{{ReadWrite}} lock.)

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>
> Use new ReadWriteLock added in HDDS-2223.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-03 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2244:


 Summary: Use new ReadWrite lock in OzoneManager
 Key: HDDS-2244
 URL: https://issues.apache.org/jira/browse/HDDS-2244
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Nanda kumar
 Fix For: 0.5.0


Currently {{LockManager}} is using an exclusive lock; instead, we should 
support a {{ReadWrite}} lock.
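A minimal sketch of the pattern this enables, assuming the JDK's ReentrantReadWriteLock underneath (the actual LockManager API from HDDS-2223 may differ, and the guarded resource here is hypothetical): read-heavy operations can share the lock, while mutations still take it exclusively.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical stand-in for a resource guarded by the OM lock manager.
public class VolumeLockExample {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private long quotaInBytes;

  // Readers share the lock, so concurrent lookups no longer serialize.
  public long getQuota() {
    lock.readLock().lock();
    try {
      return quotaInBytes;
    } finally {
      lock.readLock().unlock();
    }
  }

  // Writers are exclusive against both readers and other writers.
  public void setQuota(long quota) {
    lock.writeLock().lock();
    try {
      quotaInBytes = quota;
    } finally {
      lock.writeLock().unlock();
    }
  }

  public static void main(String[] args) {
    VolumeLockExample vol = new VolumeLockExample();
    vol.setQuota(100L * 1024 * 1024);
    System.out.println(vol.getQuota()); // prints "104857600"
  }
}
```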



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14889) Ability to check if a block has a replica on provided storage

2019-10-03 Thread Virajith Jalaparti (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944211#comment-16944211
 ] 

Virajith Jalaparti commented on HDFS-14889:
---

Thanks for working on this [~ashvin] and the review [~elgoiri]!

> Ability to check if a block has a replica on provided storage
> -
>
> Key: HDFS-14889
> URL: https://issues.apache.org/jira/browse/HDFS-14889
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ashvin Agrawal
>Assignee: Ashvin Agrawal
>Priority: Major
>
> Provided storage (HDFS-9806) allows data on external storage systems to 
> seamlessly appear as files on HDFS. However, in the implementation today, 
> there is no easy way to distinguish a {{Block}} belonging to an external 
> provided storage volume from a block belonging to the local cluster. This 
> task addresses that. 
> An {{isProvided}} check will be useful in hybrid scenarios where the local 
> cluster hosts both kinds of blocks. E.g., the policy for managing 
> replica/cached blocks will differ from that for regular blocks. As of 
> this task, {{isProvided}} is not invoked anywhere.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14889) Ability to check if a block has a replica on provided storage

2019-10-03 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti resolved HDFS-14889.
---
Resolution: Fixed

> Ability to check if a block has a replica on provided storage
> -
>
> Key: HDFS-14889
> URL: https://issues.apache.org/jira/browse/HDFS-14889
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ashvin Agrawal
>Assignee: Ashvin Agrawal
>Priority: Major
>
> Provided storage (HDFS-9806) allows data on external storage systems to 
> seamlessly appear as files on HDFS. However, in the implementation today, 
> there is no easy way to distinguish a {{Block}} belonging to an external 
> provided storage volume from a block belonging to the local cluster. This 
> task addresses that. 
> An {{isProvided}} check will be useful in hybrid scenarios where the local 
> cluster hosts both kinds of blocks. E.g., the policy for managing 
> replica/cached blocks will differ from that for regular blocks. As of 
> this task, {{isProvided}} is not invoked anywhere.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14889) Ability to check if a block has a replica on provided storage

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944209#comment-16944209
 ] 

Hudson commented on HDFS-14889:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17463 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17463/])
HDFS-14889. Ability to check if a block has a replica on provided (virajith: 
rev 844b766da535894b792892b38de6bc2500eca57f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java


> Ability to check if a block has a replica on provided storage
> -
>
> Key: HDFS-14889
> URL: https://issues.apache.org/jira/browse/HDFS-14889
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ashvin Agrawal
>Assignee: Ashvin Agrawal
>Priority: Major
>
> Provided storage (HDFS-9806) allows data on external storage systems to 
> seamlessly appear as files on HDFS. However, in the implementation today, 
> there is no easy way to distinguish a {{Block}} belonging to an external 
> provided storage volume from a block belonging to the local cluster. This 
> task addresses that. 
> An {{isProvided}} check will be useful in hybrid scenarios where the local 
> cluster hosts both kinds of blocks. E.g., the policy for managing 
> replica/cached blocks will differ from that for regular blocks. As of 
> this task, {{isProvided}} is not invoked anywhere.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6524) Choosing datanode retries times considering with block replica number

2019-10-03 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944208#comment-16944208
 ] 

Lisheng Sun commented on HDFS-6524:
---

[~elgoiri]

In the current code, TestDFSClientRetries#testFailuresArePerOperation and 
TestDFSClientRetries#testDFSClientRetriesOnBusyBlocks both have a replication 
factor of 1.

After the updated patch, TestDFSClientRetries#testFailuresArePerOperation has 
a replication factor of 1 and 
TestDFSClientRetries#testDFSClientRetriesOnBusyBlocks has a replication 
factor of 3.

Both of these replication factors are tested.

Please correct me if I am wrong. Thanks a lot [~elgoiri].

> Choosing datanode  retries times considering with block replica number
> --
>
> Key: HDFS-6524
> URL: https://issues.apache.org/jira/browse/HDFS-6524
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Liang Xie
>Assignee: Lisheng Sun
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6524.001.patch, HDFS-6524.002.patch, 
> HDFS-6524.003.patch, HDFS-6524.004.patch, HDFS-6524.005(2).patch, 
> HDFS-6524.005.patch, HDFS-6524.006.patch, HDFS-6524.txt
>
>
> Currently chooseDataNode() retries based on the setting 
> dfsClientConf.maxBlockAcquireFailures, which by default is 3 
> (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better to 
> have another option: the block replication factor, e.g. for a cluster with 
> only two block replicas, or one using a Reed-Solomon encoding solution with 
> a single replica. This helps to reduce long-tail latency.
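A hedged sketch of the idea (the helper and its policy are hypothetical, not the DFSClient implementation): cap the retry budget by the replication factor, so a block with few replicas fails fast instead of exhausting the default three acquire failures.

```java
// Hypothetical helper sketching the proposal; not the DFSClient API.
public class RetryBudgetExample {

  // Default mirrors DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3.
  static final int MAX_BLOCK_ACQUIRE_FAILURES = 3;

  // Never retry more distinct datanodes than the block has replicas.
  static int retryBudget(int replicationFactor) {
    return Math.min(MAX_BLOCK_ACQUIRE_FAILURES, replicationFactor);
  }

  public static void main(String[] args) {
    System.out.println(retryBudget(1)); // prints "1": single-replica block fails fast
    System.out.println(retryBudget(3)); // prints "3": default behavior unchanged
  }
}
```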



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2217?focusedWorklogId=323183=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323183
 ]

ASF GitHub Bot logged work on HDDS-2217:


Author: ASF GitHub Bot
Created on: 04/Oct/19 04:40
Start Date: 04/Oct/19 04:40
Worklog Time Spent: 10m 
  Work Description: christeoh commented on issue #1582: HDDS-2217. Removed 
redundant LOG4J lines from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-538229895
 
 
   ci/acceptance appears to have no failing tests.
   ci/integration appears to be failing on seemingly unrelated ratis issues?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323183)
Time Spent: 2h 10m  (was: 2h)

> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Log4j configuration lines are added to the docker-config under 
> hadoop-ozone/dist/src/main/compose/...
> Mainly to make it easier to reconfigure the log level of any components.
> As we already have an "ozone insight" tool which can help us to modify the log 
> level at runtime, we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries as we already have a 
> default log4j.propertes / audit log4j2 config.
> After the removal, the clusters should be tested: the Ozone CLI should not print 
> any confusing log messages (such as NativeLib is missing or anything else). 
> AFAIK they are already turned off in the etc/hadoop/etc log4j.properties.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2140) Add robot test for GDPR feature

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2140?focusedWorklogId=323182=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323182
 ]

ASF GitHub Bot logged work on HDDS-2140:


Author: ASF GitHub Bot
Created on: 04/Oct/19 04:27
Start Date: 04/Oct/19 04:27
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1542: HDDS-2140. 
Add robot test for GDPR feature
URL: https://github.com/apache/hadoop/pull/1542#issuecomment-538226560
 
 
   > Unrelated to this patch (as this patch tests the CLI arguments) but I am 
wondering how the core GDPR feature can be tested. I mean how can we be sure 
that the data is _really_ unreadable (grep the chunk files for specific 
strings??). To be honest, I have no idea, but am putting this interesting question 
here ;-)
   
   Recap: GDPR talk in Vegas ;)
   - When putting a key in a GDPR enforced bucket, Ozone will create a 
symmetric key and Client will use that to encrypt and write to key.
   - This encryption key is stored in KeyInfo Metadata
   - When reading the key, the encryption key is fetched from KeyInfo Metadata 
and used to decrypt the key.
   
   After our Vegas conference, we modified the delete path (HDDS-2174):
   - When user asks Ozone to delete a Key, we first delete the encryption key 
details from KeyInfo Metadata, then we move the KeyInfo to DeletedTable in OM.
   - Since the encryption key is lost, there is no way you can read that 
data (except if you restore a backup/snapshot of your entire system from before 
deletion, which will also be addressed in version 2)
   - HDDS-2174 included a test to confirm the key metadata in DeletedTable does 
not have the GDPR Encryption Key details. Thereby, even if you get your hands 
on chunks, you will still read encrypted junk :)
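The encrypt-on-write / lose-the-key-on-delete flow recapped above can be sketched as follows. This is a hedged illustration only: the two maps stand in for Ozone's KeyInfo metadata table and chunk storage, and the class and field names are hypothetical, not Ozone's actual API.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.HashMap;
import java.util.Map;

// Sketch of the GDPR flow: a per-key symmetric secret is stored in key
// metadata; deleting that metadata entry makes the stored chunk bytes
// permanently unreadable, even though the ciphertext itself remains.
public class GdprKeySketch {
    static Map<String, SecretKey> keyInfoMetadata = new HashMap<>();
    static Map<String, byte[]> chunks = new HashMap<>();

    public static void main(String[] args) throws Exception {
        // Write path: create a symmetric key, record it in metadata,
        // encrypt the value before it reaches chunk storage.
        SecretKey secret = KeyGenerator.getInstance("AES").generateKey();
        keyInfoMetadata.put("bucket/key1", secret);
        Cipher enc = Cipher.getInstance("AES");
        enc.init(Cipher.ENCRYPT_MODE, secret);
        chunks.put("bucket/key1", enc.doFinal("user data".getBytes()));

        // Read path: fetch the secret from metadata and decrypt the chunk.
        Cipher dec = Cipher.getInstance("AES");
        dec.init(Cipher.DECRYPT_MODE, keyInfoMetadata.get("bucket/key1"));
        System.out.println(new String(dec.doFinal(chunks.get("bucket/key1"))));

        // Delete path (HDDS-2174): drop the secret first. The chunk bytes
        // that may linger are now undecryptable junk.
        keyInfoMetadata.remove("bucket/key1");
    }
}
```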
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323182)
Time Spent: 1h 10m  (was: 1h)

> Add robot test for GDPR feature
> ---
>
> Key: HDDS-2140
> URL: https://issues.apache.org/jira/browse/HDDS-2140
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Add robot test for GDPR feature so it can be run during smoke tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2230) Invalid entries in ozonesecure-mr config

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2230?focusedWorklogId=323180=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323180
 ]

ASF GitHub Bot logged work on HDDS-2230:


Author: ASF GitHub Bot
Created on: 04/Oct/19 04:18
Start Date: 04/Oct/19 04:18
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1585: HDDS-2230. 
Invalid entries in ozonesecure-mr config
URL: https://github.com/apache/hadoop/pull/1585#discussion_r331336022
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
 ##
 @@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-version: "3"
+version: "3.5"
 
 Review comment:
   Docker Compose file [version 
3.5](https://docs.docker.com/compose/compose-file/compose-versioning/#version-35)
 is the first to allow `name` for networks.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323180)
Time Spent: 1h 10m  (was: 1h)

> Invalid entries in ozonesecure-mr config
> 
>
> Key: HDDS-2230
> URL: https://issues.apache.org/jira/browse/HDDS-2230
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDDS-2230.001.patch, HDDS-2230.002.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Some of the entries in {{ozonesecure-mr/docker-config}} are in invalid 
> format, thus they end up missing from the generated config files.
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozonesecure-mr
> $ ./test.sh # configs are generated during container startup
> $ cd ../..
> $ grep -c 'ozone.administrators' compose/ozonesecure-mr/docker-config
> 1
> $ grep -c 'ozone.administrators' etc/hadoop/ozone-site.xml
> 0
> $ grep -c 'yarn.timeline-service' compose/ozonesecure-mr/docker-config
> 5
> $ grep -c 'yarn.timeline-service' etc/hadoop/yarn-site.xml
> 2
> $ grep -c 'container-executor' compose/ozonesecure-mr/docker-config
> 3
> $ grep -c 'container-executor' etc/hadoop/yarn-site.xml
> 0
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2230) Invalid entries in ozonesecure-mr config

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2230?focusedWorklogId=323179=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323179
 ]

ASF GitHub Bot logged work on HDDS-2230:


Author: ASF GitHub Bot
Created on: 04/Oct/19 04:15
Start Date: 04/Oct/19 04:15
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1585: HDDS-2230. 
Invalid entries in ozonesecure-mr config
URL: https://github.com/apache/hadoop/pull/1585#discussion_r331335642
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
 ##
 @@ -23,17 +23,23 @@ services:
   args:
 buildno: 1
 hostname: kdc
+networks:
+  - ozone
 
 Review comment:
    Default network does not work, since its name is `ozonesecure-mr_default`, 
which triggers `URISyntaxException` due to `_`.  Adding an explicit name avoids 
the `_default` suffix.
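    A minimal compose excerpt showing the idea (assuming the network is named `ozone`; the exact name in the patch may differ):

```yaml
# docker-compose.yaml (file format 3.5+ is required for the network "name" key).
# Without the explicit name, Compose generates "ozonesecure-mr_default",
# whose underscore breaks hostname/URI parsing.
version: "3.5"
services:
  kdc:
    networks:
      - ozone
networks:
  ozone:
    name: ozone
```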
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323179)
Time Spent: 1h  (was: 50m)

> Invalid entries in ozonesecure-mr config
> 
>
> Key: HDDS-2230
> URL: https://issues.apache.org/jira/browse/HDDS-2230
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDDS-2230.001.patch, HDDS-2230.002.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Some of the entries in {{ozonesecure-mr/docker-config}} are in invalid 
> format, thus they end up missing from the generated config files.
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozonesecure-mr
> $ ./test.sh # configs are generated during container startup
> $ cd ../..
> $ grep -c 'ozone.administrators' compose/ozonesecure-mr/docker-config
> 1
> $ grep -c 'ozone.administrators' etc/hadoop/ozone-site.xml
> 0
> $ grep -c 'yarn.timeline-service' compose/ozonesecure-mr/docker-config
> 5
> $ grep -c 'yarn.timeline-service' etc/hadoop/yarn-site.xml
> 2
> $ grep -c 'container-executor' compose/ozonesecure-mr/docker-config
> 3
> $ grep -c 'container-executor' etc/hadoop/yarn-site.xml
> 0
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2225) SCM fails to start in most unsecure environments due to leftover secure config

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2225?focusedWorklogId=323178=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323178
 ]

ASF GitHub Bot logged work on HDDS-2225:


Author: ASF GitHub Bot
Created on: 04/Oct/19 04:11
Start Date: 04/Oct/19 04:11
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1568: HDDS-2225. SCM 
fails to start in most unsecure environments due to leftover secure config
URL: https://github.com/apache/hadoop/pull/1568#issuecomment-538222650
 
 
   @anuengineer Thanks for taking a look at this.
   
   > So now we have removed the mount and gen config?
   
   The new solution only removes the offending container (spark), which is not 
required by the test at all.  Volume mounts and config generation for the other 
containers are unchanged.
   
   > I am presuming that +1s were given for the earlier solution, but with 
force push I am not able to see the older changes.
   
    Both the earlier solution (ff3671022a267d765d7d631cb5b6e57d46ced12d) and the new one 
(5caa23a390197d4b2d4dbb738ac850d02378edc0) are visible in the list of commits.  
The second commit just reverts the first attempt.  The force push was needed 
only to rebase on current trunk.  I agree, it makes understanding the 
conversation harder.  I'll try a plain merge next time.
   
   Earlier +1s were for the first solution, while @elek's [latest 
comment](https://github.com/apache/hadoop/pull/1568#issuecomment-537440726) is 
for the second one.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323178)
Time Spent: 1h 50m  (was: 1h 40m)

> SCM fails to start in most unsecure environments due to leftover secure config
> --
>
> Key: HDDS-2225
> URL: https://issues.apache.org/jira/browse/HDDS-2225
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Intermittent failure of {{ozone-recon}} and some other acceptance tests where 
> SCM container is not available is caused by leftover secure config in 
> {{core-site.xml}}.
> Initially the config file is 
> [empty|https://raw.githubusercontent.com/apache/hadoop/trunk/hadoop-hdds/common/src/main/conf/core-site.xml].
>   Various test environments populate it with different settings.  The problem 
> happens when a test does not specify any config for {{core-site.xml}}, in 
> which case the previous test's config file is retained.
> {code}
> scm_1   | 2019-10-01 19:42:05 WARN  WebAppContext:531 - Failed startup of 
> context 
> o.e.j.w.WebAppContext@1cc680e{/,file:///tmp/jetty-0.0.0.0-9876-scm-_-any-1272594486261557815.dir/webapp/,UNAVAILABLE}{/scm}
> scm_1   | javax.servlet.ServletException: javax.servlet.ServletException: 
> Keytab does not exist: /etc/security/keytabs/HTTP.keytab
> scm_1   | at 
> org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:188)
> ...
> scm_1   | at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:791)
> ...
> scm_1   | Unable to initialize WebAppContext
> scm_1   | 2019-10-01 19:42:05 INFO  StorageContainerManagerStarter:51 - 
> SHUTDOWN_MSG:
> scm_1   | /
> scm_1   | SHUTDOWN_MSG: Shutting down StorageContainerManager at 
> 8724df7131bb/192.168.128.6
> scm_1   | /
> {code}
> The problem is intermittent due to ordering of test cases being different in 
> different runs.  If a secure test is run earlier, more tests are affected.  
> If secure tests are run last, the issue does not happen.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2234) rat.sh fails due to ozone-recon-web/build files

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2234?focusedWorklogId=323176=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323176
 ]

ASF GitHub Bot logged work on HDDS-2234:


Author: ASF GitHub Bot
Created on: 04/Oct/19 03:54
Start Date: 04/Oct/19 03:54
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1580: HDDS-2234. rat.sh 
fails due to ozone-recon-web/build files
URL: https://github.com/apache/hadoop/pull/1580#issuecomment-538219116
 
 
   Thanks @anuengineer for reporting this issue and reviewing/committing the 
fix.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323176)
Time Spent: 1h  (was: 50m)

> rat.sh fails due to ozone-recon-web/build files
> ---
>
> Key: HDDS-2234
> URL: https://issues.apache.org/jira/browse/HDDS-2234
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Anu Engineer
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR] mvn  -rf :hadoop-ozone-recon
> [INFO] Build failures were ignored.
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/index.html
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/2.8943d5a3.chunk.css
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/2.8943d5a3.chunk.css.map
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/main.96eebd44.chunk.css
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/runtime~main.a8a9905a.js.map
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/runtime~main.a8a9905a.js
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/2.ea549bfe.chunk.js
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/main.5bb53989.chunk.js
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/2.ea549bfe.chunk.js.map
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/precache-manifest.1d05d7a103ee9d6b280ef7adfcab3c01.js
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/service-worker.js



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2231) test-single.sh cannot copy results

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2231?focusedWorklogId=323174=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323174
 ]

ASF GitHub Bot logged work on HDDS-2231:


Author: ASF GitHub Bot
Created on: 04/Oct/19 03:53
Start Date: 04/Oct/19 03:53
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1575: HDDS-2231. 
test-single.sh cannot copy results
URL: https://github.com/apache/hadoop/pull/1575#issuecomment-538218935
 
 
   Thanks @anuengineer for the review/commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323174)
Time Spent: 50m  (was: 40m)

> test-single.sh cannot copy results
> --
>
> Key: HDDS-2231
> URL: https://issues.apache.org/jira/browse/HDDS-2231
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Previously {{result}} directory was created by simply {{source}}-ing 
> {{testlib.sh}}, but HDDS-2185 changed it to avoid lost results.  
> {{test-single.sh}} needs to be adjusted accordingly.
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone
> $ docker-compose up -d --scale datanode=3
> $ ../test-single.sh scm basic/basic.robot
> ...
> invalid output path: directory 
> "hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone/result" does not 
> exist
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1643) Send hostName also part of OMRequest

2019-10-03 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-1643:
--

Assignee: YiSheng Lien

> Send hostName also part of OMRequest
> 
>
> Key: HDDS-1643
> URL: https://issues.apache.org/jira/browse/HDDS-1643
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>
> This Jira is created based on the comment from [~eyang] on HDDS-1600 jira.
> [~bharatviswa] can hostname be used as part of OM request? For running in 
> docker container, virtual private network address may not be routable or 
> exposed to outside world. Using IP to identify the source client location may 
> not be enough. It would be nice to have ability support hostname based 
> request too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14890) HDFS is not starting in Windows

2019-10-03 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944192#comment-16944192
 ] 

Íñigo Goiri commented on HDFS-14890:


The fix seems reasonable.
[~hirik] can you test it? Otherwise I can give it a try.
I think the current test (executed from Windows) should be enough to validate 
it works.

> HDFS is not starting in Windows
> ---
>
> Key: HDFS-14890
> URL: https://issues.apache.org/jira/browse/HDFS-14890
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.1
> Environment: Windows 10.
>Reporter: hirik
>Priority: Blocker
> Attachments: HDFS-14890.01.patch
>
>
> Hi,
> HDFS NameNode and JournalNode are not starting on a Windows machine. The 
> related exception below was found in the logs. 
> Caused by: java.lang.UnsupportedOperationExceptionCaused by: 
> java.lang.UnsupportedOperationException
> at java.base/java.nio.file.Files.setPosixFilePermissions(Files.java:2155)
> at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:452)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
> at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:188)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1206)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:422)
> at 
> com.slog.dfs.hdfs.nn.NameNodeServiceImpl.delayedStart(NameNodeServiceImpl.java:147)
>  
> Code changes related to this issue: 
> [https://github.com/apache/hadoop/commit/07e3cf952eac9e47e7bd5e195b0f9fc28c468313#diff-1a56e69d50f21b059637cfcbf1d23f11]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14890) HDFS is not starting in Windows

2019-10-03 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14890:
---
Status: Patch Available  (was: Open)

> HDFS is not starting in Windows
> ---
>
> Key: HDFS-14890
> URL: https://issues.apache.org/jira/browse/HDFS-14890
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.1
> Environment: Windows 10.
>Reporter: hirik
>Priority: Blocker
> Attachments: HDFS-14890.01.patch
>
>
> Hi,
> HDFS NameNode and JournalNode are not starting on a Windows machine. The 
> related exception below was found in the logs. 
> Caused by: java.lang.UnsupportedOperationExceptionCaused by: 
> java.lang.UnsupportedOperationException
> at java.base/java.nio.file.Files.setPosixFilePermissions(Files.java:2155)
> at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:452)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
> at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:188)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1206)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:422)
> at 
> com.slog.dfs.hdfs.nn.NameNodeServiceImpl.delayedStart(NameNodeServiceImpl.java:147)
>  
> Code changes related to this issue: 
> [https://github.com/apache/hadoop/commit/07e3cf952eac9e47e7bd5e195b0f9fc28c468313#diff-1a56e69d50f21b059637cfcbf1d23f11]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2223) Support ReadWrite lock in LockManager

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944189#comment-16944189
 ] 

Hudson commented on HDDS-2223:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17462 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17462/])
HDDS-2223. Support ReadWrite lock in LockManager. (#1564) (github: rev 
9700e2003aa1b7e2c4072a2a08d8827acc5aa779)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/ActiveLock.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/lock/TestLockManager.java


> Support ReadWrite lock in LockManager
> -
>
> Key: HDDS-2223
> URL: https://issues.apache.org/jira/browse/HDDS-2223
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Currently {{LockManager}} uses an exclusive lock; instead we should support a 
> {{ReadWrite}} lock.
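The exclusive-to-read/write change described above can be sketched roughly as below; the class and method names are illustrative only, not the actual {{LockManager}} API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch of a resource-keyed read/write lock manager: many readers
// of the same resource proceed concurrently, while a writer is exclusive.
public class RwLockManagerSketch {
    private final Map<String, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

    private ReentrantReadWriteLock get(String resource) {
        return locks.computeIfAbsent(resource, r -> new ReentrantReadWriteLock());
    }

    public void acquireReadLock(String resource)  { get(resource).readLock().lock(); }
    public void releaseReadLock(String resource)  { get(resource).readLock().unlock(); }
    public void acquireWriteLock(String resource) { get(resource).writeLock().lock(); }
    public void releaseWriteLock(String resource) { get(resource).writeLock().unlock(); }

    public static void main(String[] args) {
        RwLockManagerSketch mgr = new RwLockManagerSketch();
        mgr.acquireReadLock("vol1");
        mgr.acquireReadLock("vol1");   // a second reader does not block
        mgr.releaseReadLock("vol1");
        mgr.releaseReadLock("vol1");
        mgr.acquireWriteLock("vol1");  // the writer now has exclusive access
        mgr.releaseWriteLock("vol1");
    }
}
```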



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2223) Support ReadWrite lock in LockManager

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2223?focusedWorklogId=323165=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323165
 ]

ASF GitHub Bot logged work on HDDS-2223:


Author: ASF GitHub Bot
Created on: 04/Oct/19 03:15
Start Date: 04/Oct/19 03:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1564: HDDS-2223. 
Support ReadWrite lock in LockManager.
URL: https://github.com/apache/hadoop/pull/1564#issuecomment-538211164
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 92 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 32 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 35 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 50 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 920 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 17 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1010 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 23 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 23 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 55 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 774 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 25 | hadoop-hdds in the patch failed. |
   | -1 | unit | 22 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 2480 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1564 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8a0c59964b6c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1dde3ef |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 

[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=323164=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323164
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 04/Oct/19 03:14
Start Date: 04/Oct/19 03:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-538210876
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 94 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | -1 | mvninstall | 28 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 54 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 943 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1028 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 29 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | -1 | mvninstall | 33 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 791 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 26 | hadoop-hdds in the patch failed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 2537 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1528 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d5b4cdc3b883 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1dde3ef |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 

[jira] [Work logged] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=323163&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323163
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 04/Oct/19 03:12
Start Date: 04/Oct/19 03:12
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1536: HDDS-2164 : 
om.db.checkpoints is getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536#issuecomment-538210556
 
 
   @avijayanhwx findbugs violations are related to this change; can you take a 
look at them?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323163)
Time Spent: 1h 40m  (was: 1.5h)

> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should also clean this up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=323161&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323161
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 04/Oct/19 03:09
Start Date: 04/Oct/19 03:09
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1536: HDDS-2164 : 
om.db.checkpoints is getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536#issuecomment-538210066
 
 
   @anuengineer Rat failures are real, but they are not related to this PR.
   They were introduced in HDDS-2193 and fixed in HDDS-1146.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323161)
Time Spent: 1.5h  (was: 1h 20m)

> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should also clean this up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2198) SCM should not consider containers in CLOSING state to come out of safemode

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944183#comment-16944183
 ] 

Hudson commented on HDDS-2198:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17461 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17461/])
HDDS-2198. SCM should not consider containers in CLOSING state to come (github: 
rev cdaa480dbfd8cc0f0d358f17047c8aa97299cb35)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/ContainerSafeModeRule.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/safemode/TestSCMSafeModeManager.java


> SCM should not consider containers in CLOSING state to come out of safemode
> ---
>
> Key: HDDS-2198
> URL: https://issues.apache.org/jira/browse/HDDS-2198
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> There are cases where SCM can be stuck in safemode forever if it considers 
> containers in CLOSING state for coming out of safemode:
> * There are 5 containers in OPEN state inside SCM.
> * Out of 5, 3 containers are created in datanodes by the client.
> * 2 containers are yet to be created in datanodes.
> * Due to some pipeline issue, a pipeline close action is sent.
> * All 5 containers' states are changed from OPEN to CLOSING in SCM.
> * Eventually, 3 containers' states move from CLOSING to CLOSED in SCM as the 
> datanodes close those containers.
> * 2 of the containers are still in CLOSING state.
> * SCM is restarted.
> * SCM will never get container reports for the containers which were in 
> CLOSING state, as those containers were never created in datanodes.
> * SCM will remain in safemode.
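The scenario quoted above can be illustrated with a simplified safemode rule that counts only CLOSED containers toward the exit threshold, so CLOSING containers that were never created on a datanode cannot keep SCM in safemode. The class, method, and threshold below are assumptions for illustration only, not the actual ContainerSafeModeRule:

```java
import java.util.List;
import java.util.Set;

public class SafeModeSketch {
  enum State { OPEN, CLOSING, CLOSED }

  // Simplified rule: only CLOSED containers are tracked, and the exit
  // threshold is the fraction of tracked containers reported by datanodes.
  // CLOSING containers that were never created on a datanode are ignored,
  // so they can no longer block safemode exit after an SCM restart.
  static boolean canExitSafeMode(List<State> containers,
                                 Set<Integer> reported, double threshold) {
    int tracked = 0;
    int seen = 0;
    for (int i = 0; i < containers.size(); i++) {
      if (containers.get(i) == State.CLOSED) {
        tracked++;
        if (reported.contains(i)) {
          seen++;
        }
      }
    }
    return tracked == 0 || (double) seen / tracked >= threshold;
  }

  public static void main(String[] args) {
    // The 5-container scenario from the issue: 3 CLOSED and reported,
    // 2 CLOSING and never created on any datanode.
    List<State> containers = List.of(State.CLOSED, State.CLOSED, State.CLOSED,
        State.CLOSING, State.CLOSING);
    System.out.println(canExitSafeMode(containers, Set.of(0, 1, 2), 0.99));
    // prints "true"
  }
}
```

Counting CLOSING containers in `tracked` would instead give 3/5 = 0.6 against the same threshold, reproducing the permanent-safemode symptom described above.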



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2240) Command line tool for OM HA

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2240?focusedWorklogId=323159&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323159
 ]

ASF GitHub Bot logged work on HDDS-2240:


Author: ASF GitHub Bot
Created on: 04/Oct/19 03:04
Start Date: 04/Oct/19 03:04
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1586: 
HDDS-2240. Command line tool for OM HA.
URL: https://github.com/apache/hadoop/pull/1586#discussion_r331326857
 
 

 ##
 File path: hadoop-ozone/common/src/main/bin/ozone
 ##
 @@ -55,6 +55,7 @@ function hadoop_usage
   hadoop_add_subcommand "version" client "print the version"
   hadoop_add_subcommand "dtutil" client "operations related to delegation 
tokens"
   hadoop_add_subcommand "upgrade" client "HDFS to Ozone in-place upgrade tool"
+  hadoop_add_subcommand "omha" client "OM HA tool"
 
 Review comment:
   NIT: Have we named this command `omha` because we have plans in the future to 
also add a similar command for SCM?
   If not, how about naming the command `haadmin`, `omadmin`, or `admin`?
   Not a deal breaker, but I was just thinking we could keep it similar to hdfs.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323159)
Time Spent: 1h  (was: 50m)

> Command line tool for OM HA
> ---
>
> Key: HDDS-2240
> URL: https://issues.apache.org/jira/browse/HDDS-2240
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A command line tool (*ozone omha*) to get information related to OM HA. 
> This Jira proposes to add the _getServiceState_ option for OM HA which lists 
> all the OMs in the service and their corresponding Ratis server roles 
> (LEADER/ FOLLOWER). 
> We can later add more options to this tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2223) Support ReadWrite lock in LockManager

2019-10-03 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2223:
--
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Support ReadWrite lock in LockManager
> -
>
> Key: HDDS-2223
> URL: https://issues.apache.org/jira/browse/HDDS-2223
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Currently {{LockManager}} is using an exclusive lock; instead we should support 
> a {{ReadWrite}} lock.
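For context, a read-write lock admits any number of concurrent readers while writers remain exclusive. A minimal sketch using java.util.concurrent.locks.ReentrantReadWriteLock follows; the class below is illustrative only, not the actual Ozone LockManager API:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockDemo {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private int value;

  // Any number of threads may hold the read lock simultaneously,
  // as long as no thread holds the write lock.
  public int read() {
    lock.readLock().lock();
    try {
      return value;
    } finally {
      lock.readLock().unlock();
    }
  }

  // The write lock is exclusive: acquiring it blocks until all readers
  // and any other writer have released their locks.
  public void write(int v) {
    lock.writeLock().lock();
    try {
      value = v;
    } finally {
      lock.writeLock().unlock();
    }
  }

  public static void main(String[] args) {
    RwLockDemo demo = new RwLockDemo();
    demo.write(42);
    System.out.println(demo.read()); // prints "42"
  }
}
```

Read-heavy paths can then proceed in parallel while mutations still serialize, which is the motivation for moving off a purely exclusive lock.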



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2223) Support ReadWrite lock in LockManager

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2223?focusedWorklogId=323158&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323158
 ]

ASF GitHub Bot logged work on HDDS-2223:


Author: ASF GitHub Bot
Created on: 04/Oct/19 03:02
Start Date: 04/Oct/19 03:02
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1564: 
HDDS-2223. Support ReadWrite lock in LockManager.
URL: https://github.com/apache/hadoop/pull/1564
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323158)
Time Spent: 3h 40m  (was: 3.5h)

> Support ReadWrite lock in LockManager
> -
>
> Key: HDDS-2223
> URL: https://issues.apache.org/jira/browse/HDDS-2223
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Currently {{LockManager}} is using an exclusive lock; instead we should support 
> a {{ReadWrite}} lock.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2240) Command line tool for OM HA

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2240?focusedWorklogId=323157&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323157
 ]

ASF GitHub Bot logged work on HDDS-2240:


Author: ASF GitHub Bot
Created on: 04/Oct/19 02:58
Start Date: 04/Oct/19 02:58
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1586: 
HDDS-2240. Command line tool for OM HA.
URL: https://github.com/apache/hadoop/pull/1586#discussion_r331325033
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerHA.java
 ##
 @@ -0,0 +1,76 @@
+package org.apache.hadoop.ozone.om;
+
+import org.apache.hadoop.hdds.cli.GenericCli;
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.tracing.TracingUtil;
+import org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol;
+import 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB;
+import org.apache.hadoop.security.UserGroupInformation;
+import 
org.apache.hadoop.security.authentication.client.AuthenticationException;
+import org.apache.ratis.protocol.ClientId;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+
+import java.io.IOException;
+import java.util.Map;
+
+
+/**
+ * A command line tool for making calls in OM HA protocols.
+ */
+@Command(name = "ozone omha",
+hidden = true, description = "Command line tool for OM HA.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true)
+public class OzoneManagerHA extends GenericCli {
+  private OzoneConfiguration conf;
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneManagerHA.class);
+
+  public static void main(String[] args) throws Exception {
+TracingUtil.initTracing("OzoneManager");
+new OzoneManagerHA().run(args);
+  }
+
+  private OzoneManagerHA() {
+super();
+  }
+
+  /**
+   * This function implements a sub-command to allow the OM to be
+   * initialized from the command line.
+   */
+  @CommandLine.Command(name = "--getservicestate",
+  customSynopsis = "ozone om [global options] --getservicestate " +
+  "--serviceId=",
+  hidden = false,
+  description = "Get the Ratis server state of all OMs belonging to given" 
+
+  " OM Service ID",
+  mixinStandardHelpOptions = true,
+  versionProvider = HddsVersionProvider.class)
+  public void getRoleInfoOm(@CommandLine.Option(names = { "--serviceId" },
+  description = "The OM Service ID of the OMs to get the server states 
for",
+  paramLabel = "id") String serviceId)
+  throws Exception {
+conf = createOzoneConfiguration();
+Map serviceStates = getServiceStates(conf, serviceId);
+for (String nodeId : serviceStates.keySet()) {
+  System.out.println(nodeId + " : " + serviceStates.get(nodeId));
+}
 
 Review comment:
   It will be better to use entrySet() instead of keySet() for performance, as 
it would avoid the lookup on L61. Although this method will not be doing an 
extensive amount of work, I believe it is still worth making this change.
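The reviewer's suggestion can be sketched in isolation; the class name and map contents below are illustrative only, not the actual OM code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntrySetDemo {
  // Iterating entrySet() hands back key and value together, avoiding the
  // extra hash lookup that a keySet() loop pays on every get(key) call.
  static String render(Map<String, String> serviceStates) {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, String> e : serviceStates.entrySet()) {
      sb.append(e.getKey()).append(" : ").append(e.getValue()).append('\n');
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // LinkedHashMap keeps insertion order, so output order is deterministic.
    Map<String, String> states = new LinkedHashMap<>();
    states.put("om1", "LEADER");
    states.put("om2", "FOLLOWER");
    System.out.print(render(states));
    // prints:
    // om1 : LEADER
    // om2 : FOLLOWER
  }
}
```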
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323157)
Time Spent: 50m  (was: 40m)

> Command line tool for OM HA
> ---
>
> Key: HDDS-2240
> URL: https://issues.apache.org/jira/browse/HDDS-2240
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> A command line tool (*ozone omha*) to get information related to OM HA. 
> This Jira proposes to add the _getServiceState_ option for OM HA which lists 
> all the OMs in the service and their corresponding Ratis server roles 
> (LEADER/ FOLLOWER). 
> We can later add more options to this tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2240) Command line tool for OM HA

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2240?focusedWorklogId=323155&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323155
 ]

ASF GitHub Bot logged work on HDDS-2240:


Author: ASF GitHub Bot
Created on: 04/Oct/19 02:53
Start Date: 04/Oct/19 02:53
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1586: 
HDDS-2240. Command line tool for OM HA.
URL: https://github.com/apache/hadoop/pull/1586#discussion_r331325219
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerHA.java
 ##
 @@ -0,0 +1,76 @@
+package org.apache.hadoop.ozone.om;
+
+import org.apache.hadoop.hdds.cli.GenericCli;
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.tracing.TracingUtil;
+import org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol;
+import 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB;
+import org.apache.hadoop.security.UserGroupInformation;
+import 
org.apache.hadoop.security.authentication.client.AuthenticationException;
+import org.apache.ratis.protocol.ClientId;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+
+import java.io.IOException;
+import java.util.Map;
+
+
+/**
+ * A command line tool for making calls in OM HA protocols.
+ */
+@Command(name = "ozone omha",
+hidden = true, description = "Command line tool for OM HA.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true)
+public class OzoneManagerHA extends GenericCli {
 
 Review comment:
   Declare this class as final.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323155)
Time Spent: 40m  (was: 0.5h)

> Command line tool for OM HA
> ---
>
> Key: HDDS-2240
> URL: https://issues.apache.org/jira/browse/HDDS-2240
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> A command line tool (*ozone omha*) to get information related to OM HA. 
> This Jira proposes to add the _getServiceState_ option for OM HA which lists 
> all the OMs in the service and their corresponding Ratis server roles 
> (LEADER/ FOLLOWER). 
> We can later add more options to this tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2198) SCM should not consider containers in CLOSING state to come out of safemode

2019-10-03 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2198:
--
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> SCM should not consider containers in CLOSING state to come out of safemode
> ---
>
> Key: HDDS-2198
> URL: https://issues.apache.org/jira/browse/HDDS-2198
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> There are cases where SCM can be stuck in safemode forever if it considers 
> containers in CLOSING state for coming out of safemode:
> * There are 5 containers in OPEN state inside SCM.
> * Out of 5, 3 containers are created in datanodes by the client.
> * 2 containers are yet to be created in datanodes.
> * Due to some pipeline issue, a pipeline close action is sent.
> * All 5 containers' states are changed from OPEN to CLOSING in SCM.
> * Eventually, 3 containers' states move from CLOSING to CLOSED in SCM as the 
> datanodes close those containers.
> * 2 of the containers are still in CLOSING state.
> * SCM is restarted.
> * SCM will never get container reports for the containers which were in 
> CLOSING state, as those containers were never created in datanodes.
> * SCM will remain in safemode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2240) Command line tool for OM HA

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2240?focusedWorklogId=323154&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323154
 ]

ASF GitHub Bot logged work on HDDS-2240:


Author: ASF GitHub Bot
Created on: 04/Oct/19 02:51
Start Date: 04/Oct/19 02:51
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1586: 
HDDS-2240. Command line tool for OM HA.
URL: https://github.com/apache/hadoop/pull/1586#discussion_r331325033
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerHA.java
 ##
 @@ -0,0 +1,76 @@
+package org.apache.hadoop.ozone.om;
+
+import org.apache.hadoop.hdds.cli.GenericCli;
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.tracing.TracingUtil;
+import org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol;
+import 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB;
+import org.apache.hadoop.security.UserGroupInformation;
+import 
org.apache.hadoop.security.authentication.client.AuthenticationException;
+import org.apache.ratis.protocol.ClientId;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+
+import java.io.IOException;
+import java.util.Map;
+
+
+/**
+ * A command line tool for making calls in OM HA protocols.
+ */
+@Command(name = "ozone omha",
+hidden = true, description = "Command line tool for OM HA.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true)
+public class OzoneManagerHA extends GenericCli {
+  private OzoneConfiguration conf;
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneManagerHA.class);
+
+  public static void main(String[] args) throws Exception {
+TracingUtil.initTracing("OzoneManager");
+new OzoneManagerHA().run(args);
+  }
+
+  private OzoneManagerHA() {
+super();
+  }
+
+  /**
+   * This function implements a sub-command to allow the OM to be
+   * initialized from the command line.
+   */
+  @CommandLine.Command(name = "--getservicestate",
+  customSynopsis = "ozone om [global options] --getservicestate " +
+  "--serviceId=",
+  hidden = false,
+  description = "Get the Ratis server state of all OMs belonging to given" 
+
+  " OM Service ID",
+  mixinStandardHelpOptions = true,
+  versionProvider = HddsVersionProvider.class)
+  public void getRoleInfoOm(@CommandLine.Option(names = { "--serviceId" },
+  description = "The OM Service ID of the OMs to get the server states 
for",
+  paramLabel = "id") String serviceId)
+  throws Exception {
+conf = createOzoneConfiguration();
+Map serviceStates = getServiceStates(conf, serviceId);
+for (String nodeId : serviceStates.keySet()) {
+  System.out.println(nodeId + " : " + serviceStates.get(nodeId));
+}
 
 Review comment:
   It will be better to use entrySet() instead of keySet() for performance. 
Although this method will not be doing an extensive amount of work, I believe it 
is still worth making this change.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323154)
Time Spent: 0.5h  (was: 20m)

> Command line tool for OM HA
> ---
>
> Key: HDDS-2240
> URL: https://issues.apache.org/jira/browse/HDDS-2240
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> A command line tool (*ozone omha*) to get information related to OM HA. 
> This Jira proposes to add the _getServiceState_ option for OM HA which lists 
> all the OMs in the service and their corresponding Ratis server roles 
> (LEADER/ FOLLOWER). 
> We can later add more options to this tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2198) SCM should not consider containers in CLOSING state to come out of safemode

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2198?focusedWorklogId=323153&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323153
 ]

ASF GitHub Bot logged work on HDDS-2198:


Author: ASF GitHub Bot
Created on: 04/Oct/19 02:51
Start Date: 04/Oct/19 02:51
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1540: 
HDDS-2198. SCM should not consider containers in CLOSING state to come out of 
safemode.
URL: https://github.com/apache/hadoop/pull/1540
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323153)
Time Spent: 1h 20m  (was: 1h 10m)

> SCM should not consider containers in CLOSING state to come out of safemode
> ---
>
> Key: HDDS-2198
> URL: https://issues.apache.org/jira/browse/HDDS-2198
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> There are cases where SCM can be stuck in safemode forever if it considers 
> containers in CLOSING state for coming out of safemode:
> * There are 5 containers in OPEN state inside SCM.
> * Out of 5, 3 containers are created in datanodes by the client.
> * 2 containers are yet to be created in datanodes.
> * Due to some pipeline issue, a pipeline close action is sent.
> * All 5 containers' states are changed from OPEN to CLOSING in SCM.
> * Eventually, 3 containers' states move from CLOSING to CLOSED in SCM as the 
> datanodes close those containers.
> * 2 of the containers are still in CLOSING state.
> * SCM is restarted.
> * SCM will never get container reports for the containers which were in 
> CLOSING state, as those containers were never created in datanodes.
> * SCM will remain in safemode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2198) SCM should not consider containers in CLOSING state to come out of safemode

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2198?focusedWorklogId=323152&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323152
 ]

ASF GitHub Bot logged work on HDDS-2198:


Author: ASF GitHub Bot
Created on: 04/Oct/19 02:50
Start Date: 04/Oct/19 02:50
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1540: HDDS-2198. SCM 
should not consider containers in CLOSING state to come out of safemode.
URL: https://github.com/apache/hadoop/pull/1540#issuecomment-538206445
 
 
   Failures are not related to this change. I will merge this shortly.
   
   Thanks @bharatviswa504 for the review.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323152)
Time Spent: 1h 10m  (was: 1h)

> SCM should not consider containers in CLOSING state to come out of safemode
> ---
>
> Key: HDDS-2198
> URL: https://issues.apache.org/jira/browse/HDDS-2198
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> There are cases where SCM can be stuck in safemode forever if it considers 
> containers in CLOSING state when deciding whether to come out of safemode:
> * Suppose there are 5 containers in OPEN state inside SCM.
> * Out of the 5, 3 containers are created on datanodes by the client.
> * 2 containers are yet to be created on datanodes.
> * Due to some pipeline issue, a pipeline close action is sent.
> * All 5 containers' states are changed from OPEN to CLOSING in SCM.
> * Eventually, the 3 created containers move from CLOSING to CLOSED in SCM as 
> the datanodes close them.
> * 2 of the containers are still in CLOSING state.
> * SCM is restarted.
> * SCM will never get container reports for the containers that were in 
> CLOSING state, as those containers were never created on datanodes.
> * SCM will remain in safemode.






[jira] [Work logged] (HDDS-2223) Support ReadWrite lock in LockManager

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2223?focusedWorklogId=323151=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323151
 ]

ASF GitHub Bot logged work on HDDS-2223:


Author: ASF GitHub Bot
Created on: 04/Oct/19 02:35
Start Date: 04/Oct/19 02:35
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1564: 
HDDS-2223. Support ReadWrite lock in LockManager.
URL: https://github.com/apache/hadoop/pull/1564#discussion_r331322814
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java
 ##
 @@ -25,42 +25,146 @@
 
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.function.Consumer;
 
 /**
  * Manages the locks on a given resource. A new lock is created for each
  * and every unique resource. Uniqueness of resource depends on the
  * {@code equals} implementation of it.
  */
-public class LockManager<T> {
+public class LockManager<R> {
 
   private static final Logger LOG = LoggerFactory.getLogger(LockManager.class);
 
-  private final Map<T, ActiveLock> activeLocks = new ConcurrentHashMap<>();
+  private final Map<R, ActiveLock> activeLocks = new ConcurrentHashMap<>();
   private final GenericObjectPool<ActiveLock> lockPool =
   new GenericObjectPool<>(new PooledLockFactory());
 
   /**
-   * Creates new LockManager instance.
+   * Creates new LockManager instance with the given Configuration.
*
* @param conf Configuration object
*/
-  public LockManager(Configuration conf) {
-int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY,
+  public LockManager(final Configuration conf) {
+final int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY,
 HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY_DEFAULT);
 lockPool.setMaxTotal(maxPoolSize);
   }
 
-
   /**
* Acquires the lock on given resource.
*
* If the lock is not available then the current thread becomes
* disabled for thread scheduling purposes and lies dormant until the
* lock has been acquired.
+   *
+   * @param resource on which the lock has to be acquired
+   * @deprecated Use {@link LockManager#writeLock} instead
+   */
+  public void lock(final R resource) {
+   writeLock(resource);
+  }
+
+  /**
+   * Releases the lock on given resource.
+   *
+   * @param resource for which the lock has to be released
+   * @deprecated Use {@link LockManager#writeUnlock} instead
+   */
+  public void unlock(final R resource) {
+   writeUnlock(resource);
+  }
+
+  /**
+   * Acquires the read lock on given resource.
+   *
+   * Acquires the read lock on resource if the write lock is not held by
+   * another thread and returns immediately.
+   *
+   * If the write lock on resource is held by another thread then
+   * the current thread becomes disabled for thread scheduling
+   * purposes and lies dormant until the read lock has been acquired.
+   *
+   * @param resource on which the read lock has to be acquired
+   */
+  public void readLock(final R resource) {
+acquire(resource, ActiveLock::readLock);
+  }
+
+  /**
+   * Releases the read lock on given resource.
+   *
+   * @param resource for which the read lock has to be released
+   * @throws IllegalMonitorStateException if the current thread does not
+   *  hold this lock
+   */
+  public void readUnlock(final R resource) throws IllegalMonitorStateException {
+release(resource, ActiveLock::readUnlock);
+  }
+
+  /**
+   * Acquires the write lock on given resource.
+   *
+   * Acquires the write lock on resource if neither the read nor write lock
+   * are held by another thread and returns immediately.
+   *
+   * If the current thread already holds the write lock then the
+   * hold count is incremented by one and the method returns
+   * immediately.
+   *
+   * If the lock is held by another thread then the current
+   * thread becomes disabled for thread scheduling purposes and
+   * lies dormant until the write lock has been acquired.
+   *
+   * @param resource on which the lock has to be acquired
*/
-  public void lock(T resource) {
-activeLocks.compute(resource, (k, v) -> {
-  ActiveLock lock;
+  public void writeLock(final R resource) {
+acquire(resource, ActiveLock::writeLock);
+  }
+
+  /**
+   * Releases the write lock on given resource.
+   *
+   * @param resource for which the lock has to be released
+   * @throws IllegalMonitorStateException if the current thread does not
+   *  hold this lock
+   */
+  public void writeUnlock(final R resource) throws IllegalMonitorStateException {
+release(resource, ActiveLock::writeUnlock);
+  }
+
+  /**
+   * Acquires the lock on given resource using the provided lock function.
+   *
+   * @param resource on which the lock has to be acquired
+   * @param lockFn function to 

[jira] [Work logged] (HDDS-2223) Support ReadWrite lock in LockManager

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2223?focusedWorklogId=323150=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323150
 ]

ASF GitHub Bot logged work on HDDS-2223:


Author: ASF GitHub Bot
Created on: 04/Oct/19 02:34
Start Date: 04/Oct/19 02:34
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1564: HDDS-2223. 
Support ReadWrite lock in LockManager.
URL: https://github.com/apache/hadoop/pull/1564#issuecomment-538203299
 
 
   The test failures are not related.
   I will merge the PR shortly.
   Thanks @arp7 @bharatviswa504 for the reviews.
 



Issue Time Tracking
---

Worklog Id: (was: 323150)
Time Spent: 3h 20m  (was: 3h 10m)

> Support ReadWrite lock in LockManager
> -
>
> Key: HDDS-2223
> URL: https://issues.apache.org/jira/browse/HDDS-2223
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Currently {{LockManager}} uses an exclusive lock; instead, we should support 
> a {{ReadWrite}} lock.
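The behaviour being added — per-resource read/write locking — can be sketched with a plain ReentrantReadWriteLock per key. This is a simplified stand-in for the patch's pooled ActiveLock (no lock pooling or reclamation of idle entries), and the class name is made up:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified sketch of a keyed read-write LockManager: one
// ReentrantReadWriteLock per resource, created on demand. The real
// implementation additionally pools and reference-counts locks so that
// entries for idle resources can be reclaimed.
public class RwLockManagerSketch<R> {
  private final ConcurrentMap<R, ReentrantReadWriteLock> locks =
      new ConcurrentHashMap<>();

  private ReentrantReadWriteLock get(R resource) {
    return locks.computeIfAbsent(resource, r -> new ReentrantReadWriteLock());
  }

  public void readLock(R resource)    { get(resource).readLock().lock(); }
  public void readUnlock(R resource)  { get(resource).readLock().unlock(); }
  public void writeLock(R resource)   { get(resource).writeLock().lock(); }
  public void writeUnlock(R resource) { get(resource).writeLock().unlock(); }

  public static void main(String[] args) {
    RwLockManagerSketch<String> mgr = new RwLockManagerSketch<>();
    mgr.readLock("volume1");
    mgr.readLock("volume1");   // multiple readers may hold the lock at once
    mgr.readUnlock("volume1");
    mgr.readUnlock("volume1");
    mgr.writeLock("volume1");  // writer proceeds once all readers release
    mgr.writeUnlock("volume1");
    System.out.println("ok");
  }
}
```

Keeping the old lock()/unlock() delegating to writeLock()/writeUnlock(), as the patch does, preserves the existing exclusive semantics for callers that have not migrated yet.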






[jira] [Work logged] (HDDS-2230) Invalid entries in ozonesecure-mr config

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2230?focusedWorklogId=323147=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323147
 ]

ASF GitHub Bot logged work on HDDS-2230:


Author: ASF GitHub Bot
Created on: 04/Oct/19 02:24
Start Date: 04/Oct/19 02:24
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1585: HDDS-2230. 
Invalid entries in ozonesecure-mr config
URL: https://github.com/apache/hadoop/pull/1585#discussion_r331321334
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
 ##
 @@ -23,17 +23,23 @@ services:
   args:
 buildno: 1
 hostname: kdc
+networks:
+  - ozone
 
 Review comment:
   Does the default network work for this case? Why do we explicitly change the 
network name?
 



Issue Time Tracking
---

Worklog Id: (was: 323147)
Time Spent: 50m  (was: 40m)

> Invalid entries in ozonesecure-mr config
> 
>
> Key: HDDS-2230
> URL: https://issues.apache.org/jira/browse/HDDS-2230
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDDS-2230.001.patch, HDDS-2230.002.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some of the entries in {{ozonesecure-mr/docker-config}} are in an invalid 
> format, so they end up missing from the generated config files.
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozonesecure-mr
> $ ./test.sh # configs are generated during container startup
> $ cd ../..
> $ grep -c 'ozone.administrators' compose/ozonesecure-mr/docker-config
> 1
> $ grep -c 'ozone.administrators' etc/hadoop/ozone-site.xml
> 0
> $ grep -c 'yarn.timeline-service' compose/ozonesecure-mr/docker-config
> 5
> $ grep -c 'yarn.timeline-service' etc/hadoop/yarn-site.xml
> 2
> $ grep -c 'container-executor' compose/ozonesecure-mr/docker-config
> 3
> $ grep -c 'container-executor' etc/hadoop/yarn-site.xml
> 0
> {noformat}
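A rough way to catch such malformed lines early is to check each entry against the expected FILE_key=value shape (e.g. OZONE-SITE.XML_ozone.administrators=...). The regex below is an approximation of what the compose tooling accepts, not its exact grammar:

```java
import java.util.regex.Pattern;

public class DockerConfigCheckSketch {
  // Lines are expected to look like CORE-SITE.XML_fs.defaultFS=... :
  // an upper-case target file name, an underscore, a property key, '='.
  // This pattern is an approximation, not the tooling's exact grammar.
  private static final Pattern ENTRY =
      Pattern.compile("^[A-Z0-9.-]+_[^=\\s]+=.*$");

  public static boolean isValidEntry(String line) {
    // Blank lines and comments are fine; everything else must match.
    return line.isEmpty() || line.startsWith("#")
        || ENTRY.matcher(line).matches();
  }

  public static void main(String[] args) {
    System.out.println(isValidEntry("OZONE-SITE.XML_ozone.administrators=*")); // true
    System.out.println(isValidEntry("ozone.administrators=*"));                // false
  }
}
```

A line that fails the check is exactly the kind of entry that silently disappears from the generated ozone-site.xml / yarn-site.xml, as the grep counts above demonstrate.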






[jira] [Work logged] (HDDS-2230) Invalid entries in ozonesecure-mr config

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2230?focusedWorklogId=323145=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323145
 ]

ASF GitHub Bot logged work on HDDS-2230:


Author: ASF GitHub Bot
Created on: 04/Oct/19 02:23
Start Date: 04/Oct/19 02:23
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1585: HDDS-2230. 
Invalid entries in ozonesecure-mr config
URL: https://github.com/apache/hadoop/pull/1585#discussion_r331321174
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
 ##
 @@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-version: "3"
+version: "3.5"
 
 Review comment:
   Do we track the docker-compose versions for this change?
 



Issue Time Tracking
---

Worklog Id: (was: 323145)
Time Spent: 40m  (was: 0.5h)

> Invalid entries in ozonesecure-mr config
> 
>
> Key: HDDS-2230
> URL: https://issues.apache.org/jira/browse/HDDS-2230
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDDS-2230.001.patch, HDDS-2230.002.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Some of the entries in {{ozonesecure-mr/docker-config}} are in an invalid 
> format, so they end up missing from the generated config files.
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozonesecure-mr
> $ ./test.sh # configs are generated during container startup
> $ cd ../..
> $ grep -c 'ozone.administrators' compose/ozonesecure-mr/docker-config
> 1
> $ grep -c 'ozone.administrators' etc/hadoop/ozone-site.xml
> 0
> $ grep -c 'yarn.timeline-service' compose/ozonesecure-mr/docker-config
> 5
> $ grep -c 'yarn.timeline-service' etc/hadoop/yarn-site.xml
> 2
> $ grep -c 'container-executor' compose/ozonesecure-mr/docker-config
> 3
> $ grep -c 'container-executor' etc/hadoop/yarn-site.xml
> 0
> {noformat}






[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=323144=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323144
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 04/Oct/19 02:09
Start Date: 04/Oct/19 02:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-538198616
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 85 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for branch |
   | -1 | mvninstall | 29 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 54 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 937 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1024 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | -1 | mvninstall | 32 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 28 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 788 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 29 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2512 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1528 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 32b6298e6c9c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1dde3ef |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 

[jira] [Updated] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14850:
---
Fix Version/s: 3.2.2
   3.1.4

> Optimize FileSystemAccessService#getFileSystemConfiguration
> ---
>
> Key: HDFS-14850
> URL: https://issues.apache.org/jira/browse/HDFS-14850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, performance
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14850.001.patch, HDFS-14850.002.patch, 
> HDFS-14850.003.patch, HDFS-14850.004(2).patch, HDFS-14850.004.patch, 
> HDFS-14850.005.patch
>
>
> {code:java}
>  @Override
>   public Configuration getFileSystemConfiguration() {
> Configuration conf = new Configuration(true);
> ConfigurationUtils.copy(serviceHadoopConf, conf);
> conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
> // Force-clear server-side umask to make HttpFS match WebHDFS behavior
> conf.set(FsPermission.UMASK_LABEL, "000");
> return conf;
>   }
> {code}
> As the code above shows, every call to 
> FileSystemAccessService#getFileSystemConfiguration currently creates a new 
> Configuration. This is unnecessary and affects performance. It should be 
> enough to create the Configuration once in FileSystemAccessService#init and 
> have FileSystemAccessService#getFileSystemConfiguration return it.
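The proposed change amounts to building the merged configuration once in init and handing out copies afterwards. A minimal sketch using java.util.Properties as a stand-in for Hadoop's Configuration (class and property names are illustrative):

```java
import java.util.Properties;

// Illustrative sketch of the optimization: instead of rebuilding the merged
// configuration on every call, build it once during init() and hand out
// cheap copies afterwards. Plain java.util.Properties stands in for
// Hadoop's Configuration here.
public class CachedConfSketch {
  private Properties cached;   // built once in init()

  public void init(Properties serviceConf) {
    Properties conf = new Properties();
    conf.putAll(serviceConf);                 // expensive merge, done once
    conf.setProperty("fs.permissions.umask-mode", "000"); // force-clear umask
    this.cached = conf;
  }

  public Properties getFileSystemConfiguration() {
    Properties copy = new Properties();       // callers may mutate their copy
    copy.putAll(cached);
    return copy;
  }

  public static void main(String[] args) {
    CachedConfSketch svc = new CachedConfSketch();
    Properties service = new Properties();
    service.setProperty("fs.defaultFS", "hdfs://nn:8020");
    svc.init(service);
    System.out.println(
        svc.getFileSystemConfiguration().getProperty("fs.defaultFS"));
  }
}
```

Returning a copy rather than the cached instance keeps per-request mutations (if any) from leaking into the shared state; whether the actual patch copies or shares is a detail of the committed change.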






[jira] [Created] (HDDS-2243) Fix docs for Running with HDFS section

2019-10-03 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2243:
---

 Summary: Fix docs for Running with HDFS section
 Key: HDDS-2243
 URL: https://issues.apache.org/jira/browse/HDDS-2243
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.4.0
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


HDDS-4 says Fixed Version {{0.4.0}} but the 0.4.0 docs say Ozone won't work 
with secure clusters until HDDS-4 is fixed.

https://hadoop.apache.org/ozone/docs/0.4.0-alpha/runningwithhdfs.html








[jira] [Updated] (HDDS-2243) Fix docs for Running with HDFS section

2019-10-03 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2243:

Component/s: documentation

> Fix docs for Running with HDFS section
> --
>
> Key: HDDS-2243
> URL: https://issues.apache.org/jira/browse/HDDS-2243
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> HDDS-4 says Fixed Version {{0.4.0}} but the 0.4.0 docs say Ozone won't work 
> with secure clusters until HDDS-4 is fixed.
> https://hadoop.apache.org/ozone/docs/0.4.0-alpha/runningwithhdfs.html






[jira] [Updated] (HDFS-14527) Stop all DataNodes may result in NN terminate

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14527:
---
Fix Version/s: 3.1.4

> Stop all DataNodes may result in NN terminate
> -
>
> Key: HDFS-14527
> URL: https://issues.apache.org/jira/browse/HDFS-14527
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14527.001.patch, HDFS-14527.002.patch, 
> HDFS-14527.003.patch, HDFS-14527.004.patch, HDFS-14527.005.patch
>
>
> If we stop all datanodes of the cluster, 
> BlockPlacementPolicyDefault#chooseTarget may get an ArithmeticException when 
> calling #getMaxNodesPerRack, which throws the runtime exception out to 
> BlockManager's ReplicationMonitor thread and then terminates the NN.
> The root cause is that BlockPlacementPolicyDefault#chooseTarget does not hold 
> the global lock; if all DataNodes die between 
> {{clusterMap.getNumberOfLeaves()}} and {{getMaxNodesPerRack}}, it hits an 
> {{ArithmeticException}} while invoking {{getMaxNodesPerRack}}.
> {code:java}
>   private DatanodeStorageInfo[] chooseTarget(int numOfReplicas,
> Node writer,
> List chosenStorage,
> boolean returnChosenNodes,
> Set excludedNodes,
> long blocksize,
> final BlockStoragePolicy storagePolicy,
> EnumSet addBlockFlags,
> EnumMap sTypes) {
> if (numOfReplicas == 0 || clusterMap.getNumOfLeaves()==0) {
>   return DatanodeStorageInfo.EMPTY_ARRAY;
> }
> ..
> int[] result = getMaxNodesPerRack(chosenStorage.size(), numOfReplicas);
> ..
> }
> {code}
> Some detailed log show as following.
> {code:java}
> 2019-05-31 12:29:21,803 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception. 
> java.lang.ArithmeticException: / by zero
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.getMaxNodesPerRack(BlockPlacementPolicyDefault.java:282)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:228)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:132)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.chooseTargets(BlockManager.java:4533)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.access$1800(BlockManager.java:4493)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1954)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1830)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4453)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4388)
> at java.lang.Thread.run(Thread.java:745)
> 2019-05-31 12:29:21,805 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> {code}
> To be honest, this is not a serious bug and is not easy to reproduce: if we 
> stop all DataNodes and keep only the NameNode alive, HDFS cannot offer 
> service normally and we can only browse the directory tree. It may be a 
> corner case.
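The crash is plain integer arithmetic: the divisor (number of racks) drops to zero once every DataNode is dead. A stripped-down model of the computation with the obvious guard (simplified and illustrative; not the actual HDFS code, though the formula shape matches the one in BlockPlacementPolicyDefault):

```java
public class MaxNodesPerRackSketch {
  // Simplified shape of the computation in getMaxNodesPerRack: spread
  // totalReplicas over numRacks racks. With every DataNode dead,
  // numRacks is 0 and the division throws ArithmeticException, which is
  // exactly what kills the ReplicationMonitor thread in the stack trace.
  public static int maxNodesPerRack(int totalReplicas, int numRacks) {
    if (numRacks == 0) {
      return 0;            // guard: no racks left, nothing can be placed
    }
    return (totalReplicas - 1) / numRacks + 2;
  }

  public static void main(String[] args) {
    System.out.println(maxNodesPerRack(3, 2)); // 3
    System.out.println(maxNodesPerRack(3, 0)); // 0 instead of '/ by zero'
  }
}
```

The committed fix similarly makes chooseTarget bail out gracefully instead of letting the runtime exception propagate to the ReplicationMonitor.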






[jira] [Updated] (HDFS-14624) When decommissioning a node, log remaining blocks to replicate periodically

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14624:
---
Fix Version/s: 3.2.2
   3.1.4

> When decommissioning a node, log remaining blocks to replicate periodically
> ---
>
> Key: HDFS-14624
> URL: https://issues.apache.org/jira/browse/HDFS-14624
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14624.001.patch, HDFS-14624.002.patch, 
> HDFS-14624.003.patch
>
>
> When a node is marked for decommission, there is a monitor thread which runs 
> every 30 seconds by default, and checks if the node still has pending blocks 
> to be replicated before the node can complete replication.
> There are two existing debug level messages logged in the monitor thread, 
> DatanodeAdminManager$Monitor.check(), which log the correct information 
> already, first as the pending blocks are replicated:
> {code:java}
> LOG.debug("Node {} still has {} blocks to replicate "
> + "before it is a candidate to finish {}.",
> dn, blocks.size(), dn.getAdminState());{code}
> And then after the initial set of blocks has completed and a rescan happens:
> {code:java}
> LOG.debug("Node {} {} healthy."
> + " It needs to replicate {} more blocks."
> + " {} is still in progress.", dn,
> isHealthy ? "is": "isn't", blocks.size(), dn.getAdminState());{code}
> I would like to propose moving these messages to INFO level so it is easier 
> to monitor decommission progress over time from the Namenode log.
> Based on the default settings, this would result in at most 1 log message per 
> node being decommissioned every 30 seconds. The reason this is at the most, 
> is because the monitor thread stops after checking 500K blocks, and 
> therefore in practice it could be as little as 1 log message per 30 seconds, 
> even if many DNs are being decommissioned at the same time.
> Note that the namenode webUI does display the above information, but having 
> this in the NN logs would allow progress to be tracked more easily.






[jira] [Updated] (HDFS-14499) Misleading REM_QUOTA value with snapshot and trash feature enabled for a directory

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14499:
---
Fix Version/s: 3.2.2
   3.1.4

> Misleading REM_QUOTA value with snapshot and trash feature enabled for a 
> directory
> --
>
> Key: HDFS-14499
> URL: https://issues.apache.org/jira/browse/HDFS-14499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14499.000.patch, HDFS-14499.001.patch, 
> HDFS-14499.002.patch
>
>
> This is the flow of steps where we see a discrepancy between REM_QUOTA and 
> new file operation failure. REM_QUOTA shows a value of  1 but file creation 
> operation does not succeed.
> {code:java}
> hdfs@c3265-node3 root$ hdfs dfs -mkdir /dir1
> hdfs@c3265-node3 root$ hdfs dfsadmin -setQuota 2 /dir1
> hdfs@c3265-node3 root$ hdfs dfsadmin -allowSnapshot /dir1
> Allowing snaphot on /dir1 succeeded
> hdfs@c3265-node3 root$ hdfs dfs -touchz /dir1/file1
> hdfs@c3265-node3 root$ hdfs dfs -createSnapshot /dir1 snap1
> Created snapshot /dir1/.snapshot/snap1
> hdfs@c3265-node3 root$ hdfs dfs -count -v -q /dir1
> QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE 
> PATHNAME
> 2 0 none inf 1 1 0 /dir1
> hdfs@c3265-node3 root$ hdfs dfs -rm /dir1/file1
> 19/03/26 11:20:25 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://smajetinn/dir1/file1' to trash at: 
> hdfs://smajetinn/user/hdfs/.Trash/Current/dir1/file11553599225772
> hdfs@c3265-node3 root$ hdfs dfs -count -v -q /dir1
> QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE 
> PATHNAME
> 2 1 none inf 1 0 0 /dir1
> hdfs@c3265-node3 root$ hdfs dfs -touchz /dir1/file1
> touchz: The NameSpace quota (directories and files) of directory /dir1 is 
> exceeded: quota=2 file count=3{code}
> The issue here is that the count command takes only files and directories 
> into account, not the inode references. When trash is enabled, deleting a 
> file inside a directory actually performs a rename, as a result of which an 
> inode reference is kept in the deleted list of the snapshot diff. That 
> reference is taken into account while computing the namespace quota, but the 
> count command (getContentSummary()) considers just the files and directories, 
> not the referenced entity, when calculating REM_QUOTA. The referenced entity 
> is taken into account for the space quota only.
> InodeReference.java:
> ---
> {code:java}
>  @Override
> public final ContentSummaryComputationContext computeContentSummary(
> int snapshotId, ContentSummaryComputationContext summary) {
>   final int s = snapshotId < lastSnapshotId ? snapshotId : lastSnapshotId;
>   // only count storagespace for WithName
>   final QuotaCounts q = computeQuotaUsage(
>   summary.getBlockStoragePolicySuite(), getStoragePolicyID(), false, 
> s);
>   summary.getCounts().addContent(Content.DISKSPACE, q.getStorageSpace());
>   summary.getCounts().addTypeSpaces(q.getTypeSpaces());
>   return summary;
> }
> {code}
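The discrepancy can be shown with toy arithmetic: the quota checker counts the snapshot-retained inode reference, while getContentSummary() does not, so REM_QUOTA overstates what is actually available. A sketch with the numbers from the reproduction above (method names are illustrative, not HDFS APIs):

```java
public class QuotaCountSketch {
  // Namespace quota usage as the quota checker sees it: live directories,
  // live files, plus inode references retained by snapshot diffs.
  public static long quotaUsage(long dirs, long files, long snapshotRefs) {
    return dirs + files + snapshotRefs;
  }

  // What 'hdfs dfs -count -q' derives REM_QUOTA from: live entries only.
  public static long countedUsage(long dirs, long files) {
    return dirs + files;
  }

  public static void main(String[] args) {
    long quota = 2;
    // /dir1 after delete-to-trash: 1 dir, 0 files, 1 snapshot reference.
    System.out.println(quota - countedUsage(1, 0));   // REM_QUOTA shown: 1
    System.out.println(quota - quotaUsage(1, 0, 1));  // actually remaining: 0
  }
}
```

With the real remaining quota at 0, the subsequent touchz fails even though the count output promised one free slot.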






[jira] [Updated] (HDFS-14113) EC : Add Configuration to restrict UserDefined Policies

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14113:
---
Fix Version/s: 3.2.2
   3.1.4

> EC : Add Configuration to restrict UserDefined Policies
> ---
>
> Key: HDFS-14113
> URL: https://issues.apache.org/jira/browse/HDFS-14113
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14113-01.patch, HDFS-14113-02.patch, 
> HDFS-14113-03.patch
>
>
> By default, addition of erasure coding policies is enabled for users. We 
> need to add a configuration to control whether addition of new user-defined 
> policies is allowed, configurable as a Boolean value on the server side.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14262) [SBN read] Unclear Log.WARN message in GlobalStateIdContext

2019-10-03 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14262:
---
Fix Version/s: 3.2.2
   3.1.4
   2.10.0

Committed this to branches 3.2, 3.1, and 2.10 to ensure 
{{GlobalStateIdContext}} is up to date on all versions.

> [SBN read] Unclear Log.WARN message in GlobalStateIdContext
> ---
>
> Key: HDFS-14262
> URL: https://issues.apache.org/jira/browse/HDFS-14262
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14262.001.patch, HDFS-14262.002.patch
>
>
> The check clientStateId > serverStateId during active HA status might never 
> occur, and the log message is pretty unclear; should it throw an exception 
> instead?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14124) EC : Support EC Commands (set/get/unset EcPolicy) via WebHdfs

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14124:
---
Attachment: HDFS-14124.branch-3.1.patch

> EC : Support EC Commands (set/get/unset EcPolicy) via WebHdfs
> -
>
> Key: HDFS-14124
> URL: https://issues.apache.org/jira/browse/HDFS-14124
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, httpfs, webhdfs
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-14124-01.patch, HDFS-14124-02.patch, 
> HDFS-14124-03.patch, HDFS-14124-04.patch, HDFS-14124-04.patch, 
> HDFS-14124.branch-3.1.patch
>
>
> EC : Support EC Commands (set/get/unset EcPolicy) via WebHdfs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14124) EC : Support EC Commands (set/get/unset EcPolicy) via WebHdfs

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14124:
---
Fix Version/s: 3.1.4

> EC : Support EC Commands (set/get/unset EcPolicy) via WebHdfs
> -
>
> Key: HDFS-14124
> URL: https://issues.apache.org/jira/browse/HDFS-14124
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, httpfs, webhdfs
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-14124-01.patch, HDFS-14124-02.patch, 
> HDFS-14124-03.patch, HDFS-14124-04.patch, HDFS-14124-04.patch, 
> HDFS-14124.branch-3.1.patch
>
>
> EC : Support EC Commands (set/get/unset EcPolicy) via WebHdfs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14124) EC : Support EC Commands (set/get/unset EcPolicy) via WebHdfs

2019-10-03 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944129#comment-16944129
 ] 

Wei-Chiu Chuang commented on HDFS-14124:


Pushed to branch-3.1. There's just a trivial conflict in the doc. Attached  
[^HDFS-14124.branch-3.1.patch]  for posterity.

> EC : Support EC Commands (set/get/unset EcPolicy) via WebHdfs
> -
>
> Key: HDFS-14124
> URL: https://issues.apache.org/jira/browse/HDFS-14124
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, httpfs, webhdfs
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-14124-01.patch, HDFS-14124-02.patch, 
> HDFS-14124-03.patch, HDFS-14124-04.patch, HDFS-14124-04.patch, 
> HDFS-14124.branch-3.1.patch
>
>
> EC : Support EC Commands (set/get/unset EcPolicy) via WebHdfs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-10-03 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14660:
---
Fix Version/s: 3.2.2
   3.1.4
   2.10.0

Just committed this to branches 3.2, 3.1, and 2.10.

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch, HDFS-14660.003.patch, HDFS-14660.004.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using either {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.}} include all NameNodes in the cluster, and 
> therefore, it may send requests to an observer node.
> For this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set or not, and throw a {{StandbyException}} when it is not. 
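The proposed server-side check can be sketched as follows (simplified names; the sentinel value, the method name, and the use of a plain runtime exception in place of HDFS's StandbyException are all assumptions of this sketch, not the actual patch):

```java
// Simplified sketch -- not the actual HDFS patch. An observer rejects a call
// whose RPC header carries no client stateId, so a client that is not using
// ObserverReadProxyProvider fails over to another NameNode instead of
// silently reading possibly stale data.
public class ObserverRequestCheck {

    // Sentinel for "stateId not set in the RPC header" (an assumption here).
    static final long UNSET_STATE_ID = Long.MIN_VALUE;

    static void checkRequestState(long clientStateId) {
        if (clientStateId == UNSET_STATE_ID) {
            // The real code would throw StandbyException; IllegalStateException
            // keeps this sketch self-contained.
            throw new IllegalStateException(
                "Request without a stateId reached an Observer NameNode; "
                + "client is not using ObserverReadProxyProvider");
        }
    }

    public static void main(String[] args) {
        checkRequestState(42L);               // request carrying a stateId passes
        try {
            checkRequestState(UNSET_STATE_ID);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Throwing a standby-style exception is what triggers the client's failover logic, which is the desired behavior for proxies that do not understand observers.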



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14494) Move Server logging of StatedId inside receiveRequestState()

2019-10-03 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14494:
---
Fix Version/s: 3.2.2
   3.1.4
   2.10.0

Just committed this to branches 3.2, 3.1, and 2.10. This is important because 
otherwise the HDFS-14822 fix is incomplete.

> Move Server logging of StatedId inside receiveRequestState()
> 
>
> Key: HDFS-14494
> URL: https://issues.apache.org/jira/browse/HDFS-14494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Konstantin Shvachko
>Assignee: Shweta
>Priority: Major
>  Labels: newbie++
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14494.001.patch
>
>
> HDFS-14270 introduced logging of the client and server StateIds at trace 
> level. Unfortunately, one of the arguments, 
> {{alignmentContext.getLastSeenStateId()}}, holds a lock on FSEdits and is 
> evaluated even if the trace logging level is disabled. I propose to move the 
> logging message inside {{GlobalStateIdContext.receiveRequestState()}}, where 
> {{clientStateId}} and {{serverStateId}} are already calculated and can be 
> easily printed.
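The performance point can be modeled with a small sketch (simplified: the counter, flag, and method names below stand in for HDFS's logger guard and AlignmentContext, and are assumptions of this sketch):

```java
// Simplified model of the logging problem -- not the actual HDFS code.
// Java evaluates method arguments eagerly, so an expensive argument such as
// alignmentContext.getLastSeenStateId() runs (taking its lock) even when
// trace logging is disabled. Logging inside receiveRequestState(), where
// both ids already exist as locals, avoids that extra work entirely.
public class TraceLogSketch {

    static boolean traceEnabled = false;   // stands in for LOG.isTraceEnabled()
    static int expensiveCalls = 0;         // counts evaluations of the costly argument

    static long expensiveGetLastSeenStateId() {  // models the lock-taking call
        expensiveCalls++;
        return 7L;
    }

    // After the fix: both ids are already computed here, and the log is guarded.
    static long receiveRequestState(long clientStateId, long serverStateId) {
        if (traceEnabled) {
            System.out.println("Client stateId=" + clientStateId
                + " Server stateId=" + serverStateId);
        }
        return serverStateId;
    }

    public static void main(String[] args) {
        // Before the fix, a call site like the following would evaluate the
        // expensive argument unconditionally:
        //   LOG.trace("ids: {} {}", clientId, expensiveGetLastSeenStateId());
        receiveRequestState(5L, 7L);   // no expensive call, no lock taken
        System.out.println("expensive calls: " + expensiveCalls);
    }
}
```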



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944123#comment-16944123
 ] 

Hudson commented on HDDS-2200:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17458 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17458/])
HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly. 
(aengineer: rev b7cb8fe07c25f31caae89d6406be54c505343f3c)
* (edit) 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ReconContainerDBProvider.java
* (edit) 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/recovery/ReconOmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/TestReconUtils.java
* (edit) 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
* (edit) 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/OzoneManagerServiceProviderImpl.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/spi/impl/TestOzoneManagerServiceProviderImpl.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/spi/impl/TestReconContainerDBProvider.java
* (edit) 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/ReconUtils.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/recovery/TestReconOmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/AbstractOMMetadataManagerTest.java


> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=323070=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323070
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:41
Start Date: 03/Oct/19 23:41
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1528: 
HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r331295902
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
 ##
 @@ -112,11 +115,20 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 IOException exception = null;
 OmKeyInfo omKeyInfo = null;
 OMClientResponse omClientResponse = null;
+boolean bucketLockAcquired = false;
 
 OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
 try {
   // check Acl
-  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
+  // Native authorizer requires client id as part of keyname to check
+  // write ACL on key. Add client id to key name if ozone native
+  // authorizer is configured.
+  Configuration config = new OzoneConfiguration();
+  String keyNameForAclCheck = keyName;
+  if (OmUtils.isNativeAuthorizerEnabled(config)) {
 
 Review comment:
   Can you add a little explanation of why this special case is needed in code 
comments?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323070)
Time Spent: 5h  (was: 4h 50m)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete, and bucket create operations. Fix the ACL type in all requests to 
> the authorizer.
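The intended fix can be sketched as a per-operation mapping instead of a blanket WRITE (the enum names below only mirror Ozone's IAccessAuthorizer.ACLType style; this is an illustrative sketch, not Ozone code, and the exact mapping chosen by the patch is an assumption here):

```java
// Hedged sketch: map each operation to its own ACL right instead of
// sending WRITE for everything. Illustrative only -- not Ozone code.
public class AclTypeMapping {
    enum AclType { READ, WRITE, CREATE, DELETE }
    enum Op { KEY_CREATE, KEY_DELETE, BUCKET_CREATE, KEY_READ }

    static AclType aclFor(Op op) {
        switch (op) {
            case KEY_CREATE:
            case BUCKET_CREATE:
                return AclType.CREATE;  // previously sent as WRITE
            case KEY_DELETE:
                return AclType.DELETE;  // previously sent as WRITE
            default:
                return AclType.READ;
        }
    }

    public static void main(String[] args) {
        System.out.println(aclFor(Op.KEY_CREATE));
        System.out.println(aclFor(Op.KEY_DELETE));
    }
}
```

With a mapping like this, an authorizer can grant, say, CREATE without implicitly granting DELETE.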



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=323069=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323069
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:40
Start Date: 03/Oct/19 23:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1528: 
HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r331295902
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
 ##
 @@ -112,11 +115,20 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 IOException exception = null;
 OmKeyInfo omKeyInfo = null;
 OMClientResponse omClientResponse = null;
+boolean bucketLockAcquired = false;
 
 OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
 try {
   // check Acl
-  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
+  // Native authorizer requires client id as part of keyname to check
+  // write ACL on key. Add client id to key name if ozone native
+  // authorizer is configured.
+  Configuration config = new OzoneConfiguration();
+  String keyNameForAclCheck = keyName;
+  if (OmUtils.isNativeAuthorizerEnabled(config)) {
 
 Review comment:
   Can you add a little explanation of why this special case is needed in code 
comments?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323069)
Time Spent: 4h 50m  (was: 4h 40m)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete, and bucket create operations. Fix the ACL type in all requests to 
> the authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=323068=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323068
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:40
Start Date: 03/Oct/19 23:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1528: 
HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r331295902
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
 ##
 @@ -112,11 +115,20 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 IOException exception = null;
 OmKeyInfo omKeyInfo = null;
 OMClientResponse omClientResponse = null;
+boolean bucketLockAcquired = false;
 
 OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
 try {
   // check Acl
-  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
+  // Native authorizer requires client id as part of keyname to check
+  // write ACL on key. Add client id to key name if ozone native
+  // authorizer is configured.
+  Configuration config = new OzoneConfiguration();
+  String keyNameForAclCheck = keyName;
+  if (OmUtils.isNativeAuthorizerEnabled(config)) {
 
 Review comment:
   Can you add a little explanation of why this special case is needed?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323068)
Time Spent: 4h 40m  (was: 4.5h)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete, and bucket create operations. Fix the ACL type in all requests to 
> the authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=323056=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323056
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:37
Start Date: 03/Oct/19 23:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538169169
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 54 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 823 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 22 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 920 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 32 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 695 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 22 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2305 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 09b9504ba0c3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 76605f1 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 

[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=323055=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323055
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:35
Start Date: 03/Oct/19 23:35
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1528: 
HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r331294890
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
 ##
 @@ -112,11 +115,20 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 IOException exception = null;
 OmKeyInfo omKeyInfo = null;
 OMClientResponse omClientResponse = null;
+boolean bucketLockAcquired = false;
 
 OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
 try {
   // check Acl
-  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
+  // Native authorizer requires client id as part of keyname to check
+  // write ACL on key. Add client id to key name if ozone native
+  // authorizer is configured.
+  Configuration config = new OzoneConfiguration();
 
 Review comment:
Here we should not construct a config; shouldn't we pick the config from 
OzoneManager?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323055)
Time Spent: 4.5h  (was: 4h 20m)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete, and bucket create operations. Fix the ACL type in all requests to 
> the authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=323054=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323054
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:34
Start Date: 03/Oct/19 23:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538168696
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 38 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 12 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 845 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 944 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 40 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-hdds in the patch failed. |
   | -1 | compile | 17 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-hdds in the patch failed. |
   | -1 | javac | 17 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 31 | hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 709 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2342 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux dbf2530a1ece 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 76605f1 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 

[jira] [Resolved] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2020.

Fix Version/s: 0.4.1
   Resolution: Fixed

Committed to both 0.4.1 and trunk

> Remove mTLS from Ozone GRPC
> ---
>
> Key: HDDS-2020
> URL: https://issues.apache.org/jira/browse/HDDS-2020
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Generic GRPC supports mTLS for mutual authentication. However, Ozone has a 
> built-in block token mechanism for the server to authenticate the client. We 
> only need TLS for the client to authenticate the server and for wire encryption.
> Removing the mTLS support also simplifies the GRPC server/client configuration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1986) Fix listkeys API

2019-10-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-1986:


Assignee: Bharat Viswanadham

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we write the result to an in-memory cache and return the response; 
> later the double buffer thread picks it up and flushes it to disk. So 
> listKeys should now use both the in-memory cache and the RocksDB key table 
> to list the keys in a bucket.
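The merge described above can be sketched as follows. This is an illustrative stand-in, not the actual OM code: `keyTable` plays the role of the RocksDB key table, and a `null` cache value marks a key that was deleted but not yet flushed by the double buffer.

```java
import java.util.*;

public class ListKeysSketch {
  /** Merge on-disk table entries with un-flushed in-memory cache entries.
   *  A null cache value marks a delete that has not reached disk yet. */
  static List<String> listKeys(SortedMap<String, String> keyTable,
                               Map<String, String> cache, String prefix) {
    TreeMap<String, String> merged = new TreeMap<>();
    keyTable.forEach((k, v) -> { if (k.startsWith(prefix)) merged.put(k, v); });
    // Cache entries override the table: new keys appear, deleted keys vanish.
    cache.forEach((k, v) -> {
      if (!k.startsWith(prefix)) return;
      if (v == null) merged.remove(k); else merged.put(k, v);
    });
    return new ArrayList<>(merged.keySet());
  }

  public static void main(String[] args) {
    TreeMap<String, String> table = new TreeMap<>();
    table.put("/vol/buck/a", "keyInfoA");
    table.put("/vol/buck/b", "keyInfoB");
    Map<String, String> cache = new HashMap<>();
    cache.put("/vol/buck/c", "keyInfoC"); // created, not yet flushed
    cache.put("/vol/buck/a", null);       // deleted, not yet flushed
    System.out.println(listKeys(table, cache, "/vol/buck/"));
    // prints [/vol/buck/b, /vol/buck/c]
  }
}
```

Using a sorted merge keeps the listing order identical whether a key came from disk or from the cache.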






[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=323053=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323053
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:31
Start Date: 03/Oct/19 23:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-538168043
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 87 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 48 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 952 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 17 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1041 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 18 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | -1 | mvninstall | 32 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 53 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 796 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2535 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1528 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 683c3789963a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 76605f1 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 

[jira] [Work logged] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=323048=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323048
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:27
Start Date: 03/Oct/19 23:27
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1536: HDDS-2164 : 
om.db.checkpoints is getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536#issuecomment-538167213
 
 
   Can you please check if the rat failures are real? thx
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323048)
Time Spent: 1h 20m  (was: 1h 10m)

> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast, we should also clean this up.






[jira] [Resolved] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2200.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk. Thanks for the contribution.

> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}
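A minimal sketch of the kind of guard involved — class and method names here are hypothetical, not the actual Recon code: the sync path should bail out when OM hands back a null snapshot instead of letting reprocess() dereference it.

```java
public class SnapshotSyncSketch {
  /** Guard against a null snapshot location instead of passing it on
   *  to task reinitialization, where it previously caused the NPE. */
  static String syncDataFromOM(String snapshotDbLocation) {
    if (snapshotDbLocation == null) {
      return "skip reprocess: null snapshot location from OM";
    }
    return "reprocess tasks against " + snapshotDbLocation;
  }

  public static void main(String[] args) {
    System.out.println(syncDataFromOM(null));
    System.out.println(syncDataFromOM("/data/recon/om.snapshot.db"));
  }
}
```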






[jira] [Work logged] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?focusedWorklogId=323045=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323045
 ]

ASF GitHub Bot logged work on HDDS-2200:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:24
Start Date: 03/Oct/19 23:24
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1577: HDDS-2200 : Recon 
does not handle the NULL snapshot from OM DB cleanly.
URL: https://github.com/apache/hadoop/pull/1577#issuecomment-538166613
 
 
   Thank you for the contribution. @vivekratnavel  Thanks for the review.
 



Issue Time Tracking
---

Worklog Id: (was: 323045)
Time Spent: 1h 10m  (was: 1h)

> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}






[jira] [Work logged] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?focusedWorklogId=323046=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323046
 ]

ASF GitHub Bot logged work on HDDS-2200:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:24
Start Date: 03/Oct/19 23:24
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1577: HDDS-2200 
: Recon does not handle the NULL snapshot from OM DB cleanly.
URL: https://github.com/apache/hadoop/pull/1577
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 323046)
Time Spent: 1h 20m  (was: 1h 10m)

> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}






[jira] [Updated] (HDFS-14187) Make warning message more clear when there are not enough data nodes for EC write

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14187:
---
Fix Version/s: 3.2.2
   3.1.4

> Make warning message more clear when there are not enough data nodes for EC 
> write
> -
>
> Key: HDFS-14187
> URL: https://issues.apache.org/jira/browse/HDFS-14187
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.1.1
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14187.001.patch
>
>
> When setting an erasure coding policy for which there are not enough racks or 
> data nodes, a write will fail with the following message:
> {code:java}
> [root@oks-upgrade6727-1 ~]# sudo -u systest hdfs dfs -mkdir 
> /user/systest/testdir
> [root@oks-upgrade6727-1 ~]# sudo -u hdfs hdfs ec -setPolicy -path 
> /user/systest/testdir
> Set default erasure coding policy on /user/systest/testdir
> [root@oks-upgrade6727-1 ~]# sudo -u systest hdfs dfs -put /tmp/file1 
> /user/systest/testdir
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=3, policy=RS-3-2-1024k). Not enough datanodes? Exclude nodes=[]
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=4, policy=RS-3-2-1024k). Not enough datanodes? Exclude nodes=[]
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Block group <1> failed to write 
> 2 blocks. It's at high risk of losing data.
> {code}
> I suggest logging a more descriptive message that recommends running the hdfs ec 
> -verifyCluster command to verify the cluster setup against the EC policies.
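For context, an RS-d-p policy needs at least d + p datanodes to place every block of a group, so RS-3-2 needs 5. A toy check in the spirit of the proposed verification (the method name is illustrative, not the actual hdfs ec -verifyCluster implementation):

```java
public class EcPolicyCheck {
  /** RS-d-p writes d data + p parity blocks per group, each on its own
   *  datanode, so at least d + p live datanodes are required. */
  static boolean enoughDatanodes(int dataBlocks, int parityBlocks, int liveDatanodes) {
    return liveDatanodes >= dataBlocks + parityBlocks;
  }

  public static void main(String[] args) {
    // RS-3-2-1024k from the warning above: 3 data + 2 parity blocks.
    System.out.println(enoughDatanodes(3, 2, 4)); // false: parity allocation fails
    System.out.println(enoughDatanodes(3, 2, 6)); // true
  }
}
```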






[jira] [Commented] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy

2019-10-03 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944111#comment-16944111
 ] 

Wei-Chiu Chuang commented on HDFS-14064:


Pushed to branch-3.1. There's just a trivial conflict. Attached patch for 
posterity.  [^HDFS-14064.branch-3.1.patch] 

> WEBHDFS: Support Enable/Disable EC Policy
> -
>
> Key: HDFS-14064
> URL: https://issues.apache.org/jira/browse/HDFS-14064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, webhdfs
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-14064-01.patch, HDFS-14064-02.patch, 
> HDFS-14064-03.patch, HDFS-14064-04.patch, HDFS-14064-04.patch, 
> HDFS-14064-05.patch, HDFS-14064.branch-3.1.patch
>
>







[jira] [Updated] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14064:
---
Fix Version/s: 3.1.4

> WEBHDFS: Support Enable/Disable EC Policy
> -
>
> Key: HDFS-14064
> URL: https://issues.apache.org/jira/browse/HDFS-14064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, webhdfs
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-14064-01.patch, HDFS-14064-02.patch, 
> HDFS-14064-03.patch, HDFS-14064-04.patch, HDFS-14064-04.patch, 
> HDFS-14064-05.patch, HDFS-14064.branch-3.1.patch
>
>







[jira] [Updated] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14064:
---
Attachment: HDFS-14064.branch-3.1.patch

> WEBHDFS: Support Enable/Disable EC Policy
> -
>
> Key: HDFS-14064
> URL: https://issues.apache.org/jira/browse/HDFS-14064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, webhdfs
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-14064-01.patch, HDFS-14064-02.patch, 
> HDFS-14064-03.patch, HDFS-14064-04.patch, HDFS-14064-04.patch, 
> HDFS-14064-05.patch, HDFS-14064.branch-3.1.patch
>
>







[jira] [Updated] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14064:
---
Component/s: webhdfs
 erasure-coding

> WEBHDFS: Support Enable/Disable EC Policy
> -
>
> Key: HDFS-14064
> URL: https://issues.apache.org/jira/browse/HDFS-14064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, webhdfs
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-14064-01.patch, HDFS-14064-02.patch, 
> HDFS-14064-03.patch, HDFS-14064-04.patch, HDFS-14064-04.patch, 
> HDFS-14064-05.patch
>
>







[jira] [Work logged] (HDDS-2223) Support ReadWrite lock in LockManager

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2223?focusedWorklogId=323020=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323020
 ]

ASF GitHub Bot logged work on HDDS-2223:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:02
Start Date: 03/Oct/19 23:02
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1564: HDDS-2223. 
Support ReadWrite lock in LockManager.
URL: https://github.com/apache/hadoop/pull/1564#issuecomment-538161681
 
 
   I am fine with the changes. One minor comment: add some comments in the 
code explaining the locking order and the reason for the acquire/release sequence.
 



Issue Time Tracking
---

Worklog Id: (was: 323020)
Time Spent: 3h 10m  (was: 3h)

> Support ReadWrite lock in LockManager
> -
>
> Key: HDDS-2223
> URL: https://issues.apache.org/jira/browse/HDDS-2223
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Currently {{LockManager}} uses an exclusive lock; instead we should support a 
> {{ReadWrite}} lock.
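A minimal sketch of what a per-resource read/write LockManager looks like — illustrative names, not the actual org.apache.hadoop.ozone.lock.LockManager API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockManager<R> {
  // One ReadWriteLock per resource, created lazily and shared by all callers.
  private final Map<R, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

  private ReentrantReadWriteLock lockFor(R resource) {
    return locks.computeIfAbsent(resource, r -> new ReentrantReadWriteLock());
  }

  public void acquireReadLock(R resource)  { lockFor(resource).readLock().lock(); }
  public void releaseReadLock(R resource)  { lockFor(resource).readLock().unlock(); }
  public void acquireWriteLock(R resource) { lockFor(resource).writeLock().lock(); }
  public void releaseWriteLock(R resource) { lockFor(resource).writeLock().unlock(); }
  public boolean tryAcquireWriteLock(R resource) {
    return lockFor(resource).writeLock().tryLock();
  }

  public static void main(String[] args) {
    RwLockManager<String> lm = new RwLockManager<>();
    lm.acquireReadLock("/vol1");
    lm.acquireReadLock("/vol1"); // read lock is shared: second reader does not block
    System.out.println("write while reads held: " + lm.tryAcquireWriteLock("/vol1"));
    lm.releaseReadLock("/vol1");
    lm.releaseReadLock("/vol1");
    System.out.println("write after release: " + lm.tryAcquireWriteLock("/vol1"));
    lm.releaseWriteLock("/vol1");
  }
}
```

Note that ReentrantReadWriteLock is reentrant per thread but does not allow upgrading a held read lock to a write lock, which is why the first try above fails while the read lock is held.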






[jira] [Work logged] (HDDS-2223) Support ReadWrite lock in LockManager

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2223?focusedWorklogId=323019=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323019
 ]

ASF GitHub Bot logged work on HDDS-2223:


Author: ASF GitHub Bot
Created on: 03/Oct/19 23:02
Start Date: 03/Oct/19 23:02
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1564: HDDS-2223. 
Support ReadWrite lock in LockManager.
URL: https://github.com/apache/hadoop/pull/1564#issuecomment-538161681
 
 
   I am fine with the changes. One minor comment: add some comments in the 
code explaining the locking order.
 



Issue Time Tracking
---

Worklog Id: (was: 323019)
Time Spent: 3h  (was: 2h 50m)

> Support ReadWrite lock in LockManager
> -
>
> Key: HDDS-2223
> URL: https://issues.apache.org/jira/browse/HDDS-2223
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Currently {{LockManager}} uses an exclusive lock; instead we should support a 
> {{ReadWrite}} lock.






[jira] [Commented] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-10-03 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944096#comment-16944096
 ] 

Wei-Chiu Chuang commented on HDFS-14849:


Cherry-picked the commit to branch-3.2 without conflicts.
There is a trivial conflict on branch-3.1, so I attached a patch 
[^HDFS-14849.branch-3.1.patch] for posterity.

> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> HDFS-14849.branch-3.1.patch, fsck-file.png, liveBlockIndices.png, 
> scheduleReconstruction.png
>
>
> While the datanode remains in DECOMMISSION_INPROGRESS status, the EC internal 
> blocks on that datanode are replicated many times.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Updated] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14849:
---
Fix Version/s: 3.1.4

> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> HDFS-14849.branch-3.1.patch, fsck-file.png, liveBlockIndices.png, 
> scheduleReconstruction.png
>
>
> While a datanode remains in DECOMMISSION_INPROGRESS status, the EC internal 
> blocks on that datanode are replicated many times.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=323018=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323018
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 03/Oct/19 22:55
Start Date: 03/Oct/19 22:55
Worklog Time Spent: 10m 
  Work Description: smengcl commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538160179
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323018)
Time Spent: 20m  (was: 10m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: writes are put into the cache and the 
> response is returned immediately; later, the double-buffer thread picks them 
> up and flushes them to disk. So listKeys should now consult both the 
> in-memory cache and the RocksDB key table to list the keys in a bucket.
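The merge described in that description can be sketched as follows. This is not the actual OM code: it models the key table as a sorted map and the un-flushed cache as a map in which a null value stands for a delete that has not yet reached disk. Cache entries override the persisted view.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Illustrative sketch: list keys in a bucket by overlaying the un-flushed
// in-memory cache on top of the on-disk RocksDB key table. A null cache
// value models a delete that the double-buffer thread has not flushed yet.
public class ListKeysSketch {
    public static List<String> listKeys(NavigableMap<String, String> keyTable,
                                        Map<String, String> cache,
                                        String prefix) {
        // Start from the persisted view, then apply cached mutations on top.
        TreeMap<String, String> merged = new TreeMap<>(keyTable);
        for (Map.Entry<String, String> e : cache.entrySet()) {
            if (e.getValue() == null) {
                merged.remove(e.getKey());            // pending delete
            } else {
                merged.put(e.getKey(), e.getValue()); // pending create/update
            }
        }
        List<String> result = new ArrayList<>();
        for (String k : merged.navigableKeySet()) {
            if (k.startsWith(prefix)) {
                result.add(k);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        NavigableMap<String, String> table = new TreeMap<>();
        table.put("/vol/bucket/a", "v1");
        table.put("/vol/bucket/b", "v1");
        Map<String, String> cache = new HashMap<>();
        cache.put("/vol/bucket/c", "v2"); // created, not yet flushed
        cache.put("/vol/bucket/b", null); // deleted, not yet flushed
        System.out.println(listKeys(table, cache, "/vol/bucket/"));
        // prints [/vol/bucket/a, /vol/bucket/c]
    }
}
```

Listing from the table alone would miss the just-created key `c` and wrongly return the just-deleted key `b`, which is why both sources have to be consulted.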






[jira] [Updated] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-10-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14849:
---
Attachment: HDFS-14849.branch-3.1.patch

> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> HDFS-14849.branch-3.1.patch, fsck-file.png, liveBlockIndices.png, 
> scheduleReconstruction.png
>
>
> While a datanode remains in DECOMMISSION_INPROGRESS status, the EC internal 
> blocks on that datanode are replicated many times.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Updated] (HDDS-1986) Fix listkeys API

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1986:
-
Labels: pull-request-available  (was: )

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: writes are put into the cache and the 
> response is returned immediately; later, the double-buffer thread picks them 
> up and flushes them to disk. So listKeys should now consult both the 
> in-memory cache and the RocksDB key table to list the keys in a bucket.






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=323015=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323015
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 03/Oct/19 22:54
Start Date: 03/Oct/19 22:54
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1588: 
HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588
 
 
   Implement listKeys API.
   
 



Issue Time Tracking
---

Worklog Id: (was: 323015)
Remaining Estimate: 0h
Time Spent: 10m

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, we have an in-memory cache: writes are put into the cache and the 
> response is returned immediately; later, the double-buffer thread picks them 
> up and flushes them to disk. So listKeys should now consult both the 
> in-memory cache and the RocksDB key table to list the keys in a bucket.






[jira] [Commented] (HDFS-14856) Add ability to import file ACLs from remote store

2019-10-03 Thread Ashvin Agrawal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944090#comment-16944090
 ] 

Ashvin Agrawal commented on HDFS-14856:
---

Thanks [~virajith] [~elgoiri] for the review and suggestions !

> Add ability to import file ACLs from remote store
> -
>
> Key: HDFS-14856
> URL: https://issues.apache.org/jira/browse/HDFS-14856
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ashvin Agrawal
>Assignee: Ashvin Agrawal
>Priority: Major
>
> Provided storage (HDFS-9806) allows data on external storage systems to 
> seamlessly appear as files on HDFS. However, in the implementation today, the 
> external store scanner, {{FsTreeWalk}}, ignores any ACLs on the data. In a 
> secure HDFS setup where the external storage system and HDFS belong to the same 
> security domain, uniform enforcement of the authorization policies may be 
> desired. This task aims to extend the ability of the external store scanner 
> to support this use case. When configured, the scanner should attempt to 
> fetch ACLs and provide them to the consumer.






[jira] [Commented] (HDDS-1720) Add ability to configure RocksDB logs for Ozone Manager

2019-10-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944091#comment-16944091
 ] 

Hudson commented on HDDS-1720:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17457 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17457/])
HDDS-1720 : Add ability to configure RocksDB logs for Ozone Manager. 
(aengineer: rev 76605f17dd15a48bc40c1b2fe6c8d0c2f4631959)
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/utils/db/TestDBStoreBuilder.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/RocksDBConfiguration.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/DBStoreBuilder.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRocksDBLogging.java


> Add ability to configure RocksDB logs for Ozone Manager
> ---
>
> Key: HDDS-1720
> URL: https://issues.apache.org/jira/browse/HDDS-1720
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> While doing performance testing, we found that there was no way to get 
> RocksDB logs for Ozone Manager. Along with RocksDB metrics, this may be a 
> useful mechanism for understanding the health of RocksDB when investigating 
> large clusters. 






[jira] [Updated] (HDDS-1720) Add ability to configure RocksDB logs for Ozone Manager

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1720:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk.

> Add ability to configure RocksDB logs for Ozone Manager
> ---
>
> Key: HDDS-1720
> URL: https://issues.apache.org/jira/browse/HDDS-1720
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> While doing performance testing, we found that there was no way to get 
> RocksDB logs for Ozone Manager. Along with RocksDB metrics, this may be a 
> useful mechanism for understanding the health of RocksDB when investigating 
> large clusters. 





