[GitHub] [hadoop] hadoop-yetus commented on issue #1055: HDDS-1705. Recon: Add estimatedTotalCount to the response of ...

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #1055: HDDS-1705. Recon: Add 
estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#issuecomment-509489089
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 480 | trunk passed |
   | +1 | compile | 260 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 859 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 311 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 505 | trunk passed |
   | -0 | patch | 360 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for patch |
   | +1 | mvninstall | 441 | the patch passed |
   | +1 | compile | 262 | the patch passed |
   | +1 | javac | 262 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 660 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 90 | hadoop-ozone generated 3 new + 9 unchanged - 0 fixed = 
12 total (was 9) |
   | +1 | findbugs | 522 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 238 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1490 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6505 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1055 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 5c04d74ad9f5 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b5d30e4 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/3/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/3/testReport/ |
   | Max. process+thread count | 5359 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/ozone-recon-codegen U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16417) abfs can't access storage account without password

2019-07-08 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880929#comment-16880929
 ] 

Masatake Iwasaki commented on HADOOP-16417:
---

[~jlpedrosa] please use the user mailing list rather than filing a JIRA when you 
have a question about a configuration problem.

The stack trace shows that the shared key was not provided via the configuration 
property {{fs.azure.account.key.YOUR_ACCOUNT_NAME.dfs.core.windows.net}}.

While the latest release (3.2.0) does not ship enough ABFS documentation, you can 
read the trunk version:
 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md]
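
Concretely, shared-key authentication for ABFS is enabled by setting that property. 
A minimal core-site.xml sketch follows; YOUR_ACCOUNT_NAME and YOUR_BASE64_ACCOUNT_KEY 
are placeholders to substitute, and the same property can equally be set through the 
Hadoop/Spark configuration API at runtime:

```xml
<!-- core-site.xml: shared-key auth for ABFS.
     YOUR_ACCOUNT_NAME and YOUR_BASE64_ACCOUNT_KEY are placeholders. -->
<configuration>
  <property>
    <name>fs.azure.account.key.YOUR_ACCOUNT_NAME.dfs.core.windows.net</name>
    <value>YOUR_BASE64_ACCOUNT_KEY</value>
  </property>
</configuration>
```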

> abfs can't access storage account without password
> --
>
> Key: HADOOP-16417
> URL: https://issues.apache.org/jira/browse/HADOOP-16417
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Jose Luis Pedrosa
>Priority: Minor
>
> It does not seem possible to access storage accounts without passwords using 
> abfs, though it is possible using wasb.
>  
> The following Spark-based sample code illustrates the issue: the code, when 
> using the abfs_path, throws the exception below,
> {noformat}
> Exception in thread "main" java.lang.IllegalArgumentException: Invalid 
> account key.
> at 
> org.apache.hadoop.fs.azurebfs.services.SharedKeyCredentials.<init>(SharedKeyCredentials.java:70)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:812)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:149)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:108)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> {noformat}
> Meanwhile, using the wasbs_path works normally:
> {code:java}
> import org.apache.spark.api.java.function.FilterFunction;
> import org.apache.spark.sql.RuntimeConfig;
> import org.apache.spark.sql.SparkSession;
> import org.apache.spark.sql.Dataset;
> import org.apache.spark.sql.Row;
>
> public class SimpleApp {
>     static String blob_account_name = "azureopendatastorage";
>     static String blob_container_name = "gfsweatherdatacontainer";
>     static String blob_relative_path = "GFSWeather/GFSProcessed";
>     static String blob_sas_token = "";
>     static String abfs_path = "abfs://" + blob_container_name + "@" + blob_account_name
>             + ".dfs.core.windows.net/" + blob_relative_path;
>     static String wasbs_path = "wasbs://" + blob_container_name + "@" + blob_account_name
>             + ".blob.core.windows.net/" + blob_relative_path;
>
>     public static void main(String[] args) {
>         SparkSession spark = SparkSession.builder().appName("NOAAGFS Run").getOrCreate();
>         configureAzureHadoopConnector(spark);
>
>         RuntimeConfig conf = spark.conf();
>         conf.set("fs.azure.account.key." + blob_account_name + ".dfs.core.windows.net", blob_sas_token);
>         conf.set("fs.azure.account.key." + blob_account_name + ".blob.core.windows.net", blob_sas_token);
>
>         System.out.println("Creating parquet dataset");
>         Dataset<Row> logData = spark.read().parquet(abfs_path);
>
>         System.out.println("Creating temp view");
>         logData.createOrReplaceTempView("source");
>
>         System.out.println("SQL");
>         spark.sql("SELECT * FROM source LIMIT 10").show();
>         spark.stop();
>     }
>
>     public static void configureAzureHadoopConnector(SparkSession session) {
>         RuntimeConfig conf = session.conf();
>         conf.set("fs.AbstractFileSystem.wasb.impl", "org.apache.hadoop.fs.azure.Wasb");
>         conf.set("fs.AbstractFileSystem.wasbs.impl", "org.apache.hadoop.fs.azure.Wasbs");
>         conf.set("fs.wasb.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
>         conf.set("fs.wasbs.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure");
>         conf.set("fs.azure.secure.mode", false);
>         conf.set("fs.abfs.impl", "org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem");
>         conf.set("fs.abfss.impl", "org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem");
>         conf.set("fs.AbstractFileSystem.abfs.impl", "org.apache.hadoop.fs.azurebfs.Abfs");
>         conf.set("fs.AbstractFileSystem.abfss.impl", "org.apache.hadoop.fs.azurebfs.Abfss");
> // Works in 

[GitHub] [hadoop] mukul1987 commented on issue #1062: HDDS-1718. Increase Ratis Leader election timeout default.

2019-07-08 Thread GitBox
mukul1987 commented on issue #1062: HDDS-1718. Increase Ratis Leader election 
timeout default.
URL: https://github.com/apache/hadoop/pull/1062#issuecomment-509478840
 
 
   Thanks for updating the patch @swagle. +1, the patch looks good to me.





[GitHub] [hadoop] arp7 merged pull request #1055: HDDS-1705. Recon: Add estimatedTotalCount to the response of ...

2019-07-08 Thread GitBox
arp7 merged pull request #1055: HDDS-1705. Recon: Add estimatedTotalCount to 
the response of ...
URL: https://github.com/apache/hadoop/pull/1055
 
 
   





[GitHub] [hadoop] arp7 commented on issue #1055: HDDS-1705. Recon: Add estimatedTotalCount to the response of ...

2019-07-08 Thread GitBox
arp7 commented on issue #1055: HDDS-1705. Recon: Add estimatedTotalCount to the 
response of ...
URL: https://github.com/apache/hadoop/pull/1055#issuecomment-509478699
 
 
   +1





[GitHub] [hadoop] hadoop-yetus commented on issue #973: HDDS-1611. Evaluate ACL on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar.

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #973: HDDS-1611. Evaluate ACL on volume bucket 
key and prefix to authorize access. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/973#issuecomment-509477725
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 122 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 83 | Maven dependency ordering for branch |
   | +1 | mvninstall | 565 | trunk passed |
   | +1 | compile | 278 | trunk passed |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 887 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 173 | trunk passed |
   | 0 | spotbugs | 342 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 550 | trunk passed |
   | -0 | patch | 401 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for patch |
   | +1 | mvninstall | 476 | the patch passed |
   | +1 | compile | 305 | the patch passed |
   | +1 | cc | 305 | the patch passed |
   | +1 | javac | 305 | the patch passed |
   | +1 | checkstyle | 86 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | -1 | whitespace | 0 | The patch has 14 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 788 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   | +1 | findbugs | 630 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 310 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1943 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 7846 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/973 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc shellcheck shelldocs |
   | uname | Linux 51bb496d47e6 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 738c093 |
   | Default Java | 1.8.0_212 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/14/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/14/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/14/testReport/ |
   | Max. process+thread count | 5327 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/dist hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozonefs hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/14/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
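
The whitespace fix suggested by the report above can be exercised end-to-end. 
A minimal sketch using a throwaway repository; all file and directory names here 
are illustrative, not taken from the patch under review:

```shell
# Create a patch that introduces trailing whitespace, then apply it with
# --whitespace=fix so the trailing blanks are stripped on application.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.email ci@example.com
git config user.name ci
printf 'hello\n' > a.txt
git add a.txt
git commit -qm 'initial commit'
printf 'hello world   \n' > a.txt        # edit that adds trailing whitespace
git diff > ../patch.diff                 # capture the offending diff
git checkout -- a.txt                    # reset to the committed state
git apply --whitespace=fix ../patch.diff # apply, trimming trailing whitespace
cat a.txt
```

After this, a.txt contains "hello world" with the trailing blanks removed, which is 
why Yetus recommends the flag instead of hand-editing each flagged line.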
   
   



[GitHub] [hadoop] vivekratnavel commented on issue #1064: HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-08 Thread GitBox
vivekratnavel commented on issue #1064: HDDS-1585. Add LICENSE.txt and 
NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#issuecomment-509475535
 
 
   @anuengineer @elek @swagle Please review when you find time





[GitHub] [hadoop] vivekratnavel commented on issue #1064: HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-08 Thread GitBox
vivekratnavel commented on issue #1064: HDDS-1585. Add LICENSE.txt and 
NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064#issuecomment-509475459
 
 
   /label ozone





[GitHub] [hadoop] vivekratnavel opened a new pull request #1064: HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-08 Thread GitBox
vivekratnavel opened a new pull request #1064: HDDS-1585. Add LICENSE.txt and 
NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064
 
 
   This PR adds all copyright notices and licenses of third-party dependencies 
of Recon Web to the LICENSE file.





[GitHub] [hadoop] hadoop-yetus commented on issue #973: HDDS-1611. Evaluate ACL on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar.

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #973: HDDS-1611. Evaluate ACL on volume bucket 
key and prefix to authorize access. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/973#issuecomment-509472779
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 460 | trunk passed |
   | +1 | compile | 257 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 745 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | trunk passed |
   | 0 | spotbugs | 314 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 503 | trunk passed |
   | -0 | patch | 363 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 39 | Maven dependency ordering for patch |
   | +1 | mvninstall | 448 | the patch passed |
   | +1 | compile | 279 | the patch passed |
   | +1 | cc | 279 | the patch passed |
   | +1 | javac | 279 | the patch passed |
   | +1 | checkstyle | 81 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | -1 | whitespace | 0 | The patch has 14 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 658 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | the patch passed |
   | +1 | findbugs | 588 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 250 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1027 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 6093 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/973 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc shellcheck shelldocs |
   | uname | Linux 1f9db5efca93 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 738c093 |
   | Default Java | 1.8.0_212 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/13/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/13/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/13/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/13/testReport/ |
   | Max. process+thread count | 4649 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/dist hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozonefs hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/13/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] anuengineer merged pull request #1050: HDDS-1550. MiniOzoneCluster is not shutting down all the threads during shutdown. Contributed by Mukul Kumar Singh.

2019-07-08 Thread GitBox
anuengineer merged pull request #1050: HDDS-1550. MiniOzoneCluster is not 
shutting down all the threads during shutdown. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1050
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1063: HDDS-1775. Make OM KeyDeletingService compatible with HA model

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #1063: HDDS-1775. Make OM KeyDeletingService 
compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#issuecomment-509471795
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 477 | trunk passed |
   | +1 | compile | 244 | trunk passed |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 790 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | trunk passed |
   | 0 | spotbugs | 312 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 495 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for patch |
   | +1 | mvninstall | 429 | the patch passed |
   | +1 | compile | 249 | the patch passed |
   | +1 | cc | 249 | the patch passed |
   | +1 | javac | 249 | the patch passed |
   | -0 | checkstyle | 36 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 638 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | the patch passed |
   | +1 | findbugs | 518 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 241 | hadoop-hdds in the patch passed. |
   | -1 | unit | 162 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 4919 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   |   | hadoop.ozone.om.TestKeyDeletingService |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1063 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 41ac9836c91f 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 738c093 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/1/testReport/ |
   | Max. process+thread count | 1319 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1055: HDDS-1705. Recon: Add estimatedTotalCount to the response of ...

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #1055: HDDS-1705. Recon: Add 
estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#issuecomment-509470032
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 483 | trunk passed |
   | +1 | compile | 260 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 818 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 312 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 504 | trunk passed |
   | -0 | patch | 346 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 426 | the patch passed |
   | +1 | compile | 243 | the patch passed |
   | +1 | javac | 243 | the patch passed |
   | +1 | checkstyle | 66 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 621 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 82 | hadoop-ozone generated 3 new + 9 unchanged - 0 fixed = 
12 total (was 9) |
   | +1 | findbugs | 522 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 239 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1238 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 6126 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1055 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 817035eb9c67 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 738c093 |
   | Default Java | 1.8.0_212 |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/2/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/2/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/2/artifact/out/patch-unit-hadoop-ozone.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/2/testReport/ |
   | Max. process+thread count | 5318 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon hadoop-ozone/ozone-recon-codegen U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1055/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] mukul1987 merged pull request #1047: HDDS-1750. Add block allocation metrics for pipelines in SCM

2019-07-08 Thread GitBox
mukul1987 merged pull request #1047: HDDS-1750. Add block allocation metrics 
for pipelines in SCM
URL: https://github.com/apache/hadoop/pull/1047
 
 
   





[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-07-08 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r301356812
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 ##
 @@ -878,6 +878,22 @@
   public static final String DFS_IMAGE_TRANSFER_CHUNKSIZE_KEY = 
"dfs.image.transfer.chunksize";
   public static final int DFS_IMAGE_TRANSFER_CHUNKSIZE_DEFAULT = 64 * 1024;
 
+  public static final String DFS_IMAGE_PARALLEL_LOAD_KEY =
+  "dfs.image.parallel.load";
+  public static final boolean DFS_IMAGE_PARALLEL_LOAD_DEFAULT = true;
+
+  public static final String DFS_IMAGE_PARALLEL_TARGET_SECTIONS_KEY =
+  "dfs.image.parallel.target.sections";
+  public static final int DFS_IMAGE_PARALLEL_TARGET_SECTIONS_DEFAULT = 12;
+
+  public static final String DFS_IMAGE_PARALLEL_INODE_THRESHOLD_KEY =
+  "dfs.image.parallel.inode.threshold";
+  public static final int DFS_IMAGE_PARALLEL_INODE_THRESHOLD_DEFAULT = 100;
+
+  public static final String DFS_IMAGE_PARALLEL_THREADS_KEY =
+  "dfs.image.parallel.threads";
+  public static final int DFS_IMAGE_PARALLEL_THREADS_DEFAULT = 4;
+
 
 Review comment:
   IIUC, the thread count should not be greater than the number of target 
sections; otherwise the remaining threads will never be used, or other issues 
may arise. So, is it necessary to warn about this configuration limit?
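   To make the suggestion concrete, here is a minimal sketch of such a safeguard. The helper name and the clamping behavior are illustrative assumptions, not the actual HDFS-14617 code: it warns and clamps the thread count when it exceeds the target section count, since surplus threads would never be assigned a sub-section.

```java
public class ParallelLoadConfigCheck {

    // Hypothetical helper: clamp the configured thread count to the number of
    // target sections and warn, because surplus threads would sit idle.
    static int effectiveThreads(int configuredThreads, int targetSections) {
        if (configuredThreads > targetSections) {
            System.err.println("dfs.image.parallel.threads ("
                + configuredThreads + ") is greater than "
                + "dfs.image.parallel.target.sections (" + targetSections
                + "); clamping to " + targetSections);
            return targetSections;
        }
        return configuredThreads;
    }

    public static void main(String[] args) {
        System.out.println(effectiveThreads(4, 12));   // within limit -> 4
        System.out.println(effectiveThreads(16, 12));  // clamped -> 12
    }
}
```

   Whether to clamp silently, warn, or fail fast is a design choice the patch author would have to make.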





[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-07-08 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r301366228
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
 ##
 @@ -187,6 +195,73 @@ void load(File file) throws IOException {
   }
 }
 
+/**
+ * Given a FSImage FileSummary.section, return a LimitInput stream set to
+ * the starting position of the section and limited to the section length
+ * @param section The FileSummary.Section containing the offset and length
+ * @param compressionCodec The compression codec in use, if any
 
 Review comment:
   Is an annotation missing here?





[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-07-08 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r301354771
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
 ##
 @@ -462,6 +619,60 @@ long save(File file, FSImageCompression compression) 
throws IOException {
   }
 }
 
+private void enableSubSectionsIfRequired() {
+  boolean parallelEnabled = conf.getBoolean(
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_KEY,
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_DEFAULT);
+  int inodeThreshold = conf.getInt(
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_INODE_THRESHOLD_KEY,
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_INODE_THRESHOLD_DEFAULT);
+  int targetSections = conf.getInt(
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_KEY,
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_DEFAULT);
+  boolean compressionEnabled = conf.getBoolean(
+  DFSConfigKeys.DFS_IMAGE_COMPRESS_KEY,
+  DFSConfigKeys.DFS_IMAGE_COMPRESS_DEFAULT);
+
+
+  if (parallelEnabled) {
+if (compressionEnabled) {
+  LOG.warn("Parallel Image loading is not supported when {} is set to" 
+
+  " true. Parallel loading will be disabled.",
+  DFSConfigKeys.DFS_IMAGE_COMPRESS_KEY);
+  WRITE_SUB_SECTIONS = false;
+  return;
+}
+if (targetSections <= 0) {
+  LOG.warn("{} is set to {}. It must be greater than zero. Setting to" 
+
+  "default of {}",
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_KEY,
+  targetSections,
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_DEFAULT);
+  targetSections =
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_DEFAULT;
+}
+if (inodeThreshold <= 0) {
+  LOG.warn("{} is set to {}. It must be greater than zero. Setting to" 
+
+  "default of {}",
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_INODE_THRESHOLD_KEY,
+  targetSections,
 
 Review comment:
   `targetSections` should be `inodeThreshold` here.





[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-07-08 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r301362319
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 ##
 @@ -197,16 +203,66 @@ public static void updateBlocksMap(INodeFile file, 
BlockManager bm) {
 private final FSDirectory dir;
 private final FSNamesystem fsn;
 private final FSImageFormatProtobuf.Loader parent;
+private ReentrantLock cacheNameMapLock;
+private ReentrantLock blockMapLock;
 
 Loader(FSNamesystem fsn, final FSImageFormatProtobuf.Loader parent) {
   this.fsn = fsn;
   this.dir = fsn.dir;
   this.parent = parent;
+  cacheNameMapLock = new ReentrantLock(true);
+  blockMapLock = new ReentrantLock(true);
+}
+
+void loadINodeDirectorySectionInParallel(ExecutorService service,
+ArrayList<FileSummary.Section> sections, String compressionCodec)
+throws IOException {
+  LOG.info("Loading the INodeDirectory section in parallel with {} sub-" +
+  "sections", sections.size());
+  CountDownLatch latch = new CountDownLatch(sections.size());
+  final CopyOnWriteArrayList<IOException> exceptions =
+  new CopyOnWriteArrayList<>();
+  for (FileSummary.Section s : sections) {
+service.submit(new Runnable() {
+  public void run() {
+InputStream ins = null;
+try {
+  ins = parent.getInputStreamForSection(s,
+  compressionCodec);
+  loadINodeDirectorySection(ins);
+} catch (Exception e) {
+  LOG.error("An exception occurred loading INodeDirectories in " +
+  "parallel", e);
+  exceptions.add(new IOException(e));
+} finally {
+  latch.countDown();
+  try {
+ins.close();
+  } catch (IOException ioe) {
+LOG.warn("Failed to close the input stream, ignoring", ioe);
+  }
+}
+  }
+});
+  }
+  try {
+latch.await();
+  } catch (InterruptedException e) {
+LOG.error("Interrupted waiting for countdown latch", e);
+throw new IOException(e);
+  }
+  if (exceptions.size() != 0) {
+LOG.error("{} exceptions occurred loading INodeDirectories",
+exceptions.size());
+throw exceptions.get(0);
+  }
+  LOG.info("Completed loading all INodeDirectory sub-sections");
 }
 
 void loadINodeDirectorySection(InputStream in) throws IOException {
  final List<INodeReference> refList = parent.getLoaderContext()
   .getRefList();
  ArrayList<INode> inodeList = new ArrayList<>();
 
 Review comment:
   `inodeList` is used here to batch the name caching and `blocksMap` updates, 
right?
   1. The following code fragment limits the batch size to under 1000; should 
this value be defined as a constant?
   2. Batch-processing the `blocksMap` updates is a good idea, but could it 
have negative effects on the cache (anti-locality)? Please correct me if I am 
wrong.
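   The first point can be sketched as follows. This is a toy stand-in, not the patch itself: the literal 1000 is hoisted into a named constant, the batch is flushed through one helper, and integer ids replace real INode objects so the example stays self-contained.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedUpdates {

    // Hypothetical named constant replacing the bare literal 1000.
    static final int INODE_BATCH_SIZE = 1000;

    static int flushCount = 0;

    // Stand-in for addToCacheAndBlockMap(): counts flushes instead of
    // touching a name cache or blocksMap.
    static void flush(List<Integer> batch) {
        if (!batch.isEmpty()) {
            flushCount++;
        }
    }

    static void process(int totalEntries) {
        List<Integer> batch = new ArrayList<>();
        for (int i = 0; i < totalEntries; i++) {
            batch.add(i);
            if (batch.size() >= INODE_BATCH_SIZE) {
                flush(batch);
                batch.clear();
            }
        }
        flush(batch); // flush the remainder, as the patch does after the loop
    }

    public static void main(String[] args) {
        process(2500);
        System.out.println(flushCount); // 3 flushes: 1000 + 1000 + 500
    }
}
```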





[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-07-08 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r301357453
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
 ##
 @@ -389,21 +496,26 @@ private void loadErasureCodingSection(InputStream in)
 
   public static final class Saver {
 public static final int CHECK_CANCEL_INTERVAL = 4096;
+public static boolean WRITE_SUB_SECTIONS = false;
 
 Review comment:
   Should `WRITE_SUB_SECTIONS` and `INODES_PER_SUB_SECTION` be instance 
attributes of the Saver class rather than constants?
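   A hedged sketch of what this suggestion would look like (field and method names are illustrative): each Saver instance carries its own parallel-write settings, so two concurrent save operations cannot interfere through shared mutable statics.

```java
public class Saver {

    // Instance state instead of mutable static fields: each Saver carries
    // its own sub-section settings.
    private boolean writeSubSections = false;
    private int inodesPerSubSection = Integer.MAX_VALUE;

    void enableSubSections(int inodesPerSubSection) {
        this.writeSubSections = true;
        this.inodesPerSubSection = inodesPerSubSection;
    }

    boolean shouldWriteSubSections() {
        return writeSubSections;
    }

    int getInodesPerSubSection() {
        return inodesPerSubSection;
    }

    public static void main(String[] args) {
        Saver a = new Saver();
        Saver b = new Saver();
        a.enableSubSections(100000);
        // b is unaffected, which would not hold with static fields
        System.out.println(a.shouldWriteSubSections()); // true
        System.out.println(b.shouldWriteSubSections()); // false
    }
}
```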





[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-07-08 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r301355625
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
 ##
 @@ -462,6 +619,60 @@ long save(File file, FSImageCompression compression) 
throws IOException {
   }
 }
 
+private void enableSubSectionsIfRequired() {
+  boolean parallelEnabled = conf.getBoolean(
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_KEY,
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_DEFAULT);
+  int inodeThreshold = conf.getInt(
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_INODE_THRESHOLD_KEY,
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_INODE_THRESHOLD_DEFAULT);
+  int targetSections = conf.getInt(
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_KEY,
+  DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_DEFAULT);
+  boolean compressionEnabled = conf.getBoolean(
+  DFSConfigKeys.DFS_IMAGE_COMPRESS_KEY,
+  DFSConfigKeys.DFS_IMAGE_COMPRESS_DEFAULT);
+
 
 Review comment:
   Redundant empty lines.





[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-07-08 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r301360582
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 ##
 @@ -217,33 +273,151 @@ void loadINodeDirectorySection(InputStream in) throws 
IOException {
 INodeDirectory p = dir.getInode(e.getParent()).asDirectory();
 for (long id : e.getChildrenList()) {
   INode child = dir.getInode(id);
-  addToParent(p, child);
+  if (addToParent(p, child)) {
+if (child.isFile()) {
+  inodeList.add(child);
+}
+if (inodeList.size() >= 1000) {
+  addToCacheAndBlockMap(inodeList);
+  inodeList.clear();
+}
+  }
+
 
 Review comment:
   empty line.





[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-07-08 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r301363874
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 ##
 @@ -217,33 +273,151 @@ void loadINodeDirectorySection(InputStream in) throws 
IOException {
 INodeDirectory p = dir.getInode(e.getParent()).asDirectory();
 for (long id : e.getChildrenList()) {
   INode child = dir.getInode(id);
-  addToParent(p, child);
+  if (addToParent(p, child)) {
+if (child.isFile()) {
+  inodeList.add(child);
+}
+if (inodeList.size() >= 1000) {
+  addToCacheAndBlockMap(inodeList);
+  inodeList.clear();
+}
+  }
+
 }
+
 for (int refId : e.getRefChildrenList()) {
   INodeReference ref = refList.get(refId);
-  addToParent(p, ref);
+  if (addToParent(p, ref)) {
+if (ref.isFile()) {
+  inodeList.add(ref);
+}
+if (inodeList.size() >= 1000) {
+  addToCacheAndBlockMap(inodeList);
+  inodeList.clear();
+}
+  }
 }
   }
+  addToCacheAndBlockMap(inodeList);
+}
+
+private void addToCacheAndBlockMap(ArrayList<INode> inodeList) {
+  try {
+cacheNameMapLock.lock();
+for (INode i : inodeList) {
+  dir.cacheName(i);
+}
+  } finally {
+cacheNameMapLock.unlock();
+  }
+
+  try {
+blockMapLock.lock();
+for (INode i : inodeList) {
+  updateBlocksMap(i.asFile(), fsn.getBlockManager());
+}
+  } finally {
+blockMapLock.unlock();
+  }
 }
 
 void loadINodeSection(InputStream in, StartupProgress prog,
 Step currentStep) throws IOException {
-  INodeSection s = INodeSection.parseDelimitedFrom(in);
-  fsn.dir.resetLastInodeId(s.getLastInodeId());
-  long numInodes = s.getNumInodes();
-  LOG.info("Loading " + numInodes + " INodes.");
-  prog.setTotal(Phase.LOADING_FSIMAGE, currentStep, numInodes);
+  loadINodeSectionHeader(in, prog, currentStep);
   Counter counter = prog.getCounter(Phase.LOADING_FSIMAGE, currentStep);
-  for (int i = 0; i < numInodes; ++i) {
+  int totalLoaded = loadINodesInSection(in, counter);
+  LOG.info("Successfully loaded {} inodes", totalLoaded);
+}
+
+private int loadINodesInSection(InputStream in, Counter counter)
+throws IOException {
+  // As the input stream is a LimitInputStream, the reading will stop when
+  // EOF is encountered at the end of the stream.
+  int cntr = 0;
+  while (true) {
 INodeSection.INode p = INodeSection.INode.parseDelimitedFrom(in);
+if (p == null) {
+  break;
+}
 if (p.getId() == INodeId.ROOT_INODE_ID) {
-  loadRootINode(p);
+  synchronized(this) {
 
 Review comment:
   Should `synchronized` take effect only in parallel mode, rather than in 
serial loading mode as well?
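   One way to express the idea, sketched with a toy skeleton rather than the actual patch: branch on a parallel flag so the serial path never enters the monitor.

```java
import java.util.HashMap;
import java.util.Map;

public class ConditionalSync {

    private final boolean parallel;
    private final Map<Long, String> inodeMap = new HashMap<>();

    ConditionalSync(boolean parallel) {
        this.parallel = parallel;
    }

    // Synchronize only when multiple loader threads may race on the map;
    // the serial loading path then pays no lock overhead.
    void addToInodeMap(long id, String inode) {
        if (parallel) {
            synchronized (this) {
                inodeMap.put(id, inode);
            }
        } else {
            inodeMap.put(id, inode);
        }
    }

    int size() {
        return inodeMap.size();
    }

    public static void main(String[] args) {
        ConditionalSync serial = new ConditionalSync(false);
        serial.addToInodeMap(1L, "root");
        System.out.println(serial.size()); // 1
    }
}
```

   Whether the branch is worth the extra code depends on how hot this path is; an uncontended monitor is already cheap on modern JVMs.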





[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-07-08 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r301369183
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
 ##
 @@ -255,14 +345,28 @@ public int compare(FileSummary.Section s1, 
FileSummary.Section s2) {
 case INODE: {
   currentStep = new Step(StepType.INODES);
   prog.beginStep(Phase.LOADING_FSIMAGE, currentStep);
-  inodeLoader.loadINodeSection(in, prog, currentStep);
+  stageSubSections = getSubSectionsOfName(
+  subSections, SectionName.INODE_SUB);
+  if (loadInParallel && (stageSubSections.size() > 0)) {
+inodeLoader.loadINodeSectionInParallel(executorService,
+stageSubSections, summary.getCodec(), prog, currentStep);
+  } else {
+inodeLoader.loadINodeSection(in, prog, currentStep);
+  }
 }
   break;
 case INODE_REFERENCE:
   snapshotLoader.loadINodeReferenceSection(in);
   break;
 case INODE_DIR:
-  inodeLoader.loadINodeDirectorySection(in);
+  stageSubSections = getSubSectionsOfName(
+  subSections, SectionName.INODE_DIR_SUB);
+  if (loadInParallel && stageSubSections.size() > 0) {
 
 Review comment:
   Would you like to add some unit tests to cover both serial loading and 
parallel loading of an old-format fsimage?
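   A hedged sketch of the shape such a test could take, using a toy loader instead of a real fsimage or MiniDFSCluster: load the same id range serially and split across parallel sub-sections, then assert both modes observe exactly the same set of inodes.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SerialVsParallelLoad {

    // Toy stand-in for loading one sub-section: record each inode id.
    static void loadSection(long start, long end, Set<Long> loaded) {
        for (long id = start; id < end; id++) {
            loaded.add(id);
        }
    }

    static Set<Long> loadSerial(long total) {
        Set<Long> loaded = ConcurrentHashMap.newKeySet();
        loadSection(0, total, loaded);
        return loaded;
    }

    static Set<Long> loadParallel(long total, int sections) {
        Set<Long> loaded = ConcurrentHashMap.newKeySet();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        long per = total / sections;
        for (int i = 0; i < sections; i++) {
            final long start = i * per;
            final long end = (i == sections - 1) ? total : start + per;
            pool.submit(() -> loadSection(start, end, loaded));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
        return loaded;
    }

    public static void main(String[] args) {
        // Both modes must observe exactly the same set of inodes.
        System.out.println(
            loadSerial(10000).equals(loadParallel(10000, 12)));
    }
}
```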





[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-07-08 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r301364827
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 ##
 @@ -217,33 +273,151 @@ void loadINodeDirectorySection(InputStream in) throws 
IOException {
 INodeDirectory p = dir.getInode(e.getParent()).asDirectory();
 for (long id : e.getChildrenList()) {
   INode child = dir.getInode(id);
-  addToParent(p, child);
+  if (addToParent(p, child)) {
+if (child.isFile()) {
+  inodeList.add(child);
+}
+if (inodeList.size() >= 1000) {
+  addToCacheAndBlockMap(inodeList);
+  inodeList.clear();
+}
+  }
+
 }
+
 for (int refId : e.getRefChildrenList()) {
   INodeReference ref = refList.get(refId);
-  addToParent(p, ref);
+  if (addToParent(p, ref)) {
+if (ref.isFile()) {
+  inodeList.add(ref);
+}
+if (inodeList.size() >= 1000) {
+  addToCacheAndBlockMap(inodeList);
+  inodeList.clear();
+}
+  }
 }
   }
+  addToCacheAndBlockMap(inodeList);
+}
+
+private void addToCacheAndBlockMap(ArrayList<INode> inodeList) {
+  try {
+cacheNameMapLock.lock();
+for (INode i : inodeList) {
+  dir.cacheName(i);
+}
+  } finally {
+cacheNameMapLock.unlock();
+  }
+
+  try {
+blockMapLock.lock();
+for (INode i : inodeList) {
+  updateBlocksMap(i.asFile(), fsn.getBlockManager());
+}
+  } finally {
+blockMapLock.unlock();
+  }
 }
 
 void loadINodeSection(InputStream in, StartupProgress prog,
 Step currentStep) throws IOException {
-  INodeSection s = INodeSection.parseDelimitedFrom(in);
-  fsn.dir.resetLastInodeId(s.getLastInodeId());
-  long numInodes = s.getNumInodes();
-  LOG.info("Loading " + numInodes + " INodes.");
-  prog.setTotal(Phase.LOADING_FSIMAGE, currentStep, numInodes);
+  loadINodeSectionHeader(in, prog, currentStep);
   Counter counter = prog.getCounter(Phase.LOADING_FSIMAGE, currentStep);
-  for (int i = 0; i < numInodes; ++i) {
+  int totalLoaded = loadINodesInSection(in, counter);
+  LOG.info("Successfully loaded {} inodes", totalLoaded);
+}
+
+private int loadINodesInSection(InputStream in, Counter counter)
+throws IOException {
+  // As the input stream is a LimitInputStream, the reading will stop when
+  // EOF is encountered at the end of the stream.
+  int cntr = 0;
+  while (true) {
 INodeSection.INode p = INodeSection.INode.parseDelimitedFrom(in);
+if (p == null) {
+  break;
+}
 if (p.getId() == INodeId.ROOT_INODE_ID) {
-  loadRootINode(p);
+  synchronized(this) {
+loadRootINode(p);
+  }
 } else {
   INode n = loadINode(p);
-  dir.addToInodeMap(n);
+  synchronized(this) {
+dir.addToInodeMap(n);
+  }
+}
+cntr ++;
+if (counter != null) {
+  counter.increment();
 }
-counter.increment();
   }
+  return cntr;
+}
+
+
+private void loadINodeSectionHeader(InputStream in, StartupProgress prog,
+Step currentStep) throws IOException {
+  INodeSection s = INodeSection.parseDelimitedFrom(in);
+  fsn.dir.resetLastInodeId(s.getLastInodeId());
+  long numInodes = s.getNumInodes();
+  LOG.info("Loading " + numInodes + " INodes.");
+  prog.setTotal(Phase.LOADING_FSIMAGE, currentStep, numInodes);
+}
+
+void loadINodeSectionInParallel(ExecutorService service,
+ArrayList<FileSummary.Section> sections,
+String compressionCodec, StartupProgress prog,
+Step currentStep) throws IOException {
+  LOG.info("Loading the INode section in parallel with {} sub-sections",
+  sections.size());
+  CountDownLatch latch = new CountDownLatch(sections.size());
+  AtomicInteger totalLoaded = new AtomicInteger(0);
+  final CopyOnWriteArrayList<IOException> exceptions =
+  new CopyOnWriteArrayList<>();
+
+  for (int i=0; i < sections.size(); i++) {
+FileSummary.Section s = sections.get(i);
+InputStream ins = parent.getInputStreamForSection(s, compressionCodec);
+if (i == 0) {
+  // The first inode section has a header which must be processed first
+  loadINodeSectionHeader(ins, prog, currentStep);
+}
+
+service.submit(new Runnable() {
+   public void run() {
+try {
+   totalLoaded.addAndGet(loadINodesInSection(ins, null));
+   

[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-07-08 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r301364041
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 ##
 @@ -217,33 +273,151 @@ void loadINodeDirectorySection(InputStream in) throws 
IOException {
 INodeDirectory p = dir.getInode(e.getParent()).asDirectory();
 for (long id : e.getChildrenList()) {
   INode child = dir.getInode(id);
-  addToParent(p, child);
+  if (addToParent(p, child)) {
+if (child.isFile()) {
+  inodeList.add(child);
+}
+if (inodeList.size() >= 1000) {
+  addToCacheAndBlockMap(inodeList);
+  inodeList.clear();
+}
+  }
+
 }
+
 for (int refId : e.getRefChildrenList()) {
   INodeReference ref = refList.get(refId);
-  addToParent(p, ref);
+  if (addToParent(p, ref)) {
+if (ref.isFile()) {
+  inodeList.add(ref);
+}
+if (inodeList.size() >= 1000) {
+  addToCacheAndBlockMap(inodeList);
+  inodeList.clear();
+}
+  }
 }
   }
+  addToCacheAndBlockMap(inodeList);
+}
+
+private void addToCacheAndBlockMap(ArrayList<INode> inodeList) {
+  try {
+cacheNameMapLock.lock();
+for (INode i : inodeList) {
+  dir.cacheName(i);
+}
+  } finally {
+cacheNameMapLock.unlock();
+  }
+
+  try {
+blockMapLock.lock();
+for (INode i : inodeList) {
+  updateBlocksMap(i.asFile(), fsn.getBlockManager());
+}
+  } finally {
+blockMapLock.unlock();
+  }
 }
 
 void loadINodeSection(InputStream in, StartupProgress prog,
 Step currentStep) throws IOException {
-  INodeSection s = INodeSection.parseDelimitedFrom(in);
-  fsn.dir.resetLastInodeId(s.getLastInodeId());
-  long numInodes = s.getNumInodes();
-  LOG.info("Loading " + numInodes + " INodes.");
-  prog.setTotal(Phase.LOADING_FSIMAGE, currentStep, numInodes);
+  loadINodeSectionHeader(in, prog, currentStep);
   Counter counter = prog.getCounter(Phase.LOADING_FSIMAGE, currentStep);
-  for (int i = 0; i < numInodes; ++i) {
+  int totalLoaded = loadINodesInSection(in, counter);
+  LOG.info("Successfully loaded {} inodes", totalLoaded);
+}
+
+private int loadINodesInSection(InputStream in, Counter counter)
+throws IOException {
+  // As the input stream is a LimitInputStream, the reading will stop when
+  // EOF is encountered at the end of the stream.
+  int cntr = 0;
+  while (true) {
 INodeSection.INode p = INodeSection.INode.parseDelimitedFrom(in);
+if (p == null) {
+  break;
+}
 if (p.getId() == INodeId.ROOT_INODE_ID) {
-  loadRootINode(p);
+  synchronized(this) {
+loadRootINode(p);
+  }
 } else {
   INode n = loadINode(p);
-  dir.addToInodeMap(n);
+  synchronized(this) {
+dir.addToInodeMap(n);
+  }
+}
+cntr ++;
+if (counter != null) {
+  counter.increment();
 }
-counter.increment();
   }
+  return cntr;
+}
+
+
+private void loadINodeSectionHeader(InputStream in, StartupProgress prog,
+Step currentStep) throws IOException {
+  INodeSection s = INodeSection.parseDelimitedFrom(in);
+  fsn.dir.resetLastInodeId(s.getLastInodeId());
+  long numInodes = s.getNumInodes();
+  LOG.info("Loading " + numInodes + " INodes.");
+  prog.setTotal(Phase.LOADING_FSIMAGE, currentStep, numInodes);
+}
+
+void loadINodeSectionInParallel(ExecutorService service,
+ArrayList<FileSummary.Section> sections,
+String compressionCodec, StartupProgress prog,
+Step currentStep) throws IOException {
+  LOG.info("Loading the INode section in parallel with {} sub-sections",
+  sections.size());
+  CountDownLatch latch = new CountDownLatch(sections.size());
+  AtomicInteger totalLoaded = new AtomicInteger(0);
+  final CopyOnWriteArrayList<IOException> exceptions =
+  new CopyOnWriteArrayList<>();
+
+  for (int i=0; i < sections.size(); i++) {
+FileSummary.Section s = sections.get(i);
+InputStream ins = parent.getInputStreamForSection(s, compressionCodec);
+if (i == 0) {
+  // The first inode section has a header which must be processed first
+  loadINodeSectionHeader(ins, prog, currentStep);
+}
+
+service.submit(new Runnable() {
+   public void run() {
+try {
+   totalLoaded.addAndGet(loadINodesInSection(ins, null));
+   

[GitHub] [hadoop] swagle commented on a change in pull request #1055: HDDS-1705. Recon: Add estimatedTotalCount to the response of ...

2019-07-08 Thread GitBox
swagle commented on a change in pull request #1055: HDDS-1705. Recon: Add 
estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r301368594
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/types/GuiceInjectorTest.java
 ##
 @@ -0,0 +1,117 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.types;
+
+import com.google.inject.AbstractModule;
+import com.google.inject.Guice;
+import com.google.inject.Injector;
+import com.google.inject.Singleton;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.recon.persistence.AbstractSqlDatabaseTest;
+import org.apache.hadoop.ozone.recon.persistence.DataSourceConfiguration;
+import org.apache.hadoop.ozone.recon.persistence.JooqPersistenceModule;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.apache.hadoop.ozone.recon.spi.ContainerDBServiceProvider;
+import org.apache.hadoop.ozone.recon.spi.OzoneManagerServiceProvider;
+import org.apache.hadoop.ozone.recon.spi.impl.ContainerDBServiceProviderImpl;
+import org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl;
+import org.apache.hadoop.ozone.recon.spi.impl.ReconContainerDBProvider;
+import org.apache.hadoop.utils.db.DBStore;
+import org.junit.Assert;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.File;
+import java.io.IOException;
+
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_DB_DIR;
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_OM_SNAPSHOT_DB_DIR;
+
+/**
+ * Utility methods to get guice injector and ozone configuration.
+ */
+public interface GuiceInjectorTest {
 
 Review comment:
   Let's rename this to GuiceInjectorUtilsForTests. The current name suggests 
that we are testing Guice injection itself.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru opened a new pull request #1063: HDDS-1775. Make OM KeyDeletingService compatible with HA model

2019-07-08 Thread GitBox
hanishakoneru opened a new pull request #1063: HDDS-1775. Make OM 
KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #973: HDDS-1611. Evaluate ACL on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar.

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #973: HDDS-1611. Evaluate ACL on volume bucket 
key and prefix to authorize access. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/973#issuecomment-509453474
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for branch |
   | +1 | mvninstall | 504 | trunk passed |
   | +1 | compile | 285 | trunk passed |
   | +1 | checkstyle | 92 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 800 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | trunk passed |
   | 0 | spotbugs | 328 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 525 | trunk passed |
   | -0 | patch | 376 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 82 | Maven dependency ordering for patch |
   | +1 | mvninstall | 447 | the patch passed |
   | +1 | compile | 266 | the patch passed |
   | +1 | cc | 266 | the patch passed |
   | +1 | javac | 266 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | -1 | whitespace | 0 | The patch has 14 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 644 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | the patch passed |
   | +1 | findbugs | 536 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 165 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1567 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6712 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/973 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc shellcheck shelldocs |
   | uname | Linux 51fed471e0a1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4632708 |
   | Default Java | 1.8.0_212 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/12/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/12/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/12/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/12/testReport/ |
   | Max. process+thread count | 5402 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/dist hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozonefs hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/12/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-16418) Fix checkstyle and findbugs warnings in hadoop-dynamometer

2019-07-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880838#comment-16880838
 ] 

Hadoop QA commented on HADOOP-16418:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-16418 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16418 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973996/HADOOP-16418.000.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16374/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix checkstyle and findbugs warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16418
> URL: https://issues.apache.org/jira/browse/HADOOP-16418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HADOOP-16418.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16411) Fix javadoc warnings in hadoop-dynamometer

2019-07-08 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-16411:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> Fix javadoc warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16411
> URL: https://issues.apache.org/jira/browse/HADOOP-16411
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16411.001.patch, HADOOP-16411.002.patch
>
>
> "mvn package -Pdist" failed due to javadoc warnings.






[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1055: HDDS-1705. Recon: Add estimatedTotalCount to the response of ...

2019-07-08 Thread GitBox
vivekratnavel commented on a change in pull request #1055: HDDS-1705. Recon: 
Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r301358174
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
 ##
 @@ -80,41 +92,77 @@
 @PrepareForTest(ReconUtils.class)
 public class TestContainerKeyService extends AbstractOMMetadataManagerTest {
 
+  @Rule
+  public TemporaryFolder temporaryFolder = new TemporaryFolder();
   private ContainerDBServiceProvider containerDbServiceProvider;
   private OMMetadataManager omMetadataManager;
   private ReconOMMetadataManager reconOMMetadataManager;
   private Injector injector;
   private OzoneManagerServiceProviderImpl ozoneManagerServiceProvider;
   private ContainerKeyService containerKeyService;
+  private boolean setUpIsDone = false;
+
+  private Injector getInjector() {
+return injector;
+  }
 
   @Before
   public void setUp() throws Exception {
 omMetadataManager = initializeNewOmMetadataManager();
-injector = Guice.createInjector(new AbstractModule() {
-  @Override
-  protected void configure() {
-try {
-  bind(OzoneConfiguration.class).toInstance(
-  getTestOzoneConfiguration());
-  reconOMMetadataManager = getTestMetadataManager(omMetadataManager);
-  
bind(ReconOMMetadataManager.class).toInstance(reconOMMetadataManager);
-  bind(DBStore.class).toProvider(ReconContainerDBProvider.class).
-  in(Singleton.class);
-  bind(ContainerDBServiceProvider.class).to(
-  ContainerDBServiceProviderImpl.class).in(Singleton.class);
-  ozoneManagerServiceProvider = new OzoneManagerServiceProviderImpl(
-  getTestOzoneConfiguration());
-  bind(OzoneManagerServiceProvider.class)
-  .toInstance(ozoneManagerServiceProvider);
-  containerKeyService = new ContainerKeyService();
-  bind(ContainerKeyService.class).toInstance(containerKeyService);
-} catch (IOException e) {
-  Assert.fail();
+File tempDir = temporaryFolder.newFolder();
+AbstractSqlDatabaseTest.DataSourceConfigurationProvider
+configurationProvider =
+new AbstractSqlDatabaseTest.DataSourceConfigurationProvider(tempDir);
+
+JooqPersistenceModule jooqPersistenceModule =
+new JooqPersistenceModule(configurationProvider);
+
+injector = Guice.createInjector(jooqPersistenceModule,
+new AbstractModule() {
+@Override
+public void configure() {
+  try {
+bind(DataSourceConfiguration.class)
+.toProvider(configurationProvider);
+OzoneConfiguration configuration = getTestOzoneConfiguration();
+bind(OzoneConfiguration.class).toInstance(configuration);
+
+ozoneManagerServiceProvider = new OzoneManagerServiceProviderImpl(
+configuration);
+
+reconOMMetadataManager = getTestMetadataManager(omMetadataManager);
+bind(ReconOMMetadataManager.class)
+.toInstance(reconOMMetadataManager);
+
+bind(DBStore.class).toProvider(ReconContainerDBProvider.class).
+in(Singleton.class);
+bind(ContainerDBServiceProvider.class)
+.to(ContainerDBServiceProviderImpl.class).in(Singleton.class);
+
+bind(OzoneManagerServiceProvider.class)
+.toInstance(ozoneManagerServiceProvider);
+containerKeyService = new ContainerKeyService();
+bind(ContainerKeyService.class).toInstance(containerKeyService);
+  } catch (IOException e) {
+Assert.fail();
+  }
 }
-  }
-});
-containerDbServiceProvider = injector.getInstance(
-ContainerDBServiceProvider.class);
+  });
+
+// The following setup is run only once
+if (!setUpIsDone) {
 
 Review comment:
   Also, the PowerMock runner does not apply JUnit ClassRules - 
https://github.com/powermock/powermock/issues/687
   
   The workaround of using `BlockJUnit4ClassRunner.class` also doesn't work as 
expected and causes a lot of test failures. Hence, I am sticking with this 
approach for test classes that use the PowerMock runner.
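For illustration, the guard-flag approach described above (and visible as `setUpIsDone` in the diff) can be sketched as follows. The surrounding class is hypothetical and uses plain method calls in place of JUnit annotations; a `static` flag is used so the one-time work survives across per-test instances:

```java
public class Main {
  // Minimal sketch of the guard-flag pattern: when class-level rules are
  // unavailable (as under PowerMockRunner), one-time work is gated inside
  // the per-test setup method with a flag. Names are illustrative.
  private static boolean setUpIsDone = false;
  private static int expensiveInitCount = 0;

  // Stands in for a @Before method that runs before every test.
  static void setUp() {
    // per-test setup would go here (runs every time)
    if (!setUpIsDone) {
      expensiveInitCount++; // expensive one-time work (e.g. writing test data)
      setUpIsDone = true;
    }
  }

  public static void main(String[] args) {
    setUp(); // first test
    setUp(); // second test
    setUp(); // third test
    System.out.println(expensiveInitCount); // the guarded block ran only once
  }
}
```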





[jira] [Commented] (HADOOP-16418) Fix checkstyle and findbugs warnings in hadoop-dynamometer

2019-07-08 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880822#comment-16880822
 ] 

Erik Krogen commented on HADOOP-16418:
--

Cleaned up 5 findbugs warnings and some checkstyle warnings in v000.

> Fix checkstyle and findbugs warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16418
> URL: https://issues.apache.org/jira/browse/HADOOP-16418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HADOOP-16418.000.patch
>
>







[jira] [Updated] (HADOOP-16418) Fix checkstyle and findbugs warnings in hadoop-dynamometer

2019-07-08 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16418:
-
Status: Patch Available  (was: In Progress)

> Fix checkstyle and findbugs warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16418
> URL: https://issues.apache.org/jira/browse/HADOOP-16418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HADOOP-16418.000.patch
>
>







[jira] [Updated] (HADOOP-16418) Fix checkstyle and findbugs warnings in hadoop-dynamometer

2019-07-08 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16418:
-
Attachment: HADOOP-16418.000.patch

> Fix checkstyle and findbugs warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16418
> URL: https://issues.apache.org/jira/browse/HADOOP-16418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HADOOP-16418.000.patch
>
>







[jira] [Work started] (HADOOP-16418) Fix checkstyle and findbugs warnings in hadoop-dynamometer

2019-07-08 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16418 started by Erik Krogen.

> Fix checkstyle and findbugs warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16418
> URL: https://issues.apache.org/jira/browse/HADOOP-16418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HADOOP-16418.000.patch
>
>







[jira] [Commented] (HADOOP-16411) Fix javadoc warnings in hadoop-dynamometer

2019-07-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880821#comment-16880821
 ] 

Hudson commented on HADOOP-16411:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16871 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16871/])
HADOOP-16411. Fix javadoc warnings in hadoop-dynamometer. (iwasakims: rev 
738c09349eb6178065797fc9cd624bf5e2285069)
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/Client.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/ApplicationMaster.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditReplayMapper.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/CreateFileMapper.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/WorkloadMapper.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditLogDirectParser.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/DynoInfraUtils.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditLogHiveTableParser.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditCommandParser.java


> Fix javadoc warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16411
> URL: https://issues.apache.org/jira/browse/HADOOP-16411
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16411.001.patch, HADOOP-16411.002.patch
>
>
> "mvn package -Pdist" failed due to javadoc warnings.






[jira] [Commented] (HADOOP-16411) Fix javadoc warnings in hadoop-dynamometer

2019-07-08 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880819#comment-16880819
 ] 

Masatake Iwasaki commented on HADOOP-16411:
---

Thanks [~xkrogen]. I'm committing this.

> Fix javadoc warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16411
> URL: https://issues.apache.org/jira/browse/HADOOP-16411
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16411.001.patch, HADOOP-16411.002.patch
>
>
> "mvn package -Pdist" failed due to javadoc warnings.






[GitHub] [hadoop] xiaoyuyao commented on issue #931: HDDS-1586. Allow Ozone RPC client to read with topology awareness.

2019-07-08 Thread GitBox
xiaoyuyao commented on issue #931: HDDS-1586. Allow Ozone RPC client to read 
with topology awareness.
URL: https://github.com/apache/hadoop/pull/931#issuecomment-509436807
 
 
   /retest





[jira] [Commented] (HADOOP-16418) Fix checkstyle and findbugs warnings in hadoop-dynamometer

2019-07-08 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880804#comment-16880804
 ] 

Erik Krogen commented on HADOOP-16418:
--

It seems that there are outstanding issues which were not reported by the 
pre-commit Jenkins when this patch was initially being committed (see 
[here|https://issues.apache.org/jira/browse/HDFS-12345?focusedCommentId=16870101&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16870101]).

> Fix checkstyle and findbugs warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16418
> URL: https://issues.apache.org/jira/browse/HADOOP-16418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Erik Krogen
>Priority: Minor
>







[jira] [Comment Edited] (HADOOP-16411) Fix javadoc warnings in hadoop-dynamometer

2019-07-08 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880799#comment-16880799
 ] 

Erik Krogen edited comment on HADOOP-16411 at 7/8/19 11:54 PM:
---

Interesting... We got a [clean Jenkins 
report|https://issues.apache.org/jira/browse/HDFS-12345?focusedCommentId=16870101&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16870101]
 for Javadoc (and findbugs) on the initial patch. Thanks for reporting this 
[~iwasakims] and for working on it. I'll take HADOOP-16418.

+1 from me on v002 patch. I see Yetus voted -1 on the unit test, but when I 
click through to the test report, it seems that all of the tests passed.


was (Author: xkrogen):
Interesting... We got a [clean Jenkins 
report|https://issues.apache.org/jira/browse/HDFS-12345?focusedCommentId=16870101&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16870101]
 for Javadoc (and findbugs) on the initial patch. Thanks for reporting this 
[~iwasakims] and for working on it. I'll take HADOOP-16418.

+1 from me on v002 patch. 

> Fix javadoc warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16411
> URL: https://issues.apache.org/jira/browse/HADOOP-16411
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16411.001.patch, HADOOP-16411.002.patch
>
>
> "mvn package -Pdist" failed due to javadoc warnings.






[jira] [Comment Edited] (HADOOP-16411) Fix javadoc warnings in hadoop-dynamometer

2019-07-08 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880799#comment-16880799
 ] 

Erik Krogen edited comment on HADOOP-16411 at 7/8/19 11:53 PM:
---

Interesting... We got a [clean Jenkins 
report|https://issues.apache.org/jira/browse/HDFS-12345?focusedCommentId=16870101&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16870101]
 for Javadoc (and findbugs) on the initial patch. Thanks for reporting this 
[~iwasakims] and for working on it. I'll take HADOOP-16418.

+1 from me on v002 patch. 


was (Author: xkrogen):
Interesting... We got a [clean Jenkins 
report|https://issues.apache.org/jira/browse/HDFS-12345?focusedCommentId=16870101&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16870101]
 for Javadoc (and findbugs) on the initial patch. Thanks for reporting this 
[~iwasakims] and for working on it. I'll take HADOOP-16418.

> Fix javadoc warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16411
> URL: https://issues.apache.org/jira/browse/HADOOP-16411
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16411.001.patch, HADOOP-16411.002.patch
>
>
> "mvn package -Pdist" failed due to javadoc warnings.






[jira] [Commented] (HADOOP-16411) Fix javadoc warnings in hadoop-dynamometer

2019-07-08 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880799#comment-16880799
 ] 

Erik Krogen commented on HADOOP-16411:
--

Interesting... We got a [clean Jenkins 
report|https://issues.apache.org/jira/browse/HDFS-12345?focusedCommentId=16870101&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16870101]
 for Javadoc (and findbugs) on the initial patch. Thanks for reporting this 
[~iwasakims] and for working on it. I'll take HADOOP-16418.

> Fix javadoc warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16411
> URL: https://issues.apache.org/jira/browse/HADOOP-16411
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16411.001.patch, HADOOP-16411.002.patch
>
>
> "mvn package -Pdist" failed due to javadoc warnings.






[jira] [Assigned] (HADOOP-16418) Fix checkstyle and findbugs warnings in hadoop-dynamometer

2019-07-08 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen reassigned HADOOP-16418:


Assignee: Erik Krogen

> Fix checkstyle and findbugs warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16418
> URL: https://issues.apache.org/jira/browse/HADOOP-16418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Erik Krogen
>Priority: Minor
>







[jira] [Commented] (HADOOP-16411) Fix javadoc warnings in hadoop-dynamometer

2019-07-08 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880795#comment-16880795
 ] 

Masatake Iwasaki commented on HADOOP-16411:
---

I filed HADOOP-16418 for the checkstyle and findbugs warnings. I would like to 
fix the javadoc warnings first here, since they affect the -Pdist build. 002 is 
the updated patch.

> Fix javadoc warnings in hadoop-dynamometer
> --
>
> Key: HADOOP-16411
> URL: https://issues.apache.org/jira/browse/HADOOP-16411
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16411.001.patch, HADOOP-16411.002.patch
>
>
> "mvn package -Pdist" failed due to javadoc warnings.






[jira] [Created] (HADOOP-16418) Fix checkstyle and findbugs warnings in hadoop-dynamometer

2019-07-08 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-16418:
-

 Summary: Fix checkstyle and findbugs warnings in hadoop-dynamometer
 Key: HADOOP-16418
 URL: https://issues.apache.org/jira/browse/HADOOP-16418
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Reporter: Masatake Iwasaki









[jira] [Commented] (HADOOP-16401) ABFS: port Azure doc to 3.2 branch

2019-07-08 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880763#comment-16880763
 ] 

Da Zhou commented on HADOOP-16401:
--

[~iwasakims] thanks for the patch. This looks good to me!

> ABFS: port Azure doc to 3.2 branch
> --
>
> Key: HADOOP-16401
> URL: https://issues.apache.org/jira/browse/HADOOP-16401
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HADOOP-16401-branch-3.2.001.patch
>
>
> Need to port the latest Azure markdown docs from trunk to 3.2.0.






[jira] [Commented] (HADOOP-16395) when CallQueueManager swap queues, we should remove metrics about FCQ

2019-07-08 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880740#comment-16880740
 ] 

Erik Krogen commented on HADOOP-16395:
--

If we leave the method as a generic {{close()}}, I don't think it's acceptable 
for us to continue using it afterwards. Another implementation may reasonably 
assume that after {{close()}} is called, the queue will not be used, and then 
do some cleanup steps which would cause the {{put()}} and {{take()}} methods to 
no longer work properly. It sounds to me like we need to have something like 
{{preclose()}}, {{prepareClose()}}, {{initiateShutdown()}}, {{closeMetrics()}}, 
etc. to indicate that this isn't closing the queue for all further use. Since 
you're leaning towards creating a new interface anyway, I think defining a 
method that is more semantically correct makes sense here.
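A rough sketch of the kind of interface being discussed — a method that tears down metrics without closing the queue for further traffic. All names here ({{MetricsTeardown}}, {{stopMetrics}}, {{DemoQueue}}) are hypothetical illustrations, not the actual patch:

```java
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sub-interface: lets CallQueueManager tear down metrics state
// during a queue swap without signalling "this queue is closed for all use".
interface MetricsTeardown {
    void stopMetrics();
}

// Minimal demo backed by a LinkedBlockingQueue: put()/take() keep working
// after stopMetrics(), which is exactly what close() could not promise.
class DemoQueue<E> extends LinkedBlockingQueue<E> implements MetricsTeardown {
    private volatile boolean metricsActive = true;

    @Override
    public void stopMetrics() {
        metricsActive = false; // real code would unregister JMX MBeans here
    }

    public boolean metricsActive() {
        return metricsActive;
    }
}

public class QueueSwapSketch {
    public static void main(String[] args) {
        DemoQueue<String> q = new DemoQueue<>();
        q.offer("call-1");
        q.stopMetrics();      // metrics torn down...
        q.offer("call-2");    // ...but the queue is still usable
        if (!"call-1".equals(q.poll())) throw new AssertionError();
        if (q.metricsActive()) throw new AssertionError();
        System.out.println("queue still usable after stopMetrics()");
    }
}
```

The point of the narrower contract: an implementation may do arbitrary cleanup in {{close()}}, but {{stopMetrics()}} only promises metrics teardown, so continued {{put()}}/{{take()}} use stays valid.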

> when CallQueueManager swap queues, we should remove metrics about FCQ
> -
>
> Key: HADOOP-16395
> URL: https://issues.apache.org/jira/browse/HADOOP-16395
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, ipc
>Affects Versions: 3.2.0
>Reporter: yanghuafeng
>Priority: Minor
> Attachments: HADOOP-16395.001.patch, HADOOP-16395.002.patch
>
>
> When we use "dfsadmin -refreshCallQueue" to swap between FCQ and LBQ, we find
> that the FCQ metrics are still exposed via JMX. Normally the metrics should
> disappear, as the DecayRpcScheduler metrics do.






[jira] [Commented] (HADOOP-16391) Duplicate values in rpcDetailedMetrics

2019-07-08 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880731#comment-16880731
 ] 

Erik Krogen commented on HADOOP-16391:
--

[~BilwaST] the new changes also look good to me. I agree that it could use a 
test, and can we also add Javadoc to {{typePrefix}} and the new constructor for 
{{MutableRate}} (explain what sampleName / valueName are)?

> Duplicate values in rpcDetailedMetrics
> --
>
> Key: HADOOP-16391
> URL: https://issues.apache.org/jira/browse/HADOOP-16391
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: HADOOP-16391-001.patch, 
> image-2019-06-25-20-30-15-395.png, screenshot-1.png, screenshot-2.png
>
>
> In RpcDetailedMetrics, init is called twice: once for the deferredRpcRates
> metrics and once for the rates metrics, which causes duplicate values in the
> RM and NM metrics.
>  !image-2019-06-25-20-30-15-395.png! 
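The duplication can be reproduced with a toy registry (this is a sketch, not the Hadoop metrics API): registering the same method names in two init passes collides unless the second pass uses a distinguishing type prefix, which is what the proposed {{typePrefix}} addresses.

```java
import java.util.ArrayList;
import java.util.List;

public class DuplicateMetricsSketch {
    // Register one metric name per RPC method under an optional type prefix.
    static void init(List<String> registry, String typePrefix, String... methods) {
        for (String m : methods) {
            registry.add(typePrefix + m);
        }
    }

    public static void main(String[] args) {
        // Broken: two init passes with no prefix produce duplicate names.
        List<String> broken = new ArrayList<>();
        init(broken, "", "GetBlockLocations");  // rates pass
        init(broken, "", "GetBlockLocations");  // deferred-rates pass
        if (!broken.get(0).equals(broken.get(1))) throw new AssertionError();

        // Fixed: a type prefix keeps the two passes distinct.
        List<String> fixed = new ArrayList<>();
        init(fixed, "", "GetBlockLocations");
        init(fixed, "Deferred", "GetBlockLocations");
        if (fixed.get(0).equals(fixed.get(1))) throw new AssertionError();
        System.out.println("prefix disambiguates the two init passes");
    }
}
```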






[GitHub] [hadoop] hadoop-yetus commented on issue #1062: HDDS-1718. Increase Ratis Leader election timeout default.

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #1062: HDDS-1718. Increase Ratis Leader 
election timeout default.
URL: https://github.com/apache/hadoop/pull/1062#issuecomment-509401488
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 641 | trunk passed |
   | +1 | compile | 269 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 826 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 140 | trunk passed |
   | 0 | spotbugs | 338 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 524 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for patch |
   | +1 | mvninstall | 514 | the patch passed |
   | +1 | compile | 306 | the patch passed |
   | +1 | javac | 306 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | the patch passed |
   | +1 | findbugs | 638 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 382 | hadoop-hdds in the patch passed. |
   | -1 | unit | 3110 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 131 | The patch does not generate ASF License warnings. |
   | | | 8766 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1062/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1062 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 690698be1fea 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / de6b7bc |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1062/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1062/1/testReport/ |
   | Max. process+thread count | 4833 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1062/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] swagle commented on issue #1062: HDDS-1718. Increase Ratis Leader election timeout default.

2019-07-08 Thread GitBox
swagle commented on issue #1062: HDDS-1718. Increase Ratis Leader election 
timeout default.
URL: https://github.com/apache/hadoop/pull/1062#issuecomment-509387697
 
 
   /label ozone





[GitHub] [hadoop] hadoop-yetus commented on issue #1032: [HDDS-1201] Reporting corrupted containers info to SCM

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #1032: [HDDS-1201] Reporting corrupted 
containers info to SCM
URL: https://github.com/apache/hadoop/pull/1032#issuecomment-509363207
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 473 | trunk passed |
   | +1 | compile | 263 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 865 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 320 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 504 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 508 | the patch passed |
   | +1 | compile | 246 | the patch passed |
   | +1 | javac | 246 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 633 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | the patch passed |
   | +1 | findbugs | 517 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 249 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1812 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 6833 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1032 |
   | JIRA Issue | HDDS-1201 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b131bec4f483 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / de6b7bc |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/3/testReport/ |
   | Max. process+thread count | 5124 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Created] (HADOOP-16417) abfs can't access storage account without password

2019-07-08 Thread Jose Luis Pedrosa (JIRA)
Jose Luis Pedrosa created HADOOP-16417:
--

 Summary: abfs can't access storage account without password
 Key: HADOOP-16417
 URL: https://issues.apache.org/jira/browse/HADOOP-16417
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Jose Luis Pedrosa


It does not seem possible to access storage accounts without passwords using 
abfs, but it is possible using wasb.

 

This sample code (Spark based) illustrates the issue: the following code using
abfs_path throws an exception,
{noformat}
Exception in thread "main" java.lang.IllegalArgumentException: Invalid account key.
    at org.apache.hadoop.fs.azurebfs.services.SharedKeyCredentials.<init>(SharedKeyCredentials.java:70)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:812)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:149)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:108)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
{noformat}
while using wasbs_path works normally:
{code:java}
import org.apache.spark.sql.RuntimeConfig;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class SimpleApp {

    static String blob_account_name = "azureopendatastorage";
    static String blob_container_name = "gfsweatherdatacontainer";
    static String blob_relative_path = "GFSWeather/GFSProcessed";
    static String blob_sas_token = "";
    static String abfs_path = "abfs://" + blob_container_name + "@"
            + blob_account_name + ".dfs.core.windows.net/" + blob_relative_path;
    static String wasbs_path = "wasbs://" + blob_container_name + "@"
            + blob_account_name + ".blob.core.windows.net/" + blob_relative_path;

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("NOAAGFS Run").getOrCreate();
        configureAzureHadoopConnector(spark);
        RuntimeConfig conf = spark.conf();

        conf.set("fs.azure.account.key." + blob_account_name + ".dfs.core.windows.net", blob_sas_token);
        conf.set("fs.azure.account.key." + blob_account_name + ".blob.core.windows.net", blob_sas_token);

        System.out.println("Creating parquet dataset");
        Dataset<Row> logData = spark.read().parquet(abfs_path);

        System.out.println("Creating temp view");
        logData.createOrReplaceTempView("source");

        System.out.println("SQL");
        spark.sql("SELECT * FROM source LIMIT 10").show();
        spark.stop();
    }

    public static void configureAzureHadoopConnector(SparkSession session) {
        RuntimeConfig conf = session.conf();

        conf.set("fs.AbstractFileSystem.wasb.impl", "org.apache.hadoop.fs.azure.Wasb");
        conf.set("fs.AbstractFileSystem.wasbs.impl", "org.apache.hadoop.fs.azure.Wasbs");
        conf.set("fs.wasb.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
        conf.set("fs.wasbs.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure");

        conf.set("fs.azure.secure.mode", false);

        conf.set("fs.abfs.impl", "org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem");
        conf.set("fs.abfss.impl", "org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem");
        conf.set("fs.AbstractFileSystem.abfs.impl", "org.apache.hadoop.fs.azurebfs.Abfs");
        conf.set("fs.AbstractFileSystem.abfss.impl", "org.apache.hadoop.fs.azurebfs.Abfss");

        // Works in conjunction with fs.azure.secure.mode. Setting this config to true
        // results in fs.azure.NativeAzureFileSystem using the local SAS key generation,
        // where the SAS keys are generated in the same process as fs.azure.NativeAzureFileSystem.
        // If the fs.azure.secure.mode flag is set to false, this flag has no effect.
        conf.set("fs.azure.local.sas.key.mode", false);
    }
}
{code}
Sample build.gradle
{noformat}
plugins {
id 'java'
}

group 'org.samples'
version '1.0-SNAPSHOT'

sourceCompatibility = 1.8

repositories {
mavenCentral()
}

dependencies {
compile  'org.apache.spark:spark-sql_2.12:2.4.3'
}
{noformat}




[GitHub] [hadoop] hadoop-yetus commented on issue #1056: HDDS-1717. Remove OMFailoverProxyProvider's dependency on hadoop-3.2

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #1056: HDDS-1717. Remove 
OMFailoverProxyProvider's dependency on hadoop-3.2
URL: https://github.com/apache/hadoop/pull/1056#issuecomment-509361113
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 43 | Maven dependency ordering for branch |
   | +1 | mvninstall | 478 | trunk passed |
   | +1 | compile | 260 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 877 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | trunk passed |
   | 0 | spotbugs | 313 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 506 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 440 | the patch passed |
   | +1 | compile | 270 | the patch passed |
   | +1 | javac | 270 | the patch passed |
   | +1 | checkstyle | 79 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 709 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | +1 | findbugs | 518 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 248 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1404 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 6525 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1056/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1056 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 373a16be14bd 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / de6b7bc |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1056/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1056/2/testReport/ |
   | Max. process+thread count | 5148 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1056/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] xiaoyuyao commented on issue #1008: HDDS-1713. ReplicationManager fail to find proper node topology based…

2019-07-08 Thread GitBox
xiaoyuyao commented on issue #1008: HDDS-1713. ReplicationManager fail to find 
proper node topology based…
URL: https://github.com/apache/hadoop/pull/1008#issuecomment-509355465
 
 
   /retest





[GitHub] [hadoop] swagle opened a new pull request #1062: HDDS-1718. Increase Ratis Leader election timeout default.

2019-07-08 Thread GitBox
swagle opened a new pull request #1062: HDDS-1718. Increase Ratis Leader 
election timeout default.
URL: https://github.com/apache/hadoop/pull/1062
 
 
   Increased default to 5s. Fixed failing unit test.





[jira] [Created] (HADOOP-16416) mark DynamoDBMetadataStore.deleteTrackingValueMap as final

2019-07-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16416:
---

 Summary: mark DynamoDBMetadataStore.deleteTrackingValueMap as final
 Key: HADOOP-16416
 URL: https://issues.apache.org/jira/browse/HADOOP-16416
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran


S3Guard's {{DynamoDBMetadataStore.deleteTrackingValueMap}} field is static and
can/should be marked as final, and its name should be changed to upper case to
match the coding conventions.
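A minimal sketch of the requested change — the field name and contents here are illustrative, not the actual S3Guard code:

```java
import java.util.Collections;
import java.util.Map;

public class FinalFieldSketch {
    // Before (as described): static Map<...> deleteTrackingValueMap = ...;
    // After: static final, with an upper-case name per the coding conventions.
    static final Map<String, String> DELETE_TRACKING_VALUE_MAP =
            Collections.singletonMap("deleted", "true");

    public static void main(String[] args) {
        // The reference can no longer be reassigned; singletonMap is also immutable.
        if (!"true".equals(DELETE_TRACKING_VALUE_MAP.get("deleted"))) {
            throw new AssertionError();
        }
        System.out.println(DELETE_TRACKING_VALUE_MAP);
    }
}
```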






[jira] [Commented] (HADOOP-16409) Allow authoritative mode on non-qualified paths

2019-07-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880601#comment-16880601
 ] 

Hudson commented on HADOOP-16409:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16869 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16869/])
HADOOP-16409. Allow authoritative mode on non-qualified paths. (gabor.bota: rev 
de6b7bc67ace7744adb0320ee7de79cf28259d2d)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestAuthoritativePath.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java


> Allow authoritative mode on non-qualified paths
> ---
>
> Key: HADOOP-16409
> URL: https://issues.apache.org/jira/browse/HADOOP-16409
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.3.0
>
>
> fs.s3a.authoritative.path currently requires a qualified URI (e.g. 
> s3a://bucket/path) which is how I see this being used most immediately, but 
> it also makes sense for someone to just be able to configure /path, if all of 
> their buckets follow that pattern, or if they're providing configuration 
> already in a bucket-specific context (e.g. job-level configs, etc.) Just need 
> to qualify whatever is passed in to allowAuthoritative to make that work.
> Also, in HADOOP-16396 Gabor pointed out a few whitespace nits that I neglected 
> to fix before merging.






[jira] [Updated] (HADOOP-13980) S3Guard CLI: Add fsck check command

2019-07-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13980:

Issue Type: Bug  (was: Sub-task)
Parent: (was: HADOOP-15619)

> S3Guard CLI: Add fsck check command
> ---
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 
> invariants are violated.






[GitHub] [hadoop] hanishakoneru commented on issue #1056: HDDS-1717. Remove OMFailoverProxyProvider's dependency on hadoop-3.2

2019-07-08 Thread GitBox
hanishakoneru commented on issue #1056: HDDS-1717. Remove 
OMFailoverProxyProvider's dependency on hadoop-3.2
URL: https://github.com/apache/hadoop/pull/1056#issuecomment-509324151
 
 
   Thank you @elek for the review. 
   Fixed the findbug error.





[GitHub] [hadoop] hanishakoneru commented on issue #1056: HDDS-1717. Remove OMFailoverProxyProvider's dependency on hadoop-3.2

2019-07-08 Thread GitBox
hanishakoneru commented on issue #1056: HDDS-1717. Remove 
OMFailoverProxyProvider's dependency on hadoop-3.2
URL: https://github.com/apache/hadoop/pull/1056#issuecomment-509324189
 
 
   /retest





[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1055: HDDS-1705. Recon: Add estimatedTotalCount to the response of ...

2019-07-08 Thread GitBox
vivekratnavel commented on a change in pull request #1055: HDDS-1705. Recon: 
Add estimatedTotalCount to the response of ...
URL: https://github.com/apache/hadoop/pull/1055#discussion_r301220038
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ContainerKeyMapperTask.java
 ##
 @@ -144,24 +150,35 @@ private void  deleteOMKeyFromContainerDB(String key)
 Table.KeyValue> containerIterator =
 containerDBServiceProvider.getContainerTableIterator();
 
-Set keysToDeDeleted = new HashSet<>();
+Set keysToBeDeleted = new HashSet<>();
 
 while (containerIterator.hasNext()) {
   Table.KeyValue keyValue =
   containerIterator.next();
   String keyPrefix = keyValue.getKey().getKeyPrefix();
   if (keyPrefix.equals(key)) {
-keysToDeDeleted.add(keyValue.getKey());
+keysToBeDeleted.add(keyValue.getKey());
   }
 }
 
-for (ContainerKeyPrefix containerKeyPrefix : keysToDeDeleted) {
+for (ContainerKeyPrefix containerKeyPrefix : keysToBeDeleted) {
   containerDBServiceProvider.deleteContainerMapping(containerKeyPrefix);
+
+  // decrement count and update containerKeyCount.
+  Long containerID = containerKeyPrefix.getContainerId();
+  long keyCount =
+  containerDBServiceProvider.getKeyCountForContainer(containerID);
+  if (keyCount > 0) {
+containerDBServiceProvider.storeContainerKeyCount(containerID,
 
 Review comment:
   Yes, this is to keep key counts up to date. And, this is not a test but the 
actual task that updates the key counts.





[jira] [Updated] (HADOOP-16396) Allow authoritative mode on a subdirectory

2019-07-08 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16396:

Affects Version/s: 3.3.0

> Allow authoritative mode on a subdirectory
> --
>
> Key: HADOOP-16396
> URL: https://issues.apache.org/jira/browse/HADOOP-16396
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HADOOP-16396.001.patch, HADOOP-16396.002.patch, 
> HADOOP-16396.003.patch
>
>
> Let's allow authoritative mode to be applied only to a subset of a bucket. 
> This is coming primarily from a Hive warehousing use-case where Hive-managed 
> tables can benefit from query planning, but can't speak for the rest of the 
> bucket. This should be limited in scope and is not a general attempt to allow 
> configuration on a per-path basis, as configuration is currently done on a 
> per-process or a per-bucket basis.
> I propose a new property (we could overload 
> fs.s3a.metadatastore.authoritative, but that seems likely to cause confusion 
> somewhere). A string would be allowed that would then be qualified in the 
> context of the FileSystem, and used to check if it is a prefix for a given 
> path. If it is, we act as though authoritative mode is enabled. If not, we 
> revert to the existing behavior of fs.s3a.metadatastore.authoritative (which 
> in practice will probably be false, the default, if the new property is in 
> use).
> Let's be clear about a few things:
> * Currently authoritative mode only short-cuts the process to avoid a 
> round-trip to S3 if we know it is safe to do so. This means that even when 
> authoritative mode is enabled for a bucket, if the metadata store does not 
> have a complete (or "authoritative") current listing cached, authoritative 
> mode still has no effect. This will still apply.
> * This will only apply to getFileStatus and listStatus, and internal calls to 
> their internal counterparts. No other API is currently using authoritative 
> mode to change behavior.
> * This will only apply to getFileStatus and listStatus calls INSIDE the 
> configured prefix. If there is a recursive listing on the parent of the 
> configured prefix, no change in behavior will be observed.
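The proposed prefix check is simple enough to sketch. The helper below is a hypothetical illustration, not the actual S3A source: the name allowAuthoritative and the flag handling follow the description above. A path under the configured prefix behaves as if authoritative mode were on; everything else falls back to the global fs.s3a.metadatastore.authoritative value.

```java
/**
 * Minimal sketch of the proposed per-prefix authoritative check.
 * Hypothetical names; the real S3A implementation may differ.
 */
public class AuthoritativePrefixSketch {

  /** True iff authoritative mode should apply to the given path. */
  static boolean allowAuthoritative(String path, String authoritativePrefix,
      boolean globalAuthoritative) {
    if (authoritativePrefix == null || authoritativePrefix.isEmpty()) {
      // no per-path override configured: use the global flag
      return globalAuthoritative;
    }
    // add a trailing slash so "s3a://b/warehouse" does not match
    // "s3a://b/warehouse2"
    String prefix = authoritativePrefix.endsWith("/")
        ? authoritativePrefix : authoritativePrefix + "/";
    return path.startsWith(prefix)
        || path.equals(authoritativePrefix)
        || globalAuthoritative;
  }

  public static void main(String[] args) {
    String prefix = "s3a://bucket/warehouse";
    // inside the prefix: authoritative even though the global flag is off
    System.out.println(allowAuthoritative(
        "s3a://bucket/warehouse/t/p0", prefix, false));   // true
    // outside the prefix: falls back to the global flag
    System.out.println(allowAuthoritative(
        "s3a://bucket/other/file", prefix, false));       // false
  }
}
```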



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16409) Allow authoritative mode on non-qualified paths

2019-07-08 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16409.
-
   Resolution: Fixed
Fix Version/s: 3.3.0

> Allow authoritative mode on non-qualified paths
> ---
>
> Key: HADOOP-16409
> URL: https://issues.apache.org/jira/browse/HADOOP-16409
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.3.0
>
>
> fs.s3a.authoritative.path currently requires a qualified URI (e.g. 
> s3a://bucket/path), which is how I see this being used most immediately, but 
> it also makes sense for someone to just be able to configure /path, if all of 
> their buckets follow that pattern, or if they're providing configuration 
> already in a bucket-specific context (e.g. job-level configs, etc.). We just 
> need to qualify whatever is passed in to allowAuthoritative to make that work.
> Also, in HADOOP-16396 Gabor pointed out a few whitespace nits that I neglected 
> to fix before merging.
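The qualification step described above can be sketched as follows. This is a hypothetical helper, not the actual S3A code: a non-qualified "/path" is resolved against the filesystem's own URI, so the same configuration value works for every bucket, while a fully qualified s3a:// URI is kept as-is.

```java
import java.net.URI;

/**
 * Sketch of qualifying a possibly non-qualified configured path
 * against the filesystem URI. Hypothetical names; the real S3A
 * implementation may differ.
 */
public class QualifyAuthoritativePath {

  static String qualify(String configured, URI fsUri) {
    if (configured.contains("://")) {
      return configured;                       // already fully qualified
    }
    // "/warehouse" (or "warehouse") resolves against this FS's bucket
    String path = configured.startsWith("/") ? configured : "/" + configured;
    return fsUri.getScheme() + "://" + fsUri.getHost() + path;
  }

  public static void main(String[] args) {
    URI fs = URI.create("s3a://mybucket");
    System.out.println(qualify("/warehouse", fs));    // s3a://mybucket/warehouse
    System.out.println(qualify("s3a://other/w", fs)); // s3a://other/w (unchanged)
  }
}
```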






[jira] [Updated] (HADOOP-16396) Allow authoritative mode on a subdirectory

2019-07-08 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16396:

Fix Version/s: 3.3.0

> Allow authoritative mode on a subdirectory
> --
>
> Key: HADOOP-16396
> URL: https://issues.apache.org/jira/browse/HADOOP-16396
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16396.001.patch, HADOOP-16396.002.patch, 
> HADOOP-16396.003.patch
>
>
> Let's allow authoritative mode to be applied only to a subset of a bucket. 
> This is coming primarily from a Hive warehousing use-case where Hive-managed 
> tables can benefit from query planning, but can't speak for the rest of the 
> bucket. This should be limited in scope and is not a general attempt to allow 
> configuration on a per-path basis, as configuration is currently done on a 
> per-process or a per-bucket basis.
> I propose a new property (we could overload 
> fs.s3a.metadatastore.authoritative, but that seems likely to cause confusion 
> somewhere). A string would be allowed that would then be qualified in the 
> context of the FileSystem, and used to check if it is a prefix for a given 
> path. If it is, we act as though authoritative mode is enabled. If not, we 
> revert to the existing behavior of fs.s3a.metadatastore.authoritative (which 
> in practice will probably be false, the default, if the new property is in 
> use).
> Let's be clear about a few things:
> * Currently authoritative mode only short-cuts the process to avoid a 
> round-trip to S3 if we know it is safe to do so. This means that even when 
> authoritative mode is enabled for a bucket, if the metadata store does not 
> have a complete (or "authoritative") current listing cached, authoritative 
> mode still has no effect. This will still apply.
> * This will only apply to getFileStatus and listStatus, and internal calls to 
> their internal counterparts. No other API is currently using authoritative 
> mode to change behavior.
> * This will only apply to getFileStatus and listStatus calls INSIDE the 
> configured prefix. If there is a recursive listing on the parent of the 
> configured prefix, no change in behavior will be observed.






[jira] [Updated] (HADOOP-16409) Allow authoritative mode on non-qualified paths

2019-07-08 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16409:

Affects Version/s: 3.3.0

> Allow authoritative mode on non-qualified paths
> ---
>
> Key: HADOOP-16409
> URL: https://issues.apache.org/jira/browse/HADOOP-16409
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.3.0
>
>
> fs.s3a.authoritative.path currently requires a qualified URI (e.g. 
> s3a://bucket/path), which is how I see this being used most immediately, but 
> it also makes sense for someone to just be able to configure /path, if all of 
> their buckets follow that pattern, or if they're providing configuration 
> already in a bucket-specific context (e.g. job-level configs, etc.). We just 
> need to qualify whatever is passed in to allowAuthoritative to make that work.
> Also, in HADOOP-16396 Gabor pointed out a few whitespace nits that I neglected 
> to fix before merging.






[jira] [Updated] (HADOOP-16409) Allow authoritative mode on non-qualified paths

2019-07-08 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16409:

Component/s: fs/s3

> Allow authoritative mode on non-qualified paths
> ---
>
> Key: HADOOP-16409
> URL: https://issues.apache.org/jira/browse/HADOOP-16409
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.3.0
>
>
> fs.s3a.authoritative.path currently requires a qualified URI (e.g. 
> s3a://bucket/path), which is how I see this being used most immediately, but 
> it also makes sense for someone to just be able to configure /path, if all of 
> their buckets follow that pattern, or if they're providing configuration 
> already in a bucket-specific context (e.g. job-level configs, etc.). We just 
> need to qualify whatever is passed in to allowAuthoritative to make that work.
> Also, in HADOOP-16396 Gabor pointed out a few whitespace nits that I neglected 
> to fix before merging.






[jira] [Commented] (HADOOP-16409) Allow authoritative mode on non-qualified paths

2019-07-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880563#comment-16880563
 ] 

Gabor Bota commented on HADOOP-16409:
-

+1, committed to trunk.

> Allow authoritative mode on non-qualified paths
> ---
>
> Key: HADOOP-16409
> URL: https://issues.apache.org/jira/browse/HADOOP-16409
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> fs.s3a.authoritative.path currently requires a qualified URI (e.g. 
> s3a://bucket/path), which is how I see this being used most immediately, but 
> it also makes sense for someone to just be able to configure /path, if all of 
> their buckets follow that pattern, or if they're providing configuration 
> already in a bucket-specific context (e.g. job-level configs, etc.). We just 
> need to qualify whatever is passed in to allowAuthoritative to make that work.
> Also, in HADOOP-16396 Gabor pointed out a few whitespace nits that I neglected 
> to fix before merging.






[GitHub] [hadoop] bgaborg merged pull request #1054: HADOOP-16409. Allow authoritative mode on non-qualified paths.

2019-07-08 Thread GitBox
bgaborg merged pull request #1054: HADOOP-16409. Allow authoritative mode on 
non-qualified paths.
URL: https://github.com/apache/hadoop/pull/1054
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Created] (HADOOP-16415) Speed up S3A test runs

2019-07-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16415:
---

 Summary: Speed up S3A test runs
 Key: HADOOP-16415
 URL: https://issues.apache.org/jira/browse/HADOOP-16415
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


S3A test runs are way too slow.

Speed them up by

* reducing test setup/teardown costs
* eliminating obsolete test cases
* merging small tests into larger ones.

One thing I see is that the main S3A test cases create and destroy new FS 
instances; there's both a setup and a teardown cost there, but it does guarantee 
better isolation.

Maybe if we know all test cases in a specific suite need the same options, we 
can manage that better: demand-create the FS but only delete it in an 
@AfterClass method. That would give us the OO-inheritance-based setup of tests, 
but mean only one instance is created per suite.
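The demand-create/close-once lifecycle above can be sketched as follows. The classes are hypothetical stand-ins (a fake FS instead of S3AFileSystem, plain methods instead of JUnit annotations): the first test case in a suite pays the creation cost, later tests reuse the instance, and a single teardown, which would be called from an @AfterClass method, closes it.

```java
/**
 * Sketch of a suite-scoped filesystem lifecycle: one instance per
 * suite instead of one per test case. Hypothetical names; not the
 * real S3A test base classes.
 */
public class SuiteScopedFs {
  /** Visible for the demo: how many FS instances were ever created. */
  static int instancesCreated = 0;
  private static FakeFs fs;                 // shared across the suite

  /** Stand-in for an expensive-to-create filesystem client. */
  static class FakeFs implements AutoCloseable {
    FakeFs() { instancesCreated++; }
    @Override public void close() { }
  }

  /** Demand-create: first caller pays setup cost, later tests reuse. */
  static synchronized FakeFs getFileSystem() {
    if (fs == null) {
      fs = new FakeFs();
    }
    return fs;
  }

  /** Would be invoked once from a JUnit @AfterClass method. */
  static synchronized void teardownSuite() {
    if (fs != null) {
      fs.close();
      fs = null;
    }
  }

  public static void main(String[] args) {
    // three "test cases" in the same suite share one instance
    getFileSystem();
    getFileSystem();
    getFileSystem();
    teardownSuite();
    System.out.println(instancesCreated);   // 1
  }
}
```

The trade-off the comment notes still holds: a shared instance is faster but gives weaker isolation between test cases than per-test create/destroy.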






[jira] [Commented] (HADOOP-16384) ITestS3AContractRootDir failing: inconsistent DDB tables

2019-07-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880541#comment-16880541
 ] 

Steve Loughran commented on HADOOP-16384:
-

I wonder whether test teardown there also creates problems.

> ITestS3AContractRootDir failing: inconsistent DDB tables
> 
>
> Key: HADOOP-16384
> URL: https://issues.apache.org/jira/browse/HADOOP-16384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: hwdev-ireland-new.csv
>
>
> HADOOP-15183 added detection and rejection of prune updates when the store is 
> inconsistent (i.e. when it tries to update an entry twice in the same 
> operation, the second time with one that is inconsistent with the first)
> Now that we can detect this, we should address it. We are lucky here in that 
> my DDB table is currently inconsistent: prune is failing. 
> Plan
> # new test to run in the sequential phase, which does a s3guard prune against 
> the bucket used in tests
> # use this to identify/debug the issue
> # replicate the problem in the ITestDDBMetastore tests
> # decide what to do in this world. Tell the user to run fsck? skip?
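The inconsistency HADOOP-15183 detects, the same entry updated twice in one operation with conflicting values, can be sketched with a simple map-based check. This is a hypothetical illustration, not the DynamoDB metastore code: within one prune batch, a second update for a path must agree with the first, otherwise the batch is rejected.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the duplicate-update consistency check described above.
 * Hypothetical names; not the real DynamoDBMetadataStore logic.
 */
public class PruneBatchCheck {

  /**
   * Each update is a {path, value} pair. Returns true iff no path is
   * updated twice with conflicting values within the batch.
   */
  static boolean consistent(String[][] updates) {
    Map<String, String> seen = new HashMap<>();
    for (String[] u : updates) {
      String path = u[0];
      String value = u[1];
      String prev = seen.putIfAbsent(path, value);
      if (prev != null && !prev.equals(value)) {
        return false;                 // same entry, conflicting values
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // distinct paths: fine
    System.out.println(consistent(new String[][] {
        {"/a", "dir"}, {"/a/b", "file"}}));          // true
    // same path, conflicting values: the store is inconsistent
    System.out.println(consistent(new String[][] {
        {"/a", "dir"}, {"/a", "tombstone"}}));       // false
  }
}
```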






[jira] [Comment Edited] (HADOOP-16384) ITestS3AContractRootDir failing: inconsistent DDB tables

2019-07-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880536#comment-16880536
 ] 

Steve Loughran edited comment on HADOOP-16384 at 7/8/19 4:55 PM:
-

Notably, these failures all come from listings of entries created in the same 
"inconsistent listing" test:

{code}
"type"	"deleted"	"path"	"is_auth_dir"	"is_empty_dir"	"len"	"updated"	"updated_s"	"last_modified"	"last_modified_s"	"etag"	"version"
"s3a://hwdev-steve-ireland-new/fork-0009/test/a/b/dir3-DELAY_LISTING_ME"	"dir"	"false"	""	"UNKNOWN"	0	""	1562604657792	"Mon Jul 08 17:50:57 BST 2019"	""	""
"s3a://hwdev-steve-ireland-new/fork-0009/test/rolling/1"	"dir"	"false"	""	"UNKNOWN"	0	""	1562604657792	"Mon Jul 08 17:50:57 BST 2019"	""	""
"s3a://hwdev-steve-ireland-new/test/DELAY_LISTING_ME"	"dir"	"false"	""	"UNKNOWN"	0	""	1562604657792	"Mon Jul 08 17:50:57 BST 2019"	""	""
{code}


was (Author: ste...@apache.org):
Notable that some failures all come from the listings of entries created in the 
same "inconsistent listing" test
{code}
"type"	"deleted"	"path"	"is_auth_dir"	"is_empty_dir"	"len"	"updated"	"updated_s"	"last_modified"	"last_modified_s"	"etag"	"version"
"s3a://hwdev-steve-ireland-new/fork-0009/test/a/b/dir3-DELAY_LISTING_ME"	"dir"	"false"	""	"UNKNOWN"	0	""	1562604657792	"Mon Jul 08 17:50:57 BST 2019"	""	""
"s3a://hwdev-steve-ireland-new/fork-0009/test/rolling/1"	"dir"	"false"	""	"UNKNOWN"	0	""	1562604657792	"Mon Jul 08 17:50:57 BST 2019"	""	""
"s3a://hwdev-steve-ireland-new/test/DELAY_LISTING_ME"	"dir"	"false"	""	"UNKNOWN"	0	""	1562604657792	"Mon Jul 08 17:50:57 BST 2019"	""	""
{code}

> ITestS3AContractRootDir failing: inconsistent DDB tables
> 
>
> Key: HADOOP-16384
> URL: https://issues.apache.org/jira/browse/HADOOP-16384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: hwdev-ireland-new.csv
>
>
> HADOOP-15183 added detection and rejection of prune updates when the store is 
> inconsistent (i.e. when it tries to update an entry twice in the same 
> operation, the second time with one that is inconsistent with the first)
> Now that we can detect this, we should address it. We are lucky here in that 
> my DDB table is currently inconsistent: prune is failing. 
> Plan
> # new test to run in the sequential phase, which does a s3guard prune against 
> the bucket used in tests
> # use this to identify/debug the issue
> # replicate the problem in the ITestDDBMetastore tests
> # decide what to do in this world. Tell the user to run fsck? skip?






[jira] [Commented] (HADOOP-16384) ITestS3AContractRootDir failing: inconsistent DDB tables

2019-07-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880536#comment-16880536
 ] 

Steve Loughran commented on HADOOP-16384:
-

Notably, these failures all come from listings of entries created in the same 
"inconsistent listing" test:
{code}
"type"	"deleted"	"path"	"is_auth_dir"	"is_empty_dir"	"len"	"updated"	"updated_s"	"last_modified"	"last_modified_s"	"etag"	"version"
"s3a://hwdev-steve-ireland-new/fork-0009/test/a/b/dir3-DELAY_LISTING_ME"	"dir"	"false"	""	"UNKNOWN"	0	""	1562604657792	"Mon Jul 08 17:50:57 BST 2019"	""	""
"s3a://hwdev-steve-ireland-new/fork-0009/test/rolling/1"	"dir"	"false"	""	"UNKNOWN"	0	""	1562604657792	"Mon Jul 08 17:50:57 BST 2019"	""	""
"s3a://hwdev-steve-ireland-new/test/DELAY_LISTING_ME"	"dir"	"false"	""	"UNKNOWN"	0	""	1562604657792	"Mon Jul 08 17:50:57 BST 2019"	""	""
{code}

> ITestS3AContractRootDir failing: inconsistent DDB tables
> 
>
> Key: HADOOP-16384
> URL: https://issues.apache.org/jira/browse/HADOOP-16384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: hwdev-ireland-new.csv
>
>
> HADOOP-15183 added detection and rejection of prune updates when the store is 
> inconsistent (i.e. when it tries to update an entry twice in the same 
> operation, the second time with one that is inconsistent with the first)
> Now that we can detect this, we should address it. We are lucky here in that 
> my DDB table is currently inconsistent: prune is failing. 
> Plan
> # new test to run in the sequential phase, which does a s3guard prune against 
> the bucket used in tests
> # use this to identify/debug the issue
> # replicate the problem in the ITestDDBMetastore tests
> # decide what to do in this world. Tell the user to run fsck? skip?






[GitHub] [hadoop] hadoop-yetus commented on issue #1061: HADOOP-16380 S3Guard tombstones can mislead about directory empty status

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #1061: HADOOP-16380 S3Guard tombstones can 
mislead about directory empty status
URL: https://github.com/apache/hadoop/pull/1061#issuecomment-509303819
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1048 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 22 | trunk passed |
   | +1 | mvnsite | 36 | trunk passed |
   | +1 | shadedclient | 700 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | trunk passed |
   | 0 | spotbugs | 57 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 56 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 29 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 31 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 724 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 20 | the patch passed |
   | +1 | findbugs | 63 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 269 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3241 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1061/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1061 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 83974efa03db 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec851e4 |
   | Default Java | 1.8.0_212 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1061/1/testReport/ |
   | Max. process+thread count | 410 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1061/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1050: HDDS-1550. MiniOzoneCluster is not shutting down all the threads during shutdown. Contributed by Mukul Kumar Singh.

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #1050: HDDS-1550. MiniOzoneCluster is not 
shutting down all the threads during shutdown. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1050#issuecomment-509296144
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 76 | Maven dependency ordering for branch |
   | +1 | mvninstall | 552 | trunk passed |
   | +1 | compile | 260 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 866 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 352 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 573 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 460 | the patch passed |
   | +1 | compile | 251 | the patch passed |
   | +1 | javac | 251 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 691 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | the patch passed |
   | +1 | findbugs | 545 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 246 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1599 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6860 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1050/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1050 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d4680d08f996 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec851e4 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1050/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1050/5/testReport/ |
   | Max. process+thread count | 5261 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1050/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1011: HDFS-14313. Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memor…

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #1011: HDFS-14313. Get hdfs used space from 
FsDatasetImpl#volumeMap#ReplicaInfo in memor…
URL: https://github.com/apache/hadoop/pull/1011#issuecomment-509295309
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 79 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 71 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1229 | trunk passed |
   | +1 | compile | 1416 | trunk passed |
   | +1 | checkstyle | 153 | trunk passed |
   | +1 | mvnsite | 173 | trunk passed |
   | +1 | shadedclient | 1101 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 131 | trunk passed |
   | 0 | spotbugs | 182 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 305 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 122 | the patch passed |
   | +1 | compile | 1316 | the patch passed |
   | +1 | javac | 1316 | the patch passed |
   | -0 | checkstyle | 160 | root: The patch generated 2 new + 245 unchanged - 
1 fixed = 247 total (was 246) |
   | +1 | mvnsite | 163 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 731 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 131 | the patch passed |
   | +1 | findbugs | 329 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 575 | hadoop-common in the patch passed. |
   | -1 | unit | 7050 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 15294 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestFSImage |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
   |   | hadoop.hdfs.web.TestWebHdfsTimeouts |
   |   | hadoop.tools.TestHdfsConfigFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1011 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 5110b20e143c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec851e4 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/10/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/10/testReport/ |
   | Max. process+thread count | 3476 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] steveloughran opened a new pull request #1061: HADOOP-16380. S3Guard tombstones can mislead about directory empty status

2019-07-08 Thread GitBox
steveloughran opened a new pull request #1061: HADOOP-16380. S3Guard tombstones 
can mislead about directory empty status
URL: https://github.com/apache/hadoop/pull/1061
 
 
   
   Initial patch changes ITestS3GuardEmptyDirs to replicate the tombstone 
problem. 
   
   Moved the access to the innerGetFileStatus call into S3ATestUtils so that 
tests in the s3guard package can also get at it.
   
   Change-Id: I5e0aecea008ea281c12ca2ff16388effef45956c
   
   Tested: S3 Ireland. Now successfully fails :)





[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1060: HADOOP-13980. fsck - Work In Progress

2019-07-08 Thread GitBox
hadoop-yetus commented on a change in pull request #1060: HADOOP-13980. fsck - 
Work In Progress
URL: https://github.com/apache/hadoop/pull/1060#discussion_r301168854
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsckViolationHandler.java
 ##
 @@ -0,0 +1,289 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import org.apache.commons.math3.ode.UnknownParameterException;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Violation handler for the S3Guard's fsck
+ * 
 
 Review comment:
   whitespace:end of line
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1060: HADOOP-13980. fsck - Work In Progress

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #1060: HADOOP-13980. fsck - Work In Progress
URL: https://github.com/apache/hadoop/pull/1060#issuecomment-509281781
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1056 | trunk passed |
   | +1 | compile | 33 | trunk passed |
   | +1 | checkstyle | 20 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 695 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | trunk passed |
   | 0 | spotbugs | 57 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 56 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -0 | checkstyle | 6 | The patch fails to run checkstyle in hadoop-aws |
   | -1 | mvnsite | 18 | hadoop-aws in the patch failed. |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 736 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | the patch passed |
   | -1 | findbugs | 62 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 282 | hadoop-aws in the patch passed. |
   | -1 | asflicense | 28 | The patch generated 1 ASF License warnings. |
   | | | 3249 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Comparison of String objects using == or != in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardFsck.compareFileStatusToPathMetadata(S3AFileStatus,
 PathMetadata)   At S3GuardFsck.java:== or != in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardFsck.compareFileStatusToPathMetadata(S3AFileStatus,
 PathMetadata)   At S3GuardFsck.java:[line 221] |
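For context on the FindBugs finding above: `==` on Java Strings compares object identity, not contents, so two equal-valued strings can still compare unequal. A minimal, self-contained illustration — the method and variable names here are invented for the demo, not taken from S3GuardFsck:

```java
import java.util.Objects;

// Demonstrates why FindBugs flags String comparison with == or !=.
public class StringCompareDemo {

    // Identity comparison: true only when both references point at the
    // same object. This is the bug pattern FindBugs reports.
    static boolean sameByIdentity(String a, String b) {
        return a == b;
    }

    // Content comparison: the usual fix; Objects.equals also handles nulls.
    static boolean sameByContent(String a, String b) {
        return Objects.equals(a, b);
    }

    public static void main(String[] args) {
        String etagFromS3 = new String("abc123");    // freshly allocated
        String etagFromStore = new String("abc123"); // same content, different object
        System.out.println(sameByIdentity(etagFromS3, etagFromStore)); // false
        System.out.println(sameByContent(etagFromS3, etagFromStore));  // true
    }
}
```

Switching the flagged comparison to `equals()` (or `Objects.equals()` when either side may be null) resolves this class of warning.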
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1060/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1060 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 27a7f95ca05a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec851e4 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1060/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1060/out/maven-patch-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1060/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1060/1/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1060/1/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1060/1/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1060/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 447 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1060/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bgaborg opened a new pull request #1060: HADOOP-13980. fsck - Work In Progress

2019-07-08 Thread GitBox
bgaborg opened a new pull request #1060: HADOOP-13980. fsck - Work In Progress
URL: https://github.com/apache/hadoop/pull/1060
 
 
   WIP





[GitHub] [hadoop] bgaborg commented on issue #1054: HADOOP-16409. Allow authoritative mode on non-qualified paths.

2019-07-08 Thread GitBox
bgaborg commented on issue #1054: HADOOP-16409. Allow authoritative mode on 
non-qualified paths.
URL: https://github.com/apache/hadoop/pull/1054#issuecomment-509258404
 
 
   Test result against ireland: 4 known testMRJob failures, no others.
   +1 on this.





[GitHub] [hadoop] hadoop-yetus commented on issue #1011: HDFS-14313. Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memor…

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #1011: HDFS-14313. Get hdfs used space from 
FsDatasetImpl#volumeMap#ReplicaInfo in memor…
URL: https://github.com/apache/hadoop/pull/1011#issuecomment-509257330
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1069 | trunk passed |
   | +1 | compile | 1049 | trunk passed |
   | +1 | checkstyle | 138 | trunk passed |
   | +1 | mvnsite | 151 | trunk passed |
   | +1 | shadedclient | 997 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 132 | trunk passed |
   | 0 | spotbugs | 172 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 290 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 108 | the patch passed |
   | +1 | compile | 980 | the patch passed |
   | +1 | javac | 980 | the patch passed |
   | -0 | checkstyle | 136 | root: The patch generated 2 new + 245 unchanged - 
1 fixed = 247 total (was 246) |
   | +1 | mvnsite | 147 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 691 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 122 | the patch passed |
   | +1 | findbugs | 304 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 528 | hadoop-common in the patch failed. |
   | -1 | unit | 5030 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 12044 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.metrics2.sink.TestFileSink |
   |   | hadoop.tools.TestHdfsConfigFields |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.web.TestWebHdfsTimeouts |
   |   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.TestBlockStoragePolicy |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1011 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux daa98c35f1b9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec851e4 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/9/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/9/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/9/testReport/ |
   | Max. process+thread count | 4179 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-13980) S3Guard CLI: Add fsck check command

2019-07-08 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880424#comment-16880424
 ] 

Sean Mackrory commented on HADOOP-13980:


{quote}Export metadatastore and S3 bucket hierarchy{quote}
Is this different in some way from "export scan results in human readable 
format"? I thought about maybe having a machine-readable export that we could 
import, if that might help with supportability. I've personally never seen a 
support issue it would've helped with, but it's something to think about...

{quote}Implement the fixing mechanism{quote}
We can probably break this into more subtasks. It would be best if the 
implementation had a sequence of specific "fixers" to address specific 
discrepancies: "fixMissingParents", "fixOutOfDateEntries", etc.
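The "sequence of specific fixers" idea could be sketched as below. All names here (`ViolationFixer`, `fixAll`, the stub fixers) are hypothetical, invented only to illustrate the suggestion; nothing of this shape exists in the Hadoop codebase:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: each fixer targets one class of discrepancy and
// they are applied in a fixed order.
public class FixerChainSketch {

    interface ViolationFixer {
        String name();
        boolean fix(String path); // true if the discrepancy was repaired
    }

    static final List<ViolationFixer> FIXERS = Arrays.asList(
        new ViolationFixer() {
            public String name() { return "fixMissingParents"; }
            public boolean fix(String path) { return true; } // stub
        },
        new ViolationFixer() {
            public String name() { return "fixOutOfDateEntries"; }
            public boolean fix(String path) { return true; } // stub
        });

    // Run every fixer against a path; return how many reported a repair.
    static int fixAll(String path) {
        int fixed = 0;
        for (ViolationFixer f : FIXERS) {
            if (f.fix(path)) {
                fixed++;
            }
        }
        return fixed;
    }

    public static void main(String[] args) {
        System.out.println(fixAll("s3a://bucket/dir")); // 2
    }
}
```

Splitting the mechanism this way would let each subtask deliver one fixer with its own tests.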

> S3Guard CLI: Add fsck check command
> ---
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 
> invariants are violated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16380) S3Guard tombstones can mislead about directory empty status

2019-07-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880418#comment-16880418
 ] 

Steve Loughran commented on HADOOP-16380:
-

You seem to be able to generate this failure mode in {{ITestS3GuardEmptyDirs}} 
simply by giving the second file (the one created with the knowledge of the 
metastore) a name which ensures it comes ahead of the earlier file in the 
listings.

> S3Guard tombstones can mislead about directory empty status
> ---
>
> Key: HADOOP-16380
> URL: https://issues.apache.org/jira/browse/HADOOP-16380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0, 3.0.3, 3.3.0, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> If S3AFileSystem does an S3 LIST restricted to a single object to see if a 
> directory is empty, and the single entry found has a tombstone marker (either 
> from an inconsistent DDB Table or from an eventually consistent LIST) then it 
> will consider the directory empty, _even if there is 1+ entry which is not 
> deleted_
> We need to make sure the calculation of whether a directory is empty or not 
> is resilient to this, efficiently. 
> It surfaces  as an issue two places
> * delete(path) (where it may make things worse)
> * rename(src, dest), where a check is made for dest != an empty directory.
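The failure mode in the description above can be modelled in a few lines. This is an illustrative sketch, not S3A's actual listing code: `naiveIsEmpty` mimics trusting a LIST capped at one entry, while `resilientIsEmpty` skips tombstoned keys before declaring the directory empty:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Model of the tombstone/empty-directory bug described in HADOOP-16380.
public class EmptyDirSketch {

    // Fragile check: trusts the single entry a capped LIST returned.
    static boolean naiveIsEmpty(List<String> listing, Set<String> tombstones) {
        return listing.isEmpty() || tombstones.contains(listing.get(0));
    }

    // Resilient check: empty only if every listed key is tombstoned.
    static boolean resilientIsEmpty(List<String> listing, Set<String> tombstones) {
        for (String key : listing) {
            if (!tombstones.contains(key)) {
                return false; // found a live, non-deleted entry
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> listing = Arrays.asList("dir/a", "dir/b");
        Set<String> tombstones = new HashSet<>(Arrays.asList("dir/a"));
        // The capped LIST sees only "dir/a", which is tombstoned:
        System.out.println(naiveIsEmpty(listing.subList(0, 1), tombstones)); // true (wrong)
        System.out.println(resilientIsEmpty(listing, tombstones));          // false (right)
    }
}
```

The resilient variant implies either listing more than one object or re-listing until a live entry (or end of listing) is found, which is the efficiency concern the issue raises.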






[jira] [Commented] (HADOOP-16400) clover task failed

2019-07-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880417#comment-16880417
 ] 

Hadoop QA commented on HADOOP-16400:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 16m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
70m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}134m 32s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}286m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16400 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973912/HADOOP-16400-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux e9d06a671c20 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ec851e4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16373/artifact/out/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16373/testReport/ |
| Max. process+thread count | 4340 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16373/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> clover task failed
> --
>
> Key: HADOOP-16400
> URL: https://issues.apache.org/jira/browse/HADOOP-16400
> Project: Hadoop Common
>  Issue Type: 

[jira] [Comment Edited] (HADOOP-13980) S3Guard CLI: Add fsck check command

2019-07-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880359#comment-16880359
 ] 

Gabor Bota edited comment on HADOOP-13980 at 7/8/19 2:35 PM:
-

I started working on this and I'll create some subtasks for the things we want 
the checker to do.
First thoughts:
* Check metadata consistency between S3 and the metadatastore and log it
* Check internal consistency of the MetadataStore
* Export the metadatastore and S3 bucket hierarchy
* Export scan results in human-readable format
* Implement the fixing mechanism

As you can see, the first thing to be implemented is the consistency 
checker. If you agree with this (i.e. no concerns or other ideas) I'll create 
these sub-tasks and open a pull request for the first one.


was (Author: gabor.bota):
I started to work on this and I'll create some subtasks with the things we want 
to have with the checker.
First thought:
* Checking metadata consistency between S3 and metadatastore and log it
* Checking internal consistency of the MetadataStore
* Export metadatastore and S3 bucket hierarchi 
* Export scan results in human readable format
* Implement the fixing mechanism

As you can see the first thing that will be implemented is the consistency 
checker. If you agree with this (so no concerns or ideas) I'll create these 
sub-tasks and create a pull request for the first one.

> S3Guard CLI: Add fsck check command
> ---
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 
> invariants are violated.






[jira] [Created] (HADOOP-16414) ITestS3AMiniYarnCluster fails on sequential runs with Kerberos error

2019-07-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16414:
---

 Summary: ITestS3AMiniYarnCluster fails on sequential runs with 
Kerberos error
 Key: HADOOP-16414
 URL: https://issues.apache.org/jira/browse/HADOOP-16414
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran


If you do a sequential test run of hadoop-aws, you get a failure on 
{{ITestS3AMiniYarnCluster}}, with a message about Kerberos coming from inside 
job launch.

{code}
[ERROR] 
testWithMiniCluster(org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster)  
Time elapsed: 3.438 s  <<< ERROR!
java.io.IOException: Can't get Master Kerberos principal for use as renewer
at 
org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster.testWithMiniCluster(ITestS3AMiniYarnCluster.java:117)
{code}

Assumption: some state in the single JVM is making this test think it should be 
using Kerberos.






[GitHub] [hadoop] bgaborg commented on a change in pull request #1054: HADOOP-16409. Allow authoritative mode on non-qualified paths.

2019-07-08 Thread GitBox
bgaborg commented on a change in pull request #1054: HADOOP-16409. Allow 
authoritative mode on non-qualified paths.
URL: https://github.com/apache/hadoop/pull/1054#discussion_r301107537
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -3810,7 +3809,6 @@ public LocatedFileStatus next() throws IOException {
   final PathMetadata pm = metadataStore.get(path, true);
   // shouldn't need to check pm.isDeleted() because that will have
   // been caught by getFileStatus above.
-
 
 Review comment:
   please add this line back in





[GitHub] [hadoop] bgaborg commented on a change in pull request #1054: HADOOP-16409. Allow authoritative mode on non-qualified paths.

2019-07-08 Thread GitBox
bgaborg commented on a change in pull request #1054: HADOOP-16409. Allow 
authoritative mode on non-qualified paths.
URL: https://github.com/apache/hadoop/pull/1054#discussion_r301107465
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
 ##
 @@ -1321,6 +1321,7 @@ public void put(
   final DirListingMetadata meta,
   @Nullable final BulkOperationState operationState) throws IOException {
 LOG.debug("Saving to table {} in region {}: {}", tableName, region, meta);
+
 
 Review comment:
   Please remove this line.





[GitHub] [hadoop] bgaborg commented on a change in pull request #1054: HADOOP-16409. Allow authoritative mode on non-qualified paths.

2019-07-08 Thread GitBox
bgaborg commented on a change in pull request #1054: HADOOP-16409. Allow 
authoritative mode on non-qualified paths.
URL: https://github.com/apache/hadoop/pull/1054#discussion_r301107644
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -2421,7 +2421,6 @@ void maybeCreateFakeParentDirectory(Path path)
 result.add(files.next());
   }
   // merge the results. This will update the store as needed
-
 
 Review comment:
   please add this line back in





[jira] [Commented] (HADOOP-16380) S3Guard tombstones can mislead about directory empty status

2019-07-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880368#comment-16880368
 ] 

Steve Loughran commented on HADOOP-16380:
-

Note comment in {{org.apache.hadoop.fs.s3a.ITestS3GuardEmptyDirs}}

{code}
/**
 * Test logic around whether or not a directory is empty, with S3Guard enabled.
 * The fact that S3AFileStatus has an isEmptyDirectory flag in it makes caching
 * S3AFileStatus's really tricky, as the flag can change as a side effect of
 * changes to other paths.
 * After S3Guard is merged to trunk, we should try to remove the
 * isEmptyDirectory flag from S3AFileStatus, or maintain it outside
 * of the MetadataStore.
 */
{code}

> S3Guard tombstones can mislead about directory empty status
> ---
>
> Key: HADOOP-16380
> URL: https://issues.apache.org/jira/browse/HADOOP-16380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0, 3.0.3, 3.3.0, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> If S3AFileSystem does an S3 LIST restricted to a single object to see if a 
> directory is empty, and the single entry found has a tombstone marker (either 
> from an inconsistent DDB Table or from an eventually consistent LIST) then it 
> will consider the directory empty, _even if there is 1+ entry which is not 
> deleted_
> We need to make sure the calculation of whether a directory is empty or not 
> is resilient to this, efficiently. 
> It surfaces  as an issue two places
> * delete(path) (where it may make things worse)
> * rename(src, dest), where a check is made for dest != an empty directory.






[jira] [Commented] (HADOOP-13980) S3Guard CLI: Add fsck check command

2019-07-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880359#comment-16880359
 ] 

Gabor Bota commented on HADOOP-13980:
-

I started working on this and I'll create some subtasks for the things we want 
the checker to do.
First thoughts:
* Check metadata consistency between S3 and the metadatastore and log it
* Check internal consistency of the MetadataStore
* Export the metadatastore and S3 bucket hierarchy
* Export scan results in human-readable format
* Implement the fixing mechanism

As you can see, the first thing to be implemented is the consistency 
checker. If you agree with this (i.e. no concerns or other ideas) I'll create 
these sub-tasks and open a pull request for the first one.

> S3Guard CLI: Add fsck check command
> ---
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 
> invariants are violated.






[jira] [Work started] (HADOOP-16380) S3Guard tombstones can mislead about directory empty status

2019-07-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16380 started by Steve Loughran.
---
> S3Guard tombstones can mislead about directory empty status
> ---
>
> Key: HADOOP-16380
> URL: https://issues.apache.org/jira/browse/HADOOP-16380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0, 3.0.3, 3.3.0, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> If S3AFileSystem does an S3 LIST restricted to a single object to see if a 
> directory is empty, and the single entry found has a tombstone marker (either 
> from an inconsistent DDB table or from an eventually consistent LIST), then it 
> will consider the directory empty, _even if there are one or more entries which 
> are not deleted_.
> We need to make sure the calculation of whether a directory is empty 
> is resilient to this, and efficient.
> It surfaces as an issue in two places:
> * delete(path), where it may make things worse
> * rename(src, dest), where a check is made that dest is not a non-empty directory.
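A tombstone-resilient emptiness probe might look like the sketch below. The `(name, is_tombstone)` pairs are a hypothetical stand-in for an S3 LIST merged with the DynamoDB metadata; the point is simply to keep scanning past tombstoned entries rather than trusting a single-entry probe.

```python
def directory_is_empty(entries):
    """Return True only if every listed entry under the directory is a
    tombstone; a single live (non-tombstoned) entry makes it non-empty.

    `entries` is an iterable of (name, is_tombstone) pairs, a hypothetical
    stand-in for a paged S3 LIST merged with the metadata store.
    """
    for _name, is_tombstone in entries:
        if not is_tombstone:
            return False  # found a live entry: directory is not empty
    return True  # nothing listed, or only tombstones: empty

# A LIST capped at one entry that happens to hit a tombstone would wrongly
# report "empty"; scanning past tombstones avoids that.
probe = [("deleted-file", True), ("live-file", False)]
```

The cost is that the listing may need to page further than one entry, which is the "efficiently" part of the requirement above.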






[jira] [Work stopped] (HADOOP-8232) Provide a command line entry point to view/test topology options

2019-07-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-8232 stopped by Steve Loughran.
--
> Provide a command line entry point to view/test topology options
> 
>
> Key: HADOOP-8232
> URL: https://issues.apache.org/jira/browse/HADOOP-8232
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Affects Versions: 0.23.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-8232.patch, HADOOP-8232.patch
>
>
> Add a new command line entry point "topo" with commands for preflight 
> checking of a cluster's topology setup. 
> The initial operations would be to list the implementation class of the 
> mapper, attempt to load it, resolve a set of supplied hostnames, and then 
> dump the topology map after the resolution process.
> Target audience: 
> # ops teams trying to get a new/changed script working before deploying it on 
> a cluster.
> # someone trying to write their first script.
> Resolve and list the rack mappings of the given hosts:
> {code}
> hadoop topo test [host1] [host2] ... 
> {code}
> This would load the hostnames from a given file, resolve all of them and list 
> the results:
> {code}
> hadoop topo testfile filename
> {code}
> This version is intended for ops teams who have a list of hostnames or IP 
> addresses. 
> * Rather than just list them, the ops team may want to mandate that no 
> /default-rack mappings were found, as that is invariably a sign that the 
> script isn't handling a hostname properly.
> * No attempt to be clever and do IP address resolution, FQDN-to-hostname 
> mapping, etc.
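The preflight check described above could be sketched like this. `check_topology` and `resolve_rack` are hypothetical stand-ins for the configured topology mapper, not Hadoop's actual API; flagging /default-rack follows the ops-team check described in the bullet above.

```python
DEFAULT_RACK = "/default-rack"

def check_topology(hosts, resolve_rack):
    """Resolve each host through the supplied mapper and collect any that
    fall back to /default-rack, which usually means the topology script
    mishandled the hostname."""
    mapping = {host: resolve_rack(host) for host in hosts}
    unmapped = [h for h, rack in mapping.items() if rack == DEFAULT_RACK]
    return mapping, unmapped

# Toy mapper: only hosts under ".example.com" are known to the "script".
def toy_mapper(host):
    return "/rack1" if host.endswith(".example.com") else DEFAULT_RACK

mapping, unmapped = check_topology(["a.example.com", "mystery-host"], toy_mapper)
# A strict preflight would fail (non-zero exit) whenever `unmapped` is non-empty.
```

This mirrors what `hadoop topo testfile filename` would do after reading the hostnames from a file.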






[jira] [Assigned] (HADOOP-8232) Provide a command line entry point to view/test topology options

2019-07-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-8232:
--

Assignee: (was: Steve Loughran)

> Provide a command line entry point to view/test topology options
> 
>
> Key: HADOOP-8232
> URL: https://issues.apache.org/jira/browse/HADOOP-8232
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Affects Versions: 0.23.1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-8232.patch, HADOOP-8232.patch
>
>
> Add a new command line entry point "topo" with commands for preflight 
> checking of a cluster's topology setup. 
> The initial operations would be to list the implementation class of the 
> mapper, attempt to load it, resolve a set of supplied hostnames, and then 
> dump the topology map after the resolution process.
> Target audience: 
> # ops teams trying to get a new/changed script working before deploying it on 
> a cluster.
> # someone trying to write their first script.
> Resolve and list the rack mappings of the given hosts:
> {code}
> hadoop topo test [host1] [host2] ... 
> {code}
> This would load the hostnames from a given file, resolve all of them and list 
> the results:
> {code}
> hadoop topo testfile filename
> {code}
> This version is intended for ops teams who have a list of hostnames or IP 
> addresses. 
> * Rather than just list them, the ops team may want to mandate that no 
> /default-rack mappings were found, as that is invariably a sign that the 
> script isn't handling a hostname properly.
> * No attempt to be clever and do IP address resolution, FQDN-to-hostname 
> mapping, etc.






[jira] [Assigned] (HADOOP-8232) Provide a command line entry point to view/test topology options

2019-07-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-8232:
--

Assignee: Steve Loughran

> Provide a command line entry point to view/test topology options
> 
>
> Key: HADOOP-8232
> URL: https://issues.apache.org/jira/browse/HADOOP-8232
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Affects Versions: 0.23.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-8232.patch, HADOOP-8232.patch
>
>
> Add a new command line entry point "topo" with commands for preflight 
> checking of a cluster's topology setup. 
> The initial operations would be to list the implementation class of the 
> mapper, attempt to load it, resolve a set of supplied hostnames, and then 
> dump the topology map after the resolution process.
> Target audience: 
> # ops teams trying to get a new/changed script working before deploying it on 
> a cluster.
> # someone trying to write their first script.
> Resolve and list the rack mappings of the given hosts:
> {code}
> hadoop topo test [host1] [host2] ... 
> {code}
> This would load the hostnames from a given file, resolve all of them and list 
> the results:
> {code}
> hadoop topo testfile filename
> {code}
> This version is intended for ops teams who have a list of hostnames or IP 
> addresses. 
> * Rather than just list them, the ops team may want to mandate that no 
> /default-rack mappings were found, as that is invariably a sign that the 
> script isn't handling a hostname properly.
> * No attempt to be clever and do IP address resolution, FQDN-to-hostname 
> mapping, etc.






[jira] [Assigned] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-07-08 Thread Tsuyoshi Ozawa (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa reassigned HADOOP-13363:
---

Assignee: (was: Tsuyoshi Ozawa)

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms (see, for 
> example, https://gist.github.com/BennettSmith/7111094). To avoid crazy 
> workarounds in the build environment, and because 2.5.0 is slowly 
> disappearing as a standard installable package even for Linux/x86, we need 
> to either upgrade, bundle it ourselves, or do something else.






[jira] [Work stopped] (HADOOP-8231) Make topologies easier to set up and debug

2019-07-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-8231 stopped by Steve Loughran.
--
> Make topologies easier to set up and debug
> --
>
> Key: HADOOP-8231
> URL: https://issues.apache.org/jira/browse/HADOOP-8231
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 0.23.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Topology scripts are a source of problems as they:
> # are site-specific.
> # are hard to get right.
> # can have adverse consequences on cluster operation when they go wrong.
> This issue groups the features needed to make it easier for ops 
> people to get their scripts up and running.






[jira] [Assigned] (HADOOP-8231) Make topologies easier to set up and debug

2019-07-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-8231:
--

Assignee: (was: Steve Loughran)

> Make topologies easier to set up and debug
> --
>
> Key: HADOOP-8231
> URL: https://issues.apache.org/jira/browse/HADOOP-8231
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 0.23.1
>Reporter: Steve Loughran
>Priority: Minor
>
> Topology scripts are a source of problems as they:
> # are site-specific.
> # are hard to get right.
> # can have adverse consequences on cluster operation when they go wrong.
> This issue groups the features needed to make it easier for ops 
> people to get their scripts up and running.






[GitHub] [hadoop] hadoop-yetus commented on issue #1050: HDDS-1550. MiniOzoneCluster is not shutting down all the threads during shutdown. Contributed by Mukul Kumar Singh.

2019-07-08 Thread GitBox
hadoop-yetus commented on issue #1050: HDDS-1550. MiniOzoneCluster is not 
shutting down all the threads during shutdown. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1050#issuecomment-509223960
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 466 | trunk passed |
   | +1 | compile | 245 | trunk passed |
   | +1 | checkstyle | 59 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 816 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 305 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 496 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 417 | the patch passed |
   | +1 | compile | 255 | the patch passed |
   | +1 | javac | 255 | the patch passed |
   | -0 | checkstyle | 35 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 616 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 138 | the patch passed |
   | +1 | findbugs | 503 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 253 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1422 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 6188 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1050/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1050 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6741665d49d0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec851e4 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1050/4/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1050/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1050/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1050/4/testReport/ |
   | Max. process+thread count | 5314 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1050/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services



