[jira] [Created] (HADOOP-19146) noaa-cors-pds bucket access with global endpoint fails

2024-04-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-19146:
-

 Summary: noaa-cors-pds bucket access with global endpoint fails
 Key: HADOOP-19146
 URL: https://issues.apache.org/jira/browse/HADOOP-19146
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Viraj Jasani


All tests accessing noaa-cors-pds use the us-east-1 region, as configured at the 
bucket level. If the global endpoint is configured with a different region (e.g. 
us-west-2), they fail to access the bucket.
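
A minimal sketch of a per-bucket workaround, assuming the usual S3A per-bucket 
override pattern (fs.s3a.bucket.<bucket>.<option>); this is illustrative only, 
not the committed fix:
{code:java}
// Hedged sketch: pin the region for this one bucket so a global
// fs.s3a.endpoint.region of, say, us-west-2 does not break access to
// noaa-cors-pds, which lives in us-east-1.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NoaaCorsPdsAccess {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.endpoint.region", "us-west-2");                      // global setting
    conf.set("fs.s3a.bucket.noaa-cors-pds.endpoint.region", "us-east-1"); // per-bucket pin
    FileSystem fs = new Path("s3a://noaa-cors-pds/").getFileSystem(conf);
    System.out.println(fs.getFileStatus(new Path("s3a://noaa-cors-pds/")));
  }
}
{code}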

 

Sample error:
{code:java}
org.apache.hadoop.fs.s3a.AWSRedirectException: Received permanent redirect 
response to region [us-east-1].  This likely indicates that the S3 region 
configured in fs.s3a.endpoint.region does not match the AWS region containing 
the bucket.: null (Service: S3, Status Code: 301, Request ID: PMRWMQC9S91CNEJR, 
Extended Request ID: 
6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
    at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:253)
    at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:155)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4041)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3947)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getFileStatus$26(S3AFileSystem.java:3924)
    at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
    at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
    at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3922)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:115)
    at org.apache.hadoop.fs.Globber.doGlob(Globber.java:349)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$globStatus$35(S3AFileSystem.java:4956)
    at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
    at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
    at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:4949)
    at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:313)
    at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:281)
    at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:445)
    at 
org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:311)
    at 
org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:328)
    at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:201)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1677)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1674)
 {code}
{code:java}
Caused by: software.amazon.awssdk.services.s3.model.S3Exception: null (Service: 
S3, Status Code: 301, Request ID: PMRWMQC9S91CNEJR, Extended Request ID: 
6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
    at 
software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
    at 
software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)
    at 
software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:85)
    at 
software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:43)
    at 
software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler$Crc32ValidationResponseHandler.handle(AwsSyncClientHandler.java:93)
    at 
software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$successTransformationResponseHandler$7(BaseClientHandler.java:279)
    ...
    ...
    ...
    at 
software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:53)

[jira] [Created] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-02-04 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-19066:
-

 Summary: AWS SDK V2 - Enabling FIPS should be allowed with central 
endpoint
 Key: HADOOP-19066
 URL: https://issues.apache.org/jira/browse/HADOOP-19066
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.5.0, 3.4.1
Reporter: Viraj Jasani


FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
considers overriding the endpoint and enabling FIPS as mutually exclusive, we fail 
fast if fs.s3a.endpoint is set together with FIPS support (details in HADOOP-18975).

Now, we no longer override the SDK endpoint for the central endpoint since we enable 
cross-region access (details in HADOOP-19044), but we still fail fast if the 
endpoint is central and FIPS is enabled.

Changes proposed (a configuration sketch follows the list):
 * S3A to fail fast only if FIPS is enabled and a non-central endpoint is 
configured.
 * Tests to ensure the S3 bucket is accessible with the default region us-east-2 and 
cross-region access (expected with the central endpoint).
 * Document FIPS support with the central endpoint in connecting.html.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19023) ITestS3AConcurrentOps#testParallelRename intermittent timeout failure

2024-01-03 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-19023:
-

 Summary: ITestS3AConcurrentOps#testParallelRename intermittent 
timeout failure
 Key: HADOOP-19023
 URL: https://issues.apache.org/jira/browse/HADOOP-19023
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani


We need to configure a higher timeout for the test.
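
A hedged sketch of the kind of test-level override implied here, reusing the 
fs.s3a.connection.request.timeout key from HADOOP-19022; the value is illustrative:
{code:java}
// Hedged sketch: give the parallel-rename scale test a request timeout well above
// the 15000 ms that expired below. The value is illustrative, not the final fix.
import org.apache.hadoop.conf.Configuration;

public class ParallelRenameTimeoutOverride {
  static Configuration withHigherRequestTimeout(Configuration conf) {
    // Value is in milliseconds, as described in HADOOP-19022.
    conf.set("fs.s3a.connection.request.timeout", "60000");
    return conf;
  }
}
{code}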

 
{code:java}
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 256.281 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps
[ERROR] 
testParallelRename(org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps)  Time 
elapsed: 72.565 s  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSApiCallTimeoutException: Writing Object on 
fork-0005/test/testParallelRename-source0: 
software.amazon.awssdk.core.exception.ApiCallTimeoutException: Client execution 
did not complete before the specified timeout configuration: 15000 millis
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:214)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:532)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:620)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: software.amazon.awssdk.core.exception.ApiCallTimeoutException: 
Client execution did not complete before the specified timeout configuration: 
15000 millis
at 
software.amazon.awssdk.core.exception.ApiCallTimeoutException$BuilderImpl.build(ApiCallTimeoutException.java:97)
at 
software.amazon.awssdk.core.exception.ApiCallTimeoutException.create(ApiCallTimeoutException.java:38)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.generateApiCallTimeoutException(ApiCallTimeoutTrackingStage.java:151)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.handleInterruptedException(ApiCallTimeoutTrackingStage.java:139)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.translatePipelineException(ApiCallTimeoutTrackingStage.java:107)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:62)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:50)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:32)
at 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
at 
software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:224)
at 

[jira] [Created] (HADOOP-19022) ITestS3AConfiguration#testRequestTimeout failure

2024-01-03 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-19022:
-

 Summary: ITestS3AConfiguration#testRequestTimeout failure
 Key: HADOOP-19022
 URL: https://issues.apache.org/jira/browse/HADOOP-19022
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani


"fs.s3a.connection.request.timeout" should be specified in milliseconds as per
{code:java}
Duration apiCallTimeout = getDuration(conf, REQUEST_TIMEOUT,
DEFAULT_REQUEST_TIMEOUT_DURATION, TimeUnit.MILLISECONDS, Duration.ZERO); 
{code}
The test fails consistently because it sets a 120 ms timeout, which is less than 
15 s (the minimum network operation duration), and the value hence gets reset to 
15000 ms by the enforcement.
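
A minimal sketch of that floor enforcement, assuming any configured duration below 
the minimum is simply raised to the minimum (the class and names below are 
illustrative, not the S3A code):
{code:java}
import java.time.Duration;

public class TimeoutFloorSketch {
  // 15 s minimum network operation duration, as described above.
  private static final Duration MINIMUM = Duration.ofSeconds(15);

  static Duration enforceMinimum(Duration configured) {
    // A configured value below the floor is replaced by the floor.
    return configured.compareTo(MINIMUM) < 0 ? MINIMUM : configured;
  }

  public static void main(String[] args) {
    // 120 ms configured -> 15000 ms effective, which is what the assertion saw.
    System.out.println(enforceMinimum(Duration.ofMillis(120)).toMillis());
  }
}
{code}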

 
{code:java}
[ERROR] testRequestTimeout(org.apache.hadoop.fs.s3a.ITestS3AConfiguration)  
Time elapsed: 0.016 s  <<< FAILURE!
java.lang.AssertionError: Configured fs.s3a.connection.request.timeout is 
different than what AWS sdk configuration uses internally expected:<12> but 
was:<15000>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at 
org.apache.hadoop.fs.s3a.ITestS3AConfiguration.testRequestTimeout(ITestS3AConfiguration.java:444)
 {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18959) Use builder for prefetch CachingBlockManager

2023-10-29 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18959:
-

 Summary: Use builder for prefetch CachingBlockManager
 Key: HADOOP-18959
 URL: https://issues.apache.org/jira/browse/HADOOP-18959
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani


Some of the recent changes (HADOOP-18399, HADOOP-18291, HADOOP-18829, etc.) have 
added more parameters to the prefetch CachingBlockManager constructor for processing 
read/write block requests. The constructor now takes too many parameters and more 
are likely to be introduced later. We should use the builder pattern to pass them.

This would also help consolidate the required prefetch parameters into one single 
place within S3ACachingInputStream instead of scattering them across locations.
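
A minimal builder sketch of the idea; the parameter names below are illustrative 
and are not claimed to match the actual CachingBlockManager constructor arguments:
{code:java}
import java.util.concurrent.ExecutorService;
import org.apache.hadoop.conf.Configuration;

public final class CachingBlockManagerBuilder {
  ExecutorService futurePool;
  Configuration conf;
  int bufferPoolSize;
  int maxBlocksCount;

  public CachingBlockManagerBuilder withFuturePool(ExecutorService pool) {
    this.futurePool = pool;
    return this;
  }

  public CachingBlockManagerBuilder withConfiguration(Configuration c) {
    this.conf = c;
    return this;
  }

  public CachingBlockManagerBuilder withBufferPoolSize(int size) {
    this.bufferPoolSize = size;
    return this;
  }

  public CachingBlockManagerBuilder withMaxBlocksCount(int count) {
    this.maxBlocksCount = count;
    return this;
  }
  // build() would validate the collected parameters and hand this builder to the
  // CachingBlockManager constructor, replacing the long positional argument list.
}
{code}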



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18829) s3a prefetch LRU cache eviction metric

2023-10-19 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HADOOP-18829.
---
Fix Version/s: 3.4.0
   3.3.9
 Hadoop Flags: Reviewed
   Resolution: Fixed

> s3a prefetch LRU cache eviction metric
> --
>
> Key: HADOOP-18829
> URL: https://issues.apache.org/jira/browse/HADOOP-18829
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Follow-up from HADOOP-18291:
> Add new IO statistics metric to capture s3a prefetch LRU cache eviction.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18918) ITestS3GuardTool fails if SSE encryption is used

2023-10-02 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18918:
-

 Summary: ITestS3GuardTool fails if SSE encryption is used
 Key: HADOOP-18918
 URL: https://issues.apache.org/jira/browse/HADOOP-18918
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.6
Reporter: Viraj Jasani


{code:java}
[ERROR] Tests run: 15, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 25.989 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardTool
[ERROR] 
testLandsatBucketRequireUnencrypted(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardTool)
  Time elapsed: 0.807 s  <<< ERROR!
46: Bucket s3a://landsat-pds: required encryption is none but actual encryption 
is DSSE-KMS
    at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.exitException(S3GuardTool.java:915)
    at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.badState(S3GuardTool.java:881)
    at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:511)
    at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:283)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82)
    at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:963)
    at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardToolTestHelper.runS3GuardCommand(S3GuardToolTestHelper.java:147)
    at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.run(AbstractS3GuardToolTestBase.java:114)
    at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardTool.testLandsatBucketRequireUnencrypted(ITestS3GuardTool.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
 {code}
Since the landsat bucket requires no encryption, the test should be skipped whenever 
any encryption algorithm is configured.
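
A hedged sketch of such a skip using a JUnit 4 assumption; the config key used to 
read the algorithm is an assumption here, and the real test may resolve it 
differently:
{code:java}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.conf.Configuration;

public final class SkipWhenEncryptedSketch {
  static void skipIfEncryptionConfigured(Configuration conf) {
    // Hypothetical key; whichever option the suite uses for SSE/DSSE applies here.
    String algorithm = conf.getTrimmed("fs.s3a.encryption.algorithm", "");
    assumeTrue("Bucket must be unencrypted but encryption is set: " + algorithm,
        algorithm.isEmpty());
  }
}
{code}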



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+

2023-07-30 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18832:
-

 Summary: Upgrade aws-java-sdk to 1.12.499+
 Key: HADOOP-18832
 URL: https://issues.apache.org/jira/browse/HADOOP-18832
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Viraj Jasani


aws-java-sdk versions < 1.12.499 use a vulnerable version of Netty and hence show 
up in security CVE scans (CVE-2023-34462). The safe Netty version is 4.1.94.Final, 
which is used by aws-java-sdk 1.12.499+.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18829) s3a prefetch LRU cache eviction metric

2023-07-26 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18829:
-

 Summary: s3a prefetch LRU cache eviction metric
 Key: HADOOP-18829
 URL: https://issues.apache.org/jira/browse/HADOOP-18829
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Follow-up from HADOOP-18291:

Add new IO statistics metric to capture s3a prefetch LRU cache eviction.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18809) s3a prefetch read/write file operations should guard channel close

2023-07-17 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18809:
-

 Summary: s3a prefetch read/write file operations should guard 
channel close
 Key: HADOOP-18809
 URL: https://issues.apache.org/jira/browse/HADOOP-18809
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As per Steve's suggestion from the s3a prefetch LRU cache work,

s3a prefetch disk-based cache file read and write operations should guard the close 
of FileChannel and WritableByteChannel, closing them even if the read/write 
operations throw IOException.
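
A minimal sketch of the guarded close on the write path; the helper and its 
signature are illustrative, not the actual prefetch code:
{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

final class GuardedCacheWrite {
  static void writeToCache(OutputStream out, ByteBuffer buffer) throws IOException {
    WritableByteChannel channel = Channels.newChannel(out);
    try {
      channel.write(buffer);   // may throw IOException mid-write
    } finally {
      channel.close();         // channel is closed even when the write fails
    }
  }
}
{code}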



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18740) s3a prefetch cache blocks should be accessed by RW locks

2023-05-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18740:
-

 Summary: s3a prefetch cache blocks should be accessed by RW locks
 Key: HADOOP-18740
 URL: https://issues.apache.org/jira/browse/HADOOP-18740
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


In order to implement LRU- or LFU-based cache removal policies for s3a prefetched 
cache blocks, it is important for all cache reader threads to acquire a read lock, 
and similarly for the cache file removal mechanism (fs close or cache eviction) to 
acquire a write lock, before accessing the files.

As we maintain the block entries in an in-memory map, we should be able to 
introduce a read-write lock per cache file entry; we don't need a coarse-grained 
lock shared by all entries.
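
A minimal sketch of a per-entry read-write lock keyed by block number; the 
registry, names, and callbacks below are illustrative rather than the actual cache 
code:
{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

final class BlockLockRegistry {
  private final ConcurrentHashMap<Integer, ReentrantReadWriteLock> locks =
      new ConcurrentHashMap<>();

  private ReentrantReadWriteLock lockFor(int blockNumber) {
    return locks.computeIfAbsent(blockNumber, b -> new ReentrantReadWriteLock());
  }

  <T> T readBlock(int blockNumber, Supplier<T> reader) {
    ReentrantReadWriteLock lock = lockFor(blockNumber);
    lock.readLock().lock();
    try {
      return reader.get();        // many readers can hold the read lock concurrently
    } finally {
      lock.readLock().unlock();
    }
  }

  void evictBlock(int blockNumber, Runnable deleter) {
    ReentrantReadWriteLock lock = lockFor(blockNumber);
    lock.writeLock().lock();
    try {
      deleter.run();              // eviction/fs close waits for readers to drain
      locks.remove(blockNumber);
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}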

 

This is a prerequisite to HADOOP-18291.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17612) Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0

2023-05-09 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HADOOP-17612.
---
Resolution: Fixed

> Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
> ---
>
> Key: HADOOP-17612
> URL: https://issues.apache.org/jira/browse/HADOOP-17612
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Let's upgrade Zookeeper and Curator to 3.6.3 and 5.2.0 respectively.
> Curator 5.2 also supports Zookeeper 3.5 servers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-17612) Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0

2023-05-09 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reopened HADOOP-17612:
---

Reopening to update the resolution

> Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
> ---
>
> Key: HADOOP-17612
> URL: https://issues.apache.org/jira/browse/HADOOP-17612
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Let's upgrade Zookeeper and Curator to 3.6.3 and 5.2.0 respectively.
> Curator 5.2 also supports Zookeeper 3.5 servers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18669) Remove Log4Json Layout

2023-03-17 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18669:
-

 Summary: Remove Log4Json Layout
 Key: HADOOP-18669
 URL: https://issues.apache.org/jira/browse/HADOOP-18669
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Log4Json extends org.apache.log4j.Layout to provide a JSON log layout. This 
utility is not used anywhere in Hadoop. It is IA.Private (by default).

Log4j2 has introduced drastic changes to Layout, also converting it into an 
interface. Log4j2 has its own JsonLayout, which provides options such as pretty vs. 
compact JSON, UTF-8 or UTF-16 encoding, complete well-formed JSON vs. fragment 
JSON, and adding custom fields to the generated JSON:

[https://github.com/apache/logging-log4j2/blob/2.x/log4j-core/src/main/java/org/apache/logging/log4j/core/layout/JsonLayout.java]

 

This utility is better suited to the log4j project than to Hadoop, because the 
maintenance cost in Hadoop would be higher as further upgrades introduce changes to 
the Layout format.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-03-17 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HADOOP-18631.
---
Resolution: Fixed

> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine if the given logger should use async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-03-17 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reopened HADOOP-18631:
---

> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine if the given logger should use async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18668) Path capability probe for truncate is only honored by RawLocalFileSystem

2023-03-16 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18668:
-

 Summary: Path capability probe for truncate is only honored by 
RawLocalFileSystem
 Key: HADOOP-18668
 URL: https://issues.apache.org/jira/browse/HADOOP-18668
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


FileSystem#hasPathCapability returns true for the "fs.capability.paths.truncate" 
probe only in RawLocalFileSystem. It should be honored by all file system 
implementations that support truncate.
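
A short illustration of the probe as callers would use it; whether a given 
implementation returns true is exactly what this issue is about:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TruncateCapabilityProbe {
  public static void main(String[] args) throws Exception {
    Path path = new Path(args[0]);
    FileSystem fs = path.getFileSystem(new Configuration());
    // Callers should be able to rely on this probe before calling truncate().
    boolean canTruncate = fs.hasPathCapability(path, "fs.capability.paths.truncate");
    System.out.println("truncate supported: " + canTruncate);
  }
}
{code}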



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18654) Remove unused custom appender TaskLogAppender

2023-03-06 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18654:
-

 Summary: Remove unused custom appender TaskLogAppender
 Key: HADOOP-18654
 URL: https://issues.apache.org/jira/browse/HADOOP-18654
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


TaskLogAppender is no longer used in the codebase. The only past references we 
have are from old release notes (HADOOP-7308, MAPREDUCE-3208, MAPREDUCE-2372, 
HADOOP-1355).

Before we migrate to log4j2, it would be good to remove TaskLogAppender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18653) LogLevel servlet to determine log impl before using setLevel

2023-03-05 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18653:
-

 Summary: LogLevel servlet to determine log impl before using 
setLevel
 Key: HADOOP-18653
 URL: https://issues.apache.org/jira/browse/HADOOP-18653
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


The LogLevel GET API is used to set the log level for a given class name 
dynamically. While we have cleaned up the commons-logging references, it would be 
great to determine whether the slf4j log4j adapter is on the classpath before 
allowing the client to set the log level.

Proposed changes (a sketch of the adapter check follows the list):
 * Use the slf4j logger factory to get the logger reference for the given class name
 * Use a generic utility to identify whether the slf4j log4j adapter is on the 
classpath before using the log4j API to update the log level
 * If the log4j adapter is not on the classpath, report an error in the output
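
A hedged sketch of the adapter check, not the actual Hadoop utility: it simply 
inspects whether the slf4j binding backing the named logger is the log4j adapter 
before any log4j-specific level change is attempted:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class Log4jAdapterCheck {
  static boolean isLog4jBacked(String className) {
    Logger logger = LoggerFactory.getLogger(className);
    // The slf4j-log4j12/slf4j-reload4j bindings expose Log4jLoggerAdapter as the
    // concrete logger class; other bindings (logback, slf4j-simple) will not match.
    return logger.getClass().getName().contains("Log4jLoggerAdapter");
  }
}
{code}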



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18649) CLA and CRLA appenders to be replaced with RFA

2023-03-01 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18649:
-

 Summary: CLA and CRLA appenders to be replaced with RFA
 Key: HADOOP-18649
 URL: https://issues.apache.org/jira/browse/HADOOP-18649
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


ContainerLogAppender and ContainerRollingLogAppender both have functionality quite 
similar to RollingFileAppender. Maintaining custom appenders for Log4j2 is costly 
when there is only a very minor difference from the built-in appender provided by 
Log4j.

The goal of this sub-task is to replace both the ContainerLogAppender and 
ContainerRollingLogAppender custom appenders with RollingFileAppender without 
changing any of the system properties already being used to determine the file 
name, file size, backup index, pattern layout properties, etc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18648) Avoid loading kms log4j properties dynamically by KMSWebServer

2023-02-27 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18648:
-

 Summary: Avoid loading kms log4j properties dynamically by 
KMSWebServer
 Key: HADOOP-18648
 URL: https://issues.apache.org/jira/browse/HADOOP-18648
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Log4j2 does not support applications loading the log4j configuration 
(properties/xml/json/yaml) dynamically. It no longer supports overriding the 
loading of properties using "log4j.defaultInitOverride" the way log4j1 does.

For KMS, instead of loading the properties file dynamically, we should add the 
log4j properties file as part of HADOOP_OPTS.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18645) Provide keytab file key name with ServiceStateException

2023-02-24 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18645:
-

 Summary: Provide keytab file key name with ServiceStateException
 Key: HADOOP-18645
 URL: https://issues.apache.org/jira/browse/HADOOP-18645
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Viraj Jasani
Assignee: Viraj Jasani


 
{code:java}
util.ExitUtil - Exiting with status 1: 
org.apache.hadoop.service.ServiceStateException: java.io.IOException: Running 
in secure mode, but config doesn't have a keytab
1: org.apache.hadoop.service.ServiceStateException: java.io.IOException: 
Running in secure mode, but config doesn't have a keytab
  at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:264)
..
..
 {code}
 

 

When multiple downstream services use different config keys to point to the same 
keytab file, and one of the config keys goes missing or gets overridden by config 
generators, it becomes a bit confusing for operators to work out which config is 
missing for a particular service, especially when the keytab file value is already 
present under a different config key.

It would be nice to report the config key with the stack trace error message.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-14 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18631:
-

 Summary: Migrate Async appenders to log4j properties
 Key: HADOOP-18631
 URL: https://issues.apache.org/jira/browse/HADOOP-18631
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Before we can upgrade to log4j2, we need to migrate the async appenders that we add 
"dynamically in the code" to the log4j.properties file. Instead of using core/hdfs 
site configs, whether to use async appenders should be decided by system properties 
from which the log4j properties can derive their values.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18628) Server connection should log host name before returning VersionMismatch error

2023-02-10 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18628:
-

 Summary: Server connection should log host name before returning 
VersionMismatch error
 Key: HADOOP-18628
 URL: https://issues.apache.org/jira/browse/HADOOP-18628
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Viraj Jasani
Assignee: Viraj Jasani


In environments with dynamically changing IP addresses, debugging issues from logs 
that contain only the IP address becomes a bit difficult at times.
{code:java}
2023-02-08 23:26:50,112 WARN  [Socket Reader #1 for port 8485] ipc.Server - 
Incorrect RPC Header length from {IPV4}:36556 expected length: 
java.nio.HeapByteBuffer[pos=0 lim=4 cap=4] got length: 
java.nio.HeapByteBuffer[pos=0 lim=4 cap=4] {code}
It would be better to log the full hostname for the given IP address rather than 
only the IP address.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18620) Avoid using grizzly-http classes

2023-02-06 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18620:
-

 Summary: Avoid using grizzly-http classes
 Key: HADOOP-18620
 URL: https://issues.apache.org/jira/browse/HADOOP-18620
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As discussed on the parent Jira HADOOP-15984, we do not have any 
grizzly-http-servlet version available that uses Jersey 2 dependencies. 

Version 2.4.4 contains Jersey 1 artifacts: 
[https://repo1.maven.org/maven2/org/glassfish/grizzly/grizzly-http-servlet/2.4.4/grizzly-http-servlet-2.4.4.pom]

The next higher version available is 3.0.0-M1 and it contains Jersey 3 
artifacts: 
[https://repo1.maven.org/maven2/org/glassfish/grizzly/grizzly-http-servlet/3.0.0-M1/grizzly-http-servlet-3.0.0-M1.pom]

 

Moreover, we do not use the grizzly-http-* modules extensively. We use them only 
for a few tests, so that we don't have to implement all the methods of 
HttpServletResponse for our custom test classes.

We should get rid of the grizzly-http-servlet, grizzly-http and grizzly-http-server 
artifacts of org.glassfish.grizzly and instead implement HttpServletResponse 
directly, to avoid having to depend on grizzly upgrades as part of the overall 
Jersey upgrade.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18592) Sasl connection failure should log remote address

2023-01-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18592:
-

 Summary: Sasl connection failure should log remote address
 Key: HADOOP-18592
 URL: https://issues.apache.org/jira/browse/HADOOP-18592
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.3.4
Reporter: Viraj Jasani
Assignee: Viraj Jasani


If the SASL connection fails with some generic error, we miss logging the remote 
server that the client was trying to connect to.

Sample log:
{code:java}
2023-01-12 00:22:28,148 WARN  [20%2C1673404849949,1] ipc.Client - Exception 
encountered while connecting to the server 
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:197)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
    at 
org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
    at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:141)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.FilterInputStream.read(FilterInputStream.java:133)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1950)
    at 
org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:367)
    at 
org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:623)
    at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:414)
...
... {code}
We should log the remote server address.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18466) Limit the findbugs suppression IS2_INCONSISTENT_SYNC to S3AFileSystem field

2022-09-22 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18466:
-

 Summary: Limit the findbugs suppression IS2_INCONSISTENT_SYNC to 
S3AFileSystem field
 Key: HADOOP-18466
 URL: https://issues.apache.org/jira/browse/HADOOP-18466
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Limit the findbugs suppression IS2_INCONSISTENT_SYNC to the S3AFileSystem field 
futurePool, so that the suppression does not hide other synchronization bugs 
findbugs might discover.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18455) s3a prefetching Executor should be closed

2022-09-16 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18455:
-

 Summary: s3a prefetching Executor should be closed
 Key: HADOOP-18455
 URL: https://issues.apache.org/jira/browse/HADOOP-18455
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Viraj Jasani
Assignee: Viraj Jasani


This is the follow-up work for HADOOP-18186. The new executor service we use 
for s3a prefetching should be closed while shutting down the file system.
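
A minimal sketch (names illustrative) of what closing the executor alongside the 
filesystem could look like:
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

final class PrefetchExecutorLifecycle {
  private final ExecutorService prefetchExecutor;

  PrefetchExecutorLifecycle(ExecutorService prefetchExecutor) {
    this.prefetchExecutor = prefetchExecutor;
  }

  /** Invoked from the filesystem's close() path. */
  void close() throws InterruptedException {
    prefetchExecutor.shutdown();                       // stop accepting new work
    if (!prefetchExecutor.awaitTermination(30, TimeUnit.SECONDS)) {
      prefetchExecutor.shutdownNow();                  // cancel anything still running
    }
  }
}
{code}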



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18186) s3a prefetching to use SemaphoredDelegatingExecutor for submitting work

2022-09-16 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HADOOP-18186.
---
Resolution: Fixed

> s3a prefetching to use SemaphoredDelegatingExecutor for submitting work
> ---
>
> Key: HADOOP-18186
> URL: https://issues.apache.org/jira/browse/HADOOP-18186
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Use SemaphoredDelegatingExecutor for each stream to submit work, if 
> possible, for better fairness in processes with many streams.
> This also takes a DurationTrackerFactory to count how long was spent in the 
> queue, something we would want to know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-18186) s3a prefetching to use SemaphoredDelegatingExecutor for submitting work

2022-09-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reopened HADOOP-18186:
---

Re-opening for an addendum

> s3a prefetching to use SemaphoredDelegatingExecutor for submitting work
> ---
>
> Key: HADOOP-18186
> URL: https://issues.apache.org/jira/browse/HADOOP-18186
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Use SemaphoredDelegatingExecutor for each stream to submit work, if 
> possible, for better fairness in processes with many streams.
> This also takes a DurationTrackerFactory to count how long was spent in the 
> queue, something we would want to know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18435) Remove usage of fs.s3a.executor.capacity

2022-08-31 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18435:
-

 Summary: Remove usage of fs.s3a.executor.capacity
 Key: HADOOP-18435
 URL: https://issues.apache.org/jira/browse/HADOOP-18435
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Viraj Jasani
Assignee: Viraj Jasani


When s3guard was part of s3a, DynamoDBMetadataStore was the only consumer of 
StoreContext that used the throttled executor provided by StoreContext, which 
internally uses fs.s3a.executor.capacity to determine the executor capacity for 
SemaphoredDelegatingExecutor. With the removal of s3guard from s3a, we should also 
remove fs.s3a.executor.capacity and its usages, as it is no longer used by any 
StoreContext consumers. The config's existence and its description can be really 
confusing for users.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18403) Fix FileSystem leak in ITestS3AAWSCredentialsProvider

2022-08-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18403:
-

 Summary: Fix FileSystem leak in ITestS3AAWSCredentialsProvider
 Key: HADOOP-18403
 URL: https://issues.apache.org/jira/browse/HADOOP-18403
 Project: Hadoop Common
  Issue Type: Test
Reporter: Viraj Jasani
Assignee: Viraj Jasani


ITestS3AAWSCredentialsProvider#testAnonymousProvider has a FileSystem leak that 
should be fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18397) Shutdown AWSSecurityTokenService when it's resources are no longer in use

2022-08-08 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18397:
-

 Summary: Shutdown AWSSecurityTokenService when it's resources are 
no longer in use
 Key: HADOOP-18397
 URL: https://issues.apache.org/jira/browse/HADOOP-18397
 Project: Hadoop Common
  Issue Type: Task
  Components: fs/s3
Reporter: Viraj Jasani
Assignee: Viraj Jasani


AWSSecurityTokenService resources can be released whenever they are no longer in 
use. The documentation of AWSSecurityTokenService#shutdown says that while it is 
not mandatory for a client to shut down the token service, the client can certainly 
release it early whenever it no longer requires the token service resources. We 
achieve this by making the STS client closeable, so we can use it wherever that is 
suitable.
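
A hedged sketch of what "closeable" could look like, assuming the SDK v1 
AWSSecurityTokenService interface with its shutdown() method; the wrapper class 
itself is illustrative:
{code:java}
import java.io.Closeable;

import com.amazonaws.services.securitytoken.AWSSecurityTokenService;

final class ClosableSTSClient implements Closeable {
  private final AWSSecurityTokenService sts;

  ClosableSTSClient(AWSSecurityTokenService sts) {
    this.sts = sts;
  }

  AWSSecurityTokenService getService() {
    return sts;
  }

  @Override
  public void close() {
    sts.shutdown();   // release the client's internal resources early
  }
}
{code}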



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18303) Remove shading exclusion of javax.ws.rs-api from hadoop-client-runtime

2022-07-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HADOOP-18303.
---
Resolution: Won't Fix

> Remove shading exclusion of javax.ws.rs-api from hadoop-client-runtime
> --
>
> Key: HADOOP-18303
> URL: https://issues.apache.org/jira/browse/HADOOP-18303
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> As part of HADOOP-18033, we have excluded shading of javax.ws.rs-api from 
> both hadoop-client-runtime and hadoop-client-minicluster. This has caused 
> issues for downstreamers e.g. 
> [https://github.com/apache/incubator-kyuubi/issues/2904], more discussions.
> We should put the shading back in hadoop-client-runtime to fix CNFE issues 
> for downstreamers.
> cc [~ayushsaxena] [~pan3793] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18303) Remove shading exclusion of javax.ws.rs-api from hadoop-client-runtime

2022-06-19 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18303:
-

 Summary: Remove shading exclusion of javax.ws.rs-api from 
hadoop-client-runtime
 Key: HADOOP-18303
 URL: https://issues.apache.org/jira/browse/HADOOP-18303
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As part of HADOOP-18033, we excluded shading of javax.ws.rs-api from both 
hadoop-client-runtime and hadoop-client-minicluster. This has caused issues for 
downstream projects, e.g. [https://github.com/apache/incubator-kyuubi/issues/2904], 
among other discussions.

We should put the shading back in hadoop-client-runtime to fix CNFE issues for 
downstream projects.

cc [~ayushsaxena] [~pan3793] 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18288) Total requests and total requests per sec served by RPC servers

2022-06-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18288:
-

 Summary: Total requests and total requests per sec served by RPC 
servers
 Key: HADOOP-18288
 URL: https://issues.apache.org/jira/browse/HADOOP-18288
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Viraj Jasani
Assignee: Viraj Jasani


RPC servers provide a bunch of useful information like the number of open 
connections, slow requests, number of in-progress handlers, RPC processing time, 
queue time, etc. However, so far they don't provide the accumulated total of all 
requests or a current snapshot of requests per second served by the server. 
Exposing these would help, from an operational viewpoint, in identifying how busy 
the servers have been and how much load they are currently serving during 
cluster-wide high load.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18228) Update hadoop-vote to use HADOOP_RC_VERSION dir

2022-05-06 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18228:
-

 Summary: Update hadoop-vote to use HADOOP_RC_VERSION dir
 Key: HADOOP-18228
 URL: https://issues.apache.org/jira/browse/HADOOP-18228
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


The recent changes in the release script require a minor change in hadoop-vote to 
use the Hadoop RC version dir before verifying the signature and checksum of the 
.tar.gz files.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18224) Upgrade maven compiler plugin

2022-05-05 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18224:
-

 Summary: Upgrade maven compiler plugin
 Key: HADOOP-18224
 URL: https://issues.apache.org/jira/browse/HADOOP-18224
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Currently we are using maven-compiler-plugin version 3.1, which is quite old 
(2013) and is also pulling in a vulnerable log4j dependency:
{code:java}
[INFO]
org.apache.maven.plugins:maven-compiler-plugin:maven-plugin:3.1:runtime
[INFO]   org.apache.maven.plugins:maven-compiler-plugin:jar:3.1
[INFO]   org.apache.maven:maven-plugin-api:jar:2.0.9
[INFO]   org.apache.maven:maven-artifact:jar:2.0.9
[INFO]   org.codehaus.plexus:plexus-utils:jar:1.5.1
[INFO]   org.apache.maven:maven-core:jar:2.0.9
[INFO]   org.apache.maven:maven-settings:jar:2.0.9
[INFO]   org.apache.maven:maven-plugin-parameter-documenter:jar:2.0.9
...
...
...
[INFO]   log4j:log4j:jar:1.2.12
[INFO]   commons-logging:commons-logging-api:jar:1.1
[INFO]   com.google.collections:google-collections:jar:1.0
[INFO]   junit:junit:jar:3.8.2
 {code}
 

We should upgrade to version 3.10.1 (the latest as of March 2022) of maven-compiler-plugin.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18196) Remove replace-guava from replacer plugin

2022-04-08 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18196:
-

 Summary: Remove replace-guava from replacer plugin
 Key: HADOOP-18196
 URL: https://issues.apache.org/jira/browse/HADOOP-18196
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani


While running the build, I realized that all replacer plugin executions run only 
after the "banned-illegal-imports" enforcer plugin.

For instance,
{code:java}
[INFO] --- maven-enforcer-plugin:3.0.0:enforce (banned-illegal-imports) @ 
hadoop-cloud-storage ---
[INFO] 
[INFO] --- replacer:1.5.3:replace (replace-generated-sources) @ 
hadoop-cloud-storage ---
[INFO] Skipping
[INFO] 
[INFO] --- replacer:1.5.3:replace (replace-sources) @ hadoop-cloud-storage ---
[INFO] Skipping
[INFO] 
[INFO] --- replacer:1.5.3:replace (replace-guava) @ hadoop-cloud-storage ---
[INFO] Replacement run on 0 file.
[INFO]  {code}
Hence, if our source code uses com.google.common, banned-illegal-imports will fail 
the build and the replacer plugin will not even get executed.

We should remove it, as it is only a redundant execution step.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18191) Log retry count while handling exceptions in RetryInvocationHandler

2022-04-04 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18191:
-

 Summary: Log retry count while handling exceptions in 
RetryInvocationHandler
 Key: HADOOP-18191
 URL: https://issues.apache.org/jira/browse/HADOOP-18191
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As part of failure handling in RetryInvocationHandler, we log the exception 
details, the API that was invoked, the failover attempts, and the delay.

For the purpose of better debugging as well as fine-tuning of retry params, it 
would be good to also log the retry count that we already maintain in the Counter 
object.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18142) Increase precommit job timeout from 24 hr to 30 hr

2022-02-23 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18142:
-

 Summary: Increase precommit job timeout from 24 hr to 30 hr
 Key: HADOOP-18142
 URL: https://issues.apache.org/jira/browse/HADOOP-18142
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As per some recent precommit build results, the full build QA is not completing 
within 24 hr (recent example [here|https://github.com/apache/hadoop/pull/4000], 
where more than 5 builds timed out after 24 hr). We should increase the timeout to 
30 hr.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18131) Upgrade maven enforcer plugin and relevant dependencies

2022-02-18 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18131:
-

 Summary: Upgrade maven enforcer plugin and relevant dependencies
 Key: HADOOP-18131
 URL: https://issues.apache.org/jira/browse/HADOOP-18131
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


The Maven enforcer plugin's latest version, 3.0.0, has some noticeable improvements 
(e.g. MENFORCER-350, MENFORCER-388, MENFORCER-353) and fixes for us to incorporate. 
Besides, some of the relevant enforcer dependencies (e.g. the extra enforcer rules 
and the restrict-imports enforcer) also have good improvements.

We should upgrade maven enforcer plugin and the relevant dependencies.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18125) Utility to identify git commit / Jira fixVersion discrepancies for RC preparation

2022-02-14 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18125:
-

 Summary: Utility to identify git commit / Jira fixVersion 
discrepancies for RC preparation
 Key: HADOOP-18125
 URL: https://issues.apache.org/jira/browse/HADOOP-18125
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As part of RC preparation, we need to identify all git commits that landed on the 
release branch but whose corresponding Jira is either not yet resolved or does not 
contain the expected fixVersion. Only when the git commits have corresponding Jiras 
resolved with the expected fixVersion do all such Jiras get included in the 
auto-generated CHANGES.md produced by the Yetus changelog generator.

The proposal of this Jira is to provide such a script that can be useful for all 
upcoming RC preparations and that lists all Jiras needing manual intervention. The 
utility script should use the Jira API to retrieve individual fields and use git 
log to loop through the commit history.

The script should identify these issues:
 # the commit is reverted as per the commit message
 # the commit message does not contain a Jira number in the expected format (e.g. 
HADOOP- / HDFS- etc)
 # the Jira does not have the expected fixVersion
 # the Jira has the expected fixVersion, but it is not yet resolved
 # the Jira has the release's corresponding fixVersion and is resolved, but no 
corresponding commit is found yet

It can take inputs as:
 # First commit hash to start excluding commits from history
 # Fix Version
 # JIRA Project Name
 # Path of project's working dir
 # Jira server url
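
A rough, hypothetical sketch of such a utility is below, only to illustrate the flow 
described above. The class name, argument order and the naive JSON string check are 
illustrative assumptions; the standard Jira REST path /rest/api/2/issue is used, but 
the real script may well differ.
{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hypothetical sketch: walk git log, extract Jira keys, check fixVersions via the Jira REST API. */
public class FixVersionCheck {
  private static final Pattern JIRA_KEY = Pattern.compile("(HADOOP|HDFS|YARN|MAPREDUCE)-\\d+");

  public static void main(String[] args) throws Exception {
    String firstExcludedCommit = args[0]; // inputs as listed above
    String fixVersion = args[1];
    String projectDir = args[2];
    String jiraUrl = args[3];             // e.g. https://issues.apache.org/jira

    // Loop through the commit history, excluding everything up to the given commit.
    Process git = new ProcessBuilder("git", "-C", projectDir, "log", "--oneline",
        firstExcludedCommit + "..HEAD").start();
    HttpClient http = HttpClient.newHttpClient();
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(git.getInputStream()))) {
      String line;
      while ((line = reader.readLine()) != null) {
        if (line.toLowerCase().contains("revert")) {
          System.out.println("Reverted commit, needs manual check: " + line);
          continue;
        }
        Matcher m = JIRA_KEY.matcher(line);
        if (!m.find()) {
          System.out.println("No Jira key in commit message: " + line);
          continue;
        }
        // Fetch fixVersions/status for the Jira key; a naive substring check stands in
        // for real JSON parsing in this sketch.
        HttpRequest req = HttpRequest.newBuilder(URI.create(
            jiraUrl + "/rest/api/2/issue/" + m.group() + "?fields=fixVersions,status")).GET().build();
        String json = http.send(req, HttpResponse.BodyHandlers.ofString()).body();
        if (!json.contains("\"name\":\"" + fixVersion + "\"")) {
          System.out.println(m.group() + " does not have fixVersion " + fixVersion);
        }
      }
    }
  }
}
{code}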






[jira] [Created] (HADOOP-18098) Basic verification of release candidates

2022-01-28 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18098:
-

 Summary: Basic verification of release candidates
 Key: HADOOP-18098
 URL: https://issues.apache.org/jira/browse/HADOOP-18098
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


We should provide a script for basic sanity checks of Hadoop release candidates. It 
should include:
 * Signature
 * Checksum
 * Rat check
 * Build from src
 * Build tarball from src

Although we could include unit tests as well, the overall unit test run time would 
be significantly higher, and precommit Jenkins builds already provide a better view 
of UT sanity.






[jira] [Created] (HADOOP-18089) Test coverage for Async profiler servlets

2022-01-21 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18089:
-

 Summary: Test coverage for Async profiler servlets
 Key: HADOOP-18089
 URL: https://issues.apache.org/jira/browse/HADOOP-18089
 Project: Hadoop Common
  Issue Type: Test
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As discussed in HADOOP-18077, we should provide sufficient test coverage to 
discover any potential regression in async profiler servlets: ProfileServlet 
and ProfileOutputServlet.






[jira] [Created] (HADOOP-18077) ProfileOutputServlet unable to proceed due to NPE

2022-01-10 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18077:
-

 Summary: ProfileOutputServlet unable to proceed due to NPE
 Key: HADOOP-18077
 URL: https://issues.apache.org/jira/browse/HADOOP-18077
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


The ProfileOutputServlet context doesn't have the Hadoop configuration available, 
and hence the async profiler redirection to the output servlet fails to identify 
whether admin access is allowed:
{code:java}
HTTP ERROR 500 java.lang.NullPointerException
URI:    /prof-output-hadoop/async-prof-pid-98613-cpu-2.html
STATUS:    500
MESSAGE:    java.lang.NullPointerException
SERVLET:    org.apache.hadoop.http.ProfileOutputServlet-58c34bb3
CAUSED BY:    java.lang.NullPointerException
Caused by:
java.lang.NullPointerException
    at 
org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1619)
    at 
org.apache.hadoop.http.ProfileOutputServlet.doGet(ProfileOutputServlet.java:51)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
    at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:550)
    at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
    at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
    at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
    at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234)
    at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
    at 
org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:179)
    at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:516)
    at 
org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392)
    at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
    at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
    at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
    at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
    at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
    at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
    at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
    at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
    at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    at 
org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    at java.lang.Thread.run(Thread.java:748){code}
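
One possible defensive sketch (not necessarily the actual fix) is to have the output 
servlet fail fast with a clear message when the Hadoop configuration attribute is 
missing from the servlet context, instead of letting isInstrumentationAccessAllowed() 
hit an NPE. This is a fragment of a doGet() method, shown only to illustrate the guard:
{code:java}
@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp)
    throws ServletException, IOException {
  // HttpServer2 normally publishes the Configuration under CONF_CONTEXT_ATTRIBUTE;
  // if it was never set for this servlet's context, bail out instead of throwing an NPE.
  if (getServletContext().getAttribute(HttpServer2.CONF_CONTEXT_ATTRIBUTE) == null) {
    resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
        "Hadoop configuration is not available in the servlet context");
    return;
  }
  if (!HttpServer2.isInstrumentationAccessAllowed(getServletContext(), req, resp)) {
    return; // isInstrumentationAccessAllowed already wrote the error response
  }
  // ... serve the generated profiler output file as before ...
}
{code}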






[jira] [Created] (HADOOP-18055) Async Profiler endpoint for Hadoop daemons

2021-12-22 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18055:
-

 Summary: Async Profiler endpoint for Hadoop daemons
 Key: HADOOP-18055
 URL: https://issues.apache.org/jira/browse/HADOOP-18055
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Async profiler ([https://github.com/jvm-profiling-tools/async-profiler]) is a 
low overhead sampling profiler for Java that does not suffer from Safepoint 
bias problem. It features HotSpot-specific APIs to collect stack traces and to 
track memory allocations. The profiler works with OpenJDK, Oracle JDK and other 
Java runtimes based on the HotSpot JVM.

Async profiler can also profile heap allocations, lock contention, and HW 
performance counters in addition to CPU.

We have an HTTP-server based servlet stack, hence we can use HIVE-20202 as an 
implementation template to provide the async profiler as a servlet for Hadoop 
daemons. Ideally we achieve these requirements (a usage sketch follows the list):
 * Retrieve flamegraph SVG generated from latest profile trace.
 * Online enable and disable of profiling activity. (async-profiler does not do 
instrumentation based profiling so this should not cause the code gen related 
perf problems of that other approach and can be safely toggled on and off while 
under production load.)
 * CPU profiling.
 * ALLOCATION profiling.
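
For illustration, a minimal client-side sketch of how such a servlet could be 
exercised once it is wired in. The endpoint path /prof and the query parameters 
shown here are assumptions modeled on the HIVE-20202 style servlet, not a committed 
API:
{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Sketch: ask a daemon's HTTP endpoint for a 30 second CPU profile and print the response. */
public class ProfClientSketch {
  public static void main(String[] args) throws Exception {
    // Assumed URL layout: http://<daemon-host>:<http-port>/prof?event=cpu&duration=30
    String url = args.length > 0 ? args[0] : "http://localhost:9870/prof?event=cpu&duration=30";
    HttpClient client = HttpClient.newHttpClient();
    HttpResponse<String> resp = client.send(
        HttpRequest.newBuilder(URI.create(url)).GET().build(),
        HttpResponse.BodyHandlers.ofString());
    // The servlet is expected to respond with, or redirect to, the generated flamegraph output.
    System.out.println("HTTP " + resp.statusCode());
    System.out.println(resp.body());
  }
}
{code}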






[jira] [Created] (HADOOP-18039) Upgrade hbase2 version and fix TestTimelineWriterHBaseDown

2021-12-08 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18039:
-

 Summary: Upgrade hbase2 version and fix TestTimelineWriterHBaseDown
 Key: HADOOP-18039
 URL: https://issues.apache.org/jira/browse/HADOOP-18039
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As mentioned on the parent Jira, we can't upgrade the hbase2 profile version beyond 
2.2.4 until we either have hbase 2 artifacts available that are built with the 
hadoop 3 profile by default, or hbase 3 is rolled out (hbase 3 is compatible with 
hadoop 3 versions only).

Let's upgrade the hbase2 profile version to 2.2.4 as part of this Jira and also fix 
TestTimelineWriterHBaseDown to create the connection only after the mini cluster is 
up.
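
A hedged sketch of the ordering being described (class and variable names here are 
illustrative; the real test uses the timeline service's own setup): the HBase 
connection should be created only after the mini cluster has started.
{code:java}
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

/** Sketch: start the mini cluster first, then create the Connection from its Configuration. */
public class MiniClusterOrderingSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster();                        // the cluster must be up first
    try (Connection conn =
             ConnectionFactory.createConnection(util.getConfiguration())) {
      // ... exercise the timeline writer against the live mini cluster ...
      System.out.println("connected: " + !conn.isClosed());
    } finally {
      util.shutdownMiniCluster();
    }
  }
}
{code}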






[jira] [Created] (HADOOP-18027) Include static imports in the maven plugin rules

2021-11-29 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18027:
-

 Summary: Include static imports in the maven plugin rules
 Key: HADOOP-18027
 URL: https://issues.apache.org/jira/browse/HADOOP-18027
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


The maven enforcer plugin rule that bans illegal imports requires explicit mention 
of static imports in order to evaluate whether any publicly accessible static 
entities from the banned classes are directly imported by Hadoop code.
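
For example, a rule that bans only the class {{com.google.common.base.Preconditions}} 
would not, by itself, flag a static member import from that class; the static form 
below has to be listed in the rule explicitly as well (the class chosen here is just 
an illustration):
{code:java}
// Regular import: caught by a ban on com.google.common.base.Preconditions.
import com.google.common.base.Preconditions;

// Static import of a member of the same banned class: only caught when the rule
// also mentions the static entities (e.g. com.google.common.base.Preconditions.*).
import static com.google.common.base.Preconditions.checkNotNull;

public class StaticImportExample {
  public static void main(String[] args) {
    String[] checked = checkNotNull(args, "args must not be null");
    Preconditions.checkArgument(checked.length >= 0);
    System.out.println(checked.length);
  }
}
{code}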






[jira] [Created] (HADOOP-18025) Upgrade HBase version to 1.7.1 for hbase1 profile

2021-11-25 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18025:
-

 Summary: Upgrade HBase version to 1.7.1 for hbase1 profile
 Key: HADOOP-18025
 URL: https://issues.apache.org/jira/browse/HADOOP-18025
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-18022) Add restrict-imports-enforcer-rule for Guava Preconditions in hadoop-main pom

2021-11-23 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18022:
-

 Summary: Add restrict-imports-enforcer-rule for Guava 
Preconditions in hadoop-main pom
 Key: HADOOP-18022
 URL: https://issues.apache.org/jira/browse/HADOOP-18022
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Add a restrict-imports-enforcer-rule for Guava Preconditions in the hadoop-main pom 
to prevent any new imports in the future. Remove any remaining usages of Guava 
Preconditions from the codebase.






[jira] [Created] (HADOOP-18018) unguava: remove Preconditions from hadoop-tools modules

2021-11-19 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18018:
-

 Summary: unguava: remove Preconditions from hadoop-tools modules
 Key: HADOOP-18018
 URL: https://issues.apache.org/jira/browse/HADOOP-18018
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Replace Guava Preconditions with the internal implementation in hadoop.util, which 
relies on Java 8+ APIs, for all modules in hadoop-tools (a before/after sketch 
follows).
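
A minimal before/after sketch of the mechanical change intended here, assuming the 
replacement class is org.apache.hadoop.util.Preconditions with Guava-compatible 
method names:
{code:java}
// Before: Guava (or the shaded thirdparty Guava) Preconditions.
// import com.google.common.base.Preconditions;

// After: Hadoop's own implementation, built on plain Java 8+ APIs.
import org.apache.hadoop.util.Preconditions;

public class PreconditionsExample {
  static int parsePort(String value) {
    Preconditions.checkNotNull(value, "value must not be null");
    int port = Integer.parseInt(value);
    Preconditions.checkArgument(port > 0 && port < 65536, "port out of range: %s", port);
    return port;
  }

  public static void main(String[] args) {
    System.out.println(parsePort("8020"));
  }
}
{code}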






[jira] [Created] (HADOOP-18017) unguava: remove Preconditions from hadoop-yarn-project modules

2021-11-19 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18017:
-

 Summary: unguava: remove Preconditions from hadoop-yarn-project 
modules
 Key: HADOOP-18017
 URL: https://issues.apache.org/jira/browse/HADOOP-18017
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Replace Guava Preconditions with the internal implementation in hadoop.util, which 
relies on Java 8+ APIs, for all modules in hadoop-yarn-project.






[jira] [Created] (HADOOP-18006) maven-enforcer-plugin's execution of banned-illegal-imports gets overridden in child poms

2021-11-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18006:
-

 Summary: maven-enforcer-plugin's execution of 
banned-illegal-imports gets overridden in child poms
 Key: HADOOP-18006
 URL: https://issues.apache.org/jira/browse/HADOOP-18006
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


When we specify any maven plugin with an execution tag in the parent as well as in 
child modules, the child module's plugin configuration overrides the parent's. For 
instance, when {{banned-illegal-imports}} is applied for a child module with only 
one banned import (let's say {{Preconditions}}), then only that banned import is 
covered by that child module; all imports defined in the parent module (e.g. Sets, 
Lists etc) are overridden and no longer applied.

After this 
[commit|https://github.com/apache/hadoop/commit/62c86eaa0e539a4307ca794e0fcd502a77ebceb8],
 the hadoop-hdfs module will not complain about {{Sets}} even if I import it from 
the banned Guava imports; on the other hand, the hadoop-yarn module doesn't define 
any child-level {{banned-illegal-imports}}, so yarn modules will fail if the Guava 
{{Sets}} import is used.

So going forward, it would be good to replace Guava imports with Hadoop's own 
imports module by module, and only at the end should we add a new entry to the 
parent pom's {{banned-illegal-imports}} list.






[jira] [Created] (HADOOP-17968) Migrate checkstyle IllegalImport to banned-illegal-imports enforcer

2021-10-14 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17968:
-

 Summary: Migrate checkstyle IllegalImport to 
banned-illegal-imports enforcer
 Key: HADOOP-17968
 URL: https://issues.apache.org/jira/browse/HADOOP-17968
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As discussed on PR [3503|https://github.com/apache/hadoop/pull/3503], we should 
migrate the existing imports listed in the IllegalImport tag in checkstyle.xml to 
maven-enforcer-plugin's banned-illegal-imports enforcer rule, so that the build 
never succeeds in the presence of any of the illegal imports.






[jira] [Created] (HADOOP-17967) Keep restrict-imports-enforcer-rule for Guava VisibleForTesting in hadoop-main pom

2021-10-14 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17967:
-

 Summary: Keep restrict-imports-enforcer-rule for Guava 
VisibleForTesting in hadoop-main pom
 Key: HADOOP-17967
 URL: https://issues.apache.org/jira/browse/HADOOP-17967
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17963) Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-yarn-project modules

2021-10-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17963:
-

 Summary: Replace Guava VisibleForTesting by Hadoop's own 
annotation in hadoop-yarn-project modules
 Key: HADOOP-17963
 URL: https://issues.apache.org/jira/browse/HADOOP-17963
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17962) Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-tools modules

2021-10-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17962:
-

 Summary: Replace Guava VisibleForTesting by Hadoop's own 
annotation in hadoop-tools modules
 Key: HADOOP-17962
 URL: https://issues.apache.org/jira/browse/HADOOP-17962
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17959) Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-cloud-storage-project and hadoop-mapreduce-project modules

2021-10-08 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17959:
-

 Summary: Replace Guava VisibleForTesting by Hadoop's own 
annotation in hadoop-cloud-storage-project and hadoop-mapreduce-project modules
 Key: HADOOP-17959
 URL: https://issues.apache.org/jira/browse/HADOOP-17959
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17957) Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-hdfs-project modules

2021-10-07 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17957:
-

 Summary: Replace Guava VisibleForTesting by Hadoop's own 
annotation in hadoop-hdfs-project modules
 Key: HADOOP-17957
 URL: https://issues.apache.org/jira/browse/HADOOP-17957
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17956) Replace all default Charset usage with UTF-8

2021-10-07 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17956:
-

 Summary: Replace all default Charset usage with UTF-8
 Key: HADOOP-17956
 URL: https://issues.apache.org/jira/browse/HADOOP-17956
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As discussed on PR#3515, creating this sub-task to replace all default-charset 
usage with UTF-8, since relying on the platform's default charset has some known 
problems (e.g. HADOOP-11379, HADOOP-11389).

FYI [~aajisaka]
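
A small, generic illustration of the kind of change intended (not a specific 
occurrence from the codebase): always pass an explicit charset instead of relying on 
the platform default.
{code:java}
import java.nio.charset.StandardCharsets;

public class CharsetExample {
  public static void main(String[] args) {
    String text = "héllo hadoop";

    // Before: platform-default charset, result varies across JVMs and locales.
    byte[] platformBytes = text.getBytes();

    // After: explicit UTF-8, identical bytes everywhere.
    byte[] utf8Bytes = text.getBytes(StandardCharsets.UTF_8);
    String roundTrip = new String(utf8Bytes, StandardCharsets.UTF_8);

    System.out.println(platformBytes.length + " vs " + utf8Bytes.length);
    System.out.println(roundTrip.equals(text));
  }
}
{code}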






[jira] [Reopened] (HADOOP-17947) Provide alternative to Guava VisibleForTesting

2021-10-05 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reopened HADOOP-17947:
---

Reopening for a minor addendum.

> Provide alternative to Guava VisibleForTesting
> --
>
> Key: HADOOP-17947
> URL: https://issues.apache.org/jira/browse/HADOOP-17947
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2, 3.2.4
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> In an attempt to reduce the dependency on Guava, we should remove 
> VisibleForTesting annotation usages as it has very high usage in our 
> codebase. This Jira is to provide Hadoop's own alternative and use it in 
> hadoop-common-project modules.






[jira] [Created] (HADOOP-17952) Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-common-project modules

2021-10-05 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17952:
-

 Summary: Replace Guava VisibleForTesting by Hadoop's own 
annotation in hadoop-common-project modules
 Key: HADOOP-17952
 URL: https://issues.apache.org/jira/browse/HADOOP-17952
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17950) Provide replacement for deprecated APIs of commons-io IOUtils

2021-10-03 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17950:
-

 Summary: Provide replacement for deprecated APIs of commons-io 
IOUtils
 Key: HADOOP-17950
 URL: https://issues.apache.org/jira/browse/HADOOP-17950
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani









[jira] [Created] (HADOOP-17947) Provide alternative to Guava VisibleForTesting in Hadoop common

2021-09-30 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17947:
-

 Summary: Provide alternative to Guava VisibleForTesting in Hadoop 
common
 Key: HADOOP-17947
 URL: https://issues.apache.org/jira/browse/HADOOP-17947
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


In an attempt to reduce the dependency on Guava, we should replace 
VisibleForTesting annotation usages, since the annotation is used very widely in 
our codebase. This Jira is to provide Hadoop's own alternative and use it in 
hadoop-common-project modules.
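
A hedged sketch of what the replacement looks like at a call site, assuming the new 
annotation is published from the hadoop-annotations module under 
org.apache.hadoop.classification (the exact package and retention are settled on the 
PR):
{code:java}
// Before: Guava (or shaded thirdparty) annotation.
// import org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting;

// After: Hadoop's own annotation, removing the Guava dependency for this very common usage.
import org.apache.hadoop.classification.VisibleForTesting;

public class CacheHolder {
  private int hits;

  public void record() {
    hits++;
  }

  @VisibleForTesting
  int getHitsForTest() {   // widened to package-private purely so tests can assert on it
    return hits;
  }

  public static void main(String[] args) {
    CacheHolder holder = new CacheHolder();
    holder.record();
    System.out.println(holder.getHitsForTest());
  }
}
{code}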






[jira] [Created] (HADOOP-17892) Add Hadoop code formatter in dev-support

2021-09-05 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17892:
-

 Summary: Add Hadoop code formatter in dev-support
 Key: HADOOP-17892
 URL: https://issues.apache.org/jira/browse/HADOOP-17892
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani


We should add the Hadoop code formatter XML to dev-support, specifically for new 
developers to refer to.






[jira] [Created] (HADOOP-17874) ExceptionsHandler to add terse/suppressed Exceptions in thread-safe manner

2021-08-26 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17874:
-

 Summary: ExceptionsHandler to add terse/suppressed Exceptions in 
thread-safe manner
 Key: HADOOP-17874
 URL: https://issues.apache.org/jira/browse/HADOOP-17874
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Even though we have explicit comments stating that terseExceptions and 
suppressedExceptions are replaced in a thread-safe manner, in reality they are not. 
Since we can't guarantee that Server implementations only ever add Exceptions 
non-concurrently, we should make these additions thread-safe.
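
One common pattern for making such additions thread-safe is a copy-on-write 
replacement of an immutable set under a lock, roughly as sketched below. Field and 
method names are illustrative, not the actual ExceptionsHandler code:
{code:java}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

/** Sketch: readers see an immutable snapshot; writers publish a new snapshot under a lock. */
public class TerseExceptionsSketch {
  private volatile Set<String> terseExceptions = Collections.emptySet();

  /** Thread-safe addition: copy, mutate the copy, publish an unmodifiable snapshot. */
  public synchronized void addTerseExceptions(Class<?>... exceptionClasses) {
    Set<String> updated = new HashSet<>(terseExceptions);
    for (Class<?> clazz : exceptionClasses) {
      updated.add(clazz.toString());
    }
    terseExceptions = Collections.unmodifiableSet(updated);
  }

  /** Lock-free read path, used on every RPC failure. */
  public boolean isTerse(Class<?> exceptionClass) {
    return terseExceptions.contains(exceptionClass.toString());
  }

  public static void main(String[] args) {
    TerseExceptionsSketch handler = new TerseExceptionsSketch();
    handler.addTerseExceptions(IllegalArgumentException.class);
    System.out.println(handler.isTerse(IllegalArgumentException.class));
  }
}
{code}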






[jira] [Created] (HADOOP-17858) Avoid possible class loading deadlock with VerifierNone initialization

2021-08-23 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17858:
-

 Summary: Avoid possible class loading deadlock with VerifierNone 
initialization
 Key: HADOOP-17858
 URL: https://issues.apache.org/jira/browse/HADOOP-17858
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


The superclass Verifier has a static field VERIFIER_NONE whose initializer 
instantiates the sub-class VerifierNone. This reference can result in a deadlock 
during class loading as per 
[https://docs.oracle.com/javase/specs/jls/se8/html/jls-12.html#jls-12.4.2].

As of today, only RpcProgram uses this instance and hence it is safe, but if more 
clients start using it (specifically from static initializers), it has the 
potential to cause a deadlock. We should break this reference before it is too 
late.
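
To make the hazard concrete, here is a self-contained sketch of the same shape 
(names are illustrative): a superclass whose static field initializer constructs a 
subclass can deadlock when two threads trigger initialization of the two classes in 
opposite order.
{code:java}
/** Sketch of the risky shape: the superclass's static initializer constructs a subclass. */
class Base {
  // Initializing Base forces initialization of NoneImpl, whose own initialization in turn
  // requires its superclass Base to be initialized -> a cycle across the two classes.
  static final Base NONE = new NoneImpl();

  static Base none() {
    return NONE;
  }
}

class NoneImpl extends Base {
  static final String NAME = "none";
}

public class ClassInitCycleSketch {
  public static void main(String[] args) {
    // Single-threaded use is fine (recursive initialization by the same thread is allowed).
    // The deadlock risk appears only when one thread first touches Base while another thread
    // first touches NoneImpl: each then blocks on the other class's initialization lock.
    System.out.println(Base.none());
  }
}
{code}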






[jira] [Created] (HADOOP-17841) Remove ListenerHandle from Hadoop registry

2021-08-08 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17841:
-

 Summary: Remove ListenerHandle from Hadoop registry
 Key: HADOOP-17841
 URL: https://issues.apache.org/jira/browse/HADOOP-17841
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As part of HADOOP-17835 (replacing PathChildrenCache/TreeCache with CuratorCache), 
I realized that although registerPathListener() of CuratorService returns a 
ListenerHandle, it is not used by RegistryDNSServer. We can remove ListenerHandle 
from hadoop-registry as it is not a Public/LimitedPrivate interface.






[jira] [Reopened] (HADOOP-17808) ipc.Client not setting interrupt flag after catching InterruptedException

2021-08-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reopened HADOOP-17808:
---

Reopening for an addendum to remove excessive logging.

> ipc.Client not setting interrupt flag after catching InterruptedException
> -
>
> Key: HADOOP-17808
> URL: https://issues.apache.org/jira/browse/HADOOP-17808
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.2
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> ipc.Client is swallowing InterruptedException at a couple of places:
>  # While waiting on all connections to be closed
>  # While waiting to retrieve some RPC response
> We should at least set the interrupt signal and also log the 
> InterruptedException caught.






[jira] [Created] (HADOOP-17835) Use CuratorCache implementation instead of PathChildrenCache / TreeCache

2021-08-04 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17835:
-

 Summary: Use CuratorCache implementation instead of 
PathChildrenCache / TreeCache
 Key: HADOOP-17835
 URL: https://issues.apache.org/jira/browse/HADOOP-17835
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As we have moved to Curator 5.2.0 for Hadoop 3.4.0, we should start using the new 
CuratorCache service implementation in place of the deprecated PathChildrenCache 
and TreeCache use cases (a minimal sketch follows).
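
A minimal sketch of the target API, assuming a Curator 5.x client has already been 
built; the ZooKeeper address, path and listener behaviour here are illustrative:
{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.cache.CuratorCache;
import org.apache.curator.framework.recipes.cache.CuratorCacheListener;
import org.apache.curator.retry.RetryOneTime;

/** Sketch: CuratorCache covers both the PathChildrenCache and TreeCache use cases in Curator 5.x. */
public class CuratorCacheSketch {
  public static void main(String[] args) throws Exception {
    try (CuratorFramework client = CuratorFrameworkFactory.newClient(
        "localhost:2181", new RetryOneTime(1000))) {
      client.start();
      try (CuratorCache cache = CuratorCache.build(client, "/registry")) {
        cache.listenable().addListener(CuratorCacheListener.builder()
            .forCreates(node -> System.out.println("created " + node.getPath()))
            .forChanges((oldNode, node) -> System.out.println("changed " + node.getPath()))
            .forDeletes(node -> System.out.println("deleted " + node.getPath()))
            .build());
        cache.start();
        Thread.sleep(5000); // keep the cache alive briefly for the demonstration
      }
    }
  }
}
{code}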






[jira] [Created] (HADOOP-17814) Provide fallbacks for identity/cost providers and backoff enable

2021-07-24 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17814:
-

 Summary: Provide fallbacks for identity/cost providers and backoff 
enable
 Key: HADOOP-17814
 URL: https://issues.apache.org/jira/browse/HADOOP-17814
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


This sub-task is to provide default properties for identity-provider.impl, 
cost-provider.impl and backoff.enable, such that if the port-specific property is 
not configured, we can fall back to the default (port-less) property.






[jira] [Created] (HADOOP-17808) ipc.Client not setting interrupt flag after catching InterruptedException

2021-07-20 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17808:
-

 Summary: ipc.Client not setting interrupt flag after catching 
InterruptedException
 Key: HADOOP-17808
 URL: https://issues.apache.org/jira/browse/HADOOP-17808
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


ipc.Client is swallowing InterruptedException at a couple of places:
 # While waiting on all connections to be closed
 # While waiting to retrieve some RPC response

We should at least restore the interrupt flag and also log the 
InterruptedException caught.
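
The general pattern being asked for, sketched outside of ipc.Client (the logger and 
the wait loop are illustrative):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Sketch: never swallow InterruptedException silently; restore the flag and log it. */
public class InterruptHandlingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(InterruptHandlingSketch.class);

  static void waitForCompletion(Thread worker) {
    try {
      worker.join(10_000);
    } catch (InterruptedException e) {
      // Restore the interrupt status so callers further up the stack can still see it...
      Thread.currentThread().interrupt();
      // ...and leave a trace instead of swallowing the exception.
      LOG.warn("Interrupted while waiting for {} to finish", worker.getName(), e);
    }
  }

  public static void main(String[] args) {
    Thread worker = new Thread(() -> { }, "worker");
    worker.start();
    waitForCompletion(worker);
  }
}
{code}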






[jira] [Created] (HADOOP-17795) Provide fallbacks for callqueue.impl and scheduler.impl

2021-07-09 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17795:
-

 Summary: Provide fallbacks for callqueue.impl and scheduler.impl
 Key: HADOOP-17795
 URL: https://issues.apache.org/jira/browse/HADOOP-17795
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As mentioned in the parent Jira, we should provide default properties for 
callqueue.impl and scheduler.impl such that if the port-specific property is not 
configured, we can fall back to the default property. If "ipc.8020.callqueue.impl" 
is not present, the fallback property could be "ipc.callqueue.impl" (without the 
port). We can take up the rest of the callqueue properties in separate sub-tasks.
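
A hedged sketch of the lookup order being proposed, using Configuration directly. 
The key names follow the example above; the real change lives inside the IPC server 
wiring, and the default value shown is only a placeholder:
{code:java}
import org.apache.hadoop.conf.Configuration;

/** Sketch: prefer the port-specific key, fall back to the port-less default. */
public class CallQueueFallbackSketch {
  static String resolveCallQueueImpl(Configuration conf, int port) {
    String portSpecificKey = "ipc." + port + ".callqueue.impl";  // e.g. ipc.8020.callqueue.impl
    String defaultKey = "ipc.callqueue.impl";                    // port-less fallback
    // Configuration#get(key, defaultValue) returns the default when the key is absent.
    return conf.get(portSpecificKey,
        conf.get(defaultKey, "java.util.concurrent.LinkedBlockingQueue"));
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("ipc.callqueue.impl", "org.apache.hadoop.ipc.FairCallQueue");
    System.out.println(resolveCallQueueImpl(conf, 8020)); // falls back to the port-less value
  }
}
{code}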






[jira] [Created] (HADOOP-17788) Replace IOUtils#closeQuietly usages

2021-07-02 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17788:
-

 Summary: Replace IOUtils#closeQuietly usages
 Key: HADOOP-17788
 URL: https://issues.apache.org/jira/browse/HADOOP-17788
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


commons-io's IOUtils#closeQuietly has been deprecated since the 2.6 release without 
any direct replacement. Since we already have a good replacement available in 
Hadoop's own IOUtils, we should use it.
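
A small sketch of the intended substitution, assuming org.apache.hadoop.io.IOUtils 
with its cleanupWithLogger() helper is the chosen replacement (it logs close 
failures instead of dropping them silently):
{code:java}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.hadoop.io.IOUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CloseQuietlyReplacementSketch {
  private static final Logger LOG = LoggerFactory.getLogger(CloseQuietlyReplacementSketch.class);

  public static void main(String[] args) throws Exception {
    Path tmp = Files.createTempFile("sample", ".txt");
    InputStream in = Files.newInputStream(tmp);
    try {
      System.out.println(in.read());
    } finally {
      // Before: org.apache.commons.io.IOUtils.closeQuietly(in);   (deprecated since commons-io 2.6)
      // After: Hadoop's own helper, which accepts any number of Closeables.
      IOUtils.cleanupWithLogger(LOG, in);
      Files.deleteIfExists(tmp);
    }
  }
}
{code}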






[jira] [Resolved] (HADOOP-17114) Replace Guava initialization of Lists.newArrayList

2021-06-18 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HADOOP-17114.
---
Resolution: Duplicate

> Replace Guava initialization of Lists.newArrayList
> --
>
> Key: HADOOP-17114
> URL: https://issues.apache.org/jira/browse/HADOOP-17114
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Priority: Major
>
> There are unjustified uses of Guava APIs to initialize LinkedLists and 
> ArrayLists. These can simply be replaced by the Java API.
> By analyzing the hadoop code, the best way to replace guava is to do the 
> following steps:
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)
>  
> After this class is created, we can simply replace the import statement in 
> all the source code.
>  
> {code:java}
> Targets
> Occurrences of 'com.google.common.collect.Lists;' in project with mask 
> '*.java'
> Found Occurrences  (246 usages found)
> org.apache.hadoop.conf  (1 usage found)
> TestReconfiguration.java  (1 usage found)
> 22 import com.google.common.collect.Lists;
> org.apache.hadoop.crypto  (1 usage found)
> CryptoCodec.java  (1 usage found)
> 35 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.azurebfs  (3 usages found)
> ITestAbfsIdentityTransformer.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> ITestAzureBlobFilesystemAcl.java  (1 usage found)
> 21 import com.google.common.collect.Lists;
> ITestAzureBlobFileSystemCheckAccess.java  (1 usage found)
> 20 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.http.client  (2 usages found)
> BaseTestHttpFSWith.java  (1 usage found)
> 77 import com.google.common.collect.Lists;
> HttpFSFileSystem.java  (1 usage found)
> 75 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.permission  (2 usages found)
> AclStatus.java  (1 usage found)
> 27 import com.google.common.collect.Lists;
> AclUtil.java  (1 usage found)
> 26 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a  (3 usages found)
> ITestS3AFailureHandling.java  (1 usage found)
> 23 import com.google.common.collect.Lists;
> ITestS3GuardListConsistency.java  (1 usage found)
> 34 import com.google.common.collect.Lists;
> S3AUtils.java  (1 usage found)
> 57 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> RolePolicies.java  (1 usage found)
> 26 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.commit  (2 usages found)
> ITestCommitOperations.java  (1 usage found)
> 28 import com.google.common.collect.Lists;
> TestMagicCommitPaths.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.commit.staging  (3 usages found)
> StagingTestBase.java  (1 usage found)
> 47 import com.google.common.collect.Lists;
> TestStagingPartitionedFileListing.java  (1 usage found)
> 31 import com.google.common.collect.Lists;
> TestStagingPartitionedTaskCommit.java  (1 usage found)
> 28 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.impl  (2 usages found)
> RenameOperation.java  (1 usage found)
> 30 import com.google.common.collect.Lists;
> TestPartialDeleteFailures.java  (1 usage found)
> 37 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.s3guard  (3 usages found)
> DumpS3GuardDynamoTable.java  (1 usage found)
> 38 import com.google.common.collect.Lists;
> DynamoDBMetadataStore.java  (1 usage found)
> 67 import com.google.common.collect.Lists;
> ITestDynamoDBMetadataStore.java  (1 usage found)
> 49 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.shell  (1 usage found)
> AclCommands.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> 

[jira] [Created] (HADOOP-17753) Keep restrict-imports-enforcer-rule for Guava Lists in hadoop-main pom

2021-06-09 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17753:
-

 Summary: Keep restrict-imports-enforcer-rule for Guava Lists in 
hadoop-main pom
 Key: HADOOP-17753
 URL: https://issues.apache.org/jira/browse/HADOOP-17753
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17743) Replace Guava Lists usage by Hadoop's own Lists in hadoop-common, hadoop-tools and cloud-storage projects

2021-06-03 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17743:
-

 Summary: Replace Guava Lists usage by Hadoop's own Lists in 
hadoop-common, hadoop-tools and cloud-storage projects
 Key: HADOOP-17743
 URL: https://issues.apache.org/jira/browse/HADOOP-17743
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17732) Keep restrict-imports-enforcer-rule for Guava Sets in hadoop-main pom

2021-05-25 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17732:
-

 Summary: Keep restrict-imports-enforcer-rule for Guava Sets in 
hadoop-main pom
 Key: HADOOP-17732
 URL: https://issues.apache.org/jira/browse/HADOOP-17732
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Now that all sub-tasks to remove the dependency on Guava Sets are completed, we 
should move the restrict-imports-enforcer-rule for the Guava Sets import to the 
hadoop-main pom and remove it from the individual project poms.






[jira] [Created] (HADOOP-17726) Replace Sets#newHashSet() and newTreeSet() with constructors directly

2021-05-21 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17726:
-

 Summary: Replace Sets#newHashSet() and newTreeSet() with 
constructors directly
 Key: HADOOP-17726
 URL: https://issues.apache.org/jira/browse/HADOOP-17726
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani


As per the guidance provided by Guava for Sets#newHashSet() and Sets#newTreeSet(), 
we should get rid of them and use new HashSet<>() and new TreeSet<>() directly 
(see the sketch below).

Once HADOOP-17115, HADOOP-17721, HADOOP-17722 and HADOOP-17720 are fixed, please 
feel free to take this up.
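
The change itself is mechanical; a minimal before/after illustration:
{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetsReplacementSketch {
  public static void main(String[] args) {
    // Before (wrapper methods that only existed to avoid spelling out type arguments):
    //   Set<String> hosts = Sets.newHashSet();
    //   Set<String> sorted = Sets.newTreeSet();

    // After: plain constructors with the diamond operator.
    Set<String> hosts = new HashSet<>();
    Set<String> sorted = new TreeSet<>();

    hosts.add("nn1");
    sorted.addAll(hosts);
    System.out.println(sorted);
  }
}
{code}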






[jira] [Created] (HADOOP-17722) Replace Guava Sets usage by Hadoop's own Sets in MapReduce

2021-05-20 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17722:
-

 Summary: Replace Guava Sets usage by Hadoop's own Sets in MapReduce
 Key: HADOOP-17722
 URL: https://issues.apache.org/jira/browse/HADOOP-17722
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17721) Replace Guava Sets usage by Hadoop's own Sets in Yarn

2021-05-20 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17721:
-

 Summary: Replace Guava Sets usage by Hadoop's own Sets in Yarn
 Key: HADOOP-17721
 URL: https://issues.apache.org/jira/browse/HADOOP-17721
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17720) Replace Guava Sets usage by Hadoop's own Sets in HDFS

2021-05-20 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17720:
-

 Summary: Replace Guava Sets usage by Hadoop's own Sets in HDFS
 Key: HADOOP-17720
 URL: https://issues.apache.org/jira/browse/HADOOP-17720
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17676) Restrict imports from org.apache.curator.shaded

2021-04-29 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17676:
-

 Summary: Restrict imports from org.apache.curator.shaded
 Key: HADOOP-17676
 URL: https://issues.apache.org/jira/browse/HADOOP-17676
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Once HADOOP-17653 gets in, we should ban "org.apache.curator.shaded" imports as 
discussed on PR#2945. We can use an enforcer rule to restrict the imports such 
that if they are ever used, the mvn build fails.

Thanks for the suggestion [~weichiu] [~aajisaka] [~ste...@apache.org]






[jira] [Created] (HADOOP-17642) Could not instantiate class org.apache.hadoop.log.metrics.EventCounter

2021-04-16 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17642:
-

 Summary: Could not instantiate class 
org.apache.hadoop.log.metrics.EventCounter
 Key: HADOOP-17642
 URL: https://issues.apache.org/jira/browse/HADOOP-17642
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


After the removal of the EventCounter class, we are not able to bring up an HDFS 
cluster.
{code:java}
log4j:ERROR Could not instantiate class 
[org.apache.hadoop.log.metrics.EventCounter].
java.lang.ClassNotFoundException: org.apache.hadoop.log.metrics.EventCounter
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at 
org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at 
org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at 
org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at 
org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at 
org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at 
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at 
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at 
org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:72)
at 
org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:388)
at org.apache.hadoop.conf.Configuration.(Configuration.java:229)
at org.apache.hadoop.hdfs.tools.GetConf.(GetConf.java:131)
log4j:ERROR Could not instantiate appender named "EventCounter".
{code}
We need to clean up log4j.properties to avoid instantiating the EventCounter 
appender.






[jira] [Created] (HADOOP-17622) Avoid usage of deprecated IOUtils#cleanup API

2021-04-04 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17622:
-

 Summary: Avoid usage of deprecated IOUtils#cleanup API
 Key: HADOOP-17622
 URL: https://issues.apache.org/jira/browse/HADOOP-17622
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


We can replace usage of deprecated API IOUtils#cleanup() with 
IOUtils#cleanupWithLogger().






[jira] [Resolved] (HADOOP-17616) Some tests in TestBlockRecovery are consistently failing

2021-03-31 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HADOOP-17616.
---
Resolution: Duplicate

> Some tests in TestBlockRecovery are consistently failing
> 
>
> Key: HADOOP-17616
> URL: https://issues.apache.org/jira/browse/HADOOP-17616
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Viraj Jasani
>Priority: Major
>
> Some long running tests in TestBlockRecovery are consistently failing. Also, 
> TestBlockRecovery is huge with so many tests, we should refactor some of long 
> running and race condition specific tests to separate class.






[jira] [Created] (HADOOP-17616) Some tests in TestBlockRecovery are consistently failing

2021-03-31 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17616:
-

 Summary: Some tests in TestBlockRecovery are consistently failing
 Key: HADOOP-17616
 URL: https://issues.apache.org/jira/browse/HADOOP-17616
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.4.0
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Some long running tests in TestBlockRecovery are consistently failing. Also, since 
TestBlockRecovery is huge with so many tests, we should refactor some of the long 
running and race-condition-specific tests into a separate class.






[jira] [Created] (HADOOP-17612) Bump default Zookeeper version to 3.7.0

2021-03-30 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17612:
-

 Summary: Bump default Zookeeper version to 3.7.0
 Key: HADOOP-17612
 URL: https://issues.apache.org/jira/browse/HADOOP-17612
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


We can bump Zookeeper version to 3.7.0.






[jira] [Resolved] (HADOOP-17574) Build failure on trunk

2021-03-10 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HADOOP-17574.
---
Resolution: Duplicate

> Build failure on trunk
> --
>
> Key: HADOOP-17574
> URL: https://issues.apache.org/jira/browse/HADOOP-17574
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Viraj Jasani
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Build is broken on trunk:
> hadoop-huaweicloud: Compilation failure
> [ERROR] 
> hadoop/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileSystem.java:[396,58]
>  incompatible types: org.apache.hadoop.util.BlockingThreadPoolExecutorService 
> cannot be converted to 
> org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ListeningExecutorService






[jira] [Created] (HADOOP-17574) Build failure on trunk

2021-03-10 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17574:
-

 Summary: Build failure on trunk
 Key: HADOOP-17574
 URL: https://issues.apache.org/jira/browse/HADOOP-17574
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.4.0
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Build is broken on trunk:

hadoop-huaweicloud: Compilation failure

[ERROR] 
hadoop/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileSystem.java:[396,58]
 incompatible types: org.apache.hadoop.util.BlockingThreadPoolExecutorService 
cannot be converted to 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ListeningExecutorService






[jira] [Created] (HADOOP-17571) Upgrade com.fasterxml.woodstox:woodstox-core for security reasons

2021-03-09 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17571:
-

 Summary: Upgrade com.fasterxml.woodstox:woodstox-core for security 
reasons
 Key: HADOOP-17571
 URL: https://issues.apache.org/jira/browse/HADOOP-17571
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani


Due to security concerns (CVE: sonatype-2018-0624), we should bump up 
woodstox-core to 5.3.0.






[jira] [Created] (HADOOP-16672) missing null check for UserGroupInformation during IOStream setup

2019-10-28 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-16672:
-

 Summary: missing null check for UserGroupInformation during IOStream setup
 Key: HADOOP-16672
 URL: https://issues.apache.org/jira/browse/HADOOP-16672
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.3.0
Reporter: Viraj Jasani


While setting up IOStreams, we might end up with an NPE if the UserGroupInformation 
returned from the getTicket() call is null. Similar to other operations, we should 
add a null check before the ticket.doAs() call.
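
A hedged sketch of the guard being proposed. The surrounding method is heavily 
simplified; in ipc.Client the ticket comes from the connection id during IOStream 
setup:
{code:java}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;

/** Sketch: null-check the ticket before calling doAs() while setting up the streams. */
public class TicketNullCheckSketch {

  static void setupStreamsAs(UserGroupInformation ticket) throws Exception {
    if (ticket == null) {
      // Similar to the other call sites: fail with a clear error rather than an NPE.
      throw new IOException("No user information available for connection setup");
    }
    ticket.doAs((PrivilegedExceptionAction<Void>) () -> {
      // ... establish the connection's IO streams as the given user ...
      return null;
    });
  }

  public static void main(String[] args) throws Exception {
    setupStreamsAs(UserGroupInformation.getCurrentUser());
  }
}
{code}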


