[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-10-15 Thread lqjacklee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952432#comment-16952432
 ] 

lqjacklee commented on HADOOP-15870:


HADOOP-15870-006.patch provides three options:
* `support-available-is-zero` to check whether available() is allowed to 
return zero
* `support-available-is-positive` to check whether available() must be 
positive
* `support-available-at-eof` to check the available() behaviour at EOF
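
For context, a minimal sketch of the change this issue asks for, assuming 
S3AInputStream fields named contentLength and nextReadPos (illustrative 
names; not necessarily the patch as attached):

// Report bytes remaining relative to where the next read will actually
// start, so the value changes after a seek().
public synchronized long remainingInFile() {
  return this.contentLength - this.nextReadPos;
}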

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch, HADOOP-15870-006.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-10-15 Thread lqjacklee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15870:
---
Attachment: HADOOP-15870-006.patch

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch, HADOOP-15870-006.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-10-14 Thread lqjacklee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950806#comment-16950806
 ] 

lqjacklee commented on HADOOP-15870:


[~ayushtkn] Thank you very much. We will check it.

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-10-11 Thread lqjacklee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949859#comment-16949859
 ] 

lqjacklee commented on HADOOP-15961:


[~ste...@apache.org] Thanks for the reply; I will create another patch for 
this task. 

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch, 
> HADOOP-15961-003.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough data 
> to the local FS that the upload takes longer than the timeout.
> It should call progress() after every single file commit, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.
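
A rough sketch of the Progressable idea in the last paragraph, using 
org.apache.hadoop.util.Progressable; the method names and signatures here 
are illustrative, not the actual committer code:

// Report progress after each file is committed, so the task is not
// killed by the liveness monitor while long uploads are in flight.
void commitFiles(List<File> files, Progressable progress) throws IOException {
  for (File file : files) {
    uploadFileToPendingCommit(file); // existing upload step (assumed name)
    progress.progress();             // ping the MapReduce progress machinery
  }
}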






[jira] [Commented] (HADOOP-16577) Build fails as can't retrieve websocket-servlet

2019-09-17 Thread lqjacklee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931457#comment-16931457
 ] 

lqjacklee commented on HADOOP-16577:


Retrieval of 
/org/eclipse/jetty/websocket/websocket-server/9.3.27.v20190418/websocket-server-9.3.27.v20190418.jar
 from M2Repository(id=snapshots) is forbidden by repository policy SNAPSHOT.

It seems to be a repository policy settings issue.

> Build fails as can't retrieve websocket-servlet
> ---
>
> Key: HADOOP-16577
> URL: https://issues.apache.org/jira/browse/HADOOP-16577
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Erkin Alp Güney
>Priority: Blocker
>  Labels: dependencies
>
> I encountered this error when building Hadoop:
> Downloading: 
> https://repository.apache.org/content/repositories/snapshots/org/eclipse/jetty/websocket/websocket-server/9.3.27.v20190418/websocket-server-9.3.27.v20190418.jar
> Sep 15, 2019 7:54:39 AM 
> org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec 
> execute
> INFO: I/O exception 
> (org.apache.maven.wagon.providers.http.httpclient.NoHttpResponseException) 
> caught when processing request to {s}->https://repository.apache.org:443: The 
> target server failed to respond
> Sep 15, 2019 7:54:39 AM 
> org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec 
> execute






[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error

2019-09-10 Thread lqjacklee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927182#comment-16927182
 ] 

lqjacklee commented on HADOOP-16543:


[~roliu] Should [HDDS-1933|https://issues.apache.org/jira/browse/HDDS-1933] 
see the same issue? 

> Cached DNS name resolution error
> 
>
> Key: HADOOP-16543
> URL: https://issues.apache.org/jira/browse/HADOOP-16543
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: Roger Liu
>Priority: Major
>
> In Kubernetes, a node may go down and then come back later with a 
> different IP address. Yarn clients which are already running will be unable 
> to rediscover the node after it comes back up due to caching the original IP 
> address. This is problematic for cases such as Spark HA on Kubernetes, as the 
> node containing the resource manager may go down and come back up, meaning 
> existing node managers must then also be restarted.






[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error

2019-09-03 Thread lqjacklee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921957#comment-16921957
 ] 

lqjacklee commented on HADOOP-16543:


I think two things need to change (see the sketch below): 
1, change the IP to an alias/DNS name
2, update the alias/DNS name to the new IP address (this part should be done 
in Kubernetes). 
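
As an aside, a related JVM-level knob matters here too: the standard 
java.security property that bounds how long successful DNS lookups are 
cached. A minimal sketch (30 seconds is an arbitrary example value):

// Must be set before the first lookup; with a finite TTL, a node that
// comes back with a new IP address is eventually re-resolved instead of
// being served stale from the JVM's cache.
java.security.Security.setProperty("networkaddress.cache.ttl", "30");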

> Cached DNS name resolution error
> 
>
> Key: HADOOP-16543
> URL: https://issues.apache.org/jira/browse/HADOOP-16543
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: Roger Liu
>Priority: Major
>
> In Kubernetes, a node may go down and then come back later with a 
> different IP address. Yarn clients which are already running will be unable 
> to rediscover the node after it comes back up due to caching the original IP 
> address. This is problematic for cases such as Spark HA on Kubernetes, as the 
> node containing the resource manager may go down and come back up, meaning 
> existing node managers must then also be restarted.






[jira] [Commented] (HADOOP-16435) RpcMetrics should not be retained forever

2019-07-24 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892330#comment-16892330
 ] 

lqjacklee commented on HADOOP-16435:


[~kgyrtkirk] I mean that the end of a session does not imply that the server 
has ended. 

> RpcMetrics should not be retained forever
> -
>
> Key: HADOOP-16435
> URL: https://issues.apache.org/jira/browse/HADOOP-16435
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Critical
> Attachments: HADOOP-16435.01.patch, classes.png, 
> defaultMetricsHoldsRpcMetrics.png, related.jxray.png, rpcm.hprof.xz
>
>
> * RpcMetrics objects are registered into 
> [defaultmetricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L101]
> * although there is a shutdown() call (which is actually invoked) it doesn't 
> unregister itself from the 
> [metricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L185]
> * RpcDetailedMetrics has the same issue
> background
> * hiveserver2 slowly eats up memory when running simple queries in new 
> sessions (select 1)
> * every session opens a tezsession
> * tezsession has rpcmetrics
> * with a 150M heap, after around 30 sessions the JVM gets an OutOfMemoryError
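
A minimal sketch of the kind of fix the bullets above imply, assuming the 
source was registered under a name held in a field called name (illustrative, 
not the attached patch):

// On shutdown, also unregister from the default metrics system so the
// RpcMetrics instance can be garbage collected.
public void shutdown() {
  DefaultMetricsSystem.instance().unregisterSource(name);
}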






[jira] [Commented] (HADOOP-16435) RpcMetrics should not be retained forever

2019-07-24 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891915#comment-16891915
 ] 

lqjacklee commented on HADOOP-16435:


[~kgyrtkirk] org.apache.hadoop.ipc.metrics.RpcMetrics#shutdown will be called 
when the server is stopped, so I wonder whether the solution is OK? 

> RpcMetrics should not be retained forever
> -
>
> Key: HADOOP-16435
> URL: https://issues.apache.org/jira/browse/HADOOP-16435
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Critical
> Attachments: HADOOP-16435.01.patch, classes.png, 
> defaultMetricsHoldsRpcMetrics.png, related.jxray.png, rpcm.hprof.xz
>
>
> * RpcMetrics objects are registered into 
> [defaultmetricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L101]
> * although there is a shutdown() call (which is actually invoked) it doesn't 
> unregister itself from the 
> [metricssystem|https://github.com/apache/hadoop/blob/85d9111a88f94a5e6833cd142272be2c5823e922/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java#L185]
> * RpcDetailedMetrics has the same issue
> background
> * hiveserver2 slowly eats up memory when running simple queries in new 
> sessions (select 1)
> * every session opens a tezsession
> * tezsession has rpcmetrics
> * with a 150M heap, after around 30 sessions the JVM gets an OutOfMemoryError






[jira] [Commented] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 0

2019-07-01 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876225#comment-16876225
 ] 

lqjacklee commented on HADOOP-15847:


[~ste...@apache.org] Thanks for the feedback. I have updated the merge 
request; please review, thanks.

> S3Guard testConcurrentTableCreations to set r & w capacity == 0
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch, HADOOP-15847-002.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often
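
As a sketch of the recommendation, a test could pin the capacity through the 
S3Guard DynamoDB configuration keys (org.apache.hadoop.conf.Configuration; 
the key strings are assumed to match the S3A constants):

// Minimal provisioned capacity, so an interrupted run cannot leave an
// expensive table behind.
Configuration conf = new Configuration();
conf.setInt("fs.s3a.s3guard.ddb.table.capacity.read", 1);
conf.setInt("fs.s3a.s3guard.ddb.table.capacity.write", 1);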






[jira] [Updated] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2019-06-29 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15847:
---
Status: In Progress  (was: Patch Available)

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch, HADOOP-15847-002.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often






[jira] [Commented] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2019-06-29 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875538#comment-16875538
 ] 

lqjacklee commented on HADOOP-15847:


https://github.com/apache/hadoop/pull/1037

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch, HADOOP-15847-002.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often






[jira] [Commented] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2019-06-29 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875536#comment-16875536
 ] 

lqjacklee commented on HADOOP-15847:


[~ste...@apache.org] How can we check whether the bucket is on-demand?

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch, HADOOP-15847-002.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often






[jira] [Commented] (HADOOP-13786) Add S3A committers for zero-rename commits to S3 endpoints

2019-06-27 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874307#comment-16874307
 ] 

lqjacklee commented on HADOOP-13786:


[~ste...@apache.org] great job. 

> Add S3A committers for zero-rename commits to S3 endpoints
> --
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-13786-036.patch, HADOOP-13786-037.patch, 
> HADOOP-13786-038.patch, HADOOP-13786-039.patch, 
> HADOOP-13786-HADOOP-13345-001.patch, HADOOP-13786-HADOOP-13345-002.patch, 
> HADOOP-13786-HADOOP-13345-003.patch, HADOOP-13786-HADOOP-13345-004.patch, 
> HADOOP-13786-HADOOP-13345-005.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-007.patch, 
> HADOOP-13786-HADOOP-13345-009.patch, HADOOP-13786-HADOOP-13345-010.patch, 
> HADOOP-13786-HADOOP-13345-011.patch, HADOOP-13786-HADOOP-13345-012.patch, 
> HADOOP-13786-HADOOP-13345-013.patch, HADOOP-13786-HADOOP-13345-015.patch, 
> HADOOP-13786-HADOOP-13345-016.patch, HADOOP-13786-HADOOP-13345-017.patch, 
> HADOOP-13786-HADOOP-13345-018.patch, HADOOP-13786-HADOOP-13345-019.patch, 
> HADOOP-13786-HADOOP-13345-020.patch, HADOOP-13786-HADOOP-13345-021.patch, 
> HADOOP-13786-HADOOP-13345-022.patch, HADOOP-13786-HADOOP-13345-023.patch, 
> HADOOP-13786-HADOOP-13345-024.patch, HADOOP-13786-HADOOP-13345-025.patch, 
> HADOOP-13786-HADOOP-13345-026.patch, HADOOP-13786-HADOOP-13345-027.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-029.patch, HADOOP-13786-HADOOP-13345-030.patch, 
> HADOOP-13786-HADOOP-13345-031.patch, HADOOP-13786-HADOOP-13345-032.patch, 
> HADOOP-13786-HADOOP-13345-033.patch, HADOOP-13786-HADOOP-13345-035.patch, 
> MAPREDUCE-6823-003.patch, cloud-intergration-test-failure.log, 
> objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.
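
A bare-bones illustration of the zero-rename idea with the AWS SDK v1 (not 
the committer implementation itself): start a multipart upload, then delay 
the completion call until commit time, so nothing becomes visible early.

// Begin the upload; uploaded parts stay invisible until completion.
InitiateMultipartUploadResult init = s3.initiateMultipartUpload(
    new InitiateMultipartUploadRequest(bucket, key));
// ... upload parts, collecting PartETag objects into 'etags' ...
// At job commit, make the object visible in one call:
s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
    bucket, key, init.getUploadId(), etags));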






[jira] [Commented] (HADOOP-16399) Add missing getFileBlockLocations overload to FilterFileSystem

2019-06-27 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874216#comment-16874216
 ] 

lqjacklee commented on HADOOP-16399:


[~ste...@apache.org] Which subclass is not implemented?

> Add missing getFileBlockLocations overload to FilterFileSystem
> --
>
> Key: HADOOP-16399
> URL: https://issues.apache.org/jira/browse/HADOOP-16399
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: David Phillips
>Priority: Major
>
> The {{getFileBlockLocations(Path, long, long)}} overload is missing.
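
The fix is presumably a one-line delegating override along these lines 
(a sketch; fs is FilterFileSystem's wrapped FileSystem instance):

@Override
public BlockLocation[] getFileBlockLocations(Path p, long start, long len)
    throws IOException {
  // Delegate to the wrapped filesystem instead of inheriting the
  // FileStatus-based default from FileSystem.
  return fs.getFileBlockLocations(p, start, len);
}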






[jira] [Updated] (HADOOP-15410) hoop-auth org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider org.apache.log4j package compile error

2019-06-23 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15410:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> hoop-auth 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider 
> org.apache.log4j package compile error
> --
>
> Key: HADOOP-15410
> URL: https://issues.apache.org/jira/browse/HADOOP-15410
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: lqjack
>Priority: Major
> Attachments: HADOOP-15410.000.patch
>
>
> When running 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider, 
> the IDE will automatically compile the Java classes, but unluckily the 
> org.apache.log4j package fails to compile. 
> The pom.xml should change 
> <dependency>
>   <groupId>log4j</groupId>
>   <artifactId>log4j</artifactId>
>   <scope>runtime</scope>
> </dependency>
> to 
> <dependency>
>   <groupId>log4j</groupId>
>   <artifactId>log4j</artifactId>
>   <scope>compile</scope>
> </dependency>






[jira] [Assigned] (HADOOP-15410) hoop-auth org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider org.apache.log4j package compile error

2019-06-23 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee reassigned HADOOP-15410:
--

Assignee: lqjacklee

> hoop-auth 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider 
> org.apache.log4j package compile error
> --
>
> Key: HADOOP-15410
> URL: https://issues.apache.org/jira/browse/HADOOP-15410
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: lqjack
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15410.000.patch
>
>
> When running 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider, 
> the IDE will automatically compile the Java classes, but unluckily the 
> org.apache.log4j package fails to compile. 
> The pom.xml should change 
> <dependency>
>   <groupId>log4j</groupId>
>   <artifactId>log4j</artifactId>
>   <scope>runtime</scope>
> </dependency>
> to 
> <dependency>
>   <groupId>log4j</groupId>
>   <artifactId>log4j</artifactId>
>   <scope>compile</scope>
> </dependency>






[jira] [Updated] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-04-09 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15961:
---
Attachment: HADOOP-15961-003.patch

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch, 
> HADOOP-15961-003.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough data 
> to the local FS that the upload takes longer than the timeout.
> It should call progress() after every single file commit, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.






[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-04-03 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809420#comment-16809420
 ] 

lqjacklee commented on HADOOP-15961:


[~ste...@apache.org] I am working on this task, thanks. 

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough data 
> to the local FS that the upload takes longer than the timeout.
> It should call progress() after every single file commit, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.






[jira] [Resolved] (HADOOP-16121) Cannot build in dev docker environment

2019-03-10 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee resolved HADOOP-16121.

Resolution: Resolved
  Assignee: lqjacklee

> Cannot build in dev docker environment
> --
>
> Key: HADOOP-16121
> URL: https://issues.apache.org/jira/browse/HADOOP-16121
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
> Environment: Darwin lqjacklee-MacBook-Pro.local 18.2.0 Darwin Kernel 
> Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; 
> root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64
>Reporter: lqjacklee
>Assignee: lqjacklee
>Priority: Minor
>
> Operation as below : 
>  
> 1, run the docker daemon
> 2, run ./start-build-env.sh
> 3, mvn clean package -DskipTests 
>  
> Response from the command line : 
>  
> [ERROR] Plugin org.apache.maven.plugins:maven-surefire-plugin:2.17 or one of 
> its dependencies could not be resolved: Failed to read artifact descriptor 
> for org.apache.maven.plugins:maven-surefire-plugin:jar:2.17: Could not 
> transfer artifact org.apache.maven.plugins:maven-surefire-plugin:pom:2.17 
> from/to central (https://repo.maven.apache.org/maven2): 
> /home/liu/.m2/repository/org/apache/maven/plugins/maven-surefire-plugin/2.17/maven-surefire-plugin-2.17.pom.part.lock
>  (No such file or directory) -> [Help 1] 
>  
> solution : 
> a, sudo chmod -R 775 ${USER_HOME}/.m2/
> b, sudo chown -R ${USER_NAME} ${USER_HOME}/.m2
>  
> After trying these ways, it is still in trouble. 
>  
> c, sudo mvn clean package -DskipTests. But in this way, will it download the 
> files (pom, jar) twice? 






[jira] [Commented] (HADOOP-16121) Cannot build in dev docker environment

2019-03-10 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789128#comment-16789128
 ] 

lqjacklee commented on HADOOP-16121:


sudo ln -s /opt/protobuf/bin/protoc /usr/bin/protoc
sudo ln -s /opt/cmake/bin/cmake /usr/bin/cmake

> Cannot build in dev docker environment
> --
>
> Key: HADOOP-16121
> URL: https://issues.apache.org/jira/browse/HADOOP-16121
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
> Environment: Darwin lqjacklee-MacBook-Pro.local 18.2.0 Darwin Kernel 
> Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; 
> root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64
>Reporter: lqjacklee
>Priority: Minor
>
> Operation as below : 
>  
> 1, run the docker daemon
> 2, run ./start-build-env.sh
> 3, mvn clean package -DskipTests 
>  
> Response from the command line : 
>  
> [ERROR] Plugin org.apache.maven.plugins:maven-surefire-plugin:2.17 or one of 
> its dependencies could not be resolved: Failed to read artifact descriptor 
> for org.apache.maven.plugins:maven-surefire-plugin:jar:2.17: Could not 
> transfer artifact org.apache.maven.plugins:maven-surefire-plugin:pom:2.17 
> from/to central (https://repo.maven.apache.org/maven2): 
> /home/liu/.m2/repository/org/apache/maven/plugins/maven-surefire-plugin/2.17/maven-surefire-plugin-2.17.pom.part.lock
>  (No such file or directory) -> [Help 1] 
>  
> solution : 
> a, sudo chmod -R 775 ${USER_HOME}/.m2/
> b, sudo chown -R ${USER_NAME} ${USER_HOME}/.m2
>  
> After trying these ways, it is still in trouble. 
>  
> c, sudo mvn clean package -DskipTests. But in this way, will it download the 
> files (pom, jar) twice? 






[jira] [Updated] (HADOOP-16123) Lack of protoc in docker

2019-03-09 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-16123:
---
Attachment: HADOOP-16123-002.patch

> Lack of protoc in docker
> 
>
> Key: HADOOP-16123
> URL: https://issues.apache.org/jira/browse/HADOOP-16123
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
>Reporter: lqjacklee
>Priority: Minor
> Attachments: HADOOP-16123-001.patch, HADOOP-16123-002.patch
>
>
> To build the source code, follow the steps below: 
>  
> 1, run docker daemon 
> 2, ./start-build-env.sh
> 3, sudo mvn clean install -DskipTests -Pnative 
> The response is: 
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
> on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
> 'protoc --version' did not return a version -> 
> [Help 1]
> However, when executing `whereis protoc`: 
> liu@a65d187055f9:~/hadoop$ whereis protoc
> protoc: /opt/protobuf/bin/protoc
>  
> the PATH value : 
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin
>  
> liu@a65d187055f9:~/hadoop$ protoc --version
> libprotoc 2.5.0
>  
>  






[jira] [Commented] (HADOOP-16123) Lack of protoc in docker

2019-03-09 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788575#comment-16788575
 ] 

lqjacklee commented on HADOOP-16123:


The 002 patch removes the test case from the YARN NodeManager.

> Lack of protoc in docker
> 
>
> Key: HADOOP-16123
> URL: https://issues.apache.org/jira/browse/HADOOP-16123
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
>Reporter: lqjacklee
>Priority: Minor
> Attachments: HADOOP-16123-001.patch, HADOOP-16123-002.patch
>
>
> To build the source code, follow the steps below: 
>  
> 1, run docker daemon 
> 2, ./start-build-env.sh
> 3, sudo mvn clean install -DskipTests -Pnative 
> The response is: 
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
> on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
> 'protoc --version' did not return a version -> 
> [Help 1]
> However, when executing `whereis protoc`: 
> liu@a65d187055f9:~/hadoop$ whereis protoc
> protoc: /opt/protobuf/bin/protoc
>  
> the PATH value : 
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin
>  
> liu@a65d187055f9:~/hadoop$ protoc --version
> libprotoc 2.5.0
>  
>  






[jira] [Updated] (HADOOP-16123) Lack of protoc in docker

2019-03-08 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-16123:
---
Status: Patch Available  (was: Open)

> Lack of protoc in docker
> 
>
> Key: HADOOP-16123
> URL: https://issues.apache.org/jira/browse/HADOOP-16123
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
>Reporter: lqjacklee
>Priority: Minor
> Attachments: HADOOP-16123-001.patch
>
>
> To build the source code, follow the steps below: 
>  
> 1, run docker daemon 
> 2, ./start-build-env.sh
> 3, sudo mvn clean install -DskipTests -Pnative 
> The response is: 
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
> on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
> 'protoc --version' did not return a version -> 
> [Help 1]
> However, when executing `whereis protoc`: 
> liu@a65d187055f9:~/hadoop$ whereis protoc
> protoc: /opt/protobuf/bin/protoc
>  
> the PATH value : 
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin
>  
> liu@a65d187055f9:~/hadoop$ protoc --version
> libprotoc 2.5.0
>  
>  






[jira] [Created] (HADOOP-16175) DynamoDBLocal Support

2019-03-08 Thread lqjacklee (JIRA)
lqjacklee created HADOOP-16175:
--

 Summary: DynamoDBLocal Support
 Key: HADOOP-16175
 URL: https://issues.apache.org/jira/browse/HADOOP-16175
 Project: Hadoop Common
  Issue Type: New Feature
  Components: common
Reporter: lqjacklee
Assignee: lqjacklee


DynamoDB Local 
([https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html]) 
provides the ability for a user/developer to run a local environment 
without depending on AWS. 
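
For illustration, an AWS SDK v1 client can be pointed at a local DynamoDB 
instance with an endpoint override (localhost:8000 is DynamoDB Local's 
default port; the region string is arbitrary in this mode):

AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.standard()
    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
        "http://localhost:8000", "us-west-2"))
    .build();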






[jira] [Updated] (HADOOP-16123) Lack of protoc in docker

2019-03-08 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-16123:
---
Attachment: HADOOP-16123-001.patch

> Lack of protoc in docker
> 
>
> Key: HADOOP-16123
> URL: https://issues.apache.org/jira/browse/HADOOP-16123
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
>Reporter: lqjacklee
>Priority: Minor
> Attachments: HADOOP-16123-001.patch
>
>
> To build the source code, follow the steps below: 
>  
> 1, run docker daemon 
> 2, ./start-build-env.sh
> 3, sudo mvn clean install -DskipTests -Pnative 
> The response is: 
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
> on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
> 'protoc --version' did not return a version -> 
> [Help 1]
> However, when executing `whereis protoc`: 
> liu@a65d187055f9:~/hadoop$ whereis protoc
> protoc: /opt/protobuf/bin/protoc
>  
> the PATH value : 
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin
>  
> liu@a65d187055f9:~/hadoop$ protoc --version
> libprotoc 2.5.0
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16123) Lack of protoc in docker

2019-03-08 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788516#comment-16788516
 ] 

lqjacklee commented on HADOOP-16123:


After changing the code in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
to comment out the test-oom-listener target:

#add_executable(test-oom-listener
# main/native/oom-listener/impl/oom_listener.c
# main/native/oom-listener/impl/oom_listener.h
# main/native/oom-listener/test/oom_listener_test_main.cc
#)
#target_link_libraries(test-oom-listener gtest rt)
#output_directory(test-oom-listener test)

it can work fine now. I wonder whether the test can be removed?

> Lack of protoc in docker
> 
>
> Key: HADOOP-16123
> URL: https://issues.apache.org/jira/browse/HADOOP-16123
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
>Reporter: lqjacklee
>Priority: Minor
>
> To build the source code, follow the steps below: 
>  
> 1, run docker daemon 
> 2, ./start-build-env.sh
> 3, sudo mvn clean install -DskipTests -Pnative 
> The response is: 
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
> on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
> 'protoc --version' did not return a version -> 
> [Help 1]
> However, when executing `whereis protoc`: 
> liu@a65d187055f9:~/hadoop$ whereis protoc
> protoc: /opt/protobuf/bin/protoc
>  
> the PATH value : 
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin
>  
> liu@a65d187055f9:~/hadoop$ protoc --version
> libprotoc 2.5.0
>  
>  






[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-20 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15920:
---
Attachment: HADOOP-15920-07.patch

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, 
> HADOOP-15920-06.patch, HADOOP-15920-07.patch
>
>







[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-20 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773615#comment-16773615
 ] 

lqjacklee commented on HADOOP-15920:


HADOOP-15920-06.patch fixes the checkstyle issues. 

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, 
> HADOOP-15920-06.patch
>
>







[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-20 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15920:
---
Attachment: HADOOP-15920-06.patch

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, 
> HADOOP-15920-06.patch
>
>







[jira] [Assigned] (HADOOP-16123) Lack of protoc in docker

2019-02-20 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee reassigned HADOOP-16123:
--

Assignee: (was: lqjacklee)

> Lack of protoc in docker
> 
>
> Key: HADOOP-16123
> URL: https://issues.apache.org/jira/browse/HADOOP-16123
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
>Reporter: lqjacklee
>Priority: Minor
>
> To build the source code, follow the steps below: 
>  
> 1, run docker daemon 
> 2, ./start-build-env.sh
> 3, sudo mvn clean install -DskipTests -Pnative 
> The response is: 
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
> on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
> 'protoc --version' did not return a version -> 
> [Help 1]
> However, when executing `whereis protoc`: 
> liu@a65d187055f9:~/hadoop$ whereis protoc
> protoc: /opt/protobuf/bin/protoc
>  
> the PATH value : 
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin
>  
> liu@a65d187055f9:~/hadoop$ protoc --version
> libprotoc 2.5.0
>  
>  






[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-20 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773579#comment-16773579
 ] 

lqjacklee commented on HADOOP-15920:


Thanks, I will format it. 

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>







[jira] [Commented] (HADOOP-16123) Lack of protoc

2019-02-19 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772513#comment-16772513
 ] 

lqjacklee commented on HADOOP-16123:


[~ste...@apache.org] Please help check this, thanks.

> Lack of protoc 
> ---
>
> Key: HADOOP-16123
> URL: https://issues.apache.org/jira/browse/HADOOP-16123
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: lqjacklee
>Assignee: Steve Loughran
>Priority: Minor
>
> To build the source code, follow the steps below: 
>  
> 1, run docker daemon 
> 2, ./start-build-env.sh
> 3, sudo mvn clean install -DskipTests -Pnative 
> The response is: 
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
> on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
> 'protoc --version' did not return a version -> 
> [Help 1]
> However, when executing `whereis protoc`: 
> liu@a65d187055f9:~/hadoop$ whereis protoc
> protoc: /opt/protobuf/bin/protoc
>  
> the PATH value : 
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin
>  
> liu@a65d187055f9:~/hadoop$ protoc --version
> libprotoc 2.5.0
>  
>  






[jira] [Created] (HADOOP-16123) Lack of protoc

2019-02-19 Thread lqjacklee (JIRA)
lqjacklee created HADOOP-16123:
--

 Summary: Lack of protoc 
 Key: HADOOP-16123
 URL: https://issues.apache.org/jira/browse/HADOOP-16123
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.3.0
Reporter: lqjacklee
Assignee: Steve Loughran


To build the source code, follow the steps below: 

1, run docker daemon 
2, ./start-build-env.sh
3, sudo mvn clean install -DskipTests -Pnative 

The response is: 

[ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
'protoc --version' did not return a version -> 
[Help 1]

However, when executing `whereis protoc`: 

liu@a65d187055f9:~/hadoop$ whereis protoc
protoc: /opt/protobuf/bin/protoc

the PATH value: 
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin

liu@a65d187055f9:~/hadoop$ protoc --version
libprotoc 2.5.0






[jira] [Created] (HADOOP-16121) Cannot build in dev docker environment

2019-02-19 Thread lqjacklee (JIRA)
lqjacklee created HADOOP-16121:
--

 Summary: Cannot build in dev docker environment
 Key: HADOOP-16121
 URL: https://issues.apache.org/jira/browse/HADOOP-16121
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.3.0
 Environment: Darwin lqjacklee-MacBook-Pro.local 18.2.0 Darwin Kernel 
Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; 
root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64
Reporter: lqjacklee
Assignee: Steve Loughran


Operation as below: 

1, run the docker daemon
2, run ./start-build-env.sh
3, mvn clean package -DskipTests 

Response from the command line: 

[ERROR] Plugin org.apache.maven.plugins:maven-surefire-plugin:2.17 or one of 
its dependencies could not be resolved: Failed to read artifact descriptor for 
org.apache.maven.plugins:maven-surefire-plugin:jar:2.17: Could not transfer 
artifact org.apache.maven.plugins:maven-surefire-plugin:pom:2.17 from/to 
central (https://repo.maven.apache.org/maven2): 
/home/liu/.m2/repository/org/apache/maven/plugins/maven-surefire-plugin/2.17/maven-surefire-plugin-2.17.pom.part.lock
 (No such file or directory) -> [Help 1] 

Solution: 
a, sudo chmod -R 775 ${USER_HOME}/.m2/
b, sudo chown -R ${USER_NAME} ${USER_HOME}/.m2

After trying these ways, it is still in trouble. 

c, sudo mvn clean package -DskipTests. But in this way, will it download the 
files (pom, jar) twice? 






[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-02-18 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771474#comment-16771474
 ] 

lqjacklee commented on HADOOP-15961:


[~ste...@apache.org] I will update the code today, thanks.

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough data 
> to the local FS that the upload takes longer than the timeout.
> It should call progress() after every single file commit, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.






[jira] [Updated] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-01-20 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15961:
---
Attachment: HADOOP-15961-002.patch

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough data 
> to the local FS that the upload takes longer than the timeout.
> It should call progress() after every single file commit, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.






[jira] [Comment Edited] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-01-20 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747620#comment-16747620
 ] 

lqjacklee edited comment on HADOOP-15961 at 1/21/19 2:47 AM:
-

[~ste...@apache.org] Which Branch should I apply ? 


was (Author: jack-lee):
[~ste...@apache.org] Which Branch I should I apply ? 

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough data 
> to the local FS that the upload takes longer than the timeout.
> It should call progress() after every single file commit, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.






[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-01-20 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747620#comment-16747620
 ] 

lqjacklee commented on HADOOP-15961:


[~ste...@apache.org] Which Branch should I apply ? 

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough data 
> to the local FS that the upload takes longer than the timeout.
> It should call progress() after every single file commit, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.






[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-01-18 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16746123#comment-16746123
 ] 

lqjacklee commented on HADOOP-15961:


[~ste...@apache.org] Thanks for the comment.

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, even just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough 
> data to the local FS that the upload takes longer than the timeout.
> It should call progress() every time it commits a single file, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-01-16 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744588#comment-16744588
 ] 

lqjacklee commented on HADOOP-15920:


[~ste...@apache.org] Thanks for the comment; I will update.

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15990) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2

2019-01-16 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744586#comment-16744586
 ] 

lqjacklee commented on HADOOP-15990:


[~ste...@apache.org] Got it; I will double-check.

> S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
> 
>
> Key: HADOOP-15990
> URL: https://issues.apache.org/jira/browse/HADOOP-15990
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15409-005.patch, HADOOP-15990-006.patch
>
>
> In S3AFileSystem.initialize(), we check for the bucket's existence with 
> verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't 
> check for auth issues. 
> s3.doesBucketExistV2() does at least validate credentials, and should be 
> switched to. This will help things fail faster. 
> See SPARK-24000.
> (This is a dupe of HADOOP-15409; moving off git PRs so we can get Yetus to 
> test everything.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-01-16 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744585#comment-16744585
 ] 

lqjacklee commented on HADOOP-15961:


I configured the region: ap-southeast-1.

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, even just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough 
> data to the local FS that the upload takes longer than the timeout.
> It should call progress() every time it commits a single file, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-01-14 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742645#comment-16742645
 ] 

lqjacklee commented on HADOOP-15961:


I notice that the tests pass; did you hit any issue with the patch?

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, even just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough 
> data to the local FS that the upload takes longer than the timeout.
> It should call progress() every time it commits a single file, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15990) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2

2019-01-14 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742642#comment-16742642
 ] 

lqjacklee commented on HADOOP-15990:


[~ste...@apache.org] What error did you encounter? Do I need to upload a new patch?

> S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
> 
>
> Key: HADOOP-15990
> URL: https://issues.apache.org/jira/browse/HADOOP-15990
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15409-005.patch, HADOOP-15990-006.patch
>
>
> In S3AFileSystem.initialize(), we check for the bucket's existence with 
> verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't 
> check for auth issues. 
> s3.doesBucketExistV2() does at least validate credentials, and should be 
> switched to. This will help things fail faster. 
> See SPARK-24000.
> (This is a dupe of HADOOP-15409; moving off git PRs so we can get Yetus to 
> test everything.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15994) Upgrade Jackson2 to 2.9.8

2019-01-11 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15994:
---
Status: Patch Available  (was: Reopened)

> Upgrade Jackson2 to 2.9.8
> -
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: 460.patch, HADOOP-15994-001.patch, 
> HADOOP-15994-002.patch, HADOOP-15994-003.patch
>
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to the latest version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15994) Upgrade Jackson2 to 2.9.8

2019-01-11 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740284#comment-16740284
 ] 

lqjacklee commented on HADOOP-15994:


[~ajisakaa] Sorry for the mistaken status update.

> Upgrade Jackson2 to 2.9.8
> -
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: 460.patch, HADOOP-15994-001.patch, 
> HADOOP-15994-002.patch, HADOOP-15994-003.patch
>
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to the latest version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15994) Upgrade Jackson2 to the latest version

2019-01-10 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15994:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Upgrade Jackson2 to the latest version
> --
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: 460.patch, HADOOP-15994-001.patch, 
> HADOOP-15994-002.patch, HADOOP-15994-003.patch
>
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to the latest version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15994) Upgrade Jackson2 to the latest version

2019-01-09 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738097#comment-16738097
 ] 

lqjacklee commented on HADOOP-15994:


[https://github.com/apache/hadoop/pull/460]

This PR updates the Jackson version to 2.9.8.

> Upgrade Jackson2 to the latest version
> --
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15994-001.patch, HADOOP-15994-002.patch, 
> HADOOP-15994-003.patch
>
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to the latest version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15994) Upgrade Jackson2 to the latest version

2019-01-08 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16737934#comment-16737934
 ] 

lqjacklee commented on HADOOP-15994:


[~ajisakaa] Do we need to update the version so frequently?

> Upgrade Jackson2 to the latest version
> --
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15994-001.patch, HADOOP-15994-002.patch, 
> HADOOP-15994-003.patch
>
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to 2.9.6 or later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16016) TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds

2019-01-02 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16732564#comment-16732564
 ] 

lqjacklee commented on HADOOP-16016:


I notice that the number of enabled cipher suites is 43, while after building 
the SSL context the number of supported cipher suites is 59. So I wonder 
whether that is the reason you referred to?
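
For reference, a small illustration of how those two counts can be compared (an 
assumption about how the figures were taken, not code from the test; 
{{sslContext}} is assumed to come from the SSLFactory under test):

{code:java}
// Illustrative only: compare the enabled cipher suite set (43 here) with
// the supported set (59) on an engine built from the factory's context.
SSLEngine engine = sslContext.createSSLEngine();
System.out.println("enabled=" + engine.getEnabledCipherSuites().length
    + ", supported=" + engine.getSupportedCipherSuites().length);
{code}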

> TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds
> ---
>
> Key: HADOOP-16016
> URL: https://issues.apache.org/jira/browse/HADOOP-16016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
> Environment: Java 1.8.0_191 or upper
>Reporter: Jason Lowe
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16016-002.patch, HADOOP-16016.01.patch
>
>
> I have seen a couple of precommit builds across JIRAs fail in 
> TestSSLFactory#testServerWeakCiphers with the error:
> {noformat}
> [ERROR]   TestSSLFactory.testServerWeakCiphers:240 Expected to find 'no 
> cipher suites in common' but got unexpected 
> exception:javax.net.ssl.SSLHandshakeException: No appropriate protocol 
> (protocol is disabled or cipher suites are inappropriate)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16021) SequenceFile.createWriter appendIfExists codec cause NullPointerException

2019-01-01 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731579#comment-16731579
 ] 

lqjacklee commented on HADOOP-16021:


[~xinkenny] Thanks, I will try to reproduce it.

> SequenceFile.createWriter appendIfExists codec cause NullPointerException
> -
>
> Key: HADOOP-16021
> URL: https://issues.apache.org/jira/browse/HADOOP-16021
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: Windows 10 or Linux (CentOS), Hadoop 2.7.3, JDK 8
>Reporter: asin
>Priority: Major
>  Labels: bug
> Attachments: 055.png, 62.png, CompressionType.BLOCK-Not 
> supported-error log.txt, CompressionType.NONE-NullPointerException-error 
> log.txt
>
>
>  
>  I want to append data to an existing file. When I use 
> SequenceFile.Writer.appendIfExists, it throws a NullPointerException at 
> org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1119).
> When I remove the 'appendIfExists' option, it works, but it overwrites the 
> old file.
>  
> When I try to use CompressionType.RECORD or CompressionType.BLOCK, it throws 
> a "not supported" exception.
>  
> {code:java}
> // my code
> SequenceFile.Writer writer = null; 
> writer = SequenceFile.createWriter(conf, 
> SequenceFile.Writer.file(path), 
> SequenceFile.Writer.keyClass(Text.class), 
> SequenceFile.Writer.valueClass(Text.class), 
> SequenceFile.Writer.appendIfExists(true) );
> {code}
>  
> {code:java}
> // all my code
> public class Writer1 implements VoidFunction<Iterator<Tuple2<String, String>>> {
> private static Configuration conf = new Configuration();
> private int MAX_LINE = 3; // little num,for test
> @Override
> public void call(Iterator<Tuple2<String, String>> iterator) throws 
> Exception {
> int partitionId = TaskContext.get().partitionId();
> int count = 0;
> SequenceFile.Writer writer = null;
> while (iterator.hasNext()) {
> Tuple2<String, String> tp = iterator.next();
> Path path = new Path("D:/tmp-doc/logs/logs.txt");
> if (writer == null)
> writer = SequenceFile.createWriter(conf, 
> SequenceFile.Writer.file(path),
> SequenceFile.Writer.keyClass(Text.class),
> SequenceFile.Writer.valueClass(Text.class),
> SequenceFile.Writer.appendIfExists(true)
> );
> writer.append(new Text(tp._1), new Text(tp._2));
> count++;
> if (count > MAX_LINE) {
> IOUtils.closeStream(writer);
> count = 0;
> writer = SequenceFile.createWriter(... // same as above
> }
> }
> if (count > 0) {
> IOUtils.closeStream(writer);
> }
> IOUtils.closeStream(writer);
> }
> }
> {code}
>  // above code call by below
> {code:java}
> import com.xxx.algo.hadoop.Writer1
> import com.xxx.algo.utils.Utils
> import kafka.serializer.StringDecoder
> import org.apache.spark.sql.SparkSession
> import org.apache.spark.streaming.kafka.KafkaUtils
> import org.apache.spark.streaming.{Durations, StreamingContext}
> import org.apache.spark.{SparkConf, SparkContext}
> object KafkaSparkStreamingApp {
>   def main(args: Array[String]): Unit = {
> val kafka = "192.168.30.4:9092,192.168.30.5:9092,192.168.30.6:9092"
> val zk = "192.168.30.4:2181,192.168.30.5:2181,192.168.30.6:2181"
> val topics = Set("test.aries.collection.appevent.biz")
> val tag = "biz"
> val durationSeconds = 5000
> val conf = new SparkConf()
> conf.setAppName("user-log-consumer")
>   .set("spark.serilizer","org.apache.spark.serializer.KryoSerializer")
>   .set("spark.kryo.registrationRequired", "true")
>   .set("spark.defalut.parallelism","2")
>   .set("spark.rdd.compress","true")
>   .setMaster("local[2]")
> val sc = new SparkContext(conf)
> val session = SparkSession.builder()
>   .config(conf)
>   .getOrCreate()
> val ssc = new StreamingContext(sc, 
> Durations.milliseconds(durationSeconds))
> val kafkaParams = Map[String, String](
>   "metadata.broker.list" -> kafka,
>   "bootstrap.servers" -> kafka,
>   "zookeeper.connect" -> zk,
>   "group.id" -> "recommend_stream_spark",
>   "key.serializer" -> 
> "org.apache.kafka.common.serialization.StringSerializer",
>   "key.deserializer" -> 
> "org.apache.kafka.common.serialization.StringDeserializer",
>   "value.deserializer" -> 
> "org.apache.kafka.common.serialization.StringDeserializer"
> )
> val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, 
> StringDecoder](
>   ssc,
>   kafkaParams,
>   topics
> )
> val timeFieldName = "log_time"
> 

[jira] [Comment Edited] (HADOOP-16016) TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds

2018-12-30 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731155#comment-16731155
 ] 

lqjacklee edited comment on HADOOP-16016 at 12/31/18 4:05 AM:
--

Just ignoring the exception is one solution; however, the issue would still be 
there, so I wonder whether we need to change the enabled cipher suites. Given 
the security concern, should we enable just those suites (or all supported 
ones)? I want to submit a patch for us to review so we can refine the 
solution. Thanks.

 
{code:java}
private SSLEngineResult wrap(SSLEngine engine, ByteBuffer from,
ByteBuffer to) throws Exception {
  String[] supportedCipherSuites = engine.getSupportedCipherSuites();
  engine.setEnabledCipherSuites(supportedCipherSuites);
  SSLEngineResult result = engine.wrap(from, to);
  runDelegatedTasks(result, engine);
  return result;
}
{code}
 


was (Author: jack-lee):
Just ignoring the exception is one solution; however, the issue would still be 
there, so I wonder whether we need to change the enabled cipher suites. Given 
the security concern, we should enable just those suites (or all supported 
ones). I want to submit a patch for us to review so we can refine the 
solution. Thanks.

 
{code:java}
private SSLEngineResult wrap(SSLEngine engine, ByteBuffer from,
ByteBuffer to) throws Exception {
  String[] supportedCipherSuites = engine.getSupportedCipherSuites();
  engine.setEnabledCipherSuites(supportedCipherSuites);
  SSLEngineResult result = engine.wrap(from, to);
  runDelegatedTasks(result, engine);
  return result;
}
{code}
 

> TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds
> ---
>
> Key: HADOOP-16016
> URL: https://issues.apache.org/jira/browse/HADOOP-16016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
> Environment: Java 1.8.0_191 or upper
>Reporter: Jason Lowe
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16016-002.patch, HADOOP-16016.01.patch
>
>
> I have seen a couple of precommit builds across JIRAs fail in 
> TestSSLFactory#testServerWeakCiphers with the error:
> {noformat}
> [ERROR]   TestSSLFactory.testServerWeakCiphers:240 Expected to find 'no 
> cipher suites in common' but got unexpected 
> exception:javax.net.ssl.SSLHandshakeException: No appropriate protocol 
> (protocol is disabled or cipher suites are inappropriate)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16016) TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds

2018-12-30 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731155#comment-16731155
 ] 

lqjacklee commented on HADOOP-16016:


Just ignoring the exception is one solution; however, the issue would still be 
there, so I wonder whether we need to change the enabled cipher suites. Given 
the security concern, we should enable just those suites (or all supported 
ones). I want to submit a patch for us to review so we can refine the 
solution. Thanks.

 
{code:java}
private SSLEngineResult wrap(SSLEngine engine, ByteBuffer from,
ByteBuffer to) throws Exception {
  String[] supportedCipherSuites = engine.getSupportedCipherSuites();
  engine.setEnabledCipherSuites(supportedCipherSuites);
  SSLEngineResult result = engine.wrap(from, to);
  runDelegatedTasks(result, engine);
  return result;
}
{code}
 

> TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds
> ---
>
> Key: HADOOP-16016
> URL: https://issues.apache.org/jira/browse/HADOOP-16016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
> Environment: Java 1.8.0_191 or upper
>Reporter: Jason Lowe
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16016-002.patch, HADOOP-16016.01.patch
>
>
> I have seen a couple of precommit builds across JIRAs fail in 
> TestSSLFactory#testServerWeakCiphers with the error:
> {noformat}
> [ERROR]   TestSSLFactory.testServerWeakCiphers:240 Expected to find 'no 
> cipher suites in common' but got unexpected 
> exception:javax.net.ssl.SSLHandshakeException: No appropriate protocol 
> (protocol is disabled or cipher suites are inappropriate)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16016) TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds

2018-12-30 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-16016:
---
Attachment: HADOOP-16016-002.patch

> TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds
> ---
>
> Key: HADOOP-16016
> URL: https://issues.apache.org/jira/browse/HADOOP-16016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
> Environment: Java 1.8.0_191 or upper
>Reporter: Jason Lowe
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16016-002.patch, HADOOP-16016.01.patch
>
>
> I have seen a couple of precommit builds across JIRAs fail in 
> TestSSLFactory#testServerWeakCiphers with the error:
> {noformat}
> [ERROR]   TestSSLFactory.testServerWeakCiphers:240 Expected to find 'no 
> cipher suites in common' but got unexpected 
> exception:javax.net.ssl.SSLHandshakeException: No appropriate protocol 
> (protocol is disabled or cipher suites are inappropriate)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16024) org.apache.hadoop.security.ssl.TestSSLFactory#testServerWeakCiphers failed

2018-12-30 Thread lqjacklee (JIRA)
lqjacklee created HADOOP-16024:
--

 Summary: 
org.apache.hadoop.security.ssl.TestSSLFactory#testServerWeakCiphers failed
 Key: HADOOP-16024
 URL: https://issues.apache.org/jira/browse/HADOOP-16024
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: lqjacklee


The cipher suite enabled locally is TLS_ECDHE_RSA_WITH_RC4_128_SHA; however, I 
found it is excluded.

The stack trace:

java.lang.AssertionError: Expected to find 'no cipher suites in common' but got 
unexpected exception: javax.net.ssl.SSLHandshakeException: No appropriate 
protocol (protocol is disabled or cipher suites are inappropriate)
 at sun.security.ssl.Handshaker.activate(Handshaker.java:509)
 at sun.security.ssl.SSLEngineImpl.kickstartHandshake(SSLEngineImpl.java:714)
 at sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1212)
 at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1165)
 at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
 at org.apache.hadoop.security.ssl.TestSSLFactory.wrap(TestSSLFactory.java:248)
 at 
org.apache.hadoop.security.ssl.TestSSLFactory.testServerWeakCiphers(TestSSLFactory.java:220)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
 at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
 at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
 at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
 at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)


 at 
org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:350)
 at 
org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:327)
 at 
org.apache.hadoop.security.ssl.TestSSLFactory.testServerWeakCiphers(TestSSLFactory.java:240)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at 

[jira] [Updated] (HADOOP-16024) org.apache.hadoop.security.ssl.TestSSLFactory#testServerWeakCiphers failed

2018-12-30 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-16024:
---
Description: 
The protocols and cipher suites enabled locally are:

enabledProtocols = [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]
enabledCipherSuites = [TLS_ECDHE_RSA_WITH_RC4_128_SHA, 
SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, 
SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, 
SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_RSA_WITH_RC4_128_MD5]

However, I found they are excluded.

The stack trace:

java.lang.AssertionError: Expected to find 'no cipher suites in common' but got 
unexpected exception: javax.net.ssl.SSLHandshakeException: No appropriate 
protocol (protocol is disabled or cipher suites are inappropriate)
 at sun.security.ssl.Handshaker.activate(Handshaker.java:509)
 at sun.security.ssl.SSLEngineImpl.kickstartHandshake(SSLEngineImpl.java:714)
 at sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1212)
 at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1165)
 at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
 at org.apache.hadoop.security.ssl.TestSSLFactory.wrap(TestSSLFactory.java:248)
 at 
org.apache.hadoop.security.ssl.TestSSLFactory.testServerWeakCiphers(TestSSLFactory.java:220)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
 at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
 at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
 at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
 at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)

at 
org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:350)
 at 
org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:327)
 at 
org.apache.hadoop.security.ssl.TestSSLFactory.testServerWeakCiphers(TestSSLFactory.java:240)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at 

[jira] [Commented] (HADOOP-16021) SequenceFile.createWriter appendIfExists codec cause NullPointerException

2018-12-27 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16729639#comment-16729639
 ] 

lqjacklee commented on HADOOP-16021:


[~xinkenny] Could you provide the whole code? Thanks.
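
In the meantime, one workaround worth trying (an unverified assumption, not a 
confirmed fix) is to pass an explicit compression option next to 
{{appendIfExists(true)}}, so that no null codec is consulted:

{code:java}
// Possible workaround (unverified): declare the compression explicitly so
// the writer does not dereference a null codec when appending.
// conf and path are as in the reporter's snippet quoted below.
SequenceFile.Writer writer = SequenceFile.createWriter(conf,
    SequenceFile.Writer.file(path),
    SequenceFile.Writer.keyClass(Text.class),
    SequenceFile.Writer.valueClass(Text.class),
    SequenceFile.Writer.appendIfExists(true),
    SequenceFile.Writer.compression(SequenceFile.CompressionType.NONE));
{code}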

> SequenceFile.createWriter appendIfExists codec cause NullPointerException
> -
>
> Key: HADOOP-16021
> URL: https://issues.apache.org/jira/browse/HADOOP-16021
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: Windows 10, Hadoop 2.7.3, JDK 8
>Reporter: asin
>Priority: Major
>  Labels: bug
> Attachments: 57.png
>
>
>  
>  I want to append data to an existing file. When I use 
> SequenceFile.Writer.appendIfExists, it throws a NullPointerException at 
> org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1119).
> When I remove the 'appendIfExists' option, it works, but it overwrites the 
> old file.
>  
> When I try to use CompressionType.RECORD or CompressionType.BLOCK, it throws 
> a "not supported" exception.
>  
> {code:java}
> // my code
> SequenceFile.Writer writer = null; 
> writer = SequenceFile.createWriter(conf, 
> SequenceFile.Writer.file(path), 
> SequenceFile.Writer.keyClass(Text.class), 
> SequenceFile.Writer.valueClass(Text.class), 
> SequenceFile.Writer.appendIfExists(true) );
> {code}
>  
>   
>  {{More info: 
> [https://stackoverflow.com/questions/53943978/hadoop-sequencefile-createwriter-appendifexists-codec-cause-nullpointerexception]}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16019) ZKDelegationTokenSecretManager won't log exception message occurred in function setJaasConfiguration

2018-12-25 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-16019:
---
Description: * When the configs ZK_DTSM_ZK_KERBEROS_KEYTAB or 
ZK_DTSM_ZK_KERBEROS_PRINCIPAL are not set, the IllegalArgumentException message 
is not logged.  (was: when the config  ZK_DTSM_ZK_KERBEROS_KEYTAB  or 
ZK_DTSM_ZK_KERBEROS_PRINCIPAL are not set, the IllegalArgumentException message 
cannot be logged.)

> ZKDelegationTokenSecretManager won't log exception message occurred in 
> function setJaasConfiguration
> ---
>
> Key: HADOOP-16019
> URL: https://issues.apache.org/jira/browse/HADOOP-16019
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: luhuachao
>Priority: Minor
> Attachments: HADOOP-16019.1.patch
>
>
> * When the configs ZK_DTSM_ZK_KERBEROS_KEYTAB or 
> ZK_DTSM_ZK_KERBEROS_PRINCIPAL are not set, the IllegalArgumentException 
> message is not logged.
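
A minimal sketch of the kind of change implied, assuming the existing {{LOG}} 
field and config key constants of ZKDelegationTokenSecretManager; illustrative 
only, not the attached patch:

{code:java}
// Illustrative sketch: log the misconfiguration before throwing, so the
// IllegalArgumentException message is not lost when it propagates.
if (keytab == null || keytab.trim().isEmpty()) {
  String msg = "Kerberos keytab is not set: " + ZK_DTSM_ZK_KERBEROS_KEYTAB;
  LOG.error(msg);
  throw new IllegalArgumentException(msg);
}
{code}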



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15991) testMultipartUpload timing out

2018-12-17 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723479#comment-16723479
 ] 

lqjacklee commented on HADOOP-15991:


# what happens if you don't run with s3guard on?
 ## The configuration is off
 # what are the s3guard settings for that bucket (e.g IO allocation).
 ## The default ones; I just changed the region and credentials
 # How far is that AWS region from you?
 ## PING s3.ap-south-1.amazonaws.com (52.219.66.29): 56 data bytes
64 bytes from 52.219.66.29: icmp_seq=0 ttl=32 time=513.278 ms
 # and what is your bandwidth, especially uploading
 ## DOWNLOAD 48.58 Mbps / UPLOAD 3.46 Mbps
 

> testMultipartUpload timing out
> --
>
> Key: HADOOP-15991
> URL: https://issues.apache.org/jira/browse/HADOOP-15991
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: lqjacklee
>Assignee: lqjacklee
>Priority: Minor
>
> Timeout of S3 multipart upload (MPU) tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-17 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723468#comment-16723468
 ] 

lqjacklee commented on HADOOP-15847:


[~gabor.bota] Thanks for the comment. To reduce the cost of the test case, we 
provide an option to limit the capacity.

I have changed the logic only in ITestS3GuardConcurrentOps. Please help 
review. [^HADOOP-15847-002.patch]
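
The change amounts to something like the following in the test setup (a sketch, 
assuming the standard S3Guard DynamoDB capacity keys in 
org.apache.hadoop.fs.s3a.Constants):

{code:java}
// Force minimal provisioned capacity for the table created by this test,
// so an interrupted run cannot leave an expensive table behind.
conf.setInt(Constants.S3GUARD_DDB_TABLE_CAPACITY_READ_KEY, 1);
conf.setInt(Constants.S3GUARD_DDB_TABLE_CAPACITY_WRITE_KEY, 1);
{code}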

 

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch, HADOOP-15847-002.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-17 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15847:
---
Attachment: HADOOP-15847-002.patch

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch, HADOOP-15847-002.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15991) testMultipartUpload timing out

2018-12-17 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee reassigned HADOOP-15991:
--

Assignee: lqjacklee

> testMultipartUpload timing out
> --
>
> Key: HADOOP-15991
> URL: https://issues.apache.org/jira/browse/HADOOP-15991
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: lqjacklee
>Assignee: lqjacklee
>Priority: Minor
>
> Timeout of S3 multipart upload (MPU) tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2018-12-11 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15961:
---
Attachment: HADOOP-15961-001.patch

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, even just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough 
> data to the local FS that the upload takes longer than the timeout.
> It should call progress() every time it commits a single file, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2018-12-11 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15961:
---
Status: Patch Available  (was: Open)

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, even just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough 
> data to the local FS that the upload takes longer than the timeout.
> It should call progress() every time it commits a single file, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2018-12-11 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee reassigned HADOOP-15961:
--

Assignee: lqjacklee

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, even just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough 
> data to the local FS that the upload takes longer than the timeout.
> It should call progress() every time it commits a single file, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15994) Upgrade Jackson2 to the latest version

2018-12-11 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15994:
---
Attachment: HADOOP-15994-003.patch

> Upgrade Jackson2 to the latest version
> --
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15994-001.patch, HADOOP-15994-002.patch, 
> HADOOP-15994-003.patch
>
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to 2.9.6 or later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15994) Upgrade Jackson2 to the latest version

2018-12-10 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15994:
---
Attachment: HADOOP-15994-002.patch

> Upgrade Jackson2 to the latest version
> --
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15994-001.patch, HADOOP-15994-002.patch
>
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to 2.9.6 or later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15994) Upgrade Jackson2 to the latest version

2018-12-10 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15994:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Upgrade Jackson2 to the latest version
> --
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15994-001.patch
>
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to 2.9.6 or later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15994) Upgrade Jackson2 to the latest version

2018-12-10 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15994:
---
Attachment: HADOOP-15994-001.patch

> Upgrade Jackson2 to the latest version
> --
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15994-001.patch
>
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to 2.9.6 or later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15994) Upgrade Jackson2 to the latest version

2018-12-10 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15994:
---
Status: Patch Available  (was: Open)

> Upgrade Jackson2 to the latest version
> --
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15994-001.patch
>
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to 2.9.6 or later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15994) Upgrade Jackson2 to the latest version

2018-12-10 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee reassigned HADOOP-15994:
--

Assignee: lqjacklee

> Upgrade Jackson2 to the latest version
> --
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to 2.9.6 or later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15990) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2

2018-12-09 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714238#comment-16714238
 ] 

lqjacklee commented on HADOOP-15990:


1. Provide a config option to identify which probe version the user has 
configured. The default version is 1.
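
A sketch of that approach (field names such as {{bucketProbeVersion}} are 
assumptions for illustration, not the actual patch):

{code:java}
// Select the bucket probe by configured version: version 1 keeps the old
// doesBucketExist() behaviour; version 2 also validates credentials.
private void verifyBucketExists() throws IOException {
  try {
    boolean exists = (bucketProbeVersion == 2)
        ? s3.doesBucketExistV2(bucket)
        : s3.doesBucketExist(bucket);
    if (!exists) {
      throw new FileNotFoundException("Bucket " + bucket + " does not exist");
    }
  } catch (AmazonClientException e) {
    throw S3AUtils.translateException("doesBucketExist", bucket, e);
  }
}
{code}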

> S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
> 
>
> Key: HADOOP-15990
> URL: https://issues.apache.org/jira/browse/HADOOP-15990
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15409-005.patch, HADOOP-15990-006.patch
>
>
> In S3AFileSystem.initialize(), we check for the bucket's existence with 
> verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't 
> check for auth issues. 
> s3.doesBucketExistV2() does at least validate credentials, and should be 
> switched to. This will help things fail faster. 
> See SPARK-24000.
> (This is a dupe of HADOOP-15409; moving off git PRs so we can get Yetus to 
> test everything.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15990) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2

2018-12-09 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15990:
---
Attachment: HADOOP-15990-006.patch

> S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
> 
>
> Key: HADOOP-15990
> URL: https://issues.apache.org/jira/browse/HADOOP-15990
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15409-005.patch, HADOOP-15990-006.patch
>
>
> In S3AFileSystem.initialize(), we check for the bucket's existence with 
> verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't 
> check for auth issues. 
> s3.doesBucketExistV2() does at least validate credentials, and should be 
> switched to. This will help things fail faster. 
> See SPARK-24000.
> (This is a dupe of HADOOP-15409; moving off git PRs so we can get Yetus to 
> test everything.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15990) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2

2018-12-09 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15990:
---
Status: Patch Available  (was: Open)

> S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
> 
>
> Key: HADOOP-15990
> URL: https://issues.apache.org/jira/browse/HADOOP-15990
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15409-005.patch, HADOOP-15990-006.patch
>
>
> In S3AFileSystem.initialize(), we check for the bucket's existence with 
> verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't 
> check for auth issues. 
> s3.doesBucketExistV2() does at least validate credentials, and should be 
> switched to. This will help things fail faster. 
> See SPARK-24000.
> (This is a dupe of HADOOP-15409; moving off git PRs so we can get Yetus to 
> test everything.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HADOOP-15990) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2

2018-12-09 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15990 stopped by lqjacklee.
--
> S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
> 
>
> Key: HADOOP-15990
> URL: https://issues.apache.org/jira/browse/HADOOP-15990
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15409-005.patch, HADOOP-15990-006.patch
>
>
> in S3AFileSystem.initialize(), we check for the bucket existing with 
> verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't 
> check for auth issues. 
> s3.doesBucketExistV2() does at least validate credentials, and should be 
> switched to. This will help things fail faster. 
> See SPARK-24000
> (this is a dupe of HADOOP-15409; moving off git PRs so we can get yetus to 
> test everything)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15991) testMultipartUpload

2018-12-09 Thread lqjacklee (JIRA)
lqjacklee created HADOOP-15991:
--

 Summary: testMultipartUpload
 Key: HADOOP-15991
 URL: https://issues.apache.org/jira/browse/HADOOP-15991
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.2.0
Reporter: lqjacklee


2018-12-10 09:58:56,482 [Thread-746] INFO contract.AbstractFSContractTestBase 
(AbstractFSContractTestBase.java:setup(184)) - Test filesystem = 
s3a://jack-testlambda implemented by S3AFileSystem{uri=s3a://jack-testlambda, 
workingDir=s3a://jack-testlambda/user/liuquan, inputPolicy=normal, 
partSize=104857600, enableMultiObjectsDelete=true, maxKeys=5000, 
readAhead=65536, blockSize=33554432, multiPartThreshold=2147483647, 
serverSideEncryptionAlgorithm='NONE', 
blockFactory=org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory@3111299f, 
metastore=DynamoDBMetadataStore{region=ap-southeast-1, tableName=test-h}, 
authoritative=false, useListV1=false, magicCommitter=false, 
boundedExecutor=BlockingThreadPoolExecutorService{SemaphoredDelegatingExecutor{permitCount=25, available=25, waiting=0}, activeCount=0}, 
unboundedExecutor=java.util.concurrent.ThreadPoolExecutor@78c596aa[Running, 
pool size = 10, active threads = 0, queued tasks = 0, completed tasks = 20], 
credentials=AWSCredentialProviderList[refcount= 2: 
[SimpleAWSCredentialsProvider, EnvironmentVariableCredentialsProvider, 
com.amazonaws.auth.InstanceProfileCredentialsProvider@4a214309], statistics 
{18901220 bytes read, 18912956 bytes written, 580 read ops, 0 large read ops, 
843 write ops}, metrics {{Context=s3aFileSystem} 
{s3aFileSystemId=94fd09ed-8145-4d1e-b11b-678151785e0b} 
{bucket=jack-testlambda} {stream_opened=28} {stream_close_operations=28} 
{stream_closed=28} {stream_aborted=0} {stream_seek_operations=0} 
{stream_read_exceptions=0} {stream_forward_seek_operations=0} 
{stream_backward_seek_operations=0} {stream_bytes_skipped_on_seek=0} 
{stream_bytes_backwards_on_seek=0} {stream_bytes_read=18901220} 
{stream_read_operations=2734} {stream_read_fully_operations=0} 
{stream_read_operations_incomplete=2171} {stream_bytes_read_in_close=0} 
{stream_bytes_discarded_in_abort=0} {files_created=49} {files_copied=20} 
{files_copied_bytes=9441684} {files_deleted=79} 
{fake_directories_deleted=630} {directories_created=98} 
{directories_deleted=19} {ignored_errors=71} {op_copy_from_local_file=0} 
{op_create=55} {op_create_non_recursive=6} {op_delete=74} {op_exists=80} 
{op_get_file_checksum=8} {op_get_file_status=923} {op_glob_status=19} 
{op_is_directory=0} {op_is_file=0} {op_list_files=8} 
{op_list_located_status=0} {op_list_status=51} {op_mkdirs=114} {op_open=28} 
{op_rename=20} {object_copy_requests=0} {object_delete_requests=198} 
{object_list_requests=337} {object_continue_list_requests=0} 
{object_metadata_requests=560} {object_multipart_aborted=0} 
{object_put_bytes=18904764} {object_put_requests=147} 
{object_put_requests_completed=147} {stream_write_failures=0} 
{stream_write_block_uploads=0} {stream_write_block_uploads_committed=0} 
{stream_write_block_uploads_aborted=0} {stream_write_total_time=0} 
{stream_write_total_data=18904764} {committer_commits_created=0} 
{committer_commits_completed=0} {committer_jobs_completed=0} 
{committer_jobs_failed=0} {committer_tasks_completed=0} 
{committer_tasks_failed=0} {committer_bytes_committed=0} 
{committer_bytes_uploaded=0} {committer_commits_failed=0} 
{committer_commits_aborted=0} {committer_commits_reverted=0} 
{committer_magic_files_created=0} 
{s3guard_metadatastore_put_path_request=166} 
{s3guard_metadatastore_initialization=1} {s3guard_metadatastore_retry=0} 
{s3guard_metadatastore_throttled=0} {store_io_throttled=0} 
{object_put_requests_active=0} {object_put_bytes_pending=0} 
{stream_write_block_uploads_active=0} {stream_write_block_uploads_pending=49} 
{stream_write_block_uploads_data_pending=0} 
{S3guard_metadatastore_put_path_latencyNumOps=1} 
{S3guard_metadatastore_put_path_latency50thPercentileLatency=427507288} 
{S3guard_metadatastore_put_path_latency75thPercentileLatency=427507288} 
{S3guard_metadatastore_put_path_latency90thPercentileLatency=427507288} 
{S3guard_metadatastore_put_path_latency95thPercentileLatency=427507288} 
{S3guard_metadatastore_put_path_latency99thPercentileLatency=427507288} 
{S3guard_metadatastore_throttle_rateNumEvents=0} 
{S3guard_metadatastore_throttle_rate50thPercentileFrequency (Hz)=0} 
{S3guard_metadatastore_throttle_rate75thPercentileFrequency (Hz)=0} 
{S3guard_metadatastore_throttle_rate90thPercentileFrequency (Hz)=0} 
{S3guard_metadatastore_throttle_rate95thPercentileFrequency (Hz)=0} 
{S3guard_metadatastore_throttle_rate99thPercentileFrequency (Hz)=0} }}
2018-12-10 10:01:50,127 [Thread-746] INFO s3a.ITestS3AContractMultipartUploader 
(ITestS3AContractMultipartUploader.java:teardown(108)) - 

[jira] [Updated] (HADOOP-15991) testMultipartUpload

2018-12-09 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15991:
---
Description: 
2018-12-10 09:58:56,482 [Thread-746] INFO contract.AbstractFSContractTestBase 
(AbstractFSContractTestBase.java:setup(184)) - Test filesystem = 
s3a://jack-testlambda implemented by S3AFileSystem{uri=s3a://jack-testlambda, 
workingDir=s3a://jack-testlambda/user/jack, inputPolicy=normal, 
partSize=104857600, enableMultiObjectsDelete=true, maxKeys=5000, 
readAhead=65536, blockSize=33554432, multiPartThreshold=2147483647, 
serverSideEncryptionAlgorithm='NONE', 
blockFactory=org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory@3111299f, 
metastore=DynamoDBMetadataStore{region=ap-southeast-1, tableName=test-h}, 
authoritative=false, useListV1=false, magicCommitter=false, 
boundedExecutor=BlockingThreadPoolExecutorService{SemaphoredDelegatingExecutor{permitCount=25, available=25, waiting=0}, activeCount=0}, 
unboundedExecutor=java.util.concurrent.ThreadPoolExecutor@78c596aa[Running, 
pool size = 10, active threads = 0, queued tasks = 0, completed tasks = 20], 
credentials=AWSCredentialProviderList[refcount= 2: 
[SimpleAWSCredentialsProvider, EnvironmentVariableCredentialsProvider, 
com.amazonaws.auth.InstanceProfileCredentialsProvider@4a214309], statistics 
{18901220 bytes read, 18912956 bytes written, 580 read ops, 0 large read ops, 
843 write ops}, metrics {{Context=s3aFileSystem} 
{s3aFileSystemId=94fd09ed-8145-4d1e-b11b-678151785e0b} 
{bucket=jack-testlambda} {stream_opened=28} {stream_close_operations=28} 
{stream_closed=28} {stream_aborted=0} {stream_seek_operations=0} 
{stream_read_exceptions=0} {stream_forward_seek_operations=0} 
{stream_backward_seek_operations=0} {stream_bytes_skipped_on_seek=0} 
{stream_bytes_backwards_on_seek=0} {stream_bytes_read=18901220} 
{stream_read_operations=2734} {stream_read_fully_operations=0} 
{stream_read_operations_incomplete=2171} {stream_bytes_read_in_close=0} 
{stream_bytes_discarded_in_abort=0} {files_created=49} {files_copied=20} 
{files_copied_bytes=9441684} {files_deleted=79} 
{fake_directories_deleted=630} {directories_created=98} 
{directories_deleted=19} {ignored_errors=71} {op_copy_from_local_file=0} 
{op_create=55} {op_create_non_recursive=6} {op_delete=74} {op_exists=80} 
{op_get_file_checksum=8} {op_get_file_status=923} {op_glob_status=19} 
{op_is_directory=0} {op_is_file=0} {op_list_files=8} 
{op_list_located_status=0} {op_list_status=51} {op_mkdirs=114} {op_open=28} 
{op_rename=20} {object_copy_requests=0} {object_delete_requests=198} 
{object_list_requests=337} {object_continue_list_requests=0} 
{object_metadata_requests=560} {object_multipart_aborted=0} 
{object_put_bytes=18904764} {object_put_requests=147} 
{object_put_requests_completed=147} {stream_write_failures=0} 
{stream_write_block_uploads=0} {stream_write_block_uploads_committed=0} 
{stream_write_block_uploads_aborted=0} {stream_write_total_time=0} 
{stream_write_total_data=18904764} {committer_commits_created=0} 
{committer_commits_completed=0} {committer_jobs_completed=0} 
{committer_jobs_failed=0} {committer_tasks_completed=0} 
{committer_tasks_failed=0} {committer_bytes_committed=0} 
{committer_bytes_uploaded=0} {committer_commits_failed=0} 
{committer_commits_aborted=0} {committer_commits_reverted=0} 
{committer_magic_files_created=0} 
{s3guard_metadatastore_put_path_request=166} 
{s3guard_metadatastore_initialization=1} {s3guard_metadatastore_retry=0} 
{s3guard_metadatastore_throttled=0} {store_io_throttled=0} 
{object_put_requests_active=0} {object_put_bytes_pending=0} 
{stream_write_block_uploads_active=0} {stream_write_block_uploads_pending=49} 
{stream_write_block_uploads_data_pending=0} 
{S3guard_metadatastore_put_path_latencyNumOps=1} 
{S3guard_metadatastore_put_path_latency50thPercentileLatency=427507288} 
{S3guard_metadatastore_put_path_latency75thPercentileLatency=427507288} 
{S3guard_metadatastore_put_path_latency90thPercentileLatency=427507288} 
{S3guard_metadatastore_put_path_latency95thPercentileLatency=427507288} 
{S3guard_metadatastore_put_path_latency99thPercentileLatency=427507288} 
{S3guard_metadatastore_throttle_rateNumEvents=0} 
{S3guard_metadatastore_throttle_rate50thPercentileFrequency (Hz)=0} 
{S3guard_metadatastore_throttle_rate75thPercentileFrequency (Hz)=0} 
{S3guard_metadatastore_throttle_rate90thPercentileFrequency (Hz)=0} 
{S3guard_metadatastore_throttle_rate95thPercentileFrequency (Hz)=0} 
{S3guard_metadatastore_throttle_rate99thPercentileFrequency (Hz)=0} }}
 2018-12-10 10:01:50,127 [Thread-746] INFO 
s3a.ITestS3AContractMultipartUploader 
(ITestS3AContractMultipartUploader.java:teardown(108)) - Teardown: aborting 
outstanding uploads under s3a://jack-testlambda/test
 2018-12-10 10:01:50,618 [Thread-746] INFO 
s3a.ITestS3AContractMultipartUploader 

[jira] [Updated] (HADOOP-15990) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2

2018-12-09 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15990:
---
Status: In Progress  (was: Patch Available)

> S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
> 
>
> Key: HADOOP-15990
> URL: https://issues.apache.org/jira/browse/HADOOP-15990
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15409-005.patch
>
>
> in S3AFileSystem.initialize(), we check for the bucket existing with 
> verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't 
> check for auth issues. 
> s3.doesBucketExistV2() does at least validate credentials, and should be 
> switched to. This will help things fail faster. 
> See SPARK-24000
> (this is a dupe of HADOOP-15409; moving off git PRs so we can get yetus to 
> test everything)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15990) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2

2018-12-09 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714189#comment-16714189
 ] 

lqjacklee commented on HADOOP-15990:


[~ste...@apache.org] I will do that, thanks.

> S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
> 
>
> Key: HADOOP-15990
> URL: https://issues.apache.org/jira/browse/HADOOP-15990
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15409-005.patch
>
>
> in S3AFileSystem.initialize(), we check for the bucket existing with 
> verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't 
> check for auth issues. 
> s3.doesBucketExistV2() does at least validate credentials, and should be 
> switched to. This will help things fail faster. 
> See SPARK-24000
> (this is a dupe of HADOOP-15409; moving off git PRs so we can get yetus to 
> test everything)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15894) getFileChecksum() needs to adopt S3Guard

2018-12-08 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee reassigned HADOOP-15894:
--

Assignee: lqjacklee

> getFileChecksum() needs to adopt S3Guard
> 
>
> Key: HADOOP-15894
> URL: https://issues.apache.org/jira/browse/HADOOP-15894
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
>
> Encountered a 404 failure in 
> {{ITestS3AMiscOperations.testNonEmptyFileChecksumsUnencrypted}}; newly 
> created file wasn't seen. Even with S3guard enabled, that method isn't doing 
> anything to query the store for it existing.
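
A rough sketch of the direction implied above: route the checksum path 
through getFileStatus(), which consults S3Guard, before any raw object 
lookup. The method shape and the helper headObjectChecksum() are hypothetical 
stand-ins for illustration, not the committed fix:

// Sketch of an S3Guard-aware checksum path inside S3AFileSystem.
public FileChecksum getFileChecksum(Path f) throws IOException {
  // getFileStatus() consults the metadata store, so a newly created file
  // that S3Guard knows about cannot 404 here the way a raw HEAD can.
  FileStatus status = getFileStatus(f);
  if (status.isDirectory()) {
    throw new FileNotFoundException("Checksum of a directory: " + f);
  }
  return headObjectChecksum(f, status.getLen());  // hypothetical helper
}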



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15894) getFileChecksum() needs to adopt S3Guard

2018-12-08 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15894 started by lqjacklee.
--
> getFileChecksum() needs to adopt S3Guard
> 
>
> Key: HADOOP-15894
> URL: https://issues.apache.org/jira/browse/HADOOP-15894
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
>
> Encountered a 404 failure in 
> {{ITestS3AMiscOperations.testNonEmptyFileChecksumsUnencrypted}}; newly 
> created file wasn't seen. Even with S3guard enabled, that method isn't doing 
> anything to query the store for it existing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-08 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15847:
---
Status: Patch Available  (was: Open)

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often
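
A small sketch of the first recommendation above, pinning the S3Guard 
DynamoDB capacity in the test configuration; the keys are the standard 
hadoop-aws settings, while the helper itself is illustrative:

import org.apache.hadoop.conf.Configuration;

final class MinimalDdbCapacity {

  private MinimalDdbCapacity() {
  }

  // Pin provisioned read/write capacity to 1 so an interrupted run cannot
  // leave behind a table billed at higher capacity.
  static Configuration withMinimalCapacity(Configuration conf) {
    conf.setInt("fs.s3a.s3guard.ddb.table.capacity.read", 1);
    conf.setInt("fs.s3a.s3guard.ddb.table.capacity.write", 1);
    return conf;
  }
}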



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-08 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16713620#comment-16713620
 ] 

lqjacklee commented on HADOOP-15847:


[~ste...@apache.org] Please help review the patch, and advise on how to 
resolve the exception.

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-08 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15847:
---
Attachment: HADOOP-15847-001.patch

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-07 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15847:
---
Attachment: (was: HADOOP-15847-001.patch)

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-07 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15847:
---
Assignee: lqjacklee
  Status: Patch Available  (was: Open)

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-07 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15847:
---
Status: Open  (was: Patch Available)

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-07 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16713512#comment-16713512
 ] 

lqjacklee commented on HADOOP-15847:


When the capacity is set to 1, the test output renders as below:
2018-12-08 11:55:57,912 [pool-4-thread-22] INFO 
s3guard.ITestDynamoDBMetadataStoreScale 
(ITestDynamoDBMetadataStoreScale.java:lambda$execute$8(432)) - Operation [0] 
raised a throttled exception 
org.apache.hadoop.fs.s3a.AWSServiceThrottledException: Max retries during batch 
write exceeded (2) for DynamoDB. This may be because the write threshold of 
DynamoDB is set too low.: Throttling (Service: S3Guard; Status Code: 503; Error 
Code: Throttling; Request ID: n/a)
org.apache.hadoop.fs.s3a.AWSServiceThrottledException: Max retries during batch 
write exceeded (2) for DynamoDB. This may be because the write threshold of 
DynamoDB is set too low.: Throttling (Service: S3Guard; Status Code: 503; Error 
Code: Throttling; Request ID: n/a)
 at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.retryBackoffOnBatchWrite(DynamoDBMetadataStore.java:800)
 at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.processBatchWriteRequest(DynamoDBMetadataStore.java:759)
 at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.innerPut(DynamoDBMetadataStore.java:845)
 at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.put(DynamoDBMetadataStore.java:837)
 at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.put(DynamoDBMetadataStore.java:831)
 at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreScale.lambda$test_030_BatchedWrite$0(ITestDynamoDBMetadataStoreScale.java:237)
 at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreScale.lambda$execute$8(ITestDynamoDBMetadataStoreScale.java:428)
 at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
 at java.util.concurrent.FutureTask.run(FutureTask.java)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.AmazonServiceException: Throttling (Service: S3Guard; 
Status Code: 503; Error Code: Throttling; Request ID: n/a)
 at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.retryBackoffOnBatchWrite(DynamoDBMetadataStore.java:791)
 ... 11 more
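
For reference, the usual remedy for a 503 like the one above is a capped 
exponential backoff around the throttled batch write. The sketch below is a 
generic illustration of that shape, not the DynamoDBMetadataStore internals:

final class BackoffRetry {

  private BackoffRetry() {
  }

  // Retry a throttled operation with capped exponential backoff rather than
  // failing after a small fixed retry count.
  static void runWithBackoff(Runnable batchWrite, int maxRetries)
      throws InterruptedException {
    long delayMs = 100;
    for (int attempt = 0; ; attempt++) {
      try {
        batchWrite.run();
        return;
      } catch (RuntimeException throttled) {  // e.g. throughput exceeded
        if (attempt >= maxRetries) {
          throw throttled;
        }
        Thread.sleep(delayMs);
        delayMs = Math.min(delayMs * 2, 10_000);  // cap the backoff
      }
    }
  }
}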

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15847-001.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-07 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15847:
---
Attachment: HADOOP-15847-001.patch

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15847-001.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee reassigned HADOOP-15985:
--

Assignee: lqjacklee

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15985-002.patch, HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.
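
A tiny sketch of the suggested check; sun.misc.Unsafe exposes 
ARRAY_OBJECT_INDEX_SCALE as a public static field, so the reference size can 
be read directly instead of being inferred from the 32/64-bit data model:

import sun.misc.Unsafe;

final class ReferenceScale {

  private ReferenceScale() {
  }

  // 4 => compressed oops (or a 32-bit VM), 8 => uncompressed 64-bit
  // references. Unlike an os.arch check, this tracks the actual layout.
  static int bytesPerReference() {
    return Unsafe.ARRAY_OBJECT_INDEX_SCALE;
  }

  public static void main(String[] args) {
    System.out.println("Bytes per reference: " + bytesPerReference());
  }
}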



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712217#comment-16712217
 ] 

lqjacklee commented on HADOOP-15985:


[~kihwal] Thanks for the reminder, I had submitted the wrong patch. The corrected one is  [^HADOOP-15985-002.patch] 

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
> Attachments: HADOOP-15985-002.patch, HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15985:
---
Attachment: HADOOP-15985-002.patch

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
> Attachments: HADOOP-15985-002.patch, HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15870 started by lqjacklee.
--
> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.
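
A minimal standalone illustration of the fix being proposed; the field names 
mirror S3AInputStream, but this is a sketch, not the patch:

final class PositionedReader {

  private final long contentLength;  // total object length
  private long nextReadPos;          // where the next read will start

  PositionedReader(long contentLength) {
    this.contentLength = contentLength;
  }

  void seek(long pos) {
    nextReadPos = pos;  // lazy seek: only the target position moves
  }

  // Remaining bytes must be measured from the seek target; measuring from
  // the wrapped stream's current position ignores a pending seek.
  long remainingInFile() {
    return contentLength - nextReadPos;
  }
}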



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711527#comment-16711527
 ] 

lqjacklee commented on HADOOP-15985:


[~kihwal] Instead of checking whether the platform is 32- or 64-bit, it 
should check whether Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
> Attachments: HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15870 stopped by lqjacklee.
--
> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15870:
---
Attachment: HADOOP-15870-003.patch

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-12-06 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711522#comment-16711522
 ] 

lqjacklee commented on HADOOP-15870:


 [^HADOOP-15870-003.patch] 

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2018-12-06 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711524#comment-16711524
 ] 

lqjacklee commented on HADOOP-15920:


 [^HADOOP-15870-003.patch] 

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15920:
---
Attachment: HADOOP-15870-003.patch

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


