[jira] [Commented] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-20 Thread Sunil Govindan (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911939#comment-16911939
 ] 

Sunil Govindan commented on HDFS-14729:
---

Apologies [~anu] on that.

I saw the trunk failure in Ozone Manager even without this patch. Hence I thought 
this patch won't impact Ozone and that it's not part of the default build pipeline 
here. I should have dug a bit deeper before committing. Thanks for taking care 
of this. cc [~vivekratnavel]

> Upgrade Bootstrap and jQuery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14729.v1.patch
>
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-20 Thread Sunil Govindan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-14729:
--
Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~vivekratnavel]

> Upgrade Bootstrap and jQuery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14729.v1.patch
>
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|






[jira] [Commented] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-20 Thread Sunil Govindan (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911319#comment-16911319
 ] 

Sunil Govindan commented on HDFS-14729:
---

Thanks [~vivekratnavel].

Makes sense; I am getting this in now.

> Upgrade Bootstrap and jQuery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Attachments: HDFS-14729.v1.patch
>
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|






[jira] [Commented] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-19 Thread Sunil Govindan (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910899#comment-16910899
 ] 

Sunil Govindan commented on HDFS-14729:
---

[~vivekratnavel], please check the Jenkins issues.

> Upgrade Bootstrap and jQuery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Attachments: HDFS-14729.v1.patch
>
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|






[jira] [Commented] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-19 Thread Sunil Govindan (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910549#comment-16910549
 ] 

Sunil Govindan commented on HDFS-14729:
---

+1 on the latest patch.

[~vivekratnavel], could you please submit the patch to trigger Jenkins?

> Upgrade Bootstrap and jQuery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|






[jira] [Updated] (HDFS-13732) ECAdmin should print the policy name when an EC policy is set

2018-11-20 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13732:
--
Fix Version/s: (was: 3.2.0)
   3.2.1

> ECAdmin should print the policy name when an EC policy is set
> -
>
> Key: HDFS-13732
> URL: https://issues.apache.org/jira/browse/HDFS-13732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, tools
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Trivial
> Fix For: 3.2.1
>
> Attachments: EC_Policy.PNG, HDFS-13732.01.patch
>
>
> Scenario:
> If a new policy other than the default EC policy is set for an HDFS 
> directory, then the console message still reads "Set default erasure coding 
> policy on "
> Expected output:
> It would be good if the EC policy name were displayed when the policy is set.
>  
> Actual output:
> Set default erasure coding policy on 
>  
>  
>  
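A minimal sketch of the requested behavior, assuming the command handler can
query the policy back after setting it (the DistributedFileSystem calls are the
real public API; the surrounding class is simplified and not the actual ECAdmin
source):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

class SetPolicySketch {
  // Print the effective policy name instead of the generic "default" wording.
  static void setPolicyVerbose(DistributedFileSystem dfs, Path dir,
      String ecPolicyName) throws IOException {
    dfs.setErasureCodingPolicy(dir, ecPolicyName); // null applies the default
    // Query the policy back so the message names it even for the default case.
    String effective = dfs.getErasureCodingPolicy(dir).getName();
    System.out.println("Set " + effective + " erasure coding policy on " + dir);
  }
}
{code}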






[jira] [Commented] (HDFS-13348) Ozone: Update IP and hostname in Datanode from SCM's response to the register call

2018-11-06 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676777#comment-16676777
 ] 

Sunil Govindan commented on HDFS-13348:
---

Yes. I removed the Hadoop versions from this. Thanks.

> Ozone: Update IP and hostname in Datanode from SCM's response to the register 
> call
> --
>
> Key: HDFS-13348
> URL: https://issues.apache.org/jira/browse/HDFS-13348
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13348-HDFS-7240.000.patch, 
> HDFS-13348-HDFS-7240.001.patch, HDFS-13348-HDFS-7240.002.patch
>
>
> Whenever a Datanode registers with SCM, the SCM resolves the IP address and 
> hostname of the Datanode from the RPC call. This IP address and hostname 
> should be sent back to the Datanode in the response to the register call, and 
> the Datanode has to update the values from the response in its 
> {{DatanodeDetails}}.
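A hedged sketch of the datanode-side handling described above; the two classes
below are simplified stand-ins for the real Ozone types on the HDFS-7240
branch, not the actual source:
{code:java}
// Simplified stand-ins for the Ozone register-response and DatanodeDetails
// types; only the fields relevant to this Jira are modeled.
final class RegisterResponse {
  final String ipAddress;
  final String hostName;
  RegisterResponse(String ipAddress, String hostName) {
    this.ipAddress = ipAddress;
    this.hostName = hostName;
  }
}

final class DatanodeDetails {
  private String ipAddress;
  private String hostName;
  // On a successful register, adopt SCM's resolved view of this datanode's
  // address, as the description above requires.
  void updateFrom(RegisterResponse response) {
    this.ipAddress = response.ipAddress;
    this.hostName = response.hostName;
  }
}
{code}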






[jira] [Updated] (HDFS-13376) Specify minimum GCC version to avoid TLS support error in Build of hadoop-hdfs-native-client

2018-11-06 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13376:
--
Fix Version/s: 3.3.0
   3.2.0

> Specify minimum GCC version to avoid TLS support error in Build of 
> hadoop-hdfs-native-client
> 
>
> Key: HDFS-13376
> URL: https://issues.apache.org/jira/browse/HDFS-13376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation, native
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-13376.001.patch, HDFS-13376.002.patch
>
>
> mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
> -Pdist,native -DskipTests -Dtar
> {noformat}
> [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message):
>  [exec]   FATAL ERROR: The required feature thread_local storage is not 
> supported by
>  [exec]   your compiler.  Known compilers that support this feature: GCC, 
> Visual
>  [exec]   Studio, Clang (community version), Clang (version for iOS 9 and 
> later).
>  [exec]
>  [exec]
>  [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed
>  [exec] -- Configuring incomplete, errors occurred!
> {noformat}
> My environment:
> Linux: Red Hat 4.4.7-3
> cmake: 3.8.2
> java: 1.8.0_131
> gcc: 4.4.7
> maven: 3.5.0
> This seems to be caused by the low gcc version; I will report back after 
> confirming it. 
> Maybe {{BUILDING.txt}} needs an update to state the lowest supported gcc 
> version.






[jira] [Comment Edited] (HDFS-13348) Ozone: Update IP and hostname in Datanode from SCM's response to the register call

2018-11-06 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676745#comment-16676745
 ] 

Sunil Govindan edited comment on HDFS-13348 at 11/6/18 1:18 PM:


[~nandakumar131], please help check the Fix Version.


was (Author: sunilg):
[~nandakumar131], please help check the Fix Version. I updated it to 3.3.0 and 
3.2.0.

> Ozone: Update IP and hostname in Datanode from SCM's response to the register 
> call
> --
>
> Key: HDFS-13348
> URL: https://issues.apache.org/jira/browse/HDFS-13348
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13348-HDFS-7240.000.patch, 
> HDFS-13348-HDFS-7240.001.patch, HDFS-13348-HDFS-7240.002.patch
>
>
> Whenever a Datanode registers with SCM, the SCM resolves the IP address and 
> hostname of the Datanode from the RPC call. This IP address and hostname 
> should be sent back to the Datanode in the response to the register call, and 
> the Datanode has to update the values from the response in its 
> {{DatanodeDetails}}.






[jira] [Updated] (HDFS-13348) Ozone: Update IP and hostname in Datanode from SCM's response to the register call

2018-11-06 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13348:
--
Fix Version/s: (was: 3.3.0)
   (was: 3.2.0)

> Ozone: Update IP and hostname in Datanode from SCM's response to the register 
> call
> --
>
> Key: HDFS-13348
> URL: https://issues.apache.org/jira/browse/HDFS-13348
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13348-HDFS-7240.000.patch, 
> HDFS-13348-HDFS-7240.001.patch, HDFS-13348-HDFS-7240.002.patch
>
>
> Whenever a Datanode registers with SCM, the SCM resolves the IP address and 
> hostname of the Datanode from the RPC call. This IP address and hostname 
> should be sent back to the Datanode in the response to the register call, and 
> the Datanode has to update the values from the response in its 
> {{DatanodeDetails}}.






[jira] [Commented] (HDFS-13348) Ozone: Update IP and hostname in Datanode from SCM's response to the register call

2018-11-06 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676745#comment-16676745
 ] 

Sunil Govindan commented on HDFS-13348:
---

[~nandakumar131], please help check the Fix Version. I updated it to 3.3.0 and 
3.2.0.

> Ozone: Update IP and hostname in Datanode from SCM's response to the register 
> call
> --
>
> Key: HDFS-13348
> URL: https://issues.apache.org/jira/browse/HDFS-13348
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-13348-HDFS-7240.000.patch, 
> HDFS-13348-HDFS-7240.001.patch, HDFS-13348-HDFS-7240.002.patch
>
>
> Whenever a Datanode registers with SCM, the SCM resolves the IP address and 
> hostname of the Datanode from the RPC call. This IP address and hostname 
> should be sent back to the Datanode in the response to the register call, and 
> the Datanode has to update the values from the response in its 
> {{DatanodeDetails}}.






[jira] [Updated] (HDFS-13348) Ozone: Update IP and hostname in Datanode from SCM's response to the register call

2018-11-06 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13348:
--
Fix Version/s: 3.3.0
   3.2.0

> Ozone: Update IP and hostname in Datanode from SCM's response to the register 
> call
> --
>
> Key: HDFS-13348
> URL: https://issues.apache.org/jira/browse/HDFS-13348
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-13348-HDFS-7240.000.patch, 
> HDFS-13348-HDFS-7240.001.patch, HDFS-13348-HDFS-7240.002.patch
>
>
> Whenever a Datanode registers with SCM, the SCM resolves the IP address and 
> hostname of the Datanode from the RPC call. This IP address and hostname 
> should be sent back to the Datanode in the response to the register call, and 
> the Datanode has to update the values from the response in its 
> {{DatanodeDetails}}.






[jira] [Updated] (HDFS-11807) libhdfs++: Get minidfscluster tests running under valgrind

2018-11-06 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-11807:
--
Fix Version/s: 3.3.0
   3.2.0

> libhdfs++: Get minidfscluster tests running under valgrind
> --
>
> Key: HDFS-11807
> URL: https://issues.apache.org/jira/browse/HDFS-11807
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-11807.HDFS-8707.000.patch, 
> HDFS-11807.HDFS-8707.001.patch, HDFS-11807.HDFS-8707.002.patch, 
> HDFS-11807.HDFS-8707.003.patch, HDFS-11807.HDFS-8707.004.patch, 
> HDFS-11807.HDFS-8707.005.patch, HDFS-11807.HDFS-8707.006.patch, 
> HDFS-11807.HDFS-8707.007.patch, HDFS-11807.HDFS-8707.008.patch, 
> HDFS-11807.HDFS-8707.009.patch
>
>
> The gmock-based unit tests generally don't expose race conditions and memory 
> stomps.  A good way to expose these is running libhdfs++ stress tests and 
> tools under valgrind and pointing them at a real cluster.  Right now the CI 
> tools don't do that, so bugs occasionally slip in and aren't caught until they 
> cause trouble in applications that use libhdfs++ for HDFS access.
> The reason the minidfscluster tests don't run under valgrind is that the 
> GC and JIT compiler in the embedded JVM do things that look like errors to 
> valgrind.  I'd like to have these tests do some basic setup and then fork 
> into two processes: one for the minidfscluster stuff and one for the 
> libhdfs++ client test.  A small amount of shared memory can be used to 
> provide a place for the minidfscluster to stick the hdfsBuilder object that 
> the client needs to get info about which port to connect to.  A condition 
> variable can also be placed there to let the minidfscluster know when it can 
> shut down.






[jira] [Commented] (HDFS-13534) libhdfs++: Fix GCC7 build

2018-11-06 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676734#comment-16676734
 ] 

Sunil Govindan commented on HDFS-13534:
---

Updated the Fix Version. [~James C], please help check whether this is correct.

> libhdfs++: Fix GCC7 build
> -
>
> Key: HDFS-13534
> URL: https://issues.apache.org/jira/browse/HDFS-13534
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-13534.000.patch, HDFS-13534.001.patch
>
>
> After merging HDFS-13403, [~pifta] noticed the build broke on some platforms.  
> [~bibinchundatt] pointed out that prior to gcc 7, mutex, future, and regex 
> implicitly included functional.  Without that implicit include, the compiler 
> errors out on the std::function in ioservice.h.






[jira] [Updated] (HDFS-13534) libhdfs++: Fix GCC7 build

2018-11-06 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13534:
--
Fix Version/s: 3.3.0
   3.2.0

> libhdfs++: Fix GCC7 build
> -
>
> Key: HDFS-13534
> URL: https://issues.apache.org/jira/browse/HDFS-13534
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-13534.000.patch, HDFS-13534.001.patch
>
>
> After merging HDFS-13403, [~pifta] noticed the build broke on some platforms.  
> [~bibinchundatt] pointed out that prior to gcc 7, mutex, future, and regex 
> implicitly included functional.  Without that implicit include, the compiler 
> errors out on the std::function in ioservice.h.






[jira] [Commented] (HDFS-13338) Update BUILDING.txt for building native libraries

2018-11-06 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676326#comment-16676326
 ] 

Sunil Govindan commented on HDFS-13338:
---

Hi [~James C],

Could you please review the Fix Versions, which I updated to 3.2.0 and 3.3.0? 
Please correct them if this patch was applied to other branches as well. Thanks.

> Update BUILDING.txt for building native libraries
> -
>
> Key: HDFS-13338
> URL: https://issues.apache.org/jira/browse/HDFS-13338
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: build, documentation, native
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-13338.1.patch
>
>
> mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
> -Pdist,native -DskipTests -Dtar
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1
> [ERROR] around Ant part ...<exec dir="/.../hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" 
> executable="cmake">... @ 5:119 in 
> /.../hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1
> around Ant part ...<exec dir="/root/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" 
> executable="cmake">... @ 5:119 in 
> /root/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:213)
> {noformat}






[jira] [Commented] (HDFS-13403) libhdfs++: Use hdfs::IoService object rather than asio::io_service

2018-11-06 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676325#comment-16676325
 ] 

Sunil Govindan commented on HDFS-13403:
---

Hi [~James C], please help check the Fix Version.

> libhdfs++: Use hdfs::IoService object rather than asio::io_service
> --
>
> Key: HDFS-13403
> URL: https://issues.apache.org/jira/browse/HDFS-13403
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Critical
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-13403.000.patch, build_fixes.patch
>
>
> At the moment the hdfs::IoService is a simple wrapper over asio's io_service 
> object.  I'd like to make this smarter and have it do things like track which 
> tasks are queued, validate that dependencies of tasks exist, and monitor 
> ioservice throughput and contention.  In order to get there we need to have 
> all components in the library go through the hdfs::IoService rather than 
> directly interacting with the asio::io_service.  The only time the 
> asio::io_service should be used is when calling things like asio::async_write 
> that need an io_service&.  HDFS-11884 will be able to get rid of those 
> remaining instances once this work is in place.






[jira] [Updated] (HDFS-13338) Update BUILDING.txt for building native libraries

2018-11-06 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13338:
--
Fix Version/s: 3.3.0
   3.2.0

> Update BUILDING.txt for building native libraries
> -
>
> Key: HDFS-13338
> URL: https://issues.apache.org/jira/browse/HDFS-13338
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: build, documentation, native
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-13338.1.patch
>
>
> mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
> -Pdist,native -DskipTests -Dtar
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1
> [ERROR] around Ant part ...<exec dir="/.../hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" 
> executable="cmake">... @ 5:119 in 
> /.../hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1
> around Ant part ...<exec dir="/root/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" 
> executable="cmake">... @ 5:119 in 
> /root/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:213)
> {noformat}






[jira] [Updated] (HDFS-13403) libhdfs++: Use hdfs::IoService object rather than asio::io_service

2018-11-06 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13403:
--
Fix Version/s: 3.3.0
   3.2.0

> libhdfs++: Use hdfs::IoService object rather than asio::io_service
> --
>
> Key: HDFS-13403
> URL: https://issues.apache.org/jira/browse/HDFS-13403
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Critical
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-13403.000.patch, build_fixes.patch
>
>
> At the moment the hdfs::IoService is a simple wrapper over asio's io_service 
> object.  I'd like to make this smarter and have it do things like track which 
> tasks are queued, validate that dependencies of tasks exist, and monitor 
> ioservice throughput and contention.  In order to get there we need to have 
> all components in the library go through the hdfs::IoService rather than 
> directly interacting with the asio::io_service.  The only time the 
> asio::io_service should be used is when calling things like asio::async_write 
> that need an io_service&.  HDFS-11884 will be able to get rid of those 
> remaining instances once this work is in place.






[jira] [Commented] (HDFS-13299) RBF : Fix compilation error in branch-2 (TestMultipleDestinationResolver)

2018-11-06 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676323#comment-16676323
 ] 

Sunil Govindan commented on HDFS-13299:
---

[~brahmareddy] [~elgoiri], could you please help update the correct Fix Version?

> RBF : Fix compilation error in branch-2 (TestMultipleDestinationResolver)
> -
>
> Key: HDFS-13299
> URL: https://issues.apache.org/jira/browse/HDFS-13299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-13299-branch-2-002.patch, HDFS-13299-branch-2.patch
>
>
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure: Compilation failure:
> [ERROR] 
> /D:/branch-2/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/order/TestLocalResolver.java:[84,16] local variable sb is accessed from within inner class; needs to be declared final
> [ERROR] 
> /D:/branch-2/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMultipleDestinationResolver.java:[391,27] incompatible types: java.util.TreeSet cannot be converted to java.util.Set
> {noformat}
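Both failures have mechanical branch-2 (Java 7) fixes; a hedged sketch follows
(the generic type arguments in the second error were lost in this archive, so
the element types below are illustrative):
{code:java}
import java.util.Set;
import java.util.TreeSet;

class Branch2CompileFixSketch {
  void sketch() {
    // Java 7 requires locals captured by anonymous inner classes to be final.
    final StringBuilder sb = new StringBuilder();
    Runnable r = new Runnable() {
      @Override
      public void run() {
        sb.append("ok");
      }
    };
    r.run();

    // For the TreeSet-to-Set mismatch, make the declared type arguments on
    // both sides of the assignment agree (illustrative element type).
    Set<String> sorted = new TreeSet<String>();
    sorted.add(sb.toString());
  }
}
{code}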






[jira] [Updated] (HDFS-13299) RBF : Fix compilation error in branch-2 (TestMultipleDestinationResolver)

2018-11-06 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13299:
--
Fix Version/s: (was: 3.2.0)

> RBF : Fix compilation error in branch-2 (TestMultipleDestinationResolver)
> -
>
> Key: HDFS-13299
> URL: https://issues.apache.org/jira/browse/HDFS-13299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-13299-branch-2-002.patch, HDFS-13299-branch-2.patch
>
>
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure: Compilation failure:
> [ERROR] 
> /D:/branch-2/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/order/TestLocalResolver.java:[84,16] local variable sb is accessed from within inner class; needs to be declared final
> [ERROR] 
> /D:/branch-2/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMultipleDestinationResolver.java:[391,27] incompatible types: java.util.TreeSet cannot be converted to java.util.Set
> {noformat}






[jira] [Updated] (HDFS-13299) RBF : Fix compilation error in branch-2 (TestMultipleDestinationResolver)

2018-11-06 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13299:
--
Fix Version/s: 3.2.0

> RBF : Fix compilation error in branch-2 (TestMultipleDestinationResolver)
> -
>
> Key: HDFS-13299
> URL: https://issues.apache.org/jira/browse/HDFS-13299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HDFS-13299-branch-2-002.patch, HDFS-13299-branch-2.patch
>
>
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure: Compilation failure:
> [ERROR] 
> /D:/branch-2/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/order/TestLocalResolver.java:[84,16] local variable sb is accessed from within inner class; needs to be declared final
> [ERROR] 
> /D:/branch-2/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMultipleDestinationResolver.java:[391,27] incompatible types: java.util.TreeSet cannot be converted to java.util.Set
> {noformat}






[jira] [Commented] (HDFS-1915) fuse-dfs does not support append

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676220#comment-16676220
 ] 

Sunil Govindan commented on HDFS-1915:
--

Removing the Fix Version as this task is still ongoing.

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-1915.001.patch, HDFS-1915.002.patch, 
> HDFS-1915.003.patch, HDFS-1915.004.patch
>
>
> Environment: Cloudera CDH3, EC2 cluster with 2 data nodes and 1 name 
> node (using Ubuntu 10.04 LTS large instances), mounted HDFS in the OS using 
> fuse-dfs. 
> I am able to do HDFS fs -put, but when I try to use an FTP client (FTP PUT) 
> to do the same, I get the following error. I am using vsFTPd on the server.
> I changed the mounted folder permissions to a+w to rule out any WRITE 
> permission issues. I was able to do an FTP GET on the same mounted 
> volume.
> Please advise.
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in Namenode Log (I did a ftp GET on counter.txt and PUT with 
> counter1.txt) 
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser 
> ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root 
> ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root 
> ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser 
> ip=/10.32.77.36 cmd=open src=/upload/counter.txt dst=null perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to 
> non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.
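The namenode log above shows the write path calling append() on a file that
does not yet exist. A hedged client-side illustration of that failure mode
(this is not the fuse-dfs source; it only demonstrates the append semantics):
{code:java}
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path target = new Path("/upload/counter1.txt");
    // append() throws FileNotFoundException when the target is missing, so a
    // PUT-style write has to create() first and append() only thereafter.
    try (OutputStream out =
        fs.exists(target) ? fs.append(target) : fs.create(target)) {
      out.write("data".getBytes("UTF-8"));
    }
  }
}
{code}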






[jira] [Updated] (HDFS-1915) fuse-dfs does not support append

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-1915:
-
Fix Version/s: (was: 3.2.0)

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-1915.001.patch, HDFS-1915.002.patch, 
> HDFS-1915.003.patch, HDFS-1915.004.patch
>
>
> Environment: Cloudera CDH3, EC2 cluster with 2 data nodes and 1 name 
> node (using Ubuntu 10.04 LTS large instances), mounted HDFS in the OS using 
> fuse-dfs. 
> I am able to do HDFS fs -put, but when I try to use an FTP client (FTP PUT) 
> to do the same, I get the following error. I am using vsFTPd on the server.
> I changed the mounted folder permissions to a+w to rule out any WRITE 
> permission issues. I was able to do an FTP GET on the same mounted 
> volume.
> Please advise.
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in Namenode Log (I did a ftp GET on counter.txt and PUT with 
> counter1.txt) 
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser 
> ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root 
> ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root 
> ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser 
> ip=/10.32.77.36 cmd=open src=/upload/counter.txt dst=null perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to 
> non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.






[jira] [Updated] (HDFS-12995) [SPS] : Merge work for HDFS-10285 branch

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-12995:
--
Fix Version/s: (was: 3.2.0)
   (was: HDFS-10285)

> [SPS] : Merge work for HDFS-10285 branch
> 
>
> Key: HDFS-12995
> URL: https://issues.apache.org/jira/browse/HDFS-12995
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Rakesh R
>Priority: Major
> Attachments: HDFS-10285-consolidated-merge-patch-01.patch
>
>
> This Jira is to run the aggregated HDFS-10285 branch patch against trunk and 
> check for any Jenkins issues.






[jira] [Commented] (HDFS-12995) [SPS] : Merge work for HDFS-10285 branch

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676219#comment-16676219
 ] 

Sunil Govindan commented on HDFS-12995:
---

Removing the Fix Version from this task, as it is closed as "Information 
Provided".

> [SPS] : Merge work for HDFS-10285 branch
> 
>
> Key: HDFS-12995
> URL: https://issues.apache.org/jira/browse/HDFS-12995
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Rakesh R
>Priority: Major
> Attachments: HDFS-10285-consolidated-merge-patch-01.patch
>
>
> This Jira is to run the aggregated HDFS-10285 branch patch against trunk and 
> check for any Jenkins issues.






[jira] [Reopened] (HDFS-13084) [SPS]: Fix the branch review comments

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan reopened HDFS-13084:
---

As there are no patches associated with this task and the same comments are 
handled by other Jiras, reopening this Jira to close it with the correct reason.

> [SPS]: Fix the branch review comments
> -
>
> Key: HDFS-13084
> URL: https://issues.apache.org/jira/browse/HDFS-13084
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Rakesh R
>Priority: Major
>
> Fix the review comments provided by [~daryn]
>  






[jira] [Resolved] (HDFS-13084) [SPS]: Fix the branch review comments

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan resolved HDFS-13084.
---
   Resolution: Won't Fix
Fix Version/s: (was: 3.2.0)
   (was: HDFS-10285)

> [SPS]: Fix the branch review comments
> -
>
> Key: HDFS-13084
> URL: https://issues.apache.org/jira/browse/HDFS-13084
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Rakesh R
>Priority: Major
>
> Fix the review comments provided by [~daryn]
>  






[jira] [Commented] (HDFS-13186) [PROVIDED Phase 2] Multipart Uploader API

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676211#comment-16676211
 ] 

Sunil Govindan commented on HDFS-13186:
---

For future RMs' reference: the commit message for this Jira references 
HADOOP-13186 instead of HDFS-13186:
{code:java}
Author: Chris Douglas 
Date: Sun Jun 17 11:54:26 2018 -0700

 HADOOP-13186. Multipart Uploader API. Contributed by Ewan Higgs{code}

> [PROVIDED Phase 2] Multipart Uploader API
> -
>
> Key: HDFS-13186
> URL: https://issues.apache.org/jira/browse/HDFS-13186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch, 
> HDFS-13186.003.patch, HDFS-13186.004.patch, HDFS-13186.005.patch, 
> HDFS-13186.006.patch, HDFS-13186.007.patch, HDFS-13186.008.patch, 
> HDFS-13186.009.patch, HDFS-13186.010.patch
>
>
> To write files in parallel to an external storage system as in HDFS-12090, 
> there are two approaches:
>  # Naive approach: use a single datanode per file that copies blocks locally 
> as it streams data to the external service. This requires a copy for each 
> block inside the HDFS system and then a copy for the block to be sent to the 
> external system.
>  # Better approach: Single point (e.g. Namenode or SPS style external client) 
> and Datanodes coordinate in a multipart - multinode upload.
> This system needs to work with multiple back ends and needs to coordinate 
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
> public PartHandle multipartPutPart(InputStream inputStream,
> int partNumber, UploadHandle uploadId) throws IOException;
> public void multipartComplete(Path filePath,
> List<Pair<Integer, PartHandle>> handles, 
> UploadHandle multipartUploadId) throws IOException;{code}
> Here, UploadHandle and PartHandle are opaque handles in the vein of 
> PathHandle so they can be serialized and deserialized in the hadoop-hdfs 
> project without knowledge of how to deserialize e.g. S3A's version of an 
> UploadHandle and PartHandle.
> In an object store such as S3A, the implementation is straightforward. In 
> the case of writing multipart/multinode to HDFS, we can write each block as a 
> file part. The complete call will perform a concat on the blocks.
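A hedged usage sketch of the API exactly as proposed above; the uploader
interface and the Pair type (commons-lang3 style) are assumptions taken from
the snippet, not the final surface committed under HADOOP-13186:
{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
import org.apache.commons.lang3.tuple.Pair;
import org.apache.hadoop.fs.Path;

class MultipartUploadSketch {
  // "uploader" stands in for an implementation of the three proposed methods.
  static void upload(Uploader uploader, Path path,
      List<InputStream> partStreams) throws IOException {
    UploadHandle upload = uploader.multipartInit(path);
    List<Pair<Integer, PartHandle>> parts = new ArrayList<>();
    int partNumber = 1;
    for (InputStream partStream : partStreams) {
      // Handles are opaque, so they can be collected and serialized freely.
      parts.add(Pair.of(partNumber,
          uploader.multipartPutPart(partStream, partNumber, upload)));
      partNumber++;
    }
    uploader.multipartComplete(path, parts, upload);
  }

  // Minimal stand-in declarations so the sketch is self-contained.
  interface UploadHandle {}
  interface PartHandle {}
  interface Uploader {
    UploadHandle multipartInit(Path filePath) throws IOException;
    PartHandle multipartPutPart(InputStream in, int partNumber,
        UploadHandle uploadId) throws IOException;
    void multipartComplete(Path filePath,
        List<Pair<Integer, PartHandle>> handles,
        UploadHandle multipartUploadId) throws IOException;
  }
}
{code}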






[jira] [Commented] (HDFS-13766) HDFS Classes used for implementation of Multipart uploads to move to hadoop-common JAR

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676209#comment-16676209
 ] 

Sunil Govindan commented on HDFS-13766:
---

Removing the Fix Version from this duplicate Jira, as the original 
HADOOP-15576 has the correct Fix Version set.

> HDFS Classes used for implementation of Multipart uploads to move to 
> hadoop-common JAR
> --
>
> Key: HDFS-13766
> URL: https://issues.apache.org/jira/browse/HDFS-13766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
>
> The multipart upload API uses classes which are only in 
> {{hadoop-hdfs-client}}.
> These need to be moved to hadoop-common so that cloud deployments which don't 
> have the hdfs-client JAR on their classpath (HD/I, possibly Google Dataproc) 
> can implement and use the API.
> Sorry.






[jira] [Updated] (HDFS-13766) HDFS Classes used for implementation of Multipart uploads to move to hadoop-common JAR

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13766:
--
Fix Version/s: (was: 3.2.0)

> HDFS Classes used for implementation of Multipart uploads to move to 
> hadoop-common JAR
> --
>
> Key: HDFS-13766
> URL: https://issues.apache.org/jira/browse/HDFS-13766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
>
> The multipart upload API uses classes which are only in 
> {{hadoop-hdfs-client}}.
> These need to be moved to hadoop-common so that cloud deployments which don't 
> have the hdfs-client JAR on their classpath (HD/I, possibly Google Dataproc) 
> can implement and use the API.
> Sorry.






[jira] [Commented] (HDFS-13814) Remove super user privilege requirement for NameNode.getServiceStatus

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676208#comment-16676208
 ] 

Sunil Govindan commented on HDFS-13814:
---

For future RMs' reference: the commit message omitted the Jira ID.
{code:java}
Author: Chao Sun 
Date: Fri Aug 10 15:59:39 2018 -0700

 Remove super user privilege requirement for NameNode.getServiceStatus. 
Contributed by Chao Sun.{code}

> Remove super user privilege requirement for NameNode.getServiceStatus
> -
>
> Key: HDFS-13814
> URL: https://issues.apache.org/jira/browse/HDFS-13814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13814.000.patch
>
>
> See details in the discussion of HDFS-13749. Currently 
> {{NameNode#getServiceStatus}} requires the superuser privilege, which doesn't 
> seem necessary. For comparison, neither {{DFSAdmin#report}} nor 
> {{SAFEMODE_GET}} requires the superuser privilege.
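A hedged before/after sketch of the change's shape; the enclosing
NameNodeRpcServer class and the exact surrounding code are assumptions, not
the committed patch:
{code:java}
// Sketch of NameNodeRpcServer#getServiceStatus (enclosing class elided):
// the method is a read-only HA status probe, so the superuser check can be
// dropped, matching the behavior of DFSAdmin#report and SAFEMODE_GET.
public HAServiceStatus getServiceStatus() throws IOException {
  // checkSuperuserPrivilege();  // removed: status is not privileged data
  return nn.getServiceStatus();
}
{code}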






[jira] [Commented] (HDFS-13882) Set a maximum delay for retrying locateFollowingBlock

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676207#comment-16676207
 ] 

Sunil Govindan commented on HDFS-13882:
---

Hi [~xiaochen],

Updating the Fix Version to 3.3.0, as this is fixed but did not land on 
branch-3.2/branch-3.2.0, which was closed for RC prep. Please let me know if 
there are any issues.

> Set a maximum delay for retrying locateFollowingBlock
> -
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch, 
> HDFS-13882.003.patch, HDFS-13882.004.patch, HDFS-13882.005.patch
>
>
> More and more, we are seeing cases where customers run into the Java IO 
> exception "Unable to close file because the last block does not have 
> enough number of replicas" on client file closure. The common workaround is 
> to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.
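Since the workaround is a plain client-side configuration change, a minimal
sketch of applying it programmatically (the key is the one named above; the
same value can equally go in hdfs-site.xml):
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

class RetryConfigSketch {
  static FileSystem clientWithRaisedRetries() throws IOException {
    Configuration conf = new Configuration();
    // Double the retry budget used while waiting for the last block's
    // replicas, per the workaround described above.
    conf.setInt("dfs.client.block.write.locateFollowingBlock.retries", 10);
    return FileSystem.get(conf);
  }
}
{code}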






[jira] [Updated] (HDFS-13882) Set a maximum delay for retrying locateFollowingBlock

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13882:
--
Fix Version/s: (was: 3.2.0)
   3.3.0

> Set a maximum delay for retrying locateFollowingBlock
> -
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch, 
> HDFS-13882.003.patch, HDFS-13882.004.patch, HDFS-13882.005.patch
>
>
> More and more, we are seeing cases where customers run into the Java IO 
> exception "Unable to close file because the last block does not have 
> enough number of replicas" on client file closure. The common workaround is 
> to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.






[jira] [Commented] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676205#comment-16676205
 ] 

Sunil Govindan commented on HDFS-13941:
---

Updating the Fix Version to 3.2.1, as this fix did not land on branch-3.2.0, 
which was closed for RC prep.

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch, 
> HDFS-13941.02.patch, HDFS-13941.branch-3.0.001.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.
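The restore can be a thin delegating overload; a hedged sketch, in which the
parameter list is illustrative and simplified rather than the exact signature
touched by HDFS-9807, and the enclosing class is elided:
{code:java}
// Hypothetical compatibility shim inside BlockPoolTokenSecretManager: keep
// the pre-HDFS-9807 arity and forward, treating the storageId as optional.
public void checkAccess(Token<BlockTokenIdentifier> token, String userId,
    ExtendedBlock block, BlockTokenIdentifier.AccessMode mode)
    throws InvalidToken {
  checkAccess(token, userId, block, mode, null /* storageId now optional */);
}
{code}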






[jira] [Updated] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13941:
--
Fix Version/s: (was: 3.2.0)
   3.2.1

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch, 
> HDFS-13941.02.patch, HDFS-13941.branch-3.0.001.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13973) getErasureCodingPolicy should log path in audit event

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676204#comment-16676204
 ] 

Sunil Govindan commented on HDFS-13973:
---

Updating Fix Version to 3.2.1 as this fix was not landed on branch-3.2.0, 
which was closed for RC prep.

> getErasureCodingPolicy should log path in audit event
> -
>
> Key: HDFS-13973
> URL: https://issues.apache.org/jira/browse/HDFS-13973
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: HDFS-13973.001.patch, HDFS-13973.002.patch
>
>
> Value for the 'src' field is missing from the audit events for 
> getErasureCodingPolicy().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13973) getErasureCodingPolicy should log path in audit event

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13973:
--
Fix Version/s: (was: 3.2.0)
   3.2.1

> getErasureCodingPolicy should log path in audit event
> -
>
> Key: HDFS-13973
> URL: https://issues.apache.org/jira/browse/HDFS-13973
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: HDFS-13973.001.patch, HDFS-13973.002.patch
>
>
> Value for the 'src' field is missing from the audit events for 
> getErasureCodingPolicy().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14028) HDFS OIV temporary dir deletes folder

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676199#comment-16676199
 ] 

Sunil Govindan commented on HDFS-14028:
---

Updating Fix Version to 3.2.1 as this fix was not landed on branch-3.2.0, 
which was closed for RC prep.

> HDFS OIV temporary dir deletes folder
> -
>
> Key: HDFS-14028
> URL: https://issues.apache.org/jira/browse/HDFS-14028
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-14028.001.patch
>
>
> The Hadoop Offline Image Viewer tool has an undocumented 'feature' where it 
> will silently delete the directory passed in with the -t flag. This blew away 
> some important files when someone made a sensible, but ultimately poor, choice 
> for this directory, as the deletion isn't documented.
> For example, if someone were, as root, to run: 'hdfs oiv -i 
> fsimage_000307052343 -p Delimited -t / -o image', bad things would 
> happen. This behavior should be documented and should probably prompt for 
> confirmation or throw an exception.
> There is a piece of code from PBImageTextWriter where a check can be added:
> {code:java}
> LevelDBMetadataMap(String baseDir) throws IOException {
>   File dbDir = new File(baseDir);
>   if (dbDir.exists()) {
>     FileUtils.deleteDirectory(dbDir);
>   }
>   ...
> {code}
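
One possible shape of the guard the description asks for, as a sketch: refuse to touch an existing, non-empty directory instead of deleting it. The helper name is illustrative, not the actual patch:

{code:java}
import java.io.File;
import java.io.IOException;

class TempDirGuard {
  // Fail fast instead of silently deleting whatever was passed to -t.
  static void requireSafeTempDir(String baseDir) throws IOException {
    File dbDir = new File(baseDir);
    String[] children = dbDir.list();
    if (dbDir.exists() && children != null && children.length > 0) {
      throw new IOException("Temporary dir " + dbDir + " exists and is not"
          + " empty; refusing to delete it. Pass an empty or non-existent"
          + " directory to -t.");
    }
  }
}
{code}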



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14028) HDFS OIV temporary dir deletes folder

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-14028:
--
Fix Version/s: (was: 3.2.0)
   3.2.1

> HDFS OIV temporary dir deletes folder
> -
>
> Key: HDFS-14028
> URL: https://issues.apache.org/jira/browse/HDFS-14028
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-14028.001.patch
>
>
> The Hadoop Offline Image Viewer tool has an undocumented 'feature' where it 
> will silently delete the directory passed in with the -t flag. This blew away 
> some important files when someone made a sensible, but ultimately poor, choice 
> for this directory, as the deletion isn't documented.
> For example, if someone were, as root, to run: 'hdfs oiv -i 
> fsimage_000307052343 -p Delimited -t / -o image', bad things would 
> happen. This behavior should be documented and should probably prompt for 
> confirmation or throw an exception.
> There is a piece of code from PBImageTextWriter where a check can be added:
> {code:java}
> LevelDBMetadataMap(String baseDir) throws IOException {
>   File dbDir = new File(baseDir);
>   if (dbDir.exists()) {
>     FileUtils.deleteDirectory(dbDir);
>   }
>   ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-31 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan resolved HDFS-12026.
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   3.2.0

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Blocker
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-31 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671035#comment-16671035
 ] 

Sunil Govindan commented on HDFS-12026:
---

HDFS-14033 is committed. Closing this.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Blocker
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14033) [libhdfs++] Disable libhdfs++ build on systems that do not support thread_local

2018-10-31 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669689#comment-16669689
 ] 

Sunil Govindan commented on HDFS-14033:
---

Committed to trunk/branch-3.2/3.2.0

Thanks to all of you for helping.

> [libhdfs++] Disable libhdfs++ build on systems that do not support 
> thread_local
> ---
>
> Key: HDFS-14033
> URL: https://issues.apache.org/jira/browse/HDFS-14033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-14033.000.patch, HDFS-14033.001.patch
>
>
> In order to still be able to build Hadoop on older systems (such as RHEL 6), 
> we need to disable libhdfs++ build on systems that do not support 
> thread_local. We should also emit a warning saying libhdfs++ wasn't built.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14033) [libhdfs++] Disable libhdfs++ build on systems that do not support thread_local

2018-10-31 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-14033:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   3.2.0
   Status: Resolved  (was: Patch Available)

> [libhdfs++] Disable libhdfs++ build on systems that do not support 
> thread_local
> ---
>
> Key: HDFS-14033
> URL: https://issues.apache.org/jira/browse/HDFS-14033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-14033.000.patch, HDFS-14033.001.patch
>
>
> In order to still be able to build Hadoop on older systems (such as RHEL 6), 
> we need to disable libhdfs++ build on systems that do not support 
> thread_local. We should also emit a warning saying libhdfs++ wasn't built.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14033) [libhdfs++] Disable libhdfs++ build on systems that do not support thread_local

2018-10-31 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669687#comment-16669687
 ] 

Sunil Govindan commented on HDFS-14033:
---

Thanks [~James C] for confirming the patch is good. I am going ahead and 
committing it.

Thanks.

> [libhdfs++] Disable libhdfs++ build on systems that do not support 
> thread_local
> ---
>
> Key: HDFS-14033
> URL: https://issues.apache.org/jira/browse/HDFS-14033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-14033.000.patch, HDFS-14033.001.patch
>
>
> In order to still be able to build Hadoop on older systems (such as RHEL 6), 
> we need to disable libhdfs++ build on systems that do not support 
> thread_local. We should also emit a warning saying libhdfs++ wasn't built.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14033) [libhdfs++] Disable libhdfs++ build on systems that do not support thread_local

2018-10-30 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669066#comment-16669066
 ] 

Sunil Govindan commented on HDFS-14033:
---

If there are no major concerns with this approach, I could help get this in by 
this evening.

I would really appreciate a review here as it is specific to a few compilers. 
As far as I can tell, the changes look clean.

Thanks

[~vagarychen] [~shv] [~msingh] [~vinayrpet] [~rakeshr]

> [libhdfs++] Disable libhdfs++ build on systems that do not support 
> thread_local
> ---
>
> Key: HDFS-14033
> URL: https://issues.apache.org/jira/browse/HDFS-14033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-14033.000.patch, HDFS-14033.001.patch
>
>
> In order to still be able to build Hadoop on older systems (such as RHEL 6), 
> we need to disable libhdfs++ build on systems that do not support 
> thread_local. We should also emit a warning saying libhdfs++ wasn't built.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14033) [libhdfs++] Disable libhdfs++ build on systems that do not support thread_local

2018-10-30 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668512#comment-16668512
 ] 

Sunil Govindan commented on HDFS-14033:
---

cc [~msingh]

> [libhdfs++] Disable libhdfs++ build on systems that do not support 
> thread_local
> ---
>
> Key: HDFS-14033
> URL: https://issues.apache.org/jira/browse/HDFS-14033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-14033.000.patch, HDFS-14033.001.patch
>
>
> In order to still be able to build Hadoop on older systems (such as RHEL 6), 
> we need to disable libhdfs++ build on systems that do not support 
> thread_local. We should also emit a warning saying libhdfs++ wasn't built.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14033) [libhdfs++] Disable libhdfs++ build on systems that do not support thread_local

2018-10-30 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668240#comment-16668240
 ] 

Sunil Govindan commented on HDFS-14033:
---

[~vagarychen] and [~shv]

I think the latest patch covers the case you mentioned. Could you please 
check it? Thanks.

cc [~vinayrpet]

> [libhdfs++] Disable libhdfs++ build on systems that do not support 
> thread_local
> ---
>
> Key: HDFS-14033
> URL: https://issues.apache.org/jira/browse/HDFS-14033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-14033.000.patch, HDFS-14033.001.patch
>
>
> In order to still be able to build Hadoop on older systems (such as RHEL 6), 
> we need to disable libhdfs++ build on systems that do not support 
> thread_local. We should also emit a warning saying libhdfs++ wasn't built.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14033) [libhdfs++] Disable libhdfs++ build on systems that do not support thread_local

2018-10-29 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667459#comment-16667459
 ] 

Sunil Govindan commented on HDFS-14033:
---

[~anatoli.shein], are the test case failures related?

> [libhdfs++] Disable libhdfs++ build on systems that do not support 
> thread_local
> ---
>
> Key: HDFS-14033
> URL: https://issues.apache.org/jira/browse/HDFS-14033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-14033.000.patch, HDFS-14033.001.patch
>
>
> In order to still be able to build Hadoop on older systems (such as RHEL 6), 
> we need to disable libhdfs++ build on systems that do not support 
> thread_local. We should also emit a warning saying libhdfs++ wasn't built.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14033) [libhdfs++] Disable libhdfs++ build on systems that do not support thread_local

2018-10-29 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667323#comment-16667323
 ] 

Sunil Govindan commented on HDFS-14033:
---

[~James C]  [~vagarychen] and [~shv]

Could you please help review this?

> [libhdfs++] Disable libhdfs++ build on systems that do not support 
> thread_local
> ---
>
> Key: HDFS-14033
> URL: https://issues.apache.org/jira/browse/HDFS-14033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-14033.000.patch
>
>
> In order to still be able to build Hadoop on older systems (such as RHEL 6), 
> we need to disable libhdfs++ build on systems that do not support 
> thread_local. We should also emit a warning saying libhdfs++ wasn't built.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-28 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1462#comment-1462
 ] 

Sunil Govindan commented on HDFS-12026:
---

Yes, I echo the same.

[~anatoli.shein] [~James C], is it possible to handle Konstantin's comments by 
disabling libhdfs++ for those versions of gcc/glibc?

 

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Blocker
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-28 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1397#comment-1397
 ] 

Sunil Govindan commented on HDFS-12026:
---

ping [~anatoli.shein] [~James C]

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Blocker
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-26 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16665447#comment-16665447
 ] 

Sunil Govindan commented on HDFS-12026:
---

Thanks [~anatoli.shein]. This seems fine. I will wait for the thoughts of 
[~shv] and [~vagarychen].

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Blocker
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-26 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16665289#comment-16665289
 ] 

Sunil Govindan commented on HDFS-12026:
---

Hi [~anatoli.shein],

Thanks for the suggestion. What will be the impact on RHEL 6 if libhdfs++ is not 
built? Will it cause any upgrade issues later?

If the impact is small, I am fine with this approach. [~shv] [~vagarychen], 
could you please share your thoughts as well?

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Blocker
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-26 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16664609#comment-16664609
 ] 

Sunil Govindan edited comment on HDFS-12026 at 10/26/18 1:00 PM:
-

Thanks [~anatoli.shein] [~James C] and [~vagarychen].

Yes, I understand that if this is a base patch and more patches went in on top, 
it will be tougher. [~vagarychen] has shared the error now. Could you please 
help check what can be done here? Please let me know if any help is needed.

This is the last blocker for the 3.2.0 release, which is now delayed by a few 
weeks. Thanks for the support.


was (Author: sunilg):
Thanks [~anatoli.shein] [~James C] and [~vagarychen].

Yes, I understand that if this is a base patch and more patches went in on top, 
it will be tougher. [~vagarychen] has shared the error now. Could you please 
help check what can be done here?

This is the last blocker for the 3.2.0 release, which is now delayed by a few 
weeks. Thanks for the support.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Blocker
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8707) Implement an async pure c++ HDFS client

2018-10-26 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16665132#comment-16665132
 ] 

Sunil Govindan commented on HDFS-8707:
--

{quote} should all subtasks that were committed on the HDFS-8707 branch prior 
to merging to trunk be marked with the same fix version as HDFS-8707 since 
that's when they were resolved on trunk?
{quote}
Thank you. Post merge to trunk, 3.2.0 is the first major release; hence all 
sub-tickets can be marked as 3.2.0 instead of HDFS-8707.

> Implement an async pure c++ HDFS client
> ---
>
> Key: HDFS-8707
> URL: https://issues.apache.org/jira/browse/HDFS-8707
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs-client
>Reporter: Owen O'Malley
>Assignee: James Clampffer
>Priority: Major
> Fix For: 3.2.0
>
>
> As part of working on the C++ ORC reader at ORC-3, we need an HDFS pure C++ 
> client that lets us do async io to HDFS. We want to start from the code that 
> Haohui's been working on at https://github.com/haohui/libhdfspp .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8707) Implement an async pure c++ HDFS client

2018-10-26 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16665059#comment-16665059
 ] 

Sunil Govindan commented on HDFS-8707:
--

Thanks for merging this in.

Though HDFS-8707 is merged, the subtasks were not all updated with the fix 
version, which is 3.2.0. [~James C], could you please help with this? I will 
set 3.2.0 as the fix version on this major Jira so that it shows up as a 
feature in the 3.2.0 release notes.

Also, it would be better to move all running and pending tasks under a new 
phase 2 Jira. In that case, this Jira can be closed, which makes tracking 
easier.

Thanks.

> Implement an async pure c++ HDFS client
> ---
>
> Key: HDFS-8707
> URL: https://issues.apache.org/jira/browse/HDFS-8707
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs-client
>Reporter: Owen O'Malley
>Assignee: James Clampffer
>Priority: Major
> Fix For: 3.2.0
>
>
> As part of working on the C++ ORC reader at ORC-3, we need an HDFS pure C++ 
> client that lets us do async io to HDFS. We want to start from the code that 
> Haohui's been working on at https://github.com/haohui/libhdfspp .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8707) Implement an async pure c++ HDFS client

2018-10-26 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-8707:
-
Fix Version/s: 3.2.0

> Implement an async pure c++ HDFS client
> ---
>
> Key: HDFS-8707
> URL: https://issues.apache.org/jira/browse/HDFS-8707
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs-client
>Reporter: Owen O'Malley
>Assignee: James Clampffer
>Priority: Major
> Fix For: 3.2.0
>
>
> As part of working on the C++ ORC reader at ORC-3, we need an HDFS pure C++ 
> client that lets us do async io to HDFS. We want to start from the code that 
> Haohui's been working on at https://github.com/haohui/libhdfspp .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12134) libhdfs++: Add a synchronization interface for the GSSAPI

2018-10-26 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16665057#comment-16665057
 ] 

Sunil Govindan commented on HDFS-12134:
---

Hi [~James C],

Please set the fix version while committing the patch. HDFS-8707 seems to be 
the fix version for this. Correct?

> libhdfs++: Add a synchronization interface for the GSSAPI
> -
>
> Key: HDFS-12134
> URL: https://issues.apache.org/jira/browse/HDFS-12134
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Major
> Attachments: HDFS-12134.HDFS-8707.000.patch, 
> HDFS-12134.HDFS-8707.001.patch, HDFS-12134.HDFS-8707.002.patch, 
> HDFS-12134.HDFS-8707.003.patch, HDFS-12134.HDFS-8707.004.patch
>
>
> Bits of the GSSAPI that Cyrus Sasl uses aren't thread safe.  There needs to 
> be a way for a client application to share a lock with this library in order 
> to prevent race conditions.  It can be done using event callbacks through the 
> C API but we can provide something more robust (RAII) in the C++ API.
> Proposed: a client-supplied lock, pretty much the C++17 Lockable concept, with 
> a default used if one isn't provided. This would be scoped at the process 
> level, since it's unlikely that multiple instances of libgssapi will exist 
> unless someone puts some effort in with dlopen/dlsym.
> {code}
> class LockProvider
> {
>  public:
>   virtual ~LockProvider() {}
>   // allow client application to deny access to the lock
>   virtual bool try_lock() = 0;
>   virtual void unlock() = 0;
> };
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-25 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16664609#comment-16664609
 ] 

Sunil Govindan commented on HDFS-12026:
---

Thanks [~anatoli.shein] [~James C] and [~vagarychen].

Yes, I understand that if this is a base patch and more patches went in on top, 
it will be tougher. [~vagarychen] has shared the error now. Could you please 
help check what can be done here?

This is the last blocker for the 3.2.0 release, which is now delayed by a few 
weeks. Thanks for the support.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Blocker
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-25 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663838#comment-16663838
 ] 

Sunil Govindan commented on HDFS-12026:
---

Thanks [~shv] for pointing this out.

I think there has been no progress. To unblock this compatibility issue, could 
this be reverted?

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Blocker
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14002) TestLayoutVersion#testNameNodeFeatureMinimumCompatibleLayoutVersions fails

2018-10-18 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656198#comment-16656198
 ] 

Sunil Govindan commented on HDFS-14002:
---

Thanks [~elgoiri]. I am preparing the RC now. This is really helpful.

> TestLayoutVersion#testNameNodeFeatureMinimumCompatibleLayoutVersions fails
> --
>
> Key: HDFS-14002
> URL: https://issues.apache.org/jira/browse/HDFS-14002
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-14002.1.patch
>
>
> This is the error log.
> {noformat}
> java.lang.AssertionError: Expected feature EXPANDED_STRING_TABLE to have 
> minimum compatible layout version set to itself. expected:<-65> but was:<-61>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.protocol.TestLayoutVersion.testNameNodeFeatureMinimumCompatibleLayoutVersions(TestLayoutVersion.java:141)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14002) TestLayoutVersion#testNameNodeFeatureMinimumCompatibleLayoutVersions fails

2018-10-18 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655068#comment-16655068
 ] 

Sunil Govindan commented on HDFS-14002:
---

Changed priority to Major as it's a test issue.

> TestLayoutVersion#testNameNodeFeatureMinimumCompatibleLayoutVersions fails
> --
>
> Key: HDFS-14002
> URL: https://issues.apache.org/jira/browse/HDFS-14002
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14002.1.patch
>
>
> This is the error log.
> {noformat}
> java.lang.AssertionError: Expected feature EXPANDED_STRING_TABLE to have 
> minimum compatible layout version set to itself. expected:<-65> but was:<-61>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.protocol.TestLayoutVersion.testNameNodeFeatureMinimumCompatibleLayoutVersions(TestLayoutVersion.java:141)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14002) TestLayoutVersion#testNameNodeFeatureMinimumCompatibleLayoutVersions fails

2018-10-18 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-14002:
--
Priority: Major  (was: Critical)

> TestLayoutVersion#testNameNodeFeatureMinimumCompatibleLayoutVersions fails
> --
>
> Key: HDFS-14002
> URL: https://issues.apache.org/jira/browse/HDFS-14002
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14002.1.patch
>
>
> This is the error log.
> {noformat}
> java.lang.AssertionError: Expected feature EXPANDED_STRING_TABLE to have 
> minimum compatible layout version set to itself. expected:<-65> but was:<-61>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.protocol.TestLayoutVersion.testNameNodeFeatureMinimumCompatibleLayoutVersions(TestLayoutVersion.java:141)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14002) TestLayoutVersion#testNameNodeFeatureMinimumCompatibleLayoutVersions fails

2018-10-18 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16654897#comment-16654897
 ] 

Sunil Govindan commented on HDFS-14002:
---

[~vinayrpet], thank you. Could you please check whether this is fine?

> TestLayoutVersion#testNameNodeFeatureMinimumCompatibleLayoutVersions fails
> --
>
> Key: HDFS-14002
> URL: https://issues.apache.org/jira/browse/HDFS-14002
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Attachments: HDFS-14002.1.patch
>
>
> This is the error log.
> {noformat}
> java.lang.AssertionError: Expected feature EXPANDED_STRING_TABLE to have 
> minimum compatible layout version set to itself. expected:<-65> but was:<-61>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.protocol.TestLayoutVersion.testNameNodeFeatureMinimumCompatibleLayoutVersions(TestLayoutVersion.java:141)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13982) convertStorageType() in PBHelperClient is not easy to extend when adding new storage types

2018-10-11 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan reassigned HDFS-13982:
-

Assignee: Xiang Li

> convertStorageType() in PBHelperClient is not easy to extend when adding new 
> storage types
> --
>
> Key: HDFS-13982
> URL: https://issues.apache.org/jira/browse/HDFS-13982
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
>
> In PBHelperClient, there are 2 functions to convert between StorageTypeProto 
> and StorageType, like:
> {code:java}
> public static StorageTypeProto convertStorageType(StorageType type) {
>   switch(type) {
>   case DISK:
> return StorageTypeProto.DISK;
>   case SSD:
> return StorageTypeProto.SSD;
>   case ARCHIVE:
> return StorageTypeProto.ARCHIVE;
>   case RAM_DISK:
> return StorageTypeProto.RAM_DISK;
>   case PROVIDED:
> return StorageTypeProto.PROVIDED;
>   default:
> throw new IllegalStateException(
> "BUG: StorageType not found, type=" + type);
>   }
> }
> public static StorageType convertStorageType(StorageTypeProto type) {
>   switch(type) {
>   case DISK:
> return StorageType.DISK;
>   case SSD:
> return StorageType.SSD;
>   case ARCHIVE:
> return StorageType.ARCHIVE;
>   case RAM_DISK:
> return StorageType.RAM_DISK;
>   case PROVIDED:
> return StorageType.PROVIDED;
>   default:
> throw new IllegalStateException(
> "BUG: StorageTypeProto not found, type=" + type);
>   }
> }
> {code}
> When there is a need to add a new storage type, we need to add a "case" 
> clause here, which is not very convenient. It is also easy to forget to 
> change this file, because newcomers tend to focus on the change in 
> StorageType.java (to add new storage types).
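
One way to avoid the per-type case clauses, sketched under the assumption that the constant names stay mirrored between the two enums (which holds for every case in the switches above):

{code:java}
public static StorageTypeProto convertStorageType(StorageType type) {
  try {
    // Works because each StorageType constant has a same-named proto constant.
    return StorageTypeProto.valueOf(type.name());
  } catch (IllegalArgumentException e) {
    throw new IllegalStateException("BUG: StorageType not found, type=" + type, e);
  }
}

public static StorageType convertStorageType(StorageTypeProto type) {
  try {
    return StorageType.valueOf(type.name());
  } catch (IllegalArgumentException e) {
    throw new IllegalStateException("BUG: StorageTypeProto not found, type=" + type, e);
  }
}
{code}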



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636350#comment-16636350
 ] 

Sunil Govindan commented on HDFS-13952:
---

Folks, thanks for responding quickly. It's my bad.

The command used to update the pom files didn't change one file, and I somehow 
missed committing that change after my local compile.

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch, HDFS-13952.01.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> On trunk, the compilation failure is:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13937) Multipart Uploader APIs to be marked as private/unstable in 3.2.0

2018-09-24 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13937:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Thanks [~ste...@apache.org]. Committed to trunk.

> Multipart Uploader APIs to be marked as private/unstable in 3.2.0
> -
>
> Key: HDFS-13937
> URL: https://issues.apache.org/jira/browse/HDFS-13937
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HDFS-13937-001.patch
>
>
> HDFS-13717 shows that the MPU stuff isn't yet stable. Mark the interfaces as 
> private/unstable and postpone the rest of that patch until after.
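
For context, marking an API private/unstable is done with Hadoop's classification annotations. The interface and method below are illustrative only, not the actual MPU API:

{code:java}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Illustrative only: how an API surface is marked private/unstable.
@InterfaceAudience.Private
@InterfaceStability.Unstable
public interface ExampleUploaderApi {
  void putPart(String uploadId, int partNumber, byte[] data);
}
{code}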



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13937) Multipart Uploader APIs to be marked as private/unstable in 3.2.0

2018-09-24 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626041#comment-16626041
 ] 

Sunil Govindan commented on HDFS-13937:
---

Test cases are not needed; this looks good. Committing shortly, and will fix 
the minor checkstyle issue while committing.

Thanks [~ste...@apache.org]

> Multipart Uploader APIs to be marked as private/unstable in 3.2.0
> -
>
> Key: HDFS-13937
> URL: https://issues.apache.org/jira/browse/HDFS-13937
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HDFS-13937-001.patch
>
>
> HDFS-13717 shows that the MPU stuff isn't yet stable. Mark the interfaces as 
> private/unstable and postpone the rest of that patch until after.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13713) Add specification of Multipart Upload API to FS specification, with contract tests

2018-09-24 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16625438#comment-16625438
 ] 

Sunil Govindan commented on HDFS-13713:
---

Thanks [~ste...@apache.org] and [~ehiggs] for helping here for 3.2

> Add specification of Multipart Upload API to FS specification, with contract 
> tests
> --
>
> Key: HDFS-13713
> URL: https://issues.apache.org/jira/browse/HDFS-13713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
> Attachments: HADOOP-13713-004.patch, HADOOP-13713-004.patch, 
> HADOOP-13713-005.patch, HADOOP-13713-006.patch, HDFS-13713.001.patch, 
> HDFS-13713.002.patch, HDFS-13713.003.patch, multipartuploader.md
>
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file:
> * add an FS model with the notion of a function mapping (uploadID -> Upload) 
> and the operations (list, commit, abort); see the sketch after this list. The 
> [TLA+ model|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf]
>  of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * Implementations of the contract tests for all FSs which support the new API.
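
A purely illustrative model of the (uploadID -> Upload) mapping and the list/commit/abort operations referenced in the first bullet; this is not the actual Hadoop interface:

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class UploadStoreModel {
  static final class Upload {
    final String path;
    Upload(String path) { this.path = path; }
  }

  private final Map<String, Upload> uploads = new HashMap<>();
  private long nextId = 0;

  String start(String path) {        // begin an upload, mint an uploadID
    String id = "upload-" + (nextId++);
    uploads.put(id, new Upload(path));
    return id;
  }

  List<String> list() {              // enumerate in-progress uploadIDs
    return new ArrayList<>(uploads.keySet());
  }

  void commit(String id) {           // finalize; the uploadID must exist
    if (uploads.remove(id) == null) {
      throw new IllegalStateException("unknown uploadID: " + id);
    }
  }

  void abort(String id) {            // discard the entry
    uploads.remove(id);
  }
}
{code}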



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12452) TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs

2018-09-17 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-12452:
--
Target Version/s: 3.3.0  (was: 3.2.0)

> TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs
> --
>
> Key: HDFS-12452
> URL: https://issues.apache.org/jira/browse/HDFS-12452
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
>Priority: Critical
>  Labels: flaky-test
> Attachments: HDFS-12452.001.patch, HDFS-12452.002.patch
>
>
> TestDataNodeVolumeFailureReporting#testSuccessiveVolumeFailures fails 
> frequently in Jenkins runs but it passes locally on my dev machine.
> e.g. 
> https://builds.apache.org/job/PreCommit-HDFS-Build/21134/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/
> {code}
> Error Message
> test timed out after 12 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 12 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures(TestDataNodeVolumeFailureReporting.java:189)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12452) TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs

2018-09-17 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617257#comment-16617257
 ] 

Sunil Govindan commented on HDFS-12452:
---

As the code freeze for 3.2 has passed, moving this Jira to 3.3. Please feel 
free to revert if anyone has concerns. Thank you.

> TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs
> --
>
> Key: HDFS-12452
> URL: https://issues.apache.org/jira/browse/HDFS-12452
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
>Priority: Critical
>  Labels: flaky-test
> Attachments: HDFS-12452.001.patch, HDFS-12452.002.patch
>
>
> TestDataNodeVolumeFailureReporting#testSuccessiveVolumeFailures fails 
> frequently in Jenkins runs but it passes locally on my dev machine.
> e.g. 
> https://builds.apache.org/job/PreCommit-HDFS-Build/21134/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/
> {code}
> Error Message
> test timed out after 12 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 12 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures(TestDataNodeVolumeFailureReporting.java:189)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-09-17 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13243:
--
Target Version/s: 3.3.0  (was: 3.2.0)

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch, HDFS-13243-v4.patch, HDFS-13243-v5.patch, 
> HDFS-13243-v6.patch
>
>
> An HDFS file might get broken because of corrupt block(s) that can be produced 
> by calling close and sync at the same time.
> When the close call is not successful, the UC block status changes to 
> COMMITTED, and if a sync request is then popped from the queue and processed, 
> the sync operation changes the last block length.
> After that, the DataNode reports all received blocks to the NameNode, which 
> checks the block length of all COMMITTED blocks. But the block length recorded 
> in NameNode memory already differs from the length reported by the DataNode, 
> and consequently the last block is marked as corrupted because of the 
> inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  size 2054413
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.219:50010 by 
> hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com/10.0.0.219 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:39,762 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.218:50010 by 
> hb-j5e517al6xib80rkb-004.hbase.rds.aliyuncs.com/10.0.0.218 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:40,162 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 3 >= minimum = 2) in 
> file 
> 
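
To make the described race concrete, a minimal hypothetical sketch (the file system, path, and data here are illustrative; this is not the HBase WAL code): one thread closes the output stream while another syncs it, which is the close/sync overlap the description blames for the COMMITTED-length mismatch.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CloseSyncRaceSketch {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS points at an HDFS cluster.
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path("/tmp/close-sync-race"));
    out.write(new byte[1024]);

    // Thread 1: close(); if it fails, the last block can be left COMMITTED.
    Thread closer = new Thread(() -> {
      try { out.close(); } catch (IOException ignored) { }
    });
    // Thread 2: hsync(); updates the last block's length on the NameNode.
    Thread syncer = new Thread(() -> {
      try { out.hsync(); } catch (IOException ignored) { }
    });

    // If the sync lands while the close is in flight, the length recorded
    // by the NameNode and the length later reported by the DataNodes can
    // diverge, which is what marks the block corrupt in the log above.
    closer.start();
    syncer.start();
    closer.join();
    syncer.join();
  }
}
{code}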

[jira] [Commented] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-09-17 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617254#comment-16617254
 ] 

Sunil Govindan commented on HDFS-13243:
---

As the code freeze for 3.2 has passed, I am moving this Jira to 3.3. Please feel 
free to revert if anyone has concerns. Thank you.

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch, HDFS-13243-v4.patch, HDFS-13243-v5.patch, 
> HDFS-13243-v6.patch
>
>
> An HDFS file might get broken because of corrupt block(s) produced by calling 
> close and sync at the same time.
> When the close call is not successful, the UC block's status changes to 
> COMMITTED, and if a sync request is then popped from the queue and processed, 
> the sync operation changes the last block's length.
> After that, the DataNode reports all received blocks to the NameNode, which 
> checks the block length of all COMMITTED blocks. But the block length recorded 
> in NameNode memory already differs from the length reported by the DataNode, 
> and consequently the last block is marked as corrupted because of the 
> inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  size 2054413
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.219:50010 by 
> hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com/10.0.0.219 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:39,762 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.218:50010 by 
> hb-j5e517al6xib80rkb-004.hbase.rds.aliyuncs.com/10.0.0.218 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:40,162 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not 

[jira] [Commented] (HDFS-12049) Recommissioning live nodes stalls the NN

2018-09-17 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617249#comment-16617249
 ] 

Sunil Govindan commented on HDFS-12049:
---

As the code freeze for 3.2 has passed, I am moving this Jira to 3.3. Please feel 
free to revert if anyone has concerns. Thank you.

> Recommissioning live nodes stalls the NN
> 
>
> Key: HDFS-12049
> URL: https://issues.apache.org/jira/browse/HDFS-12049
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> A node refresh will recommission included nodes that are alive and in a 
> decommissioning or decommissioned state. The recommission will scan all 
> blocks on the node, find over-replicated blocks, choose an excess replica, 
> and queue an invalidation.
> The process is expensive and worsened by the overhead of storage types (even 
> when not in use). It can be especially devastating because the write lock is 
> held for the entire node refresh. _Recommissioning 67 nodes with ~500k 
> blocks/node stalled RPC services for over 4 mins._
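
One direction a mitigation could take (a hypothetical sketch, not the NameNode's actual code; the lock, block list, and helper are illustrative stand-ins) is to process the node's blocks in bounded batches and release the write lock between batches so RPC handlers can make progress:

{code:java}
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class BatchedRecommissionSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private static final int BATCH = 10_000;

  // `blockIds` stands in for the per-node block scan described above.
  void recommission(List<Long> blockIds) {
    for (int from = 0; from < blockIds.size(); from += BATCH) {
      int to = Math.min(from + BATCH, blockIds.size());
      lock.writeLock().lock();
      try {
        for (long blockId : blockIds.subList(from, to)) {
          processOverReplicated(blockId);
        }
      } finally {
        // Dropping the lock between batches lets heartbeats and client
        // RPCs run instead of stalling for the whole node refresh.
        lock.writeLock().unlock();
      }
    }
  }

  private void processOverReplicated(long blockId) {
    // Placeholder: find over-replicated blocks, choose an excess
    // replica, and queue an invalidation, as the description outlines.
  }
}
{code}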



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12049) Recommissioning live nodes stalls the NN

2018-09-17 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-12049:
--
Target Version/s: 3.3.0  (was: 3.2.0)

> Recommissioning live nodes stalls the NN
> 
>
> Key: HDFS-12049
> URL: https://issues.apache.org/jira/browse/HDFS-12049
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> A node refresh will recommission included nodes that are alive and in 
> decommissioning or decommissioned state.  The recommission will scan all 
> blocks on the node, find over replicated blocks, chose an excess, queue an 
> invalidate.
> The process is expensive and worsened by overhead of storage types (even when 
> not in use).  It can be especially devastating because the write lock is held 
> for the entire node refresh.  _Recommissioning 67 nodes with ~500k 
> blocks/node stalled rpc services for over 4 mins._



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11310) Reduce the performance impact of the balancer (trunk port)

2018-09-17 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617246#comment-16617246
 ] 

Sunil Govindan commented on HDFS-11310:
---

As the code freeze for 3.2 has passed, I am moving this Jira to 3.3. Thank you.

> Reduce the performance impact of the balancer (trunk port)
> --
>
> Key: HDFS-11310
> URL: https://issues.apache.org/jira/browse/HDFS-11310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Daryn Sharp
>Priority: Critical
>
> HDFS-7967 introduced a highly performant balancer getBlocks() query that 
> scales to large/dense clusters. Its simple implementation depends on the 
> triplets data structure. HDFS-9260 removed the triplets, which fundamentally 
> changes the implementation. Either that patch must be reverted or the 
> getBlocks() patch needs to be reimplemented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11310) Reduce the performance impact of the balancer (trunk port)

2018-09-17 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-11310:
--
Target Version/s: 3.3.0  (was: 3.2.0)

> Reduce the performance impact of the balancer (trunk port)
> --
>
> Key: HDFS-11310
> URL: https://issues.apache.org/jira/browse/HDFS-11310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Daryn Sharp
>Priority: Critical
>
> HDFS-7967 introduced a highly performant balancer getBlocks() query that 
> scales to large/dense clusters. Its simple implementation depends on the 
> triplets data structure. HDFS-9260 removed the triplets, which fundamentally 
> changes the implementation. Either that patch must be reverted or the 
> getBlocks() patch needs to be reimplemented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13713) Add specification of Multipart Upload API to FS specification, with contract tests

2018-09-17 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617232#comment-16617232
 ] 

Sunil Govindan commented on HDFS-13713:
---

Hi [~elgoiri] and [~ehiggs]. This patch is close to commit. Could you please 
get it in today, as we have crossed the 3.2 code freeze cutoff? If it needs 
more time, could we move it to 3.2.1? Thanks.

> Add specification of Multipart Upload API to FS specification, with contract 
> tests
> --
>
> Key: HDFS-13713
> URL: https://issues.apache.org/jira/browse/HDFS-13713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
> Attachments: HDFS-13713.001.patch, HDFS-13713.002.patch, 
> HDFS-13713.003.patch, multipartuploader.md
>
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file:
> * add an FS model with the notion of a function mapping (uploadID -> Upload) 
> and the operations (list, commit, abort). The [TLA+ 
> model|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf]
>  of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * Implementations of the contract tests for all FSs which support the new API.
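
As a rough illustration of the lifecycle the spec needs to pin down, a hedged sketch of driving the uploader (class and method names follow the API under review here and may differ from what is finally committed):

{code:java}
import java.io.ByteArrayInputStream;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.MultipartUploader;
import org.apache.hadoop.fs.MultipartUploaderFactory;
import org.apache.hadoop.fs.PartHandle;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathHandle;
import org.apache.hadoop.fs.UploadHandle;

public class MultipartUploadSketch {
  // Happy path: initialize -> putPart -> complete. The contract tests
  // would also have to cover the invalid paths: completing twice,
  // unknown upload IDs, aborted uploads, missing parts, and so on.
  static PathHandle upload(FileSystem fs, Configuration conf, Path path,
      byte[] bytes) throws Exception {
    MultipartUploader mpu = MultipartUploaderFactory.get(fs, conf);
    UploadHandle upload = mpu.initialize(path);
    PartHandle part = mpu.putPart(path, new ByteArrayInputStream(bytes),
        1, upload, bytes.length);
    Map<Integer, PartHandle> parts = new HashMap<>();
    parts.put(1, part);
    return mpu.complete(path, parts, upload);  // data becomes visible here
  }
}
{code}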



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13744) OIV tool should better handle control characters present in file or directory names

2018-09-09 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608719#comment-16608719
 ] 

Sunil Govindan commented on HDFS-13744:
---

Hi [~mackrorysd], it looks like the patch is committed but the issue is not 
closed. Could you please close it if that's fine? Thank you.

> OIV tool should better handle control characters present in file or directory 
> names
> ---
>
> Key: HDFS-13744
> URL: https://issues.apache.org/jira/browse/HDFS-13744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, tools
>Affects Versions: 2.6.5, 2.9.1, 2.8.4, 2.7.6, 3.0.3
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Critical
> Attachments: HDFS-13744.01.patch, HDFS-13744.02.patch, 
> HDFS-13744.03.patch
>
>
> In certain cases, when control characters or white space are present in file 
> or directory names, OIV tool processors can export data in a misleading 
> format.
> In the examples below, EXAMPLE_NAME is used as both a file name and a 
> directory name, where the directory name has a line feed character at the end 
> (the actual production case has multiple line feeds and multiple spaces):
>  * Delimited processor case:
>  ** misleading example:
> {code:java}
> /user/data/EXAMPLE_NAME
> ,0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> /user/data/EXAMPLE_NAME,2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * 
>  ** expected example as suggested by 
> [https://tools.ietf.org/html/rfc4180#section-2]:
> {code:java}
> "/user/data/EXAMPLE_NAME%x0A",0,2017-04-24 04:34,1969-12-31 
> 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> "/user/data/EXAMPLE_NAME",2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * XML processor case:
>  ** misleading example:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME
> 1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * 
>  ** expected example as specified in 
> [https://www.w3.org/TR/REC-xml/#sec-line-ends]:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME#xA1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * JSON:
>  The OIV Web Processor behaves correctly and produces the following:
> {code:java}
> {
>   "FileStatuses": {
> "FileStatus": [
>   {
> "fileId": 113632535,
> "accessTime": 1494954320141,
> "replication": 3,
> "owner": "user",
> "length": 520,
> "permission": "674",
> "blockSize": 134217728,
> "modificationTime": 1472205657504,
> "type": "FILE",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME"
>   },
>   {
> "fileId": 479867791,
> "accessTime": 0,
> "replication": 0,
> "owner": "user",
> "length": 0,
> "permission": "775",
> "blockSize": 0,
> "modificationTime": 1493033668294,
> "type": "DIRECTORY",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME\n"
>   }
> ]
>   }
> }
> {code}
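
For the delimited processor, the fix the description points at is standard RFC 4180 quoting. A minimal sketch (quoteCsvField is a hypothetical helper, not the OIV code; the expected output above additionally escapes the control character itself, e.g. as %x0A):

{code:java}
public class CsvQuoteSketch {
  // Quote a field per RFC 4180 section 2: wrap it in double quotes when
  // it contains the delimiter, a double quote, CR, or LF, and double any
  // embedded quotes.
  static String quoteCsvField(String field) {
    if (field.contains(",") || field.contains("\"")
        || field.contains("\r") || field.contains("\n")) {
      return "\"" + field.replace("\"", "\"\"") + "\"";
    }
    return field;
  }

  public static void main(String[] args) {
    // A directory name with a trailing line feed, as in the example above;
    // quoted, a CSV parser keeps the record together instead of splitting
    // it across two misleading lines.
    System.out.println(quoteCsvField("/user/data/EXAMPLE_NAME\n"));
  }
}
{code}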



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-09-09 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608710#comment-16608710
 ] 

Sunil Govindan commented on HDFS-13243:
---

Ping again: [~gzh1992n]

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch, HDFS-13243-v4.patch, HDFS-13243-v5.patch, 
> HDFS-13243-v6.patch
>
>
> An HDFS file might get broken because of corrupt block(s) produced by calling 
> close and sync at the same time.
> When the close call is not successful, the UC block's status changes to 
> COMMITTED, and if a sync request is then popped from the queue and processed, 
> the sync operation changes the last block's length.
> After that, the DataNode reports all received blocks to the NameNode, which 
> checks the block length of all COMMITTED blocks. But the block length recorded 
> in NameNode memory already differs from the length reported by the DataNode, 
> and consequently the last block is marked as corrupted because of the 
> inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  size 2054413
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.219:50010 by 
> hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com/10.0.0.219 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:39,762 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.218:50010 by 
> hb-j5e517al6xib80rkb-004.hbase.rds.aliyuncs.com/10.0.0.218 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:40,162 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 3 >= minimum = 2) in 
> file 
> 

[jira] [Commented] (HDFS-12452) TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs

2018-09-09 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608709#comment-16608709
 ] 

Sunil Govindan commented on HDFS-12452:
---

Ping again: [~xyao]

> TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs
> --
>
> Key: HDFS-12452
> URL: https://issues.apache.org/jira/browse/HDFS-12452
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
>Priority: Critical
>  Labels: flaky-test
> Attachments: HDFS-12452.001.patch, HDFS-12452.002.patch
>
>
> TestDataNodeVolumeFailureReporting#testSuccessiveVolumeFailures fails 
> frequently in Jenkins runs but it passes locally on my dev machine.
> e.g. 
> https://builds.apache.org/job/PreCommit-HDFS-Build/21134/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/
> {code}
> Error Message
> test timed out after 12 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 12 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures(TestDataNodeVolumeFailureReporting.java:189)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8893) DNs with failed volumes stop serving during rolling upgrade

2018-09-09 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608707#comment-16608707
 ] 

Sunil Govindan commented on HDFS-8893:
--

Ping [~daryn] [~shahrs87]

> DNs with failed volumes stop serving during rolling upgrade
> ---
>
> Key: HDFS-8893
> URL: https://issues.apache.org/jira/browse/HDFS-8893
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Daryn Sharp
>Priority: Critical
>
> When a rolling upgrade starts, all DNs try to write a rolling_upgrade marker 
> to each of their volumes. If one of the volumes is bad, this will fail. When 
> this failure happens, the DN does not update the key it received from the NN.
> Unfortunately, we had one failed volume on all 3 of the datanodes that were 
> holding the replica.
> Keys expire after 20 hours, so at about 20 hours into the rolling upgrade, 
> the DNs with failed volumes will stop serving clients.
> Here is the stack trace on the datanode side:
> {noformat}
> 2015-08-11 07:32:28,827 [DataNode: heartbeating to 8020] WARN 
> datanode.DataNode: IOException in offerService
> java.io.IOException: Read-only file system
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:947)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.setRollingUpgradeMarkers(BlockPoolSliceStorage.java:721)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.setRollingUpgradeMarker(DataStorage.java:173)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.setRollingUpgradeMarker(FsDatasetImpl.java:2357)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.signalRollingUpgrade(BPOfferService.java:480)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.handleRollingUpgradeStatus(BPServiceActor.java:626)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:677)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:833)
> at java.lang.Thread.run(Thread.java:722)
> {noformat}
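
A hypothetical sketch of the direction a fix could take (illustrative names only; the real marker constant and volume iteration live in BlockPoolSliceStorage): write the marker per volume and tolerate individual failures, instead of letting one bad volume abort the whole operation and block the key update.

{code:java}
import java.io.File;
import java.io.IOException;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RollingUpgradeMarkerSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(RollingUpgradeMarkerSketch.class);
  // Hypothetical marker file name.
  private static final String MARKER = "rollingUpgrade.marker";

  // Try every volume; a read-only or otherwise failed volume is logged
  // and skipped, so the DN can still acknowledge the rolling upgrade
  // (and keep refreshing its block keys) on the healthy volumes.
  static void writeMarkers(List<File> volumeCurrentDirs) {
    for (File currentDir : volumeCurrentDirs) {
      try {
        new File(currentDir, MARKER).createNewFile();
      } catch (IOException e) {
        LOG.warn("Failed to write rolling upgrade marker on volume "
            + currentDir + "; skipping it", e);
      }
    }
  }
}
{code}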



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12049) Recommissioning live nodes stalls the NN

2018-09-09 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608706#comment-16608706
 ] 

Sunil Govindan commented on HDFS-12049:
---

Hi [~daryn], could you please help check on this issue? As there is no 
progress and the code freeze for 3.2.0 is nearing, we can move this to 3.3.0 
if there are no immediate plans.

> Recommissioning live nodes stalls the NN
> 
>
> Key: HDFS-12049
> URL: https://issues.apache.org/jira/browse/HDFS-12049
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> A node refresh will recommission included nodes that are alive and in a 
> decommissioning or decommissioned state. The recommission will scan all 
> blocks on the node, find over-replicated blocks, choose an excess replica, 
> and queue an invalidation.
> The process is expensive and worsened by the overhead of storage types (even 
> when not in use). It can be especially devastating because the write lock is 
> held for the entire node refresh. _Recommissioning 67 nodes with ~500k 
> blocks/node stalled RPC services for over 4 mins._



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11310) Reduce the performance impact of the balancer (trunk port)

2018-09-09 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608702#comment-16608702
 ] 

Sunil Govindan commented on HDFS-11310:
---

Thanks [~daryn]. The 3.2.0 code freeze is near (15th Sept); could you please 
share the plan for this Jira, or we may need to move it out.

> Reduce the performance impact of the balancer (trunk port)
> --
>
> Key: HDFS-11310
> URL: https://issues.apache.org/jira/browse/HDFS-11310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Daryn Sharp
>Priority: Critical
>
> HDFS-7967 introduced a highly performant balancer getBlocks() query that 
> scales to large/dense clusters. Its simple implementation depends on the 
> triplets data structure. HDFS-9260 removed the triplets, which fundamentally 
> changes the implementation. Either that patch must be reverted or the 
> getBlocks() patch needs to be reimplemented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-09-09 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608693#comment-16608693
 ] 

Sunil Govindan commented on HDFS-13596:
---

The code freeze for 3.2.0 is nearing (15th Sept) and there are no contributors 
for this yet. Since this is a blocker, pinging [~zvenczel] [~hanishakoneru] 
[~rajeshhadoop] [~leftnoteasy] [~rohithsharma] [~vinayrpet] [~rakeshr] 
[~umamaheswararao] for further steps.

If we won't be able to finish this, I think we will need to move it to the next 
version.

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Zsolt Venczel
>Priority: Blocker
>
> After a rolling upgrade of the NN from 2.x to 3.x, if the NN is restarted, it 
> fails while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> 
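
The failure mode is easiest to see in how edit-log ops are parsed: fields are read conditionally on the layout version stamped in the log file's header. A hedged sketch of the pattern (field names are illustrative; the real logic lives in FSEditLogOp):

{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import org.apache.hadoop.hdfs.server.namenode.NameNodeLayoutVersion;

public class LayoutGatedParseSketch {
  private byte erasureCodingPolicyId;

  // A log stamped with the OLD layout version but containing NEW-format
  // transactions makes the guard below evaluate false on replay, so
  // every later field is read from the wrong offset, producing errors
  // like the "Invalid clientId" above.
  void readFields(DataInputStream in, int logVersion) throws IOException {
    // ... fields common to both layouts ...
    if (NameNodeLayoutVersion.supports(
        NameNodeLayoutVersion.Feature.ERASURE_CODING, logVersion)) {
      erasureCodingPolicyId = in.readByte();
    }
    // ... remaining fields ...
  }
}
{code}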

[jira] [Commented] (HDFS-13744) OIV tool should better handle control characters present in file or directory names

2018-08-28 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595938#comment-16595938
 ] 

Sunil Govindan commented on HDFS-13744:
---

Hi [~zvenczel]

As this Jira is marked as critical for 3.2, could you please help take it 
forward, or move it out if it's not feasible to finish in the coming weeks? 
The 3.2 code freeze date is about a week away. Kindly help to check the same.

> OIV tool should better handle control characters present in file or directory 
> names
> ---
>
> Key: HDFS-13744
> URL: https://issues.apache.org/jira/browse/HDFS-13744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, tools
>Affects Versions: 2.6.5, 2.9.1, 2.8.4, 2.7.6, 3.0.3
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Critical
> Attachments: HDFS-13744.01.patch
>
>
> In certain cases, when control characters or white space are present in file 
> or directory names, OIV tool processors can export data in a misleading 
> format.
> In the examples below, EXAMPLE_NAME is used as both a file name and a 
> directory name, where the directory name has a line feed character at the end 
> (the actual production case has multiple line feeds and multiple spaces):
>  * Delimited processor case:
>  ** misleading example:
> {code:java}
> /user/data/EXAMPLE_NAME
> ,0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> /user/data/EXAMPLE_NAME,2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * 
>  ** expected example as suggested by 
> [https://tools.ietf.org/html/rfc4180#section-2]:
> {code:java}
> "/user/data/EXAMPLE_NAME%x0A",0,2017-04-24 04:34,1969-12-31 
> 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> "/user/data/EXAMPLE_NAME",2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * XML processor case:
>  ** misleading example:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME
> 1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * 
>  ** expected example as specified in 
> [https://www.w3.org/TR/REC-xml/#sec-line-ends]:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME#xA1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * JSON:
>  The OIV Web Processor behaves correctly and produces the following:
> {code:java}
> {
>   "FileStatuses": {
> "FileStatus": [
>   {
> "fileId": 113632535,
> "accessTime": 1494954320141,
> "replication": 3,
> "owner": "user",
> "length": 520,
> "permission": "674",
> "blockSize": 134217728,
> "modificationTime": 1472205657504,
> "type": "FILE",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME"
>   },
>   {
> "fileId": 479867791,
> "accessTime": 0,
> "replication": 0,
> "owner": "user",
> "length": 0,
> "permission": "775",
> "blockSize": 0,
> "modificationTime": 1493033668294,
> "type": "DIRECTORY",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME\n"
>   }
> ]
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-08-28 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595927#comment-16595927
 ] 

Sunil Govindan commented on HDFS-13243:
---

[~gzh1992n]

As this Jira is marked as critical for 3.2, could you please help take it 
forward, or move it out if it's not feasible to finish in the coming weeks? 
The 3.2 code freeze date is about a week away. Kindly help to check the same.

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch, HDFS-13243-v4.patch, HDFS-13243-v5.patch, 
> HDFS-13243-v6.patch
>
>
> An HDFS file might get broken because of corrupt block(s) produced by calling 
> close and sync at the same time.
> When the close call is not successful, the UC block's status changes to 
> COMMITTED, and if a sync request is then popped from the queue and processed, 
> the sync operation changes the last block's length.
> After that, the DataNode reports all received blocks to the NameNode, which 
> checks the block length of all COMMITTED blocks. But the block length recorded 
> in NameNode memory already differs from the length reported by the DataNode, 
> and consequently the last block is marked as corrupted because of the 
> inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  size 2054413
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.219:50010 by 
> hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com/10.0.0.219 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:39,762 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.218:50010 by 
> hb-j5e517al6xib80rkb-004.hbase.rds.aliyuncs.com/10.0.0.218 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:40,162 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  

[jira] [Commented] (HDFS-8893) DNs with failed volumes stop serving during rolling upgrade

2018-08-28 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595919#comment-16595919
 ] 

Sunil Govindan commented on HDFS-8893:
--

[~daryn] [~shahrs87]

As this Jira is marked as critical for 3.2, could you please help take it 
forward, or move it out if it's not feasible to finish in the coming weeks? 
The 3.2 code freeze date is about a week away. Kindly help to check the same.

> DNs with failed volumes stop serving during rolling upgrade
> ---
>
> Key: HDFS-8893
> URL: https://issues.apache.org/jira/browse/HDFS-8893
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Daryn Sharp
>Priority: Critical
>
> When a rolling upgrade starts, all DNs try to write a rolling_upgrade marker 
> to each of their volumes. If one of the volumes is bad, this will fail. When 
> this failure happens, the DN does not update the key it received from the NN.
> Unfortunately, we had one failed volume on all 3 of the datanodes that were 
> holding the replica.
> Keys expire after 20 hours, so at about 20 hours into the rolling upgrade, 
> the DNs with failed volumes will stop serving clients.
> Here is the stack trace on the datanode side:
> {noformat}
> 2015-08-11 07:32:28,827 [DataNode: heartbeating to 8020] WARN 
> datanode.DataNode: IOException in offerService
> java.io.IOException: Read-only file system
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:947)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.setRollingUpgradeMarkers(BlockPoolSliceStorage.java:721)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.setRollingUpgradeMarker(DataStorage.java:173)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.setRollingUpgradeMarker(FsDatasetImpl.java:2357)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.signalRollingUpgrade(BPOfferService.java:480)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.handleRollingUpgradeStatus(BPServiceActor.java:626)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:677)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:833)
> at java.lang.Thread.run(Thread.java:722)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12452) TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs

2018-08-28 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595914#comment-16595914
 ] 

Sunil Govindan commented on HDFS-12452:
---

Hi [~xyao]

As this Jira is marked as Critical for 3.2, could you please help take it 
forward, or move it out if it's not feasible to finish in the coming weeks? 
The 3.2 code freeze date is about a week away. Kindly help to check the same.

> TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs
> --
>
> Key: HDFS-12452
> URL: https://issues.apache.org/jira/browse/HDFS-12452
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
>Priority: Critical
>  Labels: flaky-test
> Attachments: HDFS-12452.001.patch, HDFS-12452.002.patch
>
>
> TestDataNodeVolumeFailureReporting#testSuccessiveVolumeFailures fails 
> frequently in Jenkins runs but it passes locally on my dev machine.
> e.g. 
> https://builds.apache.org/job/PreCommit-HDFS-Build/21134/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/
> {code}
> Error Message
> test timed out after 12 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 12 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures(TestDataNodeVolumeFailureReporting.java:189)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12049) Recommissioning live nodes stalls the NN

2018-08-28 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595913#comment-16595913
 ] 

Sunil Govindan commented on HDFS-12049:
---

Hi [~daryn]

As this Jira is marked as Critical for 3.2, could you please help take it 
forward, or move it out if it's not feasible to finish in the coming weeks? 
The 3.2 code freeze date is about a week away. Kindly help to check the same.

> Recommissioning live nodes stalls the NN
> 
>
> Key: HDFS-12049
> URL: https://issues.apache.org/jira/browse/HDFS-12049
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> A node refresh will recommission included nodes that are alive and in a 
> decommissioning or decommissioned state. The recommission will scan all 
> blocks on the node, find over-replicated blocks, choose an excess replica, 
> and queue an invalidation.
> The process is expensive and worsened by the overhead of storage types (even 
> when not in use). It can be especially devastating because the write lock is 
> held for the entire node refresh. _Recommissioning 67 nodes with ~500k 
> blocks/node stalled RPC services for over 4 mins._



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-08-28 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595908#comment-16595908
 ] 

Sunil Govindan commented on HDFS-13596:
---

[~zvenczel] [~hanishakoneru] [~rajeshhadoop] [~leftnoteasy] [~rohithsharma] How 
do we take this forward, as we are nearing the 3.2 release?

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Zsolt Venczel
>Priority: Blocker
>
> After a rolling upgrade of the NN from 2.x to 3.x, if the NN is restarted, it 
> fails while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at 

[jira] [Commented] (HDFS-13713) Add specification of Multipart Upload API to FS specification, with contract tests

2018-08-28 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595901#comment-16595901
 ] 

Sunil Govindan commented on HDFS-13713:
---

Hi [~ehiggs].

As this Jira is marked as a blocker for 3.2, could you please help take it 
forward, or move it out if it's not feasible to finish in the coming weeks? 
The 3.2 code freeze date is about a week away. Kindly help to check the same.

> Add specification of Multipart Upload API to FS specification, with contract 
> tests
> --
>
> Key: HDFS-13713
> URL: https://issues.apache.org/jira/browse/HDFS-13713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file:
> * add an FS model with the notion of a function mapping (uploadID -> Upload) 
> and the operations (list, commit, abort). The [TLA+ 
> model|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf]
>  of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * Implementations of the contract tests for all FSs which support the new API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org