[jira] [Created] (HDFS-15741) Vulnerability fixes need in openssl for Hadoop native library

2020-12-21 Thread Souryakanta Dwivedy (Jira)
Souryakanta Dwivedy created HDFS-15741:
--

 Summary: Vulnerability fixes need in openssl for Hadoop native 
library
 Key: HDFS-15741
 URL: https://issues.apache.org/jira/browse/HDFS-15741
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 3.1.1
Reporter: Souryakanta Dwivedy
 Attachments: Openssl_CVEs.png

Vulnerability fixes need in openssl for Hadoop native library

Below are the files used for hadoop where CVEs are found

openssl [version 1.1.1g ]

- libcrypto.a
 - libssl.a
 - libcrypto.so.1.1
 - libssl.so.1.1
 
 CVE details :-
 
 
 CVE-2020-1971
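
For triage: CVE-2020-1971 was fixed upstream in OpenSSL 1.1.1i, so the bundled 1.1.1g files need a rebuild against 1.1.1i or later. A minimal sketch (paths are illustrative and vary by deployment) to confirm what the native library actually loads:

{code}
# List the native libraries Hadoop can load; the "openssl:" line shows which
# libcrypto the hadoop native library resolved.
hadoop checknative -a

# Print the version string embedded in a bundled libcrypto to confirm whether
# it is still the vulnerable 1.1.1g (adjust the path to your deployment).
strings ./libcrypto.so.1.1 | grep -m1 '^OpenSSL 1'
{code}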



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15741) Vulnerability fixes need for Hadoop dependency library

2020-12-21 Thread Souryakanta Dwivedy (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Souryakanta Dwivedy updated HDFS-15741:
---
Description: 
Vulnerability fixes need for Hadoop dependency library 

Below are the jars used for hadoop where CVEs are found

Jackson [version 2.10.3 ]
 - jackson-core-2.10.3.jar

CVE details :- [  CVE-2020-25649  ]
 ==

Jackson-core [version 2.4.0 ]
 - htrace-core-3.1.0-incubating.jar

CVE details :- [ CVE-2020-24616 ]
  =

Jetty [version  9.4.20.v20190813 ]
 - jetty-server-9.4.20.v20190813.jar

CVE details :- [ CVE-2020-27216 ]
  =

Jetty-http [version  9.4.20.v20190813 ]
 - jetty-http-9.4.20.v20190813.jar

CVE details :- [ CVE-2020-27216 ]
  =
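
A quick way to audit a deployment against the list above (a sketch; $HADOOP_HOME and the share/hadoop layout are the stock tarball defaults, adjust for your packaging):

{code}
# Locate every copy of the jars named above inside a Hadoop install so each
# one can be checked against the CVE-affected versions.
find "${HADOOP_HOME:?set HADOOP_HOME first}/share/hadoop" \
  \( -name 'jackson-*.jar' -o -name 'jetty-*.jar' -o -name 'htrace-*.jar' \) \
  -exec ls -l {} \;
{code}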

 

 

 

  was:
Vulnerability fixes need in openssl for Hadoop native library

Below are the files used for hadoop where CVEs are found

openssl [version 1.1.1g ]

- libcrypto.a
 - libssl.a
 - libcrypto.so.1.1
 - libssl.so.1.1
 
 CVE details :-
 
 
 CVE-2020-1971

Summary: Vulnerability fixes need for Hadoop dependency library   (was: 
Vulnerability fixes need in openssl for Hadoop native library)

> Vulnerability fixes need for Hadoop dependency library 
> ---
>
> Key: HDFS-15741
> URL: https://issues.apache.org/jira/browse/HDFS-15741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Souryakanta Dwivedy
>Priority: Minor
> Attachments: Openssl_CVEs.png
>
>
> Vulnerability fixes need for Hadoop dependency library 
> Below are the jars used for hadoop where CVEs are found
> Jackson [version 2.10.3 ]
>  - jackson-core-2.10.3.jar
> CVE details :- [  CVE-2020-25649  ]
>  ==
> Jackson-core [version 2.4.0 ]
>  - htrace-core-3.1.0-incubating.jar
> CVE details :- [ CVE-2020-24616 ]
>   =
> Jetty [version  9.4.20.v20190813 ]
>  - jetty-server-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>   =
> Jetty-http [version  9.4.20.v20190813 ]
>  - jetty-http-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>   =
>  
>  
>  






[jira] [Updated] (HDFS-15741) Vulnerability fixes need for Hadoop dependency library

2020-12-21 Thread Souryakanta Dwivedy (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Souryakanta Dwivedy updated HDFS-15741:
---
Attachment: (was: Openssl_CVEs.png)

> Vulnerability fixes need for Hadoop dependency library 
> ---
>
> Key: HDFS-15741
> URL: https://issues.apache.org/jira/browse/HDFS-15741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Souryakanta Dwivedy
>Priority: Minor
>
> Vulnerability fixes need for Hadoop dependency library 
> Below are the dependent library jars used for hadoop where CVEs are found
> Jackson [version 2.10.3 ]
>  - jackson-core-2.10.3.jar
> CVE details :- [  CVE-2020-25649  ]
>  ==
> Jackson-core [version 2.4.0 ]
>  - htrace-core-3.1.0-incubating.jar
> CVE details :- [ CVE-2020-24616 ]
>   =
> Jetty [version  9.4.20.v20190813 ]
>  - jetty-server-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>   =
> Jetty-http [version  9.4.20.v20190813 ]
>  - jetty-http-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>   =
>  
>  
>  






[jira] [Updated] (HDFS-15741) Vulnerability fixes need for Hadoop dependency library

2020-12-21 Thread Souryakanta Dwivedy (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Souryakanta Dwivedy updated HDFS-15741:
---
Attachment: CVEs_found.png

> Vulnerability fixes need for Hadoop dependency library 
> ---
>
> Key: HDFS-15741
> URL: https://issues.apache.org/jira/browse/HDFS-15741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Souryakanta Dwivedy
>Priority: Minor
> Attachments: CVEs_found.png
>
>
> Vulnerability fixes need for Hadoop dependency library 
> Below are the dependent library jars used for hadoop where CVEs are found
> Jackson [version 2.10.3 ]
>  - jackson-core-2.10.3.jar
> CVE details :- [  CVE-2020-25649  ]
>  ==
> Jackson-core [version 2.4.0 ]
>  - htrace-core-3.1.0-incubating.jar
> CVE details :- [ CVE-2020-24616 ]
>   =
> Jetty [version  9.4.20.v20190813 ]
>  - jetty-server-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>   =
> Jetty-http [version  9.4.20.v20190813 ]
>  - jetty-http-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>   =
>  
>  
>  






[jira] [Updated] (HDFS-15741) Vulnerability fixes need for Hadoop dependency library

2020-12-21 Thread Souryakanta Dwivedy (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Souryakanta Dwivedy updated HDFS-15741:
---
Description: 
Vulnerability fixes need for Hadoop dependency library 

Below are the dependent library jars used for hadoop where CVEs are found

Jackson [version 2.10.3 ]
 - jackson-core-2.10.3.jar

CVE details :- [  CVE-2020-25649  ]
 ==

Jackson-core [version 2.4.0 ]
 - htrace-core-3.1.0-incubating.jar

CVE details :- [ CVE-2020-24616 ]
  =

Jetty [version  9.4.20.v20190813 ]
 - jetty-server-9.4.20.v20190813.jar

CVE details :- [ CVE-2020-27216 ]
  =

Jetty-http [version  9.4.20.v20190813 ]
 - jetty-http-9.4.20.v20190813.jar

CVE details :- [ CVE-2020-27216 ]
  =

 

 

 

  was:
Vulnerability fixes need for Hadoop dependency library 

Below are the jars used for hadoop where CVEs are found

Jackson [version 2.10.3 ]
 - jackson-core-2.10.3.jar

CVE details :- [  CVE-2020-25649  ]
 ==

Jackson-core [version 2.4.0 ]
 - htrace-core-3.1.0-incubating.jar

CVE details :- [ CVE-2020-24616 ]
  =

Jetty [version  9.4.20.v20190813 ]
 - jetty-server-9.4.20.v20190813.jar

CVE details :- [ CVE-2020-27216 ]
  =

Jetty-http [version  9.4.20.v20190813 ]
 - jetty-http-9.4.20.v20190813.jar

CVE details :- [ CVE-2020-27216 ]
  =

 

 

 


> Vulnerability fixes need for Hadoop dependency library 
> ---
>
> Key: HDFS-15741
> URL: https://issues.apache.org/jira/browse/HDFS-15741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Souryakanta Dwivedy
>Priority: Minor
>
> Vulnerability fixes need for Hadoop dependency library 
> Below are the dependent library jars used for hadoop where CVEs are found
> Jackson [version 2.10.3 ]
>  - jackson-core-2.10.3.jar
> CVE details :- [  CVE-2020-25649  ]
>  ==
> Jackson-core [version 2.4.0 ]
>  - htrace-core-3.1.0-incubating.jar
> CVE details :- [ CVE-2020-24616 ]
>   =
> Jetty [version  9.4.20.v20190813 ]
>  - jetty-server-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>   =
> Jetty-http [version  9.4.20.v20190813 ]
>  - jetty-http-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>   =
>  
>  
>  






[jira] [Updated] (HDFS-15741) Vulnerability fixes need for Jackson Hadoop dependency library

2020-12-21 Thread Souryakanta Dwivedy (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Souryakanta Dwivedy updated HDFS-15741:
---
Description: 
Vulnerability fixes need for Jackson Hadoop dependency library 

Below are the Jackson library jars used for hadoop where CVEs are found

Jackson [version 2.10.3 ]
 - jackson-core-2.10.3.jar

CVE details :- [  CVE-2020-25649  ]
 ==

Jackson-core [version 2.4.0 ]
 - htrace-core-3.1.0-incubating.jar

CVE details :- [ CVE-2020-24616 ]
  =

 

 

 

 

  was:
Vulnerability fixes need for Hadoop dependency library 

Below are the dependent library jars used for hadoop where CVEs are found

Jackson [version 2.10.3 ]
 - jackson-core-2.10.3.jar

CVE details :- [  CVE-2020-25649  ]
 ==

Jackson-core [version 2.4.0 ]
 - htrace-core-3.1.0-incubating.jar

CVE details :- [ CVE-2020-24616 ]
  =

Jetty [version  9.4.20.v20190813 ]
 - jetty-server-9.4.20.v20190813.jar

CVE details :- [ CVE-2020-27216 ]
  =

Jetty-http [version  9.4.20.v20190813 ]
 - jetty-http-9.4.20.v20190813.jar

CVE details :- [ CVE-2020-27216 ]
  =

 

 

 

Summary: Vulnerability fixes need for Jackson Hadoop dependency library 
  (was: Vulnerability fixes need for Hadoop dependency library )

> Vulnerability fixes need for Jackson Hadoop dependency library 
> ---
>
> Key: HDFS-15741
> URL: https://issues.apache.org/jira/browse/HDFS-15741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Souryakanta Dwivedy
>Priority: Minor
> Attachments: CVEs_found.png
>
>
> Vulnerability fixes need for Jackson Hadoop dependency library 
> Below are the Jackson library jars used for hadoop where CVEs are found
> Jackson [version 2.10.3 ]
>  - jackson-core-2.10.3.jar
> CVE details :- [  CVE-2020-25649  ]
>  ==
> Jackson-core [version 2.4.0 ]
>  - htrace-core-3.1.0-incubating.jar
> CVE details :- [ CVE-2020-24616 ]
>   =
>  
>  
>  
>  






[jira] [Updated] (HDFS-15741) Vulnerability fixes need for Jackson Hadoop dependency library

2020-12-21 Thread Souryakanta Dwivedy (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Souryakanta Dwivedy updated HDFS-15741:
---
Attachment: (was: CVEs_found.png)

> Vulnerability fixes need for Jackson Hadoop dependency library 
> ---
>
> Key: HDFS-15741
> URL: https://issues.apache.org/jira/browse/HDFS-15741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Souryakanta Dwivedy
>Priority: Minor
> Attachments: CVEs_found.png
>
>
> Vulnerability fixes need for Jackson Hadoop dependency library 
> Below are the Jackson library jars used for hadoop where CVEs are found
> Jackson [version 2.10.3 ]
>  - jackson-core-2.10.3.jar
> CVE details :- [  CVE-2020-25649  ]
>  ==
> Jackson-core [version 2.4.0 ]
>  - htrace-core-3.1.0-incubating.jar
> CVE details :- [ CVE-2020-24616 ]
>   =
>  
>  
>  
>  






[jira] [Updated] (HDFS-15741) Vulnerability fixes need for Jackson Hadoop dependency library

2020-12-21 Thread Souryakanta Dwivedy (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Souryakanta Dwivedy updated HDFS-15741:
---
Attachment: CVEs_found.png

> Vulnerability fixes need for Jackson Hadoop dependency library 
> ---
>
> Key: HDFS-15741
> URL: https://issues.apache.org/jira/browse/HDFS-15741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Souryakanta Dwivedy
>Priority: Minor
> Attachments: CVEs_found.png
>
>
> Vulnerability fixes need for Jackson Hadoop dependency library 
> Below are the Jackson library jars used for hadoop where CVEs are found
> Jackson [version 2.10.3 ]
>  - jackson-core-2.10.3.jar
> CVE details :- [  CVE-2020-25649  ]
>  ==
> Jackson-core [version 2.4.0 ]
>  - htrace-core-3.1.0-incubating.jar
> CVE details :- [ CVE-2020-24616 ]
>   =
>  
>  
>  
>  






[jira] [Updated] (HDFS-15741) Vulnerability fixes needed for Jackson Hadoop dependency library

2020-12-21 Thread Souryakanta Dwivedy (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Souryakanta Dwivedy updated HDFS-15741:
---
Summary: Vulnerability fixes needed for Jackson Hadoop dependency library   
(was: Vulnerability fixes need for Jackson Hadoop dependency library )

> Vulnerability fixes needed for Jackson Hadoop dependency library 
> -
>
> Key: HDFS-15741
> URL: https://issues.apache.org/jira/browse/HDFS-15741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Souryakanta Dwivedy
>Priority: Minor
> Attachments: CVEs_found.png
>
>
> Vulnerability fixes need for Jackson Hadoop dependency library 
> Below are the Jackson library jars used for hadoop where CVEs are found
> Jackson [version 2.10.3 ]
>  - jackson-core-2.10.3.jar
> CVE details :- [  CVE-2020-25649  ]
>  ==
> Jackson-core [version 2.4.0 ]
>  - htrace-core-3.1.0-incubating.jar
> CVE details :- [ CVE-2020-24616 ]
>   =
>  
>  
>  
>  






[jira] [Created] (HDFS-15742) Vulnerability fixes needed for Jetty hadoop dependency library

2020-12-21 Thread Souryakanta Dwivedy (Jira)
Souryakanta Dwivedy created HDFS-15742:
--

 Summary: Vulnerability fixes needed for Jetty hadoop dependency 
library
 Key: HDFS-15742
 URL: https://issues.apache.org/jira/browse/HDFS-15742
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: Vulnerability fixes needed for Jetty hadoop dependency 
library

The jetty jars where CVEs are found are:

 =

Jetty [version 9.4.20.v20190813 ]

jetty-server-9.4.20.v20190813.jar
CVE details :- [ CVE-2020-27216 ]
 =

Jetty-http [version 9.4.20.v20190813 ]

jetty-http-9.4.20.v20190813.jar
CVE details :- [ CVE-2020-27216 ]
 =
Reporter: Souryakanta Dwivedy
 Attachments: Jetty_CVEs.png
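
Note that Hadoop pins a single Jetty version for all jetty-* artifacts in the parent POM, and CVE-2020-27216 is fixed upstream in Jetty 9.4.33 and later, so this should amount to a one-property bump. A sketch of checking the pin in a source tree (property name as used in current Hadoop POMs; worth verifying on the affected branch):

{code}
# Show the pinned Jetty version property in the parent POM.
grep -n '<jetty.version>' hadoop-project/pom.xml

# List which Jetty artifacts a module actually resolves.
mvn -pl hadoop-hdfs-project/hadoop-hdfs dependency:tree -Dincludes='org.eclipse.jetty:*'
{code}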








[jira] [Created] (HDFS-15743) Fix -Pdist build failure of hadoop-hdfs-native-client

2020-12-21 Thread Masatake Iwasaki (Jira)
Masatake Iwasaki created HDFS-15743:
---

 Summary: Fix -Pdist build failure of hadoop-hdfs-native-client
 Key: HDFS-15743
 URL: https://issues.apache.org/jira/browse/HDFS-15743
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki


{noformat}
[INFO] --- exec-maven-plugin:1.3.1:exec (pre-dist) @ hadoop-hdfs-native-client 
---
tar: ./*: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
Checking to bundle with:
bundleoption=false, liboption=snappy.lib, pattern=libsnappy. libdir=
Checking to bundle with:
bundleoption=false, liboption=zstd.lib, pattern=libzstd. libdir=
Checking to bundle with:
bundleoption=false, liboption=openssl.lib, pattern=libcrypto. libdir=
Checking to bundle with:
bundleoption=false, liboption=isal.lib, pattern=libisal. libdir=
Checking to bundle with:
bundleoption=, liboption=pmdk.lib, pattern=pmdk libdir=
Bundling bin files failed
{noformat}
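
For reference, this is the standard invocation from BUILDING.txt that exercises the failing code path (exact flags may vary by environment):

{code}
# -Pdist triggers the pre-dist exec that calls dev-support/bin/dist-copynativelibs,
# where the tar error above is raised.
mvn clean package -Pdist,native -DskipTests -Dtar
{code}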







[jira] [Updated] (HDFS-15742) Update Jetty hadoop dependency

2020-12-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-15742:
--
Summary: Update Jetty hadoop dependency  (was: Vulnerability fixes needed 
for Jetty hadoop dependency library)

> Update Jetty hadoop dependency
> --
>
> Key: HDFS-15742
> URL: https://issues.apache.org/jira/browse/HDFS-15742
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
> Environment: Vulnerability fixes needed for Jetty hadoop dependency 
> library
> The jetty jars where CVEs are found are:
>  =
> Jetty [version 9.4.20.v20190813 ]
> jetty-server-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>  =
> Jetty-http [version 9.4.20.v20190813 ]
> jetty-http-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>  =
>Reporter: Souryakanta Dwivedy
>Priority: Major
> Attachments: Jetty_CVEs.png
>
>







[jira] [Updated] (HDFS-15742) Vulnerability fixes needed for Jetty hadoop dependency library

2020-12-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-15742:
--
Component/s: build

> Vulnerability fixes needed for Jetty hadoop dependency library
> --
>
> Key: HDFS-15742
> URL: https://issues.apache.org/jira/browse/HDFS-15742
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
> Environment: Vulnerability fixes needed for Jetty hadoop dependency 
> library
> The jetty jars where CVEs are found are:
>  =
> Jetty [version 9.4.20.v20190813 ]
> jetty-server-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>  =
> Jetty-http [version 9.4.20.v20190813 ]
> jetty-http-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>  =
>Reporter: Souryakanta Dwivedy
>Priority: Major
> Attachments: Jetty_CVEs.png
>
>







[jira] [Updated] (HDFS-15742) Update Jetty hadoop dependency

2020-12-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-15742:
--
Affects Version/s: 3.3.0
   3.2.1

> Update Jetty hadoop dependency
> --
>
> Key: HDFS-15742
> URL: https://issues.apache.org/jira/browse/HDFS-15742
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0, 3.2.1
> Environment: Vulnerability fixes needed for Jetty hadoop dependency 
> library
> The jetty jars where CVEs are found are:
>  =
> Jetty [version 9.4.20.v20190813 ]
> jetty-server-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>  =
> Jetty-http [version 9.4.20.v20190813 ]
> jetty-http-9.4.20.v20190813.jar
> CVE details :- [ CVE-2020-27216 ]
>  =
>Reporter: Souryakanta Dwivedy
>Priority: Major
> Attachments: Jetty_CVEs.png
>
>







[jira] [Commented] (HDFS-15308) TestReconstructStripedFile#testNNSendsErasureCodingTasks fails intermittently

2020-12-21 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17252947#comment-17252947
 ] 

Ahmed Hussein commented on HDFS-15308:
--

{quote}[~ahussein] any pointers, why do you think consider load should solve 
this problem?
{quote}
I did not think it was solving the problem; my question was probably not very 
clear.
{quote}is this jira still valid or should we close it?
{quote}
I meant: should we push forward with merging the patch so that we can close this jira?

 

> TestReconstructStripedFile#testNNSendsErasureCodingTasks fails intermittently
> -
>
> Key: HDFS-15308
> URL: https://issues.apache.org/jira/browse/HDFS-15308
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.3.0
>Reporter: Toshihiko Uchida
>Assignee: Hemanth Boyina
>Priority: Major
>  Labels: flaky-test
> Attachments: HDFS-15308.001.patch, HDFS-15308.002.patch
>
>
> In HDFS-14353, TestReconstructStripedFile.testNNSendsErasureCodingTasks 
> failed once due to pending reconstruction timeout as follows.
> {code}
> java.lang.AssertionError: Found 4 timeout pending reconstruction tasks
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.TestReconstructStripedFile.testNNSendsErasureCodingTasks(TestReconstructStripedFile.java:502)
>   at 
> org.apache.hadoop.hdfs.TestReconstructStripedFile.testNNSendsErasureCodingTasks(TestReconstructStripedFile.java:458)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> The error occurred on the following assertion.
> {code}
> // Make sure that all pending reconstruction tasks can be processed.
> while (ns.getPendingReconstructionBlocks() > 0) {
>   long timeoutPending = ns.getNumTimedOutPendingReconstructions();
>   assertTrue(String.format("Found %d timeout pending reconstruction tasks",
>   timeoutPending), timeoutPending == 0);
>   Thread.sleep(1000);
> }
> {code}
> The failure could not be reproduced in the reporter's docker environment 
> (start-build-environment.sh).






[jira] [Work logged] (HDFS-15743) Fix -Pdist build failure of hadoop-hdfs-native-client

2020-12-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15743?focusedWorklogId=526863&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526863
 ]

ASF GitHub Bot logged work on HDFS-15743:
-

Author: ASF GitHub Bot
Created on: 21/Dec/20 16:32
Start Date: 21/Dec/20 16:32
Worklog Time Spent: 10m 
  Work Description: iwasakims opened a new pull request #2569:
URL: https://github.com/apache/hadoop/pull/2569


   https://issues.apache.org/jira/browse/HDFS-15743
   
   The `-Pdist` build fails in hadoop-hdfs-native-client when it calls 
dev-support/bin/dist-copynativelibs.
   
   ```
   [DEBUG] Executing command line: [bash, 
/home/centos/srcs/hadoop/hadoop-project-dist/../dev-support/bin/dist-copynativelibs,
 --version=3.4.0-SNAPSHOT, 
--builddir=/home/centos/srcs/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target,
 --artifactid=hadoop-hdfs-native-client, --isalbundle=false, --isallib=, 
--openssllib=, --opensslbinbundle=false, --openssllibbundle=false, 
--snappylib=, --snappylibbundle=false, --zstdbinbundle=false, --zstdlib=, 
--zstdlibbundle=false]
   tar: ./*: Cannot stat: No such file or directory
   tar: Exiting with failure status due to previous errors
   ```
   
   Based on the debug output from `set -x`, the cause is an empty target/bin 
dir (plus the code path intended for the Windows platform).
   
   ```
   + [[ -d 
/home/centos/srcs/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/bin
 ]]
   + mkdir -p 
/home/centos/srcs/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/hadoop-hdfs-native-client-3.4.0-SNAPSHOT/bin
   + cd 
/home/centos/srcs/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/bin
   + tar cf - './*'
   tar: ./*: Cannot stat: No such file or directory
   tar: Exiting with failure status due to previous errors
   ```
   
   I guess target/bin is created or emptied by a recent commit. Adding an 
empty-directory check to the conditional worked for me.
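   
   A minimal sketch of that kind of guard (names are illustrative, not the exact dist-copynativelibs patch):
   
   ```
   # Only bundle bin files when target/bin exists AND is non-empty; an empty
   # directory is what made `tar cf - './*'` above fail.
   if [[ -d "${BIN_DIR}" && -n "$(ls -A "${BIN_DIR}")" ]]; then
     (cd "${BIN_DIR}" && tar cf - .) | (cd "${TARGET_DIR}" && tar xf -)
   fi
   ```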



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 526863)
Remaining Estimate: 0h
Time Spent: 10m

> Fix -Pdist build failure of hadoop-hdfs-native-client
> -
>
> Key: HDFS-15743
> URL: https://issues.apache.org/jira/browse/HDFS-15743
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (pre-dist) @ 
> hadoop-hdfs-native-client ---
> tar: ./*: Cannot stat: No such file or directory
> tar: Exiting with failure status due to previous errors
> Checking to bundle with:
> bundleoption=false, liboption=snappy.lib, pattern=libsnappy. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=zstd.lib, pattern=libzstd. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=openssl.lib, pattern=libcrypto. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=isal.lib, pattern=libisal. libdir=
> Checking to bundle with:
> bundleoption=, liboption=pmdk.lib, pattern=pmdk libdir=
> Bundling bin files failed
> {noformat}






[jira] [Updated] (HDFS-15743) Fix -Pdist build failure of hadoop-hdfs-native-client

2020-12-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15743:
--
Labels: pull-request-available  (was: )

> Fix -Pdist build failure of hadoop-hdfs-native-client
> -
>
> Key: HDFS-15743
> URL: https://issues.apache.org/jira/browse/HDFS-15743
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (pre-dist) @ 
> hadoop-hdfs-native-client ---
> tar: ./*: Cannot stat: No such file or directory
> tar: Exiting with failure status due to previous errors
> Checking to bundle with:
> bundleoption=false, liboption=snappy.lib, pattern=libsnappy. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=zstd.lib, pattern=libzstd. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=openssl.lib, pattern=libcrypto. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=isal.lib, pattern=libisal. libdir=
> Checking to bundle with:
> bundleoption=, liboption=pmdk.lib, pattern=pmdk libdir=
> Bundling bin files failed
> {noformat}






[jira] [Commented] (HDFS-15741) Vulnerability fixes needed for Jackson Hadoop dependency library

2020-12-21 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17252955#comment-17252955
 ] 

Wei-Chiu Chuang commented on HDFS-15741:


According to https://github.com/FasterXML/jackson-databind/issues/2589, the fix is included in
{quote}
2.6.7.4
2.9.10.7
2.10.5.1
2.11.0 and later
{quote}

As for htrace, we'll have to remove that dependency. CC [~smeng]
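
To verify what a given branch actually resolves before and after the bump (a sketch; run from a Hadoop source tree):

{code}
# Show where jackson-databind comes from and which version wins, to confirm
# the upgrade to a fixed release (e.g. 2.10.5.1 on the 2.10 line) took effect.
mvn dependency:tree -Dincludes=com.fasterxml.jackson.core:jackson-databind
{code}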

> Vulnerability fixes needed for Jackson Hadoop dependency library 
> -
>
> Key: HDFS-15741
> URL: https://issues.apache.org/jira/browse/HDFS-15741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Souryakanta Dwivedy
>Priority: Minor
> Attachments: CVEs_found.png
>
>
> Vulnerability fixes need for Jackson Hadoop dependency library 
> Below are the Jackson library jars used for hadoop where CVEs are found
> Jackson [version 2.10.3 ]
>  - jackson-core-2.10.3.jar
> CVE details :- [  CVE-2020-25649  ]
>  ==
> Jackson-core [version 2.4.0 ]
>  - htrace-core-3.1.0-incubating.jar
> CVE details :- [ CVE-2020-24616 ]
>   =
>  
>  
>  
>  






[jira] [Updated] (HDFS-15743) Fix -Pdist build failure of hadoop-hdfs-native-client

2020-12-21 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-15743:

Status: Patch Available  (was: Open)

> Fix -Pdist build failure of hadoop-hdfs-native-client
> -
>
> Key: HDFS-15743
> URL: https://issues.apache.org/jira/browse/HDFS-15743
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (pre-dist) @ 
> hadoop-hdfs-native-client ---
> tar: ./*: Cannot stat: No such file or directory
> tar: Exiting with failure status due to previous errors
> Checking to bundle with:
> bundleoption=false, liboption=snappy.lib, pattern=libsnappy. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=zstd.lib, pattern=libzstd. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=openssl.lib, pattern=libcrypto. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=isal.lib, pattern=libisal. libdir=
> Checking to bundle with:
> bundleoption=, liboption=pmdk.lib, pattern=pmdk libdir=
> Bundling bin files failed
> {noformat}






[jira] [Work logged] (HDFS-15743) Fix -Pdist build failure of hadoop-hdfs-native-client

2020-12-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15743?focusedWorklogId=526889&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526889
 ]

ASF GitHub Bot logged work on HDFS-15743:
-

Author: ASF GitHub Bot
Created on: 21/Dec/20 17:21
Start Date: 21/Dec/20 17:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2569:
URL: https://github.com/apache/hadoop/pull/2569#issuecomment-749095287


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 28s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  6s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  15m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 16s |  |  There were no new 
shelldocs issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  48m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2569/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2569 |
   | Optional Tests | dupname asflicense shellcheck shelldocs |
   | uname | Linux 9a0aaef4c3ca 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a35fc3871b0 |
   | Max. process+thread count | 535 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2569/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Issue Time Tracking
---

Worklog Id: (was: 526889)
Time Spent: 20m  (was: 10m)

> Fix -Pdist build failure of hadoop-hdfs-native-client
> -
>
> Key: HDFS-15743
> URL: https://issues.apache.org/jira/browse/HDFS-15743
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (pre-dist) @ 
> hadoop-hdfs-native-client ---
> tar: ./*: Cannot stat: No such file or directory
> tar: Exiting with failure status due to previous errors
> Checking to bundle with:
> bundleoption=false, liboption=snappy.lib, pattern=libsnappy. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=zstd.lib, pattern=libzstd. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=openssl.lib, pattern=libcrypto. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=isal.lib, pattern=libisal. libdir=
> Checking to bundle with:
> bundleoption=, liboption=pmdk.lib, pattern=pmdk libdir=
> Bundling bin files failed
> {noformat}






[jira] [Commented] (HDFS-15308) TestReconstructStripedFile#testNNSendsErasureCodingTasks fails intermittently

2020-12-21 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17253061#comment-17253061
 ] 

Íñigo Goiri commented on HDFS-15308:


Let's go ahead with this then.

> TestReconstructStripedFile#testNNSendsErasureCodingTasks fails intermittently
> -
>
> Key: HDFS-15308
> URL: https://issues.apache.org/jira/browse/HDFS-15308
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.3.0
>Reporter: Toshihiko Uchida
>Assignee: Hemanth Boyina
>Priority: Major
>  Labels: flaky-test
> Attachments: HDFS-15308.001.patch, HDFS-15308.002.patch
>
>
> In HDFS-14353, TestReconstructStripedFile.testNNSendsErasureCodingTasks 
> failed once due to pending reconstruction timeout as follows.
> {code}
> java.lang.AssertionError: Found 4 timeout pending reconstruction tasks
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.TestReconstructStripedFile.testNNSendsErasureCodingTasks(TestReconstructStripedFile.java:502)
>   at 
> org.apache.hadoop.hdfs.TestReconstructStripedFile.testNNSendsErasureCodingTasks(TestReconstructStripedFile.java:458)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> The error occurred on the following assertion.
> {code}
> // Make sure that all pending reconstruction tasks can be processed.
> while (ns.getPendingReconstructionBlocks() > 0) {
>   long timeoutPending = ns.getNumTimedOutPendingReconstructions();
>   assertTrue(String.format("Found %d timeout pending reconstruction tasks",
>   timeoutPending), timeoutPending == 0);
>   Thread.sleep(1000);
> }
> {code}
> The failure could not be reproduced in the reporter's docker environment 
> (start-build-environment.sh).






[jira] [Commented] (HDFS-15569) Speed up the Storage#doRecover during datanode rolling upgrade

2020-12-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17253082#comment-17253082
 ] 

Hadoop QA commented on HDFS-15569:
--

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 42s | Docker mode activated. |
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || trunk Compile Tests ||
| +1 | mvninstall | 20m 47s | trunk passed |
| +1 | compile | 1m 18s | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | compile | 1m 14s | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 | checkstyle | 0m 48s | trunk passed |
| +1 | mvnsite | 1m 20s | trunk passed |
| +1 | shadedclient | 17m 17s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 55s | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javadoc | 1m 26s | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| 0 | spotbugs | 3m 1s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 59s | trunk passed |
|| || || Patch Compile Tests ||
| +1 | mvninstall | 1m 13s | the patch passed |
| +1 | compile | 1m 10s | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javac | 1m 10s | the patch passed |
| +1 | compile | 1m 4s | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 | javac | 1m 4s | the patch passed |
| +1 | checkstyle | 0m 40s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 64 unchanged - 3 fixed = 64 total (was 67) |
| +1 | mvnsite | 1m 13s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 14m 34s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 50s | … |

[jira] [Updated] (HDFS-15308) TestReconstructStripedFile#testNNSendsErasureCodingTasks fails intermittently

2020-12-21 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15308:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk.

Thanks, everyone!

> TestReconstructStripedFile#testNNSendsErasureCodingTasks fails intermittently
> -
>
> Key: HDFS-15308
> URL: https://issues.apache.org/jira/browse/HDFS-15308
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.3.0
>Reporter: Toshihiko Uchida
>Assignee: Hemanth Boyina
>Priority: Major
>  Labels: flaky-test
> Fix For: 3.4.0
>
> Attachments: HDFS-15308.001.patch, HDFS-15308.002.patch
>
>
> In HDFS-14353, TestReconstructStripedFile.testNNSendsErasureCodingTasks 
> failed once due to pending reconstruction timeout as follows.
> {code}
> java.lang.AssertionError: Found 4 timeout pending reconstruction tasks
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.TestReconstructStripedFile.testNNSendsErasureCodingTasks(TestReconstructStripedFile.java:502)
>   at 
> org.apache.hadoop.hdfs.TestReconstructStripedFile.testNNSendsErasureCodingTasks(TestReconstructStripedFile.java:458)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> The error occurred on the following assertion.
> {code}
> // Make sure that all pending reconstruction tasks can be processed.
> while (ns.getPendingReconstructionBlocks() > 0) {
>   long timeoutPending = ns.getNumTimedOutPendingReconstructions();
>   assertTrue(String.format("Found %d timeout pending reconstruction tasks",
>   timeoutPending), timeoutPending == 0);
>   Thread.sleep(1000);
> }
> {code}
> The failure could not be reproduced in the reporter's docker environment 
> (start-build-environment.sh).






[jira] [Work logged] (HDFS-15743) Fix -Pdist build failure of hadoop-hdfs-native-client

2020-12-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15743?focusedWorklogId=526986&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526986
 ]

ASF GitHub Bot logged work on HDFS-15743:
-

Author: ASF GitHub Bot
Created on: 21/Dec/20 22:19
Start Date: 21/Dec/20 22:19
Worklog Time Spent: 10m 
  Work Description: iwasakims merged pull request #2569:
URL: https://github.com/apache/hadoop/pull/2569


   





Issue Time Tracking
---

Worklog Id: (was: 526986)
Time Spent: 0.5h  (was: 20m)

> Fix -Pdist build failure of hadoop-hdfs-native-client
> -
>
> Key: HDFS-15743
> URL: https://issues.apache.org/jira/browse/HDFS-15743
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (pre-dist) @ 
> hadoop-hdfs-native-client ---
> tar: ./*: Cannot stat: No such file or directory
> tar: Exiting with failure status due to previous errors
> Checking to bundle with:
> bundleoption=false, liboption=snappy.lib, pattern=libsnappy. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=zstd.lib, pattern=libzstd. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=openssl.lib, pattern=libcrypto. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=isal.lib, pattern=libisal. libdir=
> Checking to bundle with:
> bundleoption=, liboption=pmdk.lib, pattern=pmdk libdir=
> Bundling bin files failed
> {noformat}






[jira] [Updated] (HDFS-15743) Fix -Pdist build failure of hadoop-hdfs-native-client

2020-12-21 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-15743:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Fix -Pdist build failure of hadoop-hdfs-native-client
> -
>
> Key: HDFS-15743
> URL: https://issues.apache.org/jira/browse/HDFS-15743
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (pre-dist) @ 
> hadoop-hdfs-native-client ---
> tar: ./*: Cannot stat: No such file or directory
> tar: Exiting with failure status due to previous errors
> Checking to bundle with:
> bundleoption=false, liboption=snappy.lib, pattern=libsnappy. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=zstd.lib, pattern=libzstd. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=openssl.lib, pattern=libcrypto. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=isal.lib, pattern=libisal. libdir=
> Checking to bundle with:
> bundleoption=, liboption=pmdk.lib, pattern=pmdk libdir=
> Bundling bin files failed
> {noformat}






[jira] [Work logged] (HDFS-15743) Fix -Pdist build failure of hadoop-hdfs-native-client

2020-12-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15743?focusedWorklogId=526987&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526987
 ]

ASF GitHub Bot logged work on HDFS-15743:
-

Author: ASF GitHub Bot
Created on: 21/Dec/20 22:20
Start Date: 21/Dec/20 22:20
Worklog Time Spent: 10m 
  Work Description: iwasakims commented on pull request #2569:
URL: https://github.com/apache/hadoop/pull/2569#issuecomment-749229282


   Thanks, @goiri. I merged this.





Issue Time Tracking
---

Worklog Id: (was: 526987)
Time Spent: 40m  (was: 0.5h)

> Fix -Pdist build failure of hadoop-hdfs-native-client
> -
>
> Key: HDFS-15743
> URL: https://issues.apache.org/jira/browse/HDFS-15743
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (pre-dist) @ 
> hadoop-hdfs-native-client ---
> tar: ./*: Cannot stat: No such file or directory
> tar: Exiting with failure status due to previous errors
> Checking to bundle with:
> bundleoption=false, liboption=snappy.lib, pattern=libsnappy. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=zstd.lib, pattern=libzstd. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=openssl.lib, pattern=libcrypto. libdir=
> Checking to bundle with:
> bundleoption=false, liboption=isal.lib, pattern=libisal. libdir=
> Checking to bundle with:
> bundleoption=, liboption=pmdk.lib, pattern=pmdk libdir=
> Bundling bin files failed
> {noformat}






[jira] [Work logged] (HDFS-15739) Missing Javadoc for a param in DFSNetworkTopology

2020-12-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15739?focusedWorklogId=527021&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-527021
 ]

ASF GitHub Bot logged work on HDFS-15739:
-

Author: ASF GitHub Bot
Created on: 22/Dec/20 01:25
Start Date: 22/Dec/20 01:25
Worklog Time Spent: 10m 
  Work Description: ferhui merged pull request #2566:
URL: https://github.com/apache/hadoop/pull/2566


   





Issue Time Tracking
---

Worklog Id: (was: 527021)
Time Spent: 1h 20m  (was: 1h 10m)

> Missing Javadoc for a param in DFSNetworkTopology
> -
>
> Key: HDFS-15739
> URL: https://issues.apache.org/jira/browse/HDFS-15739
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: zhanghuazong
>Assignee: zhanghuazong
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDFS-15739.0.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
>  Only add missing Javadoc for a param in method chooseRandomWithStorageType 
> of DFSNetworkTopology.java.
>  






[jira] [Resolved] (HDFS-15739) Missing Javadoc for a param in DFSNetworkTopology

2020-12-21 Thread Hui Fei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Fei resolved HDFS-15739.

Fix Version/s: 3.4.0
   Resolution: Fixed

> Missing Javadoc for a param in DFSNetworkTopology
> -
>
> Key: HDFS-15739
> URL: https://issues.apache.org/jira/browse/HDFS-15739
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: zhanghuazong
>Assignee: zhanghuazong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15739.0.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
>  Only add missing Javadoc for a param in method chooseRandomWithStorageType 
> of DFSNetworkTopology.java.
>  






[jira] [Work logged] (HDFS-15739) Missing Javadoc for a param in DFSNetworkTopology

2020-12-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15739?focusedWorklogId=527022&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-527022
 ]

ASF GitHub Bot logged work on HDFS-15739:
-

Author: ASF GitHub Bot
Created on: 22/Dec/20 01:26
Start Date: 22/Dec/20 01:26
Worklog Time Spent: 10m 
  Work Description: ferhui commented on pull request #2566:
URL: https://github.com/apache/hadoop/pull/2566#issuecomment-749285996


   Merged to trunk! @langlaile1221 Thanks for your contribution, @ayushtkn 
Thanks for the review!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 527022)
Time Spent: 1.5h  (was: 1h 20m)

> Missing Javadoc for a param in DFSNetworkTopology
> -
>
> Key: HDFS-15739
> URL: https://issues.apache.org/jira/browse/HDFS-15739
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: zhanghuazong
>Assignee: zhanghuazong
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDFS-15739.0.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
>  Add the missing Javadoc for a param in the chooseRandomWithStorageType 
> method of DFSNetworkTopology.java.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15739) Missing Javadoc for a param in DFSNetworkTopology

2020-12-21 Thread Hui Fei (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17253199#comment-17253199
 ] 

Hui Fei commented on HDFS-15739:


Merged to trunk! [~zhanghuazong] Thanks for the contribution, [~ayushtkn] 
[~marvelrock] Thanks for the review!

> Missing Javadoc for a param in DFSNetworkTopology
> -
>
> Key: HDFS-15739
> URL: https://issues.apache.org/jira/browse/HDFS-15739
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: zhanghuazong
>Assignee: zhanghuazong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15739.0.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
>  Add the missing Javadoc for a param in the chooseRandomWithStorageType 
> method of DFSNetworkTopology.java.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15744) Use cumulative counting way to improve the accuracy of slow disk detection

2020-12-21 Thread Haibin Huang (Jira)
Haibin Huang created HDFS-15744:
---

 Summary: Use cumulative counting way to improve the accuracy of 
slow disk detection
 Key: HDFS-15744
 URL: https://issues.apache.org/jira/browse/HDFS-15744
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haibin Huang
Assignee: Haibin Huang
 Attachments: image-2020-12-22-11-37-14-734.png, 
image-2020-12-22-11-37-35-280.png, image-2020-12-22-11-46-48-817.png

HDFS-11461 added datanode disk outlier detection, which we can use to find 
slow disks via the SlowDiskReport (HDFS-11551). However, I found that the slow 
disk information may not be accurate enough in practice.

This is because a large number of short-term writes can lead to 
miscalculation. Here is an example: the disk is healthy, but when it handles a 
burst of writes for a few minutes, its write I/O does get slow and it is 
flagged as a slow disk. The disk is only slow for a few minutes, yet 
SlowDiskReport keeps reporting it until the information expires. This scenario 
confuses us, since we want SlowDiskReport to detect genuinely bad disks.

!image-2020-12-22-11-37-14-734.png!

!image-2020-12-22-11-37-35-280.png!

To improve the detection accuracy, we use a cumulative counting approach to 
detect slow disks: if, within the reportValidityMs interval, a disk is flagged 
as an outlier in more than 50% of the detection rounds, it should be a 
genuinely bad disk.

For example, if reportValidityMs is one hour and the detection interval is 
five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
as an outlier more than 6 times should be a genuinely bad disk. Using this 
approach to detect bad disks in our cluster, we reach over 90% accuracy.

!image-2020-12-22-11-46-48-817.png!
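
To make the counting rule concrete, here is a minimal sketch of the 
cumulative-counting idea. All class, method, and variable names below are 
illustrative assumptions, not the actual patch code:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class CumulativeSlowDiskDetector {
  private final long reportValidityMs;     // window length, e.g. one hour
  private final long detectionIntervalMs;  // round length, e.g. five minutes
  // disk id -> number of rounds the disk was flagged as an outlier
  private final Map<String, Integer> outlierCounts = new HashMap<>();
  private long roundsInWindow = 0;

  public CumulativeSlowDiskDetector(long reportValidityMs,
      long detectionIntervalMs) {
    this.reportValidityMs = reportValidityMs;
    this.detectionIntervalMs = detectionIntervalMs;
  }

  /** Record one outlier-detection round; bump counts for flagged disks. */
  public void recordRound(Iterable<String> outlierDisks) {
    roundsInWindow++;
    for (String disk : outlierDisks) {
      outlierCounts.merge(disk, 1, Integer::sum);
    }
    // One hour / five minutes = 12 rounds per window in the example above.
    long roundsPerWindow = reportValidityMs / detectionIntervalMs;
    if (roundsInWindow >= roundsPerWindow) {
      reportAndReset(roundsPerWindow);
    }
  }

  // Only disks flagged in more than 50% of the rounds are reported as bad;
  // a disk that was slow for just a few minutes never crosses the threshold.
  private void reportAndReset(long roundsPerWindow) {
    for (Map.Entry<String, Integer> e : outlierCounts.entrySet()) {
      if (e.getValue() > roundsPerWindow / 2) {  // more than 6 of 12 rounds
        System.out.println("real bad disk: " + e.getKey());
      }
    }
    outlierCounts.clear();
    roundsInWindow = 0;
  }
}
{code}

With a one-hour reportValidityMs and a five-minute detection interval, a disk 
must be flagged in at least 7 of the 12 rounds before it is reported, which 
matches the over-50% rule described above.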



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15744) Use cumulative counting way to improve the accuracy of slow disk detection

2020-12-21 Thread Haibin Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibin Huang updated HDFS-15744:

Description: 
HDFS-11461 added datanode disk outlier detection, which we can use to find 
slow disks via the SlowDiskReport (HDFS-11551). However, I found that the slow 
disk information may not be accurate enough in practice.

This is because a large number of short-term writes can lead to 
miscalculation. Here is an example: the disk is healthy, but when it handles a 
burst of writes for a few minutes, its write I/O does get slow and it is 
flagged as a slow disk. The disk is only slow for a few minutes, yet 
SlowDiskReport keeps reporting it until the information expires. This scenario 
confuses us, since we want SlowDiskReport to detect genuinely bad disks.

!image-2020-12-22-11-37-14-734.png!

!image-2020-12-22-11-37-35-280.png!

To improve the detection accuracy, we use a cumulative counting approach to 
detect slow disks: if, within the reportValidityMs interval, a disk is flagged 
as an outlier in more than 50% of the detection rounds, it should be a 
genuinely bad disk.

For example, if reportValidityMs is one hour and the detection interval is 
five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
as an outlier more than 6 times should be a genuinely bad disk. Using this 
approach to detect bad disks in our cluster, we reach over 90% accuracy.

!image-2020-12-22-11-46-48-817.png!

  was:
[HDFS-11461|https://issues.apache.org/jira/browse/HDFS-11461] added datanode 
disk outlier detection, which we can use to find slow disks via the 
SlowDiskReport ([HDFS-11551|https://issues.apache.org/jira/browse/HDFS-11551]). 
However, I found that the slow disk information may not be accurate enough in 
practice.

This is because a large number of short-term writes can lead to 
miscalculation. Here is an example: the disk is healthy, but when it handles a 
burst of writes for a few minutes, its write I/O does get slow and it is 
flagged as a slow disk. The disk is only slow for a few minutes, yet 
SlowDiskReport keeps reporting it until the information expires. This scenario 
confuses us, since we want SlowDiskReport to detect genuinely bad disks.

!image-2020-12-22-11-37-14-734.png!

!image-2020-12-22-11-37-35-280.png!

To improve the detection accuracy, we use a cumulative counting approach to 
detect slow disks: if, within the reportValidityMs interval, a disk is flagged 
as an outlier in more than 50% of the detection rounds, it should be a 
genuinely bad disk.

For example, if reportValidityMs is one hour and the detection interval is 
five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
as an outlier more than 6 times should be a genuinely bad disk. Using this 
approach to detect bad disks in our cluster, we reach over 90% accuracy.

!image-2020-12-22-11-46-48-817.png!


> Use cumulative counting way to improve the accuracy of slow disk detection
> --
>
> Key: HDFS-15744
> URL: https://issues.apache.org/jira/browse/HDFS-15744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haibin Huang
>Assignee: Haibin Huang
>Priority: Major
> Attachments: image-2020-12-22-11-37-14-734.png, 
> image-2020-12-22-11-37-35-280.png, image-2020-12-22-11-46-48-817.png
>
>
> HDFS-11461 added datanode disk outlier detection, which we can use to find 
> slow disks via the SlowDiskReport (HDFS-11551). However, I found that the 
> slow disk information may not be accurate enough in practice.
> This is because a large number of short-term writes can lead to 
> miscalculation. Here is an example: the disk is healthy, but when it handles 
> a burst of writes for a few minutes, its write I/O does get slow and it is 
> flagged as a slow disk. The disk is only slow for a few minutes, yet 
> SlowDiskReport keeps reporting it until the information expires. This 
> scenario confuses us, since we want SlowDiskReport to detect genuinely bad 
> disks.
> !image-2020-12-22-11-37-14-734.png!
> !image-2020-12-22-11-37-35-280.png!
> To improve the detection accuracy, we use a cumulative counting approach to 
> detect slow disks: if, within the reportValidityMs interval, a disk is 
> flagged as an outlier in more than 50% of the detection rounds, it should be 
> a genuinely bad disk.
> For example, if reportValidityMs is one hour and the detection interval is 
> five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
> as an outlier more than 6 times should be a genuinely bad disk. Using this 
> approach to detect bad disks in our cluster, we reach over 90% accuracy.
> !image-2020-12-22-11-46-48-817.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Updated] (HDFS-15744) Use cumulative counting way to improve the accuracy of slow disk detection

2020-12-21 Thread Haibin Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibin Huang updated HDFS-15744:

Description: 
[HDFS-11461|https://issues.apache.org/jira/browse/HDFS-11461] added datanode 
disk outlier detection, which we can use to find slow disks via the 
SlowDiskReport ([HDFS-11551|https://issues.apache.org/jira/browse/HDFS-11551]). 
However, I found that the slow disk information may not be accurate enough in 
practice.

This is because a large number of short-term writes can lead to 
miscalculation. Here is an example: the disk is healthy, but when it handles a 
burst of writes for a few minutes, its write I/O does get slow and it is 
flagged as a slow disk. The disk is only slow for a few minutes, yet 
SlowDiskReport keeps reporting it until the information expires. This scenario 
confuses us, since we want SlowDiskReport to detect genuinely bad disks.

!image-2020-12-22-11-37-14-734.png!

!image-2020-12-22-11-37-35-280.png!

To improve the detection accuracy, we use a cumulative counting approach to 
detect slow disks: if, within the reportValidityMs interval, a disk is flagged 
as an outlier in more than 50% of the detection rounds, it should be a 
genuinely bad disk.

For example, if reportValidityMs is one hour and the detection interval is 
five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
as an outlier more than 6 times should be a genuinely bad disk. Using this 
approach to detect bad disks in our cluster, we reach over 90% accuracy.

!image-2020-12-22-11-46-48-817.png!

  was:
HDFS-11461 added datanode disk outlier detection, which we can use to find 
slow disks via the SlowDiskReport (HDFS-11551). However, I found that the slow 
disk information may not be accurate enough in practice.

This is because a large number of short-term writes can lead to 
miscalculation. Here is an example: the disk is healthy, but when it handles a 
burst of writes for a few minutes, its write I/O does get slow and it is 
flagged as a slow disk. The disk is only slow for a few minutes, yet 
SlowDiskReport keeps reporting it until the information expires. This scenario 
confuses us, since we want SlowDiskReport to detect genuinely bad disks.

!image-2020-12-22-11-37-14-734.png!

!image-2020-12-22-11-37-35-280.png!

To improve the detection accuracy, we use a cumulative counting approach to 
detect slow disks: if, within the reportValidityMs interval, a disk is flagged 
as an outlier in more than 50% of the detection rounds, it should be a 
genuinely bad disk.

For example, if reportValidityMs is one hour and the detection interval is 
five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
as an outlier more than 6 times should be a genuinely bad disk. Using this 
approach to detect bad disks in our cluster, we reach over 90% accuracy.

!image-2020-12-22-11-46-48-817.png!


> Use cumulative counting way to improve the accuracy of slow disk detection
> --
>
> Key: HDFS-15744
> URL: https://issues.apache.org/jira/browse/HDFS-15744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haibin Huang
>Assignee: Haibin Huang
>Priority: Major
> Attachments: image-2020-12-22-11-37-14-734.png, 
> image-2020-12-22-11-37-35-280.png, image-2020-12-22-11-46-48-817.png
>
>
> [HDFS-11461|https://issues.apache.org/jira/browse/HDFS-11461] added datanode 
> disk outlier detection, which we can use to find slow disks via the 
> SlowDiskReport ([HDFS-11551|https://issues.apache.org/jira/browse/HDFS-11551]). 
> However, I found that the slow disk information may not be accurate enough 
> in practice.
> This is because a large number of short-term writes can lead to 
> miscalculation. Here is an example: the disk is healthy, but when it handles 
> a burst of writes for a few minutes, its write I/O does get slow and it is 
> flagged as a slow disk. The disk is only slow for a few minutes, yet 
> SlowDiskReport keeps reporting it until the information expires. This 
> scenario confuses us, since we want SlowDiskReport to detect genuinely bad 
> disks.
> !image-2020-12-22-11-37-14-734.png!
> !image-2020-12-22-11-37-35-280.png!
> To improve the detection accuracy, we use a cumulative counting approach to 
> detect slow disks: if, within the reportValidityMs interval, a disk is 
> flagged as an outlier in more than 50% of the detection rounds, it should be 
> a genuinely bad disk.
> For example, if reportValidityMs is one hour and the detection interval is 
> five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
> as an outlier more than 6 times should be a genuinely bad disk. Using this 
> approach to detect bad disks in our cluster, we reach over 90% accuracy.
> !image-2020-12-22-11-46-48-817.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Updated] (HDFS-15744) Use cumulative counting way to improve the accuracy of slow disk detection

2020-12-21 Thread Haibin Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibin Huang updated HDFS-15744:

Description: 
HDFS has supported datanode disk outlier detection since 
[HDFS-11461|https://issues.apache.org/jira/browse/HDFS-11461], and we can use 
it to find slow disks via the SlowDiskReport (HDFS-11551). However, I found 
that the slow disk information may not be accurate enough in practice.

This is because a large number of short-term writes can lead to 
miscalculation. Here is an example: the disk is healthy, but when it handles a 
burst of writes for a few minutes, its write I/O does get slow and it is 
flagged as a slow disk. The disk is only slow for a few minutes, yet 
SlowDiskReport keeps reporting it until the information expires. This scenario 
confuses us, since we want SlowDiskReport to detect genuinely bad disks.

!image-2020-12-22-11-37-14-734.png!

!image-2020-12-22-11-37-35-280.png!

To improve the detection accuracy, we use a cumulative counting approach to 
detect slow disks: if, within the reportValidityMs interval, a disk is flagged 
as an outlier in more than 50% of the detection rounds, it should be a 
genuinely bad disk.

For example, if reportValidityMs is one hour and the detection interval is 
five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
as an outlier more than 6 times should be a genuinely bad disk. Using this 
approach to detect bad disks in our cluster, we reach over 90% accuracy.

!image-2020-12-22-11-46-48-817.png!

  was:
HDFS-11461 added datanode disk outlier detection, which we can use to find 
slow disks via the SlowDiskReport (HDFS-11551). However, I found that the slow 
disk information may not be accurate enough in practice.

This is because a large number of short-term writes can lead to 
miscalculation. Here is an example: the disk is healthy, but when it handles a 
burst of writes for a few minutes, its write I/O does get slow and it is 
flagged as a slow disk. The disk is only slow for a few minutes, yet 
SlowDiskReport keeps reporting it until the information expires. This scenario 
confuses us, since we want SlowDiskReport to detect genuinely bad disks.

!image-2020-12-22-11-37-14-734.png!

!image-2020-12-22-11-37-35-280.png!

To improve the detection accuracy, we use a cumulative counting approach to 
detect slow disks: if, within the reportValidityMs interval, a disk is flagged 
as an outlier in more than 50% of the detection rounds, it should be a 
genuinely bad disk.

For example, if reportValidityMs is one hour and the detection interval is 
five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
as an outlier more than 6 times should be a genuinely bad disk. Using this 
approach to detect bad disks in our cluster, we reach over 90% accuracy.

!image-2020-12-22-11-46-48-817.png!


> Use cumulative counting way to improve the accuracy of slow disk detection
> --
>
> Key: HDFS-15744
> URL: https://issues.apache.org/jira/browse/HDFS-15744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haibin Huang
>Assignee: Haibin Huang
>Priority: Major
> Attachments: image-2020-12-22-11-37-14-734.png, 
> image-2020-12-22-11-37-35-280.png, image-2020-12-22-11-46-48-817.png
>
>
> HDFS has supported datanode disk outlier detection since 
> [HDFS-11461|https://issues.apache.org/jira/browse/HDFS-11461], and we can 
> use it to find slow disks via the SlowDiskReport (HDFS-11551). However, I 
> found that the slow disk information may not be accurate enough in practice.
> This is because a large number of short-term writes can lead to 
> miscalculation. Here is an example: the disk is healthy, but when it handles 
> a burst of writes for a few minutes, its write I/O does get slow and it is 
> flagged as a slow disk. The disk is only slow for a few minutes, yet 
> SlowDiskReport keeps reporting it until the information expires. This 
> scenario confuses us, since we want SlowDiskReport to detect genuinely bad 
> disks.
> !image-2020-12-22-11-37-14-734.png!
> !image-2020-12-22-11-37-35-280.png!
> To improve the detection accuracy, we use a cumulative counting approach to 
> detect slow disks: if, within the reportValidityMs interval, a disk is 
> flagged as an outlier in more than 50% of the detection rounds, it should be 
> a genuinely bad disk.
> For example, if reportValidityMs is one hour and the detection interval is 
> five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
> as an outlier more than 6 times should be a genuinely bad disk. Using this 
> approach to detect bad disks in our cluster, we reach over 90% accuracy.
> !image-2020-12-22-11-46-48-817.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Updated] (HDFS-15744) Use cumulative counting way to improve the accuracy of slow disk detection

2020-12-21 Thread Haibin Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibin Huang updated HDFS-15744:

Description: 
HDFS has supported datanode disk outlier detection since 
[HDFS-11461|https://issues.apache.org/jira/browse/HDFS-11461], and we can use 
it to find slow disks via the SlowDiskReport 
([HDFS-11551|https://issues.apache.org/jira/browse/HDFS-11551]). However, I 
found that the slow disk information may not be accurate enough in practice.

This is because a large number of short-term writes can lead to 
miscalculation. Here is an example: the disk is healthy, but when it handles a 
burst of writes for a few minutes, its write I/O does get slow and it is 
flagged as a slow disk. The disk is only slow for a few minutes, yet 
SlowDiskReport keeps reporting it until the information expires. This scenario 
confuses us, since we want SlowDiskReport to detect genuinely bad disks.

!image-2020-12-22-11-37-14-734.png!

!image-2020-12-22-11-37-35-280.png!

To improve the detection accuracy, we use a cumulative counting approach to 
detect slow disks: if, within the reportValidityMs interval, a disk is flagged 
as an outlier in more than 50% of the detection rounds, it should be a 
genuinely bad disk.

For example, if reportValidityMs is one hour and the detection interval is 
five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
as an outlier more than 6 times should be a genuinely bad disk. Using this 
approach to detect bad disks in our cluster, we reach over 90% accuracy.

!image-2020-12-22-11-46-48-817.png!

  was:
HDFS has supported datanode disk outlier detection since 
[HDFS-11461|https://issues.apache.org/jira/browse/HDFS-11461], and we can use 
it to find slow disks via the SlowDiskReport (HDFS-11551). However, I found 
that the slow disk information may not be accurate enough in practice.

This is because a large number of short-term writes can lead to 
miscalculation. Here is an example: the disk is healthy, but when it handles a 
burst of writes for a few minutes, its write I/O does get slow and it is 
flagged as a slow disk. The disk is only slow for a few minutes, yet 
SlowDiskReport keeps reporting it until the information expires. This scenario 
confuses us, since we want SlowDiskReport to detect genuinely bad disks.

!image-2020-12-22-11-37-14-734.png!

!image-2020-12-22-11-37-35-280.png!

To improve the detection accuracy, we use a cumulative counting approach to 
detect slow disks: if, within the reportValidityMs interval, a disk is flagged 
as an outlier in more than 50% of the detection rounds, it should be a 
genuinely bad disk.

For example, if reportValidityMs is one hour and the detection interval is 
five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
as an outlier more than 6 times should be a genuinely bad disk. Using this 
approach to detect bad disks in our cluster, we reach over 90% accuracy.

!image-2020-12-22-11-46-48-817.png!


> Use cumulative counting way to improve the accuracy of slow disk detection
> --
>
> Key: HDFS-15744
> URL: https://issues.apache.org/jira/browse/HDFS-15744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haibin Huang
>Assignee: Haibin Huang
>Priority: Major
> Attachments: image-2020-12-22-11-37-14-734.png, 
> image-2020-12-22-11-37-35-280.png, image-2020-12-22-11-46-48-817.png
>
>
> HDFS has supported datanode disk outlier detection since 
> [HDFS-11461|https://issues.apache.org/jira/browse/HDFS-11461], and we can 
> use it to find slow disks via the SlowDiskReport 
> ([HDFS-11551|https://issues.apache.org/jira/browse/HDFS-11551]). However, I 
> found that the slow disk information may not be accurate enough in practice.
> This is because a large number of short-term writes can lead to 
> miscalculation. Here is an example: the disk is healthy, but when it handles 
> a burst of writes for a few minutes, its write I/O does get slow and it is 
> flagged as a slow disk. The disk is only slow for a few minutes, yet 
> SlowDiskReport keeps reporting it until the information expires. This 
> scenario confuses us, since we want SlowDiskReport to detect genuinely bad 
> disks.
> !image-2020-12-22-11-37-14-734.png!
> !image-2020-12-22-11-37-35-280.png!
> To improve the detection accuracy, we use a cumulative counting approach to 
> detect slow disks: if, within the reportValidityMs interval, a disk is 
> flagged as an outlier in more than 50% of the detection rounds, it should be 
> a genuinely bad disk.
> For example, if reportValidityMs is one hour and the detection interval is 
> five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
> as an outlier more than 6 times should be a genuinely bad disk. Using this 
> approach to detect bad disks in our cluster, we reach over 90% accuracy.
> !image-2020-12-22-11-46-48-817.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Updated] (HDFS-15744) Use cumulative counting way to improve the accuracy of slow disk detection

2020-12-21 Thread Haibin Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibin Huang updated HDFS-15744:

Attachment: HDFS-15744-001.patch

> Use cumulative counting way to improve the accuracy of slow disk detection
> --
>
> Key: HDFS-15744
> URL: https://issues.apache.org/jira/browse/HDFS-15744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haibin Huang
>Assignee: Haibin Huang
>Priority: Major
> Attachments: HDFS-15744-001.patch, image-2020-12-22-11-37-14-734.png, 
> image-2020-12-22-11-37-35-280.png, image-2020-12-22-11-46-48-817.png
>
>
> HDFS has supported datanode disk outlier detection since 
> [HDFS-11461|https://issues.apache.org/jira/browse/HDFS-11461], and we can 
> use it to find slow disks via the SlowDiskReport 
> ([HDFS-11551|https://issues.apache.org/jira/browse/HDFS-11551]). However, I 
> found that the slow disk information may not be accurate enough in practice.
> This is because a large number of short-term writes can lead to 
> miscalculation. Here is an example: the disk is healthy, but when it handles 
> a burst of writes for a few minutes, its write I/O does get slow and it is 
> flagged as a slow disk. The disk is only slow for a few minutes, yet 
> SlowDiskReport keeps reporting it until the information expires. This 
> scenario confuses us, since we want SlowDiskReport to detect genuinely bad 
> disks.
> !image-2020-12-22-11-37-14-734.png!
> !image-2020-12-22-11-37-35-280.png!
> To improve the detection accuracy, we use a cumulative counting approach to 
> detect slow disks: if, within the reportValidityMs interval, a disk is 
> flagged as an outlier in more than 50% of the detection rounds, it should be 
> a genuinely bad disk.
> For example, if reportValidityMs is one hour and the detection interval is 
> five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
> as an outlier more than 6 times should be a genuinely bad disk. Using this 
> approach to detect bad disks in our cluster, we reach over 90% accuracy.
> !image-2020-12-22-11-46-48-817.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15744) Use cumulative counting way to improve the accuracy of slow disk detection

2020-12-21 Thread Haibin Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibin Huang updated HDFS-15744:

Status: Patch Available  (was: Open)

> Use cumulative counting way to improve the accuracy of slow disk detection
> --
>
> Key: HDFS-15744
> URL: https://issues.apache.org/jira/browse/HDFS-15744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haibin Huang
>Assignee: Haibin Huang
>Priority: Major
> Attachments: HDFS-15744-001.patch, image-2020-12-22-11-37-14-734.png, 
> image-2020-12-22-11-37-35-280.png, image-2020-12-22-11-46-48-817.png
>
>
> HDFS has supported datanode disk outlier detection since 
> [HDFS-11461|https://issues.apache.org/jira/browse/HDFS-11461], and we can 
> use it to find slow disks via the SlowDiskReport 
> ([HDFS-11551|https://issues.apache.org/jira/browse/HDFS-11551]). However, I 
> found that the slow disk information may not be accurate enough in practice.
> This is because a large number of short-term writes can lead to 
> miscalculation. Here is an example: the disk is healthy, but when it handles 
> a burst of writes for a few minutes, its write I/O does get slow and it is 
> flagged as a slow disk. The disk is only slow for a few minutes, yet 
> SlowDiskReport keeps reporting it until the information expires. This 
> scenario confuses us, since we want SlowDiskReport to detect genuinely bad 
> disks.
> !image-2020-12-22-11-37-14-734.png!
> !image-2020-12-22-11-37-35-280.png!
> To improve the detection accuracy, we use a cumulative counting approach to 
> detect slow disks: if, within the reportValidityMs interval, a disk is 
> flagged as an outlier in more than 50% of the detection rounds, it should be 
> a genuinely bad disk.
> For example, if reportValidityMs is one hour and the detection interval is 
> five minutes, there are 12 outlier detection rounds per hour; a disk flagged 
> as an outlier more than 6 times should be a genuinely bad disk. Using this 
> approach to detect bad disks in our cluster, we reach over 90% accuracy.
> !image-2020-12-22-11-46-48-817.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org