[jira] [Created] (HDFS-14345) fs.BufferedFSInputStream::read is synchronized

2019-03-06 Thread Gopal V (JIRA)
Gopal V created HDFS-14345:
--

 Summary: fs.BufferedFSInputStream::read is synchronized
 Key: HDFS-14345
 URL: https://issues.apache.org/jira/browse/HDFS-14345
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Gopal V


BufferedInputStream::read() is synchronized, and BufferedFSInputStream inherits 
that synchronization - it can be worked around by wrapping the stream in another, 
non-synchronized buffered input stream, but that incurs memory-copy overheads and 
is sub-optimal.

https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/io/BufferedInputStream.java#L269

Hadoop fs streams aren't thread-safe (except for ReadFully) and are stateful 
for position, so this synchronization is purely a tax without benefit.
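
For illustration, a minimal sketch of the kind of non-synchronized buffered read this implies (a sketch under the single-threaded assumption above, not the actual fix):
{code:java}
// Sketch only: an unsynchronized single-byte buffered read. Because the stream
// is stateful for position and not thread-safe anyway, the refill needs no lock.
import java.io.IOException;
import java.io.InputStream;

class UnsyncBufferedRead {
  private final InputStream in;
  private final byte[] buf = new byte[8192];
  private int pos;    // next byte to hand out
  private int count;  // number of valid bytes in buf

  UnsyncBufferedRead(InputStream in) {
    this.in = in;
  }

  int read() throws IOException {
    if (pos >= count) {
      count = in.read(buf, 0, buf.length); // refill without synchronization
      pos = 0;
      if (count <= 0) {
        return -1; // EOF
      }
    }
    return buf[pos++] & 0xff; // mask to an unsigned value
  }
}
{code}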






[jira] [Created] (HDFS-14344) Erasure Coding: Miss EC block after decommission and restart NN

2019-03-06 Thread maobaolong (JIRA)
maobaolong created HDFS-14344:
-

 Summary: Erasure Coding: Miss EC block after decommission and 
restart NN
 Key: HDFS-14344
 URL: https://issues.apache.org/jira/browse/HDFS-14344
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ec, erasure-coding, namenode
Affects Versions: 3.3.0
Reporter: maobaolong









[jira] [Created] (HDFS-14343) RBF: Fix renaming folders spread across multiple subclusters

2019-03-06 Thread JIRA
Íñigo Goiri created HDFS-14343:
--

 Summary: RBF: Fix renaming folders spread across multiple 
subclusters
 Key: HDFS-14343
 URL: https://issues.apache.org/jira/browse/HDFS-14343
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri


The {{RouterClientProtocol#rename()}} function assumes that we are renaming 
files and only renames one of them (i.e., {{invokeSequential()}}). In the case 
of folders that exist in all subclusters (e.g., HASH_ALL), we should rename all 
locations (i.e., {{invokeAll()}}).
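
To make the intended dispatch concrete, a self-contained sketch (the types below are simplified stand-ins, not the actual RBF classes):
{code:java}
import java.io.IOException;
import java.util.List;

class RenameDispatchSketch {
  interface Location {
    boolean rename(String src, String dst) throws IOException;
  }

  /** HASH_ALL-style folder: the rename must be applied to every location. */
  static boolean renameAll(List<Location> locs, String src, String dst)
      throws IOException {
    boolean ok = true;
    for (Location loc : locs) {
      ok &= loc.rename(src, dst); // invokeAll() semantics
    }
    return ok;
  }

  /** Single file: the current invokeSequential() semantics. */
  static boolean renameFirst(List<Location> locs, String src, String dst)
      throws IOException {
    for (Location loc : locs) {
      if (loc.rename(src, dst)) {
        return true; // first successful location wins
      }
    }
    return false;
  }
}
{code}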






[jira] [Created] (HDDS-1234) Iterate the OM DB snapshot and populate the recon container DB.

2019-03-06 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1234:
---

 Summary: Iterate the OM DB snapshot and populate the recon 
container DB. 
 Key: HDDS-1234
 URL: https://issues.apache.org/jira/browse/HDDS-1234
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.5.0


The OM DB snapshot contains the Key -> (ContainerId, BlockId) information. Iterate 
the OM snapshot DB and build the reverse map, (ContainerId, Key prefix) -> Key 
count, to be stored in the Recon container DB.
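
A minimal sketch of the reverse-map construction (KeyEntry and the snapshot iterable are hypothetical stand-ins for the OM DB types):
{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ReverseMapSketch {
  /** Stand-in for one OM key record: its prefix and the containers holding it. */
  static class KeyEntry {
    final String keyPrefix;
    final List<Long> containerIds;

    KeyEntry(String keyPrefix, List<Long> containerIds) {
      this.keyPrefix = keyPrefix;
      this.containerIds = containerIds;
    }
  }

  /** Builds containerId -> (keyPrefix -> key count) from the OM snapshot. */
  static Map<Long, Map<String, Long>> build(Iterable<KeyEntry> omSnapshot) {
    Map<Long, Map<String, Long>> reverse = new HashMap<>();
    for (KeyEntry key : omSnapshot) {
      for (long containerId : key.containerIds) {
        reverse.computeIfAbsent(containerId, c -> new HashMap<>())
            .merge(key.keyPrefix, 1L, Long::sum); // one more key under this prefix
      }
    }
    return reverse;
  }
}
{code}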






[jira] [Created] (HDDS-1233) Create an Ozone Manager Service provider for Recon.

2019-03-06 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1233:
---

 Summary: Create an Ozone Manager Service provider for Recon.
 Key: HDDS-1233
 URL: https://issues.apache.org/jira/browse/HDDS-1233
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.5.0


* Implement an abstraction to let Recon make OM-specific requests (a minimal sketch follows below).
* At this point in time, the only request is to get the DB snapshot.
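
A minimal shape for the abstraction might be (names are illustrative, not the committed API):
{code:java}
import java.io.IOException;
import java.nio.file.Path;

/** Illustrative only: the surface Recon needs from the Ozone Manager. */
interface OzoneManagerServiceProvider {
  /** Fetch the latest OM DB snapshot for Recon to ingest. */
  Path getOMDBSnapshot() throws IOException;
}
{code}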






[jira] [Created] (HDDS-1232) Recon Container DB service definition

2019-03-06 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1232:
---

 Summary: Recon Container DB service definition
 Key: HDDS-1232
 URL: https://issues.apache.org/jira/browse/HDDS-1232
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.5.0


* Define the Ozone Recon container DB service (an illustrative sketch follows below).
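
Assuming it stores the (ContainerId, Key prefix) -> Key count map described in HDDS-1234, the service could look like this (method names are assumptions, not the committed interface):
{code:java}
import java.io.IOException;

/** Illustrative only. */
interface ContainerDBService {
  void storeKeyCount(long containerId, String keyPrefix, long count)
      throws IOException;

  long getKeyCount(long containerId, String keyPrefix) throws IOException;
}
{code}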







Re: [ANNOUNCE] Eric Badger is now a committer!

2019-03-06 Thread 俊平堵
Congrats, Eric!

Thanks,

Junping

Eric Payne wrote on Wed, Mar 6, 2019 at 1:20 AM:

> It is my pleasure to announce that Eric Badger has accepted an invitation
> to become a Hadoop Core committer.
>
> Congratulations, Eric! This is well-deserved!
>
> -Eric Payne
>


[jira] [Created] (HDDS-1231) Add ChillMode metrics

2019-03-06 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1231:


 Summary: Add ChillMode metrics
 Key: HDDS-1231
 URL: https://issues.apache.org/jira/browse/HDDS-1231
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is to add a few of the chill mode metrics (a registration sketch follows the list):
 # NumberofHealthyPipelinesThreshold
 # currentHealthyPipelinesCount
 # NumberofPipelinesWithAtleastOneReplicaThreshold
 # CurrentPipelinesWithAtleastOneReplicaCount
 # ChillModeContainerWithOneReplicaReportedCutoff
 # CurrentContainerCutoff
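
For illustration, a hedged sketch of exposing these through Hadoop metrics2 (class and field names are illustrative, and registration with the metrics system is omitted):
{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "SCM chill mode metrics", context = "ozone")
class ChillModeMetricsSketch {
  // One counter per metric in the list above.
  @Metric private MutableCounterLong numHealthyPipelinesThreshold;
  @Metric private MutableCounterLong currentHealthyPipelinesCount;
  @Metric private MutableCounterLong numPipelinesWithAtleastOneReplicaThreshold;
  @Metric private MutableCounterLong currentPipelinesWithAtleastOneReplicaCount;
  @Metric private MutableCounterLong chillModeContainerWithOneReplicaReportedCutoff;
  @Metric private MutableCounterLong currentContainerCutoff;
}
{code}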

 






[jira] [Created] (HDFS-14342) WebHDFS: expose NEW_BLOCK flag in APPEND operation

2019-03-06 Thread Anatoli Shein (JIRA)
Anatoli Shein created HDFS-14342:


 Summary: WebHDFS: expose NEW_BLOCK flag in APPEND operation
 Key: HDFS-14342
 URL: https://issues.apache.org/jira/browse/HDFS-14342
 Project: Hadoop HDFS
  Issue Type: Task
  Components: webhdfs
Reporter: Anatoli Shein
Assignee: Anatoli Shein


Now that support for variable-length blocks has been added (HDFS-3689), we should 
expose the NEW_BLOCK flag of the APPEND operation in WebHDFS, so that this 
functionality is usable over the REST API.
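
For reference, the RPC client already accepts the flag through DistributedFileSystem#append; a hedged sketch of that existing usage (the WebHDFS parameter itself is what this issue would add):
{code:java}
import java.io.IOException;
import java.util.EnumSet;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

class AppendNewBlockExample {
  /** Appends starting in a fresh block rather than filling the last one. */
  static FSDataOutputStream appendToNewBlock(DistributedFileSystem dfs, Path f)
      throws IOException {
    return dfs.append(f, EnumSet.of(CreateFlag.APPEND, CreateFlag.NEW_BLOCK),
        4096, null);
  }
}
{code}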






[jira] [Created] (HDDS-1230) Update OzoneServiceProvider in s3 gateway to handle OM ha

2019-03-06 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-1230:


 Summary: Update OzoneServiceProvider in s3 gateway to handle OM ha
 Key: HDDS-1230
 URL: https://issues.apache.org/jira/browse/HDDS-1230
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar


Update OzoneServiceProvider in s3 gateway to handle OM ha






Re: [ANNOUNCE] Eric Badger is now a committer!

2019-03-06 Thread Sunil G
Hearty Congratulations Eric!

- Sunil


On Tue, Mar 5, 2019 at 10:50 PM Eric Payne wrote:

> It is my pleasure to announce that Eric Badger has accepted an invitation
> to become a Hadoop Core committer.
>
> Congratulations, Eric! This is well-deserved!
>
> -Eric Payne
>


Re: [ANNOUNCE] Eric Badger is now a committer!

2019-03-06 Thread Shane Kumpf
Congrats, Eric! Well deserved! Thanks for all your hard work.

On Tue, Mar 5, 2019 at 10:20 AM Eric Payne wrote:

> It is my pleasure to announce that Eric Badger has accepted an invitation
> to become a Hadoop Core committer.
>
> Congratulations, Eric! This is well-deserved!
>
> -Eric Payne
>


[jira] [Created] (HDFS-14341) Weird handling of plus sign in paths in WebHDFS REST API

2019-03-06 Thread Stefaan Lippens (JIRA)
Stefaan Lippens created HDFS-14341:
--

 Summary: Weird handling of plus sign in paths in WebHDFS REST API
 Key: HDFS-14341
 URL: https://issues.apache.org/jira/browse/HDFS-14341
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.1.1
Reporter: Stefaan Lippens


We're using Hadoop 3.1.1 at the moment and have an issue with the handling of 
paths that contain plus signs (generated by Kafka HDFS Connector).

For example, I created this example directory {{tmp/plus+plus}}
{code:java}
$ hadoop fs -ls tmp/plus+plus
Found 1 items
-rw-r--r--   3 stefaan supergroup   7079 2019-03-06 14:31 
tmp/plus+plus/foo.txt{code}
When trying to list this folder through WebHDFS the naive way:
{code:java}
$ curl 'http://hadoopname05:9870/webhdfs/v1/user/stefaan/tmp/plus+plus?user.name=stefaan&op=LISTSTATUS'
{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File /user/stefaan/tmp/plus plus does not exist."}}{code}
Fair enough, the plus sign {{+}} is a special character in URLs, let's encode 
it as {{%2B}}:
{code:java}
$ curl 'http://hadoopname05:9870/webhdfs/v1/user/stefaan/tmp/plus%2Bplus?user.name=stefaan&op=LISTSTATUS'
{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File /user/stefaan/tmp/plus plus does not exist."}}{code}
Doesn't work. After some trial and error, I found that I could get it working by 
encoding it twice ({{"+" -> "%2B" -> "%252B"}}):
{code:java}
$ curl 'http://hadoopname05:9870/webhdfs/v1/user/stefaan/tmp/plus%252Bplus?user.name=stefaan&op=LISTSTATUS'
{"FileStatuses":{"FileStatus":[
{"accessTime":1551882704527,"blockSize":134217728,"childrenNum":0,"fileId":314914,"group":"supergroup","length":7079,"modificationTime":1551882704655,"owner":"stefaan","pathSuffix":"foo.txt","permission":"644","replication":3,"storagePolicy":0,"type":"FILE"}
]}}{code}
It seems like there is some double decoding going on in the WebHDFS REST API.

I also tried some other special characters, like {{@}} and {{=}}; for these it 
seems to work both when encoding once ({{%40}} and {{%3D}} respectively) and 
when encoding twice ({{%2540}} and {{%253D}} respectively).
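
Until that is fixed, a client-side workaround is to pre-apply the extra encoding pass for the affected character (a sketch mirroring the trial and error above):
{code:java}
/** Sketch of the workaround: encode "+" twice so WebHDFS's double decode yields "+". */
class WebHdfsPlusWorkaround {
  static String encodePathSegment(String segment) {
    return segment.replace("+", "%252B"); // "+" -> "%2B" -> "%252B"
  }
}
{code}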






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-03-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1067/

[Mar 5, 2019 4:41:33 AM] (aajisaka) HADOOP-16162. Remove unused Job Summary 
Appender configurations from
[Mar 5, 2019 5:10:08 AM] (aajisaka) YARN-7243. Moving logging APIs over to 
slf4j in
[Mar 5, 2019 7:03:32 AM] (inigoiri) HDFS-14336. Fix checkstyle for 
NameNodeMXBean. Contributed by Danny
[Mar 5, 2019 7:49:07 AM] (yufei_gu) YARN-9298. Implement FS placement rules 
using PlacementRule interface.
[Mar 5, 2019 12:45:47 PM] (elek) HDDS-1219. 
TestContainerActionsHandler.testCloseContainerAction has an
[Mar 5, 2019 1:56:42 PM] (vinayakumarb) HDFS-7663. Erasure Coding: Append on 
striped file. Contributed by Ayush
[Mar 5, 2019 2:02:34 PM] (stevel) HADOOP-16163. NPE in setup/teardown of 
ITestAbfsDelegationTokens.
[Mar 5, 2019 2:09:00 PM] (stevel) HADOOP-16140. hadoop fs expunge to add 
-immediate option to purge trash
[Mar 5, 2019 2:14:14 PM] (billie) YARN-7129. Application Catalog for YARN 
applications. Contributed by
[Mar 5, 2019 3:35:04 PM] (elek) HDDS-1222. Remove TestContainerSQLCli unit test 
stub. Contributed by
[Mar 5, 2019 4:37:10 PM] (xyao) HDDS-1156. testDelegationToken is failing in 
TestSecureOzoneCluster.
[Mar 5, 2019 4:39:46 PM] (shashikant) HDDS-935. Avoid creating an already 
created container on a datanode in
[Mar 5, 2019 5:17:01 PM] (eyang) YARN-7266.  Fixed deadlock in Timeline Server 
thread initialization.
[Mar 5, 2019 5:24:22 PM] (hanishakoneru) HDDS-1072. Implement RetryProxy and 
FailoverProxy for OM client.
[Mar 5, 2019 6:25:31 PM] (arp) HDDS-1218. Do the dist-layout-stitching for 
Ozone after the test-compile
[Mar 5, 2019 6:32:00 PM] (eyang) HADOOP-16150. Added concat method to 
ChecksumFS as unsupported
[Mar 5, 2019 7:31:09 PM] (aengineer) HDDS-1171. Add benchmark for OM and OM 
client in Genesis. Contributed by
[Mar 5, 2019 7:46:36 PM] (7813154+ajayydv) HDDS-1193. Refactor 
ContainerChillModeRule and DatanodeChillMode rule.
[Mar 5, 2019 8:04:57 PM] (7813154+ajayydv) HDDS-919. Enable prometheus 
endpoints for Ozone datanodes (#502)
[Mar 5, 2019 11:54:29 PM] (arp) HDDS-1188. Implement a skeleton patch for Recon 
server with initial set
[Mar 6, 2019 1:39:52 AM] (inigoiri) HDFS-14326. Add CorruptFilesCount to JMX. 
Contributed by Danny Becker.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp
 
   Dead store to download in 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.incrementDownload(SolrDocument,
 long) At 
AppCatalogSolrClient.java:org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.incrementDownload(SolrDocument,
 long) At AppCatalogSolrClient.java:[line 306] 
   Boxing/unboxing to parse a primitive 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.deployApp(String,
 Service) At 
AppCatalogSolrClient.java:org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.deployApp(String,
 Service) At AppCatalogSolrClient.java:[line 266] 
   Boxing/unboxing to parse a primitive 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.findAppStoreEntry(String)
 At 
AppCatalogSolrClient.java:org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.findAppStoreEntry(String)
 At AppCatalogSolrClient.java:[line 192] 
   Boxing/unboxing to parse a primitive 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.getRecommendedApps()
 At 
AppCatalogSolrClient.java:org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.getRecommendedApps()
 At AppCatalogSolrClient.java:[line 98] 
   Boxing/unboxing to parse a primitive 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.search(String)
 At 
AppCatalogSolrClient.java:org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.search(String)
 At AppCatalogSolrClient.java:[line 131] 
   Write to static field 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.urlString 
from instance method new 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient() At 
AppCatalogSolrClient.java:from instance method new 

4 Apache Events in 2019: DC Roadshow soon; next up Chicago, Las Vegas, and Berlin!

2019-03-06 Thread Rich Bowen
Dear Apache Enthusiast,

(You’re receiving this because you are subscribed to one or more user
mailing lists for an Apache Software Foundation project.)

TL;DR:
 * Apache Roadshow DC is in 3 weeks. Register now at
https://apachecon.com/usroadshowdc19/
 * Registration for Apache Roadshow Chicago is open.
http://apachecon.com/chiroadshow19
 * The CFP for ApacheCon North America is now open.
https://apachecon.com/acna19
 * Save the date: ApacheCon Europe will be held in Berlin, October 22nd
through 24th.  https://apachecon.com/aceu19


Registration is open for two Apache Roadshows; these are smaller events
with a more focused program and regional community engagement:

Our Roadshow event in Washington DC takes place in under three weeks, on
March 25th. We’ll be hosting a day-long event at the Fairfax campus of
George Mason University. The roadshow is a full day of technical talks
(two tracks) and an open source job fair featuring AWS, Bloomberg, dito,
GridGain, Linode, and Security University. More details about the
program, the job fair, and to register, visit
https://apachecon.com/usroadshowdc19/

Apache Roadshow Chicago will be held May 13-14th at a number of venues
in Chicago’s Logan Square neighborhood. This event will feature sessions
in AdTech, FinTech and Insurance, startups, “Made in Chicago”, Project
Shark Tank (innovations from the Apache Incubator), community diversity,
and more. It’s a great way to learn about various Apache projects “at
work” while playing at a brewery, a beercade, and a neighborhood bar.
Sign up today at https://www.apachecon.com/chiroadshow19/

We’re delighted to announce that the Call for Presentations (CFP) is now
open for ApacheCon North America in Las Vegas, September 9-13th! As the
official conference series of the ASF, ApacheCon North America will
feature over a dozen Apache project summits, including Cassandra,
Cloudstack, Tomcat, Traffic Control, and more. We’re looking for talks
in a wide variety of categories -- anything related to ASF projects and
the Apache development process. The CFP closes at midnight on May 26th.
In addition, the ASF will be celebrating its 20th Anniversary during the
event. For more details and to submit a proposal for the CFP, visit
https://apachecon.com/acna19/ . Registration will be opening soon.

Be sure to mark your calendars for ApacheCon Europe, which will be held
in Berlin, October 22-24th at the KulturBrauerei, a landmark of Berlin's
industrial history. In addition to innovative content from our projects,
we are collaborating with the Open Source Design community
(https://opensourcedesign.net/) to offer a track on design this year.
The CFP and registration will open soon at https://apachecon.com/aceu19/ .

Sponsorship opportunities are available for all events, with details
listed on each event’s site at http://apachecon.com/.

We look forward to seeing you!

Rich, for the ApacheCon Planners
@apachecon





Re: [ANNOUNCE] Eric Badger is now a committer!

2019-03-06 Thread Abhishek Modi
Congratulations Eric.

On Wed, Mar 6, 2019 at 10:18 AM Manikandan R  wrote:

> Congratulations, Eric.
>
> On Wed, Mar 6, 2019 at 4:16 AM Wangda Tan  wrote:
>
> > Congratulations, Eric.
> >
> > Welcome aboard!
> >
> > Best,
> > Wangda
> >
> >
> > On Tue, Mar 5, 2019 at 2:26 PM Sree V wrote:
> >
> > > Congratulations, Eric.
> > >
> > >
> > >
> > > Thank you. /Sree
> > >
> > >
> > >
> > > On Tuesday, March 5, 2019, 12:50:20 PM PST, Ayush Saxena <
> > > ayush...@gmail.com> wrote:
> > >
> > >  Congratulations Eric!!!
> > >
> > > -Ayush
> > >
> > > > On 05-Mar-2019, at 11:34 PM, Chandni Singh wrote:
> > > >
> > > > Congratulations Eric!
> > > >
> > > > On Tue, Mar 5, 2019 at 9:32 AM Jim Brennan wrote:
> > > >
> > > >> Congratulations Eric!
> > > >>
> > > >> On Tue, Mar 5, 2019 at 11:20 AM Eric Payne wrote:
> > > >>
> > > >>> It is my pleasure to announce that Eric Badger has accepted an
> > > invitation
> > > >>> to become a Hadoop Core committer.
> > > >>>
> > > >>> Congratulations, Eric! This is well-deserved!
> > > >>>
> > > >>> -Eric Payne
> > > >>>
> > > >>
> > >
> > >
> > >
> >
>


-- 
With Regards,
Abhishek Modi


[jira] [Created] (HDFS-14340) Lower the log level when can't get postOpAttr

2019-03-06 Thread Anuhan Torgonshar (JIRA)
Anuhan Torgonshar created HDFS-14340:


 Summary: Lower the log level when can't get postOpAttr
 Key: HDFS-14340
 URL: https://issues.apache.org/jira/browse/HDFS-14340
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.8.5, 3.1.0
Reporter: Anuhan Torgonshar
 Attachments: RpcProgramNfs3.java

I think we should lower the log level used when we can't get postOpAttr in 
_*hadoop-2.8.5-src/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java*_.
 

 
{code:java}
// The problematic ERROR-level log, at line 1044:
try {
  dirWcc = Nfs3Utils.createWccData(Nfs3Utils.getWccAttr(preOpDirAttr),
      dfsClient, dirFileIdPath, iug);
} catch (IOException e1) {
  LOG.error("Can't get postOpDirAttr for dirFileId: "
      + dirHandle.getFileId(), e1);
}

// Other practice in a similar code snippet, at line 475; the log is at INFO level:
try {
  wccData = Nfs3Utils.createWccData(Nfs3Utils.getWccAttr(preOpAttr),
      dfsClient, fileIdPath, iug);
} catch (IOException e1) {
  LOG.info("Can't get postOpAttr for fileIdPath: " + fileIdPath, e1);
}

// Other practice in a similar code snippet, at line 1405; the log is at INFO level:
try {
  fromDirWcc = Nfs3Utils.createWccData(
      Nfs3Utils.getWccAttr(fromPreOpAttr), dfsClient, fromDirFileIdPath, iug);
  toDirWcc = Nfs3Utils.createWccData(Nfs3Utils.getWccAttr(toPreOpAttr),
      dfsClient, toDirFileIdPath, iug);
} catch (IOException e1) {
  LOG.info("Can't get postOpDirAttr for " + fromDirFileIdPath + " or "
      + toDirFileIdPath, e1);
}
{code}
Therefore, I think the logging practices should be consistent across similar 
contexts: when the code catches an _*IOException*_ around the *_getWccAttr()_* 
call, it should more likely print the log message at _*INFO*_, the lower level.
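
The proposed change is then a one-line log-level swap at line 1044 (sketch of the edit to the excerpt above):
{code:java}
try {
  dirWcc = Nfs3Utils.createWccData(Nfs3Utils.getWccAttr(preOpDirAttr),
      dfsClient, dirFileIdPath, iug);
} catch (IOException e1) {
  // Lowered from ERROR to INFO to match the other call sites.
  LOG.info("Can't get postOpDirAttr for dirFileId: "
      + dirHandle.getFileId(), e1);
}
{code}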

 






[jira] [Created] (HDFS-14339) Inconsistent log level practices

2019-03-06 Thread Anuhan Torgonshar (JIRA)
Anuhan Torgonshar created HDFS-14339:


 Summary: Inconsistent log level practices
 Key: HDFS-14339
 URL: https://issues.apache.org/jira/browse/HDFS-14339
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.8.5, 3.1.0
Reporter: Anuhan Torgonshar
 Attachments: RpcProgramNfs3.java

There are *inconsistent* log level practices in 
_*hadoop-2.8.5-src/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java*_.
 
{code:java}
// The following log level is inconsistent with the other practices, which seem
// more appropriate; from lines 1814 to 1819 & 1831 to 1836 in the Hadoop 2.8.5
// version:
try {
  attr = writeManager.getFileAttr(dfsClient, childHandle, iug);
} catch (IOException e) {
  LOG.error("Can't get file attributes for fileId: {}", fileId, e);
  continue;
}

// Other practices of the same call in this file; from lines 907 to 911 &
// 2102 to 2106:
try {
  postOpAttr = writeManager.getFileAttr(dfsClient, handle, iug);
} catch (IOException e1) {
  LOG.info("Can't get postOpAttr for fileId: {}", e1);
}

// Other similar practices; from lines 1224 to 1227, 1139 to 1143 & 1309 to 1313:
try {
  postOpDirAttr = Nfs3Utils.getFileAttr(dfsClient, dirFileIdPath, iug);
} catch (IOException e1) {
  LOG.info("Can't get postOpDirAttr for {}", dirFileIdPath, e1);
}
{code}
Therefore, when the code catches an _*IOException*_ around the _*getFileAttr()*_ 
call, it should more likely print the log message at _*INFO*_, the lower level; 
a higher level may scare users in the future.
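
The corresponding fix is again a log-level swap in the first excerpt (sketch; only the level changes):
{code:java}
try {
  attr = writeManager.getFileAttr(dfsClient, childHandle, iug);
} catch (IOException e) {
  // Lowered from ERROR to INFO to match the other call sites.
  LOG.info("Can't get file attributes for fileId: {}", fileId, e);
  continue;
}
{code}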






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-03-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Dead store to state in 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At 
FSImageFormatPBINode.java:org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At FSImageFormatPBINode.java:[line 623] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.TestFileCreation 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestFileCorruption 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/252/artifact/out/xml.txt
  [20K]

   findbugs:

   

[jira] [Created] (HDDS-1229) Concurrency issues with Background Block Delete

2019-03-06 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1229:
---

 Summary: Concurrency issues with Background Block Delete
 Key: HDDS-1229
 URL: https://issues.apache.org/jira/browse/HDDS-1229
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode
Reporter: Supratim Deka


HDDS-1163 takes a simplistic approach to dealing with concurrent block deletes 
on a container while the metadata scanner is checking the existence of chunks 
for each block in the container block DB.

As part of HDDS-1163, checkBlockDB() simply retries if any inconsistency is 
detected during a concurrency window. The retry is expected to succeed because 
the new DB iterator will not include any of the blocks being processed by the 
concurrent background delete. If the retry fails, the inconsistency is ignored, 
on the expectation that the next iteration of the metadata scanner will avoid 
running concurrently with the same container.

This Jira is raised to explore a more predictable (yet simple) mechanism to 
deal with this concurrency.
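
For context, the retry described above amounts to the following (hedged sketch; the scan hook is a stand-in, not the actual datanode code):
{code:java}
class BlockCheckRetrySketch {
  /** One pass over the container block DB; true if no inconsistency is found. */
  interface Scan {
    boolean checkBlockDB();
  }

  static boolean checkWithRetry(Scan scan) {
    if (scan.checkBlockDB()) {
      return true;
    }
    // A concurrent background delete can remove blocks mid-scan; a fresh
    // iterator will not see those blocks, so a single retry is expected
    // to succeed.
    return scan.checkBlockDB();
  }
}
{code}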
 







[jira] [Created] (HDDS-1228) Chunk Scanner Checkpoints

2019-03-06 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1228:
---

 Summary: Chunk Scanner Checkpoints
 Key: HDDS-1228
 URL: https://issues.apache.org/jira/browse/HDDS-1228
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode
Reporter: Supratim Deka


Checkpoint the progress of the chunk verification scanner.
Save the checkpoint persistently so that the scanner can resume from it after a 
datanode restart.
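
A minimal persistence sketch, assuming the checkpoint is just the last verified container ID written to a local file (the format and location are assumptions):
{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

class ScannerCheckpointSketch {
  private final Path checkpointFile;

  ScannerCheckpointSketch(Path checkpointFile) {
    this.checkpointFile = checkpointFile;
  }

  /** Persist progress so the scanner can resume after a datanode restart. */
  void save(long lastVerifiedContainerId) throws IOException {
    Files.write(checkpointFile,
        Long.toString(lastVerifiedContainerId).getBytes(StandardCharsets.UTF_8));
  }

  /** Returns the last checkpoint, or -1 if none has been saved yet. */
  long load() throws IOException {
    if (!Files.exists(checkpointFile)) {
      return -1;
    }
    return Long.parseLong(
        new String(Files.readAllBytes(checkpointFile), StandardCharsets.UTF_8).trim());
  }
}
{code}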



