[ https://issues.apache.org/jira/browse/HDFS-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14992431#comment-14992431 ]

James Clampffer commented on HDFS-9103:
---------------------------------------

Thanks for the comments Bob!

"Include an abstraction (either by template or base class) for the clock to 
make time testable."
How about I make those typedefs public (still inside the class) so they can be 
used for tests?
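
Roughly what I have in mind (a sketch only; the type names here are
illustrative, not necessarily what ends up in the patch):

  #include <chrono>

  class BadDataNodeTracker {
   public:
    // Public so tests can construct time_points with the same clock the
    // tracker uses internally, e.g. to fabricate already-expired entries.
    typedef std::chrono::steady_clock Clock;
    typedef Clock::time_point TimePoint;
    // ... rest of the class unchanged ...
  };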

"Can we re-use the mock objects in our tests rather than re-declaring them?"
I would assume so, but I'm not sure.  I'm still learning how gmock really 
works, so I'll spend some time looking at the docs.  Do you consider this a 
blocker?
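
If it works the way I expect, the cleanup is mostly mechanical: move the mock
declarations into a shared header and include that from each test instead of
redeclaring them per file.  Something like this (the interface and method
names below are placeholders, not the real ones):

  // test_mocks.h -- shared by the *_test.cc files instead of each file
  // redeclaring its own copy of the mocks.
  #pragma once
  #include <gmock/gmock.h>

  struct DataNodeConnection {  // placeholder interface
    virtual ~DataNodeConnection() {}
    virtual bool Connect() = 0;
  };

  class MockDataNodeConnection : public DataNodeConnection {
   public:
    MOCK_METHOD0(Connect, bool());
  };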

"bad_datanode_test.cc defines FileHandleTest. We should have the test name 
match the filename."
Nice catch, will change.

"Recoverable error test should be calling ASSERT_FALSE for clarity"
Another good catch; for whatever reason I didn't spot ASSERT_FALSE in the 
gmock docs (it's a googletest assertion, which would explain why).
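
For anyone following along, the change is just the direct form (assuming the
test currently negates inside ASSERT_TRUE):

  ASSERT_FALSE(status.ok());  // reads better than ASSERT_TRUE(!status.ok())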

"I would like to see the FileHandle::Pread method implement the retry logic 
internally so we have a simple "read all this data or completely fail" method 
rather than forcing partial read and retry onto our consumer. Understanding the 
logic that these errors mean you should retry, but this error means that you 
shouldn't retry could be abstracted away as a kindness to the consumer."
I agree with this.  I think the BadDataNodeTracker should be part of the 
filesystem; having the user declare it complicates the API.  When the 
exclusion list was just a set<string> I think it was reasonable to pass in, 
but now that it's a more complicated class that has to be passed around it 
might not be a good fit for the API.

If I recall correctly, [~wheat9] wanted the passing of failed nodes to be very 
explicit by design.  Do you have an opinion now that I've changed how failures 
are tracked, Haohui?  I think a reasonable middle ground would be keeping the 
failed-DN tracking mechanism internal while providing a hook to ask which 
datanodes were tried and failed during the read.  Optionally passing in a 
pointer to a vector of strings might work well for this.
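
Something along these lines (a sketch only; the signature and parameter names
are hypothetical, not from the current patch):

  // Read nbyte bytes at offset, retrying internally on recoverable DN
  // failures.  If failed_nodes is non-null, it is filled with the
  // datanodes that were tried and excluded during this read.
  Status Pread(void *buf, size_t nbyte, off_t offset,
               std::vector<std::string> *failed_nodes = nullptr);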

"Need tests for: BadDataNodeTracker as a testable unit. Hit all of its cases; 
it's pretty small and it should be easy. We especially currently don't have any 
test coverage for RemoveAllExpired.
-Doing a retry-and-recover"
I think if I drop the excluded-node timeout to something on the order of 
milliseconds I can get this sort of test coverage.  I'll give it a shot in my 
next patch.
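
Sketch of what I mean (this assumes the timeout can be injected; the
constructor and method names here are hypothetical):

  TEST(BadDataNodeTest, RemoveAllExpired) {
    // Hypothetical: construct the tracker with a very short exclusion
    // timeout so entries expire within the test.
    BadDataNodeTracker tracker(std::chrono::milliseconds(1));
    tracker.AddBadNode("dn-1");
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    tracker.RemoveAllExpired();
    EXPECT_FALSE(tracker.IsBadNode("dn-1"));
  }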

"While I'm all about cleaning up formatting as we go, this is going to cause a 
lot of conflicts with HDFS-9144 with no semantic difference. It does risk 
reviewers missing something that changed because it looked like a formatting 
fix. Can we punt some of these until HDFS-9144 lands? If it will be onerous to 
pull out (e.g. not the last change you made in your local log), I can deal with 
it on merge."
Sure, I should be able to back the majority of those changes out without too 
much effort.  How about I write a script to automate that whole process and 
submit it in another JIRA?  That way, once things settle down after HDFS-9144, 
everyone is looking at the exact same code.  I know I do a few things out of 
habit that clang-format fixes for me, but running it manually per file is a 
pain.

"Reset the counter so there is never a chance for overflow"
This was me breaking the "clever is the opposite of maintainable" rule.  The 
counter is unsigned and gets modded by 1024 (which the compiler reduces to a 
bitwise AND), so overflow really shouldn't matter, but if it saves anyone from 
spending time wondering, it's worth fixing.
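
For reference, the pattern under discussion is roughly this (simplified; the
enclosing function name is illustrative):

  uint64_t remove_counter_ = 0;  // unsigned, so wraparound is well-defined

  void CheckAndRemoveExpired() {  // illustrative name
    // For an unsigned operand the compiler lowers % 1024 to a bitwise
    // AND with 1023, so the periodic check stays cheap, and wraparound
    // at the type's maximum doesn't affect correctness.
    if (remove_counter_++ % 1024 == 0) {
      RemoveAllExpired();
    }
  }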

"remove_counter_ seems like the wrong name; perhaps check_counter?"
Agreed; if I can think of something better I'll add that in the next patch.

"We're currently traversing the map up to 3 times in count(), get(), and 
erase(); perhaps one lookup and keep the iterator?"
If it's not a blocker I'd like to keep it the way it is for now because I 
think it's a little clearer.  I intend to do a lot of profiling in the next 
couple of weeks, and if these tree traversals cause significant overhead I'll 
hold onto the iterator or go back to [~wheat9]'s suggestion of just using a 
vector.
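
For the record, the single-lookup version would look roughly like this (names
illustrative):

  auto it = datanodes_.find(dn_id);
  if (it != datanodes_.end()) {        // replaces count()
    if (TimeoutExpired(it->second)) {  // replaces get()
      datanodes_.erase(it);            // replaces erase(key): no re-search
    }
  }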

"it = datanodes_.erase(it);"
Nice one.  I forgot this was allowed in C++11; it makes that block a little 
more intuitive, so I'll add it in the next patch.
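
i.e. the removal loop becomes something like (sketch):

  for (auto it = datanodes_.begin(); it != datanodes_.end();) {
    if (TimeoutExpired(it->second)) {
      // C++11: map::erase returns the iterator following the erased
      // element, so the loop stays valid without a second pass.
      it = datanodes_.erase(it);
    } else {
      ++it;
    }
  }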

> Retry reads on DN failure
> -------------------------
>
>                 Key: HDFS-9103
>                 URL: https://issues.apache.org/jira/browse/HDFS-9103
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: Bob Hansen
>            Assignee: James Clampffer
>             Fix For: HDFS-8707
>
>         Attachments: HDFS-9103.1.patch, HDFS-9103.2.patch, 
> HDFS-9103.HDFS-8707.006.patch, HDFS-9103.HDFS-8707.007.patch, 
> HDFS-9103.HDFS-8707.3.patch, HDFS-9103.HDFS-8707.4.patch, 
> HDFS-9103.HDFS-8707.5.patch
>
>
> When AsyncPreadSome fails, add the failed DataNode to the excluded list and 
> try again.


