[
https://issues.apache.org/jira/browse/ACCUMULO-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13883812#comment-13883812
]
Sean Busbey commented on ACCUMULO-2264:
---------------------------------------
I think this points to an underlying problem with the tserver WAL
configuration. Hadoop can be configured to enforce an arbitrary minimum block
size (the property is dfs.namenode.fs-limits.min-block-size).
The tserver should probably check for this and then either log a
WARN/ERROR and fall back to the minimum, or fail loudly and refuse
to start.
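A minimal sketch of the check proposed above, taking the warn-and-use-the-minimum option. The class and method names here are hypothetical, not actual Accumulo code, and the caller is assumed to have already read both tserver.walog.max.size and dfs.namenode.fs-limits.min-block-size as longs:

```java
// Hypothetical sketch: validate the configured WAL max size against
// HDFS's minimum block size, warning and clamping rather than letting
// file creation fail (and the tserver loop) later.
public class WalogSizeCheck {

    // Hadoop 2.2.0's default for dfs.namenode.fs-limits.min-block-size
    static final long DEFAULT_MIN_BLOCK_SIZE = 1024 * 1024; // 1 MB

    /**
     * Returns the WAL block size the tserver should actually use: if the
     * requested size is below HDFS's minimum block size, log a warning
     * and fall back to the minimum instead of passing through a value
     * the namenode will reject.
     */
    static long effectiveWalogSize(long requestedWalogSize, long minBlockSize) {
        if (requestedWalogSize < minBlockSize) {
            System.err.printf(
                "WARN tserver.walog.max.size %d is below "
                + "dfs.namenode.fs-limits.min-block-size %d; using %d%n",
                requestedWalogSize, minBlockSize, minBlockSize);
            return minBlockSize;
        }
        return requestedWalogSize;
    }

    public static void main(String[] args) {
        // 50K, as set by KilledTabletServerSplitTest, is clamped to 1M
        System.out.println(effectiveWalogSize(50 * 1024, DEFAULT_MIN_BLOCK_SIZE));
        // A size at or above the minimum is used as-is
        System.out.println(effectiveWalogSize(2 * 1024 * 1024, DEFAULT_MIN_BLOCK_SIZE));
    }
}
```

The other option the comment mentions, failing loudly and refusing to start, would replace the warn-and-clamp branch with a thrown exception during tserver startup.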
> KilledTabletServerSplitTest fails on Hadoop2
> --------------------------------------------
>
> Key: ACCUMULO-2264
> URL: https://issues.apache.org/jira/browse/ACCUMULO-2264
> Project: Accumulo
> Issue Type: Bug
> Components: test
> Affects Versions: 1.5.0
> Reporter: Josh Elser
> Assignee: Josh Elser
> Priority: Minor
> Fix For: 1.5.1, 1.6.0
>
>
> KilledTabletServerSplitTest tries to set tserver.walog.max.size to 50K,
> which causes an infinite loop, as Hadoop 2.2.0 won't create files with a
> block size less than 1M by default
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)