[
https://issues.apache.org/jira/browse/HBASE-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13611311#comment-13611311
]
Siddharth Seth commented on HBASE-7904:
---------------------------------------
bq. Usually configs are put in xml files that we read into our context? The
hbase Configuration adds the hbase-*.xml files to its base set. If tests need
specific configs, we add them in xml and then make sure the Configuration
reads them in before the test runs. Why can't we do that in this case rather
than do this config copy?
To access the MiniMRCluster, certain client configs are required. These, along
with a lot of defaults, are provided by the cluster after it is set up. Is it
possible to use this and then apply additional configs on top? Something like:
- Config from HDFS is the base (I see fs.default.name being set manually).
- If MiniMR is required, set it up using the base HDFS config.
- HBase configs and test-specific configs go on top of this (without attempting
to set MR/HDFS parameters); see the sketch below.
I'm sure HBase has similar client configs which need to be loaded to access
the MiniHBaseCluster. How are clients expected to use this?
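For concreteness, a rough sketch of that layering, assuming the stock Hadoop 2.x MiniDFSCluster/MiniMRCluster test APIs and HBaseConfiguration (the constructors and key names are illustrative and may differ slightly per release; this is not what the current patch does):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MiniMRCluster;

// 1. HDFS config is the base; the mini cluster fills in fs.default.name etc.
Configuration conf = new Configuration();
MiniDFSCluster dfs = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
Configuration base = dfs.getConfiguration(0);

// 2. If MiniMR is required, set it up from the base HDFS config and take the
//    client configs the cluster hands back rather than setting them by hand.
MiniMRCluster mr = new MiniMRCluster(1, base.get("fs.default.name"), 1);
JobConf mrClientConf = mr.createJobConf();

// 3. HBase and test-specific keys go on top, without touching MR/HDFS keys.
Configuration hbaseConf = HBaseConfiguration.create(mrClientConf);
hbaseConf.set("some.test.only.key", "value");  // illustrative override
{code}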
bq. Why are we even specifying addresses? Why are there not defaults that just
work, in say the standalone case, so downstream projects don't even have to be
concerned w/ setting this stuff?
To allow parallel MiniClusters (different ports).
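For illustration only, a sketch of how two test Configurations could be pinned to non-overlapping ports so two MiniClusters can coexist (these are standard YARN keys; the port values are arbitrary examples, not what the HBase tests use):
{code:java}
import org.apache.hadoop.conf.Configuration;

// First mini cluster's client/config side.
Configuration confA = new Configuration();
confA.set("yarn.resourcemanager.address", "localhost:8032");
confA.set("yarn.resourcemanager.scheduler.address", "localhost:8030");

// Second mini cluster, on a disjoint port range, so both can run in parallel.
Configuration confB = new Configuration();
confB.set("yarn.resourcemanager.address", "localhost:18032");
confB.set("yarn.resourcemanager.scheduler.address", "localhost:18030");
{code}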
bq. After reading reams of commentary by you pasting logs both here and over in
yarn issues, I am still not clear what the problem is. It seems like my comment
above on exit code 137 being code for OOME is pertinent given the cited YARN
issue to disable mem checks (could we up the heap and have this stuff pass?),
but I have no clue how far we are from resolution or what action to take
upstream. I think I now know what is wanted regarding config reading over in
YARN-449, seeing Siddharth Seth's comments there, but that issue seems to have
run off into the weeds... Figuring out what is needed should not be this hard.
137 implies a kill -9 afaik (exit codes above 128 are 128 plus the signal
number, and SIGKILL is 9, so 128 + 9 = 137). That's what the NodeManager/TT
will eventually send to containers. Note: this can be seen for successful MR
tasks as well.
"Up the heap" is not the main issue here. The JVMs are not going OOM; they're
being killed by YARN's resource monitoring.
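For anyone who wants to rule the monitoring kills in or out while debugging (separate from the real fix), an illustrative sketch of what "disable mem checks" amounts to, using the standard YARN NodeManager keys; the values shown are for test debugging, not a recommendation for real clusters:
{code:java}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Stop the NodeManager from killing containers that exceed virtual/physical
// memory limits during the test run.
conf.setBoolean("yarn.nodemanager.vmem-check-enabled", false);
conf.setBoolean("yarn.nodemanager.pmem-check-enabled", false);
// Or, less bluntly, allow more virtual memory per unit of physical memory
// (default is 2.1).
conf.setFloat("yarn.nodemanager.vmem-pmem-ratio", 4.0f);
{code}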
After MR-5083 is committed (and a snapshot is published to mvn), can we trigger
another run of the HBase tests? That should eliminate potential random
failures caused by parallel tests.
> Upgrade hadoop 2.0 dependency to 2.0.4-alpha
> --------------------------------------------
>
> Key: HBASE-7904
> URL: https://issues.apache.org/jira/browse/HBASE-7904
> Project: HBase
> Issue Type: Task
> Reporter: Ted Yu
> Assignee: Ted Yu
> Priority: Critical
> Fix For: 0.95.0
>
> Attachments: 7904.txt, 7904-v2-hadoop-2.0.txt, 7904-v2.txt,
> 7904-v4-hadoop-2.0.txt, 7904-v4.txt, 7904-v4.txt, 7904-v5-hadoop-2.0.txt,
> 7904-v5.txt, hbase-7904-v3.txt
>
>
> 2.0.3-alpha has been released.
> We should upgrade the dependency.