Hi everybody,
I developed an application using Hadoop. It runs perfectly in
stand-alone mode, but when I try to run it in pseudo-distributed mode I
get an error: java.lang.OutOfMemoryError: GC overhead limit exceeded (in
stand-alone mode I don't get this error). The
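One likely cause: in pseudo-distributed mode each map/reduce task runs in its own child JVM with a small default heap (-Xmx200m in the 0.20 line), while stand-alone mode runs everything inside the (usually larger) client JVM. A hedged sketch of raising the task heap, assuming the classic mapred.child.java.opts property applies to your version; 512m is an example value, not a recommendation:

```xml
<!-- mapred-site.xml: raise the heap given to each task JVM.
     512m is an assumed example value; tune it to your machine and job. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
```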
Deadlock in IPC
---
Key: HADOOP-7332
URL: https://issues.apache.org/jira/browse/HADOOP-7332
Project: Hadoop Common
Issue Type: Bug
Components: ipc
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Hi all,
I have a question regarding the setHosts method of the BlockLocation
class in Hadoop HDFS. Does calling it cause the block in question to be
moved to the specified host?
Furthermore, where does the getHosts method of BlockLocation get the
host names? It does not seem to utilize the rack
Performance improvement in PureJavaCrc32
Key: HADOOP-7333
URL: https://issues.apache.org/jira/browse/HADOOP-7333
Project: Hadoop Common
Issue Type: Improvement
Components: util
Affects
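For context, PureJavaCrc32 (in org.apache.hadoop.util) computes the same CRC-32 checksum as the JDK's java.util.zip.CRC32, just without JNI. A minimal sketch of the checksum contract any conforming implementation must satisfy, using the JDK class as a stand-in:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class Crc32Demo {
    public static void main(String[] args) {
        CRC32 crc = new CRC32();
        // "123456789" is the standard CRC-32 check input; every
        // conforming implementation must yield 0xCBF43926 for it.
        crc.update("123456789".getBytes(StandardCharsets.US_ASCII));
        System.out.printf("crc32 = 0x%08X%n", crc.getValue());
        // prints: crc32 = 0xCBF43926
    }
}
```

Since PureJavaCrc32 implements java.util.zip.Checksum, the Hadoop class can be dropped in wherever CRC32 is used above, which is also how a performance comparison between the two would be set up.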
test-patch should check for hard tabs
-------------------------------------
Key: HADOOP-7334
URL: https://issues.apache.org/jira/browse/HADOOP-7334
Project: Hadoop Common
Issue Type: Improvement
Components: build, test
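The check itself is simple: flag any line that contains a literal tab character. A sketch of that logic in Java (the real test-patch is a shell script; hasHardTab is a hypothetical helper used only for illustration):

```java
public class TabCheck {
    // Returns true if the line contains a hard tab (U+0009).
    static boolean hasHardTab(String line) {
        return line.indexOf('\t') >= 0;
    }

    public static void main(String[] args) {
        System.out.println(hasHardTab("\tint x = 1;"));   // true
        System.out.println(hasHardTab("    int x = 1;")); // false
    }
}
```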
[
https://issues.apache.org/jira/browse/HADOOP-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Owen O'Malley resolved HADOOP-7330.
---
Resolution: Fixed
I just committed this. Thanks, Luke!
The metrics source MBean
See https://builds.apache.org/hudson/job/Hadoop-0.20.203-Build/12/changes
Changes:
[omalley] HADOOP-7330. Fix MetricsSourceAdapter to use the value instead of the
object. (Luke Lu via omalley)
--
[...truncated 3969 lines...]
[exec] checking host