Vinayakumar B created HADOOP-11296:
--
Summary: hadoop-daemons.sh throws 'host1: bash: host3: command not
found...'
Key: HADOOP-11296
URL: https://issues.apache.org/jira/browse/HADOOP-11296
Project:
sanjay gupta created HADOOP-11297:
-
Summary: how to get all the details of live node and dead node
using hadoop api in java
Key: HADOOP-11297
URL: https://issues.apache.org/jira/browse/HADOOP-11297
[ https://issues.apache.org/jira/browse/HADOOP-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Haohui Mai resolved HADOOP-11297.
-
Resolution: Invalid
how to get all the details of live node and dead node using hadoop api in
+1 binding
-patched the Slider POM to build against 2.6.0
-verified the build did download, at up to ~8 Mbps: faster than a local build
-full clean test runs on OS X and Linux
Windows 2012:
Same thing. I did have to first build my own set of the windows native
binaries, by checking out
Hi Arun!
We are very close to completion on YARN-1964 (DockerContainerExecutor). I'd
also like HDFS-4882 to be checked in. Do you think these issues merit another
RC?
Thanks,
Ravi
On Tuesday, November 11, 2014 11:57 AM, Steve Loughran
ste...@hortonworks.com wrote:
+1 binding
Hi Arun,
We were testing the RC and ran into a problem with the recent fixes that
were done for POODLE for Tomcat (HADOOP-11217 for KMS and HDFS-7274 for
HttpFS). Basically, in disabling SSLv3, we also disabled SSLv2Hello, which
is required for older clients (e.g. Java 6 with openssl 0.9.8x) so
[ https://issues.apache.org/jira/browse/HADOOP-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Robert Kanter reopened HADOOP-11217:
Disable SSLv3 in KMS
Key: HADOOP-11217
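The mitigation discussed in this thread amounts to refusing SSLv3 handshakes while keeping the newer TLS versions enabled. As a hedged illustration using Python's standard ssl module (not the actual Tomcat/KMS configuration from HADOOP-11217, and note that Java's SSLv2Hello compatibility pseudo-protocol, which the thread says was disabled by accident, has no direct analogue in Python):

```python
import ssl

# Sketch of the POODLE-style mitigation: accept TLS, refuse SSLv3.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.options |= ssl.OP_NO_SSLv3  # explicitly ban SSLv3 handshakes

print(bool(context.options & ssl.OP_NO_SSLv3))  # True
```

In a Java/Tomcat deployment the equivalent knob is the connector's enabled-protocol list, which is where the SSLv2Hello regression described above crept in.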
Thanks Arun for creating the RC.
Tried to build and deploy it on a local distributed cluster; could
successfully run a distributed shell job with node labels.
+1 (non-binding),
Thanks,
Wangda
On Tue, Nov 11, 2014 at 2:06 PM, Robert Kanter rkan...@cloudera.com wrote:
Hi Arun,
We were testing
Hi all,
We're working on adding erasure coding to HDFS. One very nice optimization
is local reconstruction codes (LRC), which originates from Microsoft
Research. We'd like to implement LRC in HDFS too, but Kai reached out to
Microsoft and apparently this idea is patented. This is being tracked in
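For context on why LRC is attractive: it adds small "local" parities over subgroups of a stripe, so a single lost block can be rebuilt by reading only its group rather than the whole stripe. A toy XOR-based sketch of that local-group idea (this is not the patented Microsoft LRC construction and not HDFS code, just an illustration of the reduced reconstruction I/O):

```python
# Toy illustration of the "local group" idea behind Local Reconstruction
# Codes: each group of data blocks gets a local (XOR) parity, so a single
# lost block is rebuilt from its small group instead of the full stripe.

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# A stripe of 4 data blocks split into 2 local groups of 2.
data = [bytes([i] * 4) for i in range(1, 5)]
groups = [data[0:2], data[2:4]]
local_parity = [xor_blocks(g) for g in groups]

# Lose data[1]; rebuild it from its group's surviving block plus the
# group's local parity, touching 2 blocks instead of the whole stripe.
rebuilt = xor_blocks([data[0], local_parity[0]])
print(rebuilt == data[1])  # True
```

Real LRC additionally keeps global parities so multi-block failures remain recoverable; the local parities only optimize the common single-failure case.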
+1 (non-binding)
* Downloaded the source tar ball, and built binaries from it successfully.
* Ran DS apps and MR jobs with emitting timeline data enabled successfully.
* Verified the generic history information, DS-specific and MR-specific
metrics were available.
* Ran the timeline server in