Hi, Siddharth.
I was also a bit frustrated by the scant documentation on how to use the
distributed cache in Hadoop 2. The DistributedCache class itself was
deprecated in Hadoop 2, but there don't appear to be very clear instructions on
the alternative. I think it's actually
Hello,
I'm attempting to deploy a Backup Node [1] on a dev cluster where we
specify that all HTTP communication must happen over SSL (dfs.http.policy =
HTTPS_ONLY). The Backup Node fails to start with this exception:
2016-06-07 14:01:01,243 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
Hi Krishna!
I don't see why you couldn't start Hadoop in this configuration.
Performance would obviously be suspect. You might even be able to improve
performance by configuring your network topology script.
Most mobile devices use ARM processors. I know some cool people have run
Hadoop v1 on Raspberry Pis.
If you use an instance of the Job class, you can add files to the distributed
cache like this:
Job job = Job.getInstance(conf);
job.addCacheFile(new URI(filePath));
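To flesh that out a bit: in Hadoop 2 you register the file on the Job in the driver and read it back in the mapper via context.getCacheFiles(). Here's a hedged sketch of the whole pattern — the class names, paths, and the bloom-filter use case are illustrative assumptions, not anything from your code:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class CacheExample {

    public static class CacheMapper
            extends Mapper<LongWritable, Text, Text, Text> {

        @Override
        protected void setup(Context context) throws IOException {
            // getCacheFiles() returns the URIs registered in the driver.
            // YARN localizes each file into the task's working directory,
            // where it is readable under its base name.
            URI[] cacheFiles = context.getCacheFiles();
            if (cacheFiles != null && cacheFiles.length > 0) {
                String localName = new Path(cacheFiles[0].getPath()).getName();
                try (BufferedReader reader =
                         new BufferedReader(new FileReader(localName))) {
                    // ... load the side data, e.g. a bloom filter ...
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "cache example");
        job.setJarByClass(CacheExample.class);
        job.setMapperClass(CacheMapper.class);
        // The file must already exist on HDFS; this path is made up.
        job.addCacheFile(new URI("hdfs://localhost:9000/bloomfilter"));
        // ... set input/output paths and submit as usual ...
    }
}
```

The key difference from the old DistributedCache API is that registration and retrieval both hang off the Job/context rather than static methods.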
Sent from my iPhone
> On Jun 7, 2016, at 5:17 AM, Siddharth Dawar
> wrote:
>
> Hi,
>
> I wrote a program which
Hi,
I wrote a program which creates Map-Reduce jobs in an iterative fashion as
follows:
while (true) {
    JobConf conf2 = new JobConf(getConf(), graphMining.class);
Hi,
I want to use the distributed cache to allow my mappers to access data in
Hadoop 2.7.2. In main, I'm using the command
String hdfs_path = "hdfs://localhost:9000/bloomfilter";
InputStream in = new BufferedInputStream(
    new FileInputStream("/home/siddharth/Desktop/data/bloom_filter"));
Configuration
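The snippet cuts off, but the pattern it appears to be heading toward — copying the local bloom filter file into HDFS and then registering it as a cache file — might look like the sketch below. The two paths are taken from the snippet; everything else (the class wrapper, job name, buffer size) is an assumption for illustration:

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.mapreduce.Job;

public class BloomFilterDriver {
    public static void main(String[] args) throws Exception {
        String hdfsPath = "hdfs://localhost:9000/bloomfilter";
        Configuration conf = new Configuration();

        // Copy the local file into HDFS so tasks on other nodes can see it.
        try (InputStream in = new BufferedInputStream(
                 new FileInputStream("/home/siddharth/Desktop/data/bloom_filter"));
             OutputStream out = FileSystem
                 .get(URI.create(hdfsPath), conf)
                 .create(new Path(hdfsPath))) {
            IOUtils.copyBytes(in, out, 4096, false);
        }

        // Register the HDFS copy with the distributed cache.
        Job job = Job.getInstance(conf, "bloom-filter-job");
        job.addCacheFile(new URI(hdfsPath));
        // ... configure mapper/reducer, input/output paths, and submit ...
    }
}
```

The important point is that addCacheFile() takes a URI that must already be reachable by the cluster (HDFS, not a local path), which is why the local file is uploaded first.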