If this is not a Bigtop package install, please see src/BUILDING.txt
to build the proper native libraries yourself. The tarball doesn't
ship with globally usable native libraries, given the OS/arch variants
out there.
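If you do build from source, the native profile is what you want;
roughly this, assuming a Hadoop 2 source tree (see BUILDING.txt for
the exact prerequisites and flags):

# Build the distribution tarball including the native libraries;
# needs a native toolchain (gcc, cmake, zlib headers, etc.).
mvn package -Pdist,native -DskipTests -Dtar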
On Wed, Jul 3, 2013 at 3:54 AM, Chui-Hui Chiu cch...@tigers.lsu.edu wrote:
Hello,
I
This is what I remember: if you disable journalling, running fsck
after a crash will be required and will take longer. Certainly not a
good idea to have that extra wait after the cluster loses power and is
being restarted, etc.
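For reference, a sketch of how you would check and toggle the journal
with tune2fs; /dev/sdb1 is a placeholder for a data disk, which must
be unmounted before the journal is removed:

# Check whether the filesystem currently carries a journal.
tune2fs -l /dev/sdb1 | grep has_journal
# Drop the journal (unmounted filesystems only); after a crash a full
# fsck will then be needed before the disk can be used again.
tune2fs -O ^has_journal /dev/sdb1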
On Tue, Jul 9, 2013 at 7:42 AM, Chris Embree cemb...@gmail.com wrote:
Hey
Hi Sandy,
Yes, I have been using the AMRMClient APIs. I am planning to shift to
whichever way the whitelist feature is supported. But I am not sure
what is meant by submitting ResourceRequests directly to the RM. Can
you please elaborate on this or give me a pointer to some example code
on how
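For context, this is the kind of request I would be building (just a
sketch, assuming the newer ContainerRequest constructor that takes a
relaxLocality flag; node1 is a placeholder and amrmClient is an
already-started AMRMClient instance):

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.util.Records;

// Ask for a 1 GB container on node1 only; relaxLocality=false should
// keep the scheduler from falling back to other nodes.
Resource capability = Records.newRecord(Resource.class);
capability.setMemory(1024);
Priority priority = Records.newRecord(Priority.class);
priority.setPriority(0);
ContainerRequest req = new ContainerRequest(
    capability, new String[] { "node1" }, null, priority, false);
amrmClient.addContainerRequest(req);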
Hey, guys,
I'm trying to build my own Hadoop (1.1.2) plugin for Eclipse (3.7.2),
and the build keeps saying that some Eclipse packages do not exist.
The Eclipse path is explicitly written in both build.xml and
build-contrib.xml, and I double-checked that the path is correct and
all the
Hi Chris,
You should use a utility like iozone (http://www.iozone.org/) for
benchmarking drives while tuning your filesystem. You may be surprised
at what measured values can show you. :)
We use ext4 for storing HDFS blocks on our compute nodes and journaling has
been left on. We also have
Hi,
I was wondering if I can still use the DistributedCache class in the latest
release of Hadoop (Version 2.0.5).
In my driver class, I use this code to try and add a file to the distributed
cache:
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import
You should use Job#addCacheFile()
Cheers
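A minimal sketch of that on the new API (the job name and HDFS path
here are invented for illustration; new URI(...) throws
URISyntaxException, so declare or catch it):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "cache-example");
// Ships the HDFS file to each task's local working directory.
job.addCacheFile(new URI("/user/andrew/lookup.dat"));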
On Tue, Jul 9, 2013 at 3:02 PM, Botelho, Andrew andrew.bote...@emc.com wrote:
Hi,
I was wondering if I can still use the DistributedCache class in the
latest release of Hadoop (Version 2.0.5).
In my driver class, I use this code to
Siddharth,
The error msgs point to file system issues. Make sure that the file
system locations you specified in the config files are accurate and
accessible.
-Sreedhar
From: siddharth mathur sidh1...@gmail.com
To: user@hadoop.apache.org
Sent:
Hi,
I am running HiBench on my Hadoop setup and am not able to initialize
the history viewer:
Caused by: java.io.IOException: Not a valid history directory output/log/_history
I did not find much on the internet. Any idea what is going wrong? My
Hadoop cluster is running the terasort benchmark properly.
Hi Siddharth,
When running multi-node, you need to take care of the localhost
entries on the slave machines; from the error messages, the
TaskTracker root directory is not reachable from the master. Please
check and rerun it.
Thanks,
Kiran
On Tue, Jul 9, 2013 at 10:26 PM, siddharth mathur
It should be like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = new Configuration();
Job job = new Job(conf, "test");
job.setJarByClass(Test.class);
DistributedCache.addCacheFile(new Path("your hdfs path").toUri(),
    job.getConfiguration());

but the best examples are the test cases:
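(Not from the test cases — just a hypothetical sketch of the retrieval
side, to show where the file ends up:)

// In the Mapper, e.g. in setup(Context context):
Path[] cached =
    DistributedCache.getLocalCacheFiles(context.getConfiguration());
// cached[0] is the task-local copy of the file added above.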
Hi,
Here the NM is failing to connect to the ResourceManager.
Have you started the ResourceManager successfully? Or do you see any
problem while starting the ResourceManager in the RM log?
If you have started the ResourceManager on a different machine than
the NM, you need to set this configuration
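Most likely the resource-tracker address, which the NMs use to
register with the RM; a yarn-site.xml sketch for the NM machine, with
rmhost as a placeholder for your RM host:

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>rmhost:8031</value>
</property>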
Hi,
I am using Cloudera Manager 4.1.2 without Hive as a service, so I
installed Hive myself and configured MySQL as the metastore. Using
Cloudera Manager I installed Hue. In Hue, Beeswax (the Hive UI) uses a
Derby database by default; I want to configure its metastore the same
as what Hive is
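In other words, I expect Beeswax to pick up the same hive-site.xml
JDBC settings I gave Hive; roughly these (host and credentials are
placeholders):

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastorehost:3306/metastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>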
Hi users,
I start my HDFS by using start-dfs.sh, and all the nodes start
successfully.
However, stop-dfs.sh does not work when I want to stop HDFS. It shows:
no namenode to stop
no datanode to stop
I have to stop it with the command kill -9 <pid>.
So I wonder how the
You can try the following:
sudo netstat -plten | grep java
This will give you all the Java processes that have a socket open. You
can easily identify the right ones from the port numbers you have
mentioned in config files like core-site.xml, and kill the process.
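For example, assuming fs.default.name in core-site.xml points at port
9000 (a placeholder):

# Grep for the configured port; the last column is PID/program name.
sudo netstat -plten | grep 9000
# Then kill the PID shown there, e.g.:
kill -9 12345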
Thanks & Regards,
Deepak Rosario