Adding hdfs users.
On Aug 1, 2013, at 6:03 PM, Mapred Learn mapred.le...@gmail.com wrote:
Hi,
I added a node to the exclude list in hdfs-site and ran dfsadmin -refreshNodes,
but the NameNode does not start decommissioning.
Do I need to bounce the NameNode?
Thanks,
JJ
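For reference, a minimal sketch of the decommissioning round trip (the hostname and file path are assumptions). The key prerequisite is that dfs.hosts.exclude was already set in hdfs-site.xml when the NameNode started; otherwise -refreshNodes has nothing to re-read and the NameNode does need a bounce:

```shell
# hdfs-site.xml must already name the exclude file, e.g.:
#   <property>
#     <name>dfs.hosts.exclude</name>
#     <value>/etc/hadoop/conf/dfs.exclude</value>
#   </property>
EXCLUDE_FILE=./dfs.exclude   # stand-in for /etc/hadoop/conf/dfs.exclude

# 1. Add the DataNode, by the name the NameNode registered it under:
echo "dn3.example.com" > "$EXCLUDE_FILE"

# 2. Tell the NameNode to re-read the include/exclude lists (no restart):
command -v hadoop >/dev/null && hadoop dfsadmin -refreshNodes

# 3. The node should now show "Decommission in progress" in the web UI or in:
command -v hadoop >/dev/null && hadoop dfsadmin -report

cat "$EXCLUDE_FILE"
```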
I use hadoop-2.0.5 and QJM for HA.
When the Standby NameNode does a checkpoint, the exception below appears in the Standby
NameNode log:
2013-08-01 13:43:07,965 INFO
org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer: Triggering
checkpoint because there have been 763426 txns since the last checkpoint,
Hi,
Please share if anyone has an answer or clues to my question
regarding the startup performance.
Also, one more thing I observed today: the time taken to run a
command on a container went up by more than a second in this latest version.
When using 2.0.4-alpha, it used to
I recently updated from 1.0.4 to 2.0.5. Since then, streaming jobs have
been failing to launch due to what seems like an incorrect staging path:
# /opt/hadoop2/bin/hadoop jar
/opt/hadoop2/share/hadoop/tools/lib/hadoop-streaming-2.0.5-alpha.jar
-input foo -output bar -mapper baz -reducer
Hi Pierre,
From the information below, we can see the job is running in local mode and trying to
use the local file system for the staging dir. Could you please configure
'mapreduce.framework.name' and 'fs.defaultFS' and check?
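A minimal sketch of the two properties being suggested (the NameNode host/port is an assumption):

```xml
<!-- mapred-site.xml: run MapReduce on YARN instead of the local runner -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```

```xml
<!-- core-site.xml: point the default filesystem at HDFS, not file:/ -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>
```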
Hi all,
I have a very simple M/R scenario: join subscriber records (50 M
records/20 TB) with subscriber events (1 B records/5 TB). The goal is to
update the subscriber records with incoming events.
A few possible solutions:
1. Reduce-side join. In map, emit subscriber id as key. Reduce will get the
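The reduce-side join in option 1 can be sketched in Streaming-style Python (the record formats and tags are illustrative assumptions, not the poster's actual data):

```python
# Reduce-side join sketch: the mapper tags each input so the reducer can
# tell subscriber records from event records that share the same key.

def map_subscribers(line):
    # "sub_id\tname"  ->  (sub_id, ("S", name))
    sub_id, name = line.split("\t", 1)
    return sub_id, ("S", name)

def map_events(line):
    # "sub_id\tevent"  ->  (sub_id, ("E", event))
    sub_id, event = line.split("\t", 1)
    return sub_id, ("E", event)

def reduce_join(key, values):
    # values: all tagged records the shuffle grouped under one subscriber id
    record = None
    events = []
    for tag, payload in values:
        if tag == "S":
            record = payload        # the subscriber record itself
        else:
            events.append(payload)  # incoming events to apply to it
    return key, record, events
```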
Hi,
is there something available to do it on the command line?
I need to cleanup some old files from our system.
Best Regards,
Christian.
Hi Pavan,
thanks, those two commands I know. I mean a way to do it in a
scriptable way, like with nc.
Best Regards,
Christian.
2013/8/1 Pavan Sudheendra pavan0...@gmail.com
Yes.
$HADOOP_HOME/bin/hadoop fs -ls / (to view all the files in HDFS)
$HADOOP_HOME/bin/hadoop fs -rmr
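For the scriptable route asked about earlier in the thread, HDFS also speaks plain HTTP via the WebHDFS REST API, so listing and deleting old files can be driven from curl or even nc. The host/port are assumptions, and dfs.webhdfs.enabled must be true:

```shell
# WebHDFS endpoint -- substitute your NameNode host; 50070 was the default
# NameNode HTTP port in this era.
NN="http://namenode.example.com:50070/webhdfs/v1"

# List a directory:
echo "GET    $NN/tmp?op=LISTSTATUS"
# Delete one old file; recursive=true also removes directories:
echo "DELETE $NN/tmp/old.log?op=DELETE&recursive=true"

# With curl these become, e.g.:
#   curl -s "$NN/tmp?op=LISTSTATUS"
#   curl -s -X DELETE "$NN/tmp/old.log?op=DELETE&recursive=true"
```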
Good catch. I hadn't noticed the file:/ instead of hdfs:/. Setting the
framework to yarn got rid of the problem.
Thanks,
Pierre
On 08/01/2013 11:08 AM, Devaraj k wrote:
Hi Pierre,
As per the below information, we see Job is running in local mode and
trying to use the local file system
That's OK, but why can I not use
com.hadoop.compression.lzo.DistributedLzoIndexer?
# hadoop jar /usr/lib/hadoop/lib/hadoop-lzo-0.4.15.jar
com.hadoop.compression.lzo.LzoIndexer /alex/ttt.lzo
13/08/02 09:11:09 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
13/08/02 09:11:09 INFO
I use YARN, and I commented out the following option, and the error is different:
vi /etc/hadoop/conf/mapred-site.xml
<!--
<property>
  <name>mapred.job.tracker</name>
  <value>CH22:8088</value>
</property>
-->
# hadoop jar /usr/lib/hadoop/lib/hadoop-lzo-0.4.15.jar
I recently got a mini cluster corrupted through my own mishandling.
This mini cluster's dfs.replication was set to 1.
After an irregular OS restart, the NameNode will not leave safemode; the block
ratio is 0.9862 against the 0.999 threshold.
In the http://ip:50075/blockScannerReport, I notice there is rate limit to
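For context, safemode only exits by itself once the reported block ratio reaches dfs.safemode.threshold.pct (0.999 by default), which 0.9862 never will if the blocks are truly gone. A small sketch of that check, with the manual escape hatch left as comments since those commands need a live cluster:

```shell
# Safemode exits automatically only when reported/total >= threshold.
RATIO=0.9862
THRESHOLD=0.999   # dfs.safemode.threshold.pct default
awk -v r="$RATIO" -v t="$THRESHOLD" 'BEGIN { exit !(r < t) }' \
  && echo "still below threshold: safemode will not exit on its own"

# Manual escape hatch, once the missing blocks are confirmed unrecoverable:
#   hadoop dfsadmin -safemode get    # inspect current state
#   hadoop dfsadmin -safemode leave  # force exit (missing data stays missing)
#   hadoop fsck /                    # see which files lost blocks
```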
In Hadoop 2.0 some of the classes have changed from an abstract class to an
interface.
You'll have to compile again. In addition, you need to use a version of
hadoop-lzo that is compatible with Hadoop 2.0 (Yarn).
See: https://github.com/twitter/hadoop-lzo/issues/56
and the announcement of a newer
Hi,
The DataBlockScanner isn't responsible for the DN block reports at
startup, which is a wholly different thread/process - it is a NN
independent operation that merely verifies blocks in the background
for the DN's own health. Depending on what the outage caused, it is
likely that you are
Hi Harsh, thanks for the reply.
Yes, dfs.replication was set to 1, but no missing mount.
Another question: will a replication factor of 1 often lead to missing
blocks?
After startup, will the ratio reported in the admin UI, e.g. 0.9826, stay
unchanged even while the DataBlockScanner is still running?
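On the replication question: with dfs.replication=1 every block has exactly one copy, so any single disk or DataNode failure produces missing blocks, which matches what the restart exposed here. A hedged fragment for raising the default (existing files keep their old factor and need `hadoop fs -setrep -R -w 3 /path` separately):

```xml
<!-- hdfs-site.xml: replication factor applied to newly written files -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```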