Rekha
From: Azuryy Yu azury...@gmail.com
Reply-To: user@hadoop.apache.org user@hadoop.apache.org
Date: Thursday 21 November 2013 5:19 PM
To: user@hadoop.apache.org user@hadoop.apache.org
Cc: hdfs-...@hadoop.apache.org hdfs-...@hadoop.apache.org
Subject: Re: HDFS upgrade problem of fsImage
... does not discard any record.
Thanks.
From: Azuryy Yu azury...@gmail.com
Reply-To: user@hadoop.apache.org user@hadoop.apache.org
Date: Thursday, 21 November 2013 07:31
To: user@hadoop.apache.org user@hadoop.apache.org
Subject: Re: Missing records from HDFS
What's your Hadoop version, and which InputFormat are you using?
public void close() throws IOException {
  if (in != null) {
    in.close();
  }
}
}
Thanks.
From: Azuryy Yu azury...@gmail.com
Reply-To: user@hadoop.apache.org user@hadoop.apache.org
Date: Friday, 22 November 2013 12:19
To: user@hadoop.apache.org user@hadoop.apache.org
#Upgrading_from_older_release_to_0.23_and_configuring_federation
From: Azuryy Yu azury...@gmail.com
Reply-To: user@hadoop.apache.org user@hadoop.apache.org
Date: Thursday 21 November 2013 9:48 AM
To: hdfs-...@hadoop.apache.org hdfs-...@hadoop.apache.org,
user@hadoop.apache.org user@hadoop.apache.org
Subject: HDFS upgrade problem
http://wiki.apache.org/hadoop/Hadoop_Upgrade
https://twiki.grid.iu.edu/bin/view/Storage/HadoopUpgrade
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/Federation.html#Upgrading_from_older_release_to_0.23_and_configuring_federation
From: Azuryy Yu azury...@gmail.com
Reply-To: user
For MRv1, it's impossible.
On 2013-11-22 5:46 AM, Ivan Tretyakov itretya...@griddynamics.com
wrote:
Thank you for your replies!
We are using MR version 1 and my question is regarding this version.
Omkar,
are you talking about MR1 or MR2? I didn't find a property to limit the number
of running
Hi Dear,
I have a small test cluster with hadoop-2.0.x and HA configured, but I
want to upgrade to hadoop-2.2.
I don't want to stop the cluster during the upgrade, so my steps are:
1) on the standby NN: hadoop-daemon.sh stop namenode
2) remove the HA configuration in the conf
3) hadoop-daemon.sh start
What's your Hadoop version, and which InputFormat are you using?
Are these files under one directory, or are there lots of subdirectories? How did
you configure the input path in your main?
On Thu, Nov 21, 2013 at 12:25 AM, ZORAIDA HIDALGO SANCHEZ zora...@tid.es wrote:
Hi all,
my job is not reading
Hi Nicholas,
This is not Hadoop related.
edu.harvard.seas.scifs.ScifsStandard is your customized class, so
you need to include this class in your ScifsStandard.jar.
On Thu, Nov 21, 2013 at 4:15 AM, Nicholas Murphy halcyo...@gmail.com wrote:
I'm trying to use the aggregate framework but
the StackOverflow site, it is there (see the second
code block where I list the contents of ScifsStandard.jar).
Nick
On Nov 21, 2013, at 1:37 AM, Azuryy Yu azury...@gmail.com wrote:
Hi Nicholas,
This is not Hadoop related.
edu.harvard.seas.scifs.ScifsStandard, which is your customized
hi,
please recommend a good Maven repo to compile the Hadoop source code.
It complains it cannot find jdbm:bundle:2.0.0-M15 while compiling trunk.
Thanks.
:2.0.0-M15:compile
[INFO] | \- org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2:compile
On Mon, Nov 18, 2013 at 3:27 AM, Azuryy Yu azury...@gmail.com wrote:
hi,
please recommend a good Maven repo to compile the Hadoop source code.
It complains it cannot find jdbm:bundle:2.0.0-M15 during
/Linux
[hortonzy@kiyo hadoop]$ mvn -version
Apache Maven 3.0.3 (r1075438; 2011-02-28 17:31:09+)
Cheers
On Mon, Nov 18, 2013 at 10:51 AM, Azuryy Yu azury...@gmail.com wrote:
Ted,
I am on Linux.
On 2013-11-19 1:30 AM, Ted Yu yuzhih...@gmail.com wrote:
Which platform did you perform
dfs.ha.namenodes.mycluster = nn.domain,snn.domain
it should be:
dfs.ha.namenodes.mycluster = nn1,nn2
On Aug 27, 2013 11:22 PM, Smith, Joshua D. joshua.sm...@gd-ais.com
wrote:
Harsh-
Here are all of the other values that I have configured.
hdfs-site.xml
-
dfs.webhdfs.enabled
privacy, so that’s why I didn’t post the actual
values.
So, I think I have the equivalent of nn1,nn2, do I not?
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: Tuesday, August 27, 2013 11:32 AM
To: user@hadoop.apache.org
Subject: RE: HDFS Startup Failure due
because we may use multi-threads to write a single file.
On Aug 8, 2013 2:54 PM, Sathwik B P sath...@apache.org wrote:
Hi,
LineRecordWriter.write(..) is synchronized. I did not find any other
RecordWriter implementation that defines write as synchronized.
Any specific reason for this?
sathwik
On Thu, Aug 8, 2013 at 7:06 AM, Azuryy Yu azury...@gmail.com wrote:
because we may use multi-threads to write a single file.
On Aug 8, 2013 2:54 PM, Sathwik B P sath...@apache.org wrote:
Hi,
LineRecordWriter.write(..) is synchronized. I did not find any other
RecordWriter implementations
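For context, here is a minimal, illustrative sketch (not the actual Hadoop source; the class and field names are invented) of why a shared writer's write method wants to be synchronized when several threads, e.g. under a multithreaded mapper, share one output stream:

  import java.io.DataOutputStream;
  import java.io.IOException;

  // Illustrative only: if two threads interleaved inside write(), the key,
  // tab, value and newline bytes of different records would mix, so the
  // whole record emission is a single synchronized critical section.
  public class SimpleLineWriter<K, V> {
    private final DataOutputStream out;

    public SimpleLineWriter(DataOutputStream out) {
      this.out = out;
    }

    public synchronized void write(K key, V value) throws IOException {
      out.write(key.toString().getBytes("UTF-8"));
      out.write('\t');
      out.write(value.toString().getBytes("UTF-8"));
      out.write('\n');
    }
  }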
if you want HA, then do you want to deploy journal node on the DN?
On Aug 8, 2013 5:09 PM, ch huang justlo...@gmail.com wrote:
hi,all:
My company needs to build a 10-node Hadoop cluster (2 namenodes and
8 datanodes/node managers, for both data storage and data analysis); we
have HBase
is it necessary, then perhaps the answer is no.
On Aug 8, 2013 3:43 PM, Azuryy Yu azury...@gmail.com wrote:
It's not Hadoop-forked threads; we may create a line record writer, then
call this writer concurrently.
On Aug 8, 2013 4:00 PM, Sathwik B P sathwik...@gmail.com wrote:
Hi,
Thanks for your reply
Manish,
you stopped HDFS and then started HDFS on the standby name node, right?
Please look at https://issues.apache.org/jira/browse/HDFS-5058
There are two solutions:
1) start HDFS on the active name node, not the SBN
2) copy {namenode.name.dir}/* to the SBN
I advise #1.
On Wed, Aug 7, 2013 at 3:00
Hi Dears,
Can I specify how many slots to use for reduce?
I know we can specify the number of reduce tasks, but does one task occupy one slot?
Is it possible that one task occupies more than one slot in Hadoop-1.1.2?
Thanks.
AM, Azuryy Yu azury...@gmail.com wrote:
Hi Dears,
Can I specify how many slots to use for reduce?
I know we can specify the number of reduce tasks, but does one task occupy
one slot?
Is it possible that one task occupies more than one slot in Hadoop-1.1.2?
Thanks.
map slot, and if the task is a reduce
task then it will use one reduce slot from the configured ones.
Thanks
Devaraj k
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: 08 August 2013 08:27
To: user@hadoop.apache.org
Subject: Re: specify Mapred tasks and slots
My
All the differences are listed on the last URL you provided:
https://github.com/twitter/hadoop-lzo
Did you read it carefully?
On Thu, Jul 25, 2013 at 11:28 AM, 周梦想 abloz...@gmail.com wrote:
Hello,
In the page: http://hadoop.apache.org/docs/r1.1.2/deployment_layout.html
- LZO - LZ0
hi,
can you run
'hdfs namenode -initializeSharedEdits' on the active NN? Remember to start all
journal nodes before trying this.
On Jul 19, 2013 5:17 PM, lei liu liulei...@gmail.com wrote:
I use hadoop-2.0.5 version and use QJM for HA.
I use ./hdfs namenode -bootstrapStandby for
that's not on /tmp?
On Mon, Jul 15, 2013 at 2:43 PM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
I don't have it in my hdfs-site.xml, in which case probably the default
value is taken..
On Mon, Jul 15, 2013 at 2:29 PM, Azuryy Yu azury...@gmail.com wrote:
please check
Hi Bertrand,
I guess you configured two racks in total: one IDC is a rack, and the other IDC is
another rack.
So if you don't want re-replication while one IDC is down, you have to
change the replica placement policy:
if there is a minimum number of blocks on one rack, then don't do anything. (here
hi,
from the log:
NameNode low on available disk space. Entering safe mode.
this is the root cause.
On Jul 15, 2013 2:45 PM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
Hi,
I am doing no activity on my single node cluster which is using
2.1.0-beta, and still observed that it
please check dfs.datanode.du.reserved in the hdfs-site.xml
On Jul 15, 2013 4:30 PM, Aditya exalter adityaexal...@gmail.com wrote:
Hi Krishna,
Can you please send screenshots of namenode web UI.
Thanks Aditya.
On Mon, Jul 15, 2013 at 1:54 PM, Krishna Kishore Bonagiri
yes. it is useful.
On Jul 16, 2013 5:40 AM, Niels Basjes ni...@basjes.nl wrote:
Hi,
When giving users access to a Hadoop cluster they need a few XML config
files (like hadoop-site.xml).
They put these somewhere on their PC and start running their jobs on the
cluster.
Now when you're
the conf on the machine the client runs on will take effect.
On Jul 13, 2013 4:42 PM, Kiran Dangeti kirandkumar2...@gmail.com wrote:
Shalish,
The default block size is 64MB, which is used at the client end. Make sure
you have the same in your conf as well. You can increase the size of each block
to 128MB or
sorry for typo,
mahout, not mahou. sent from mobile
On Jul 11, 2013 9:40 PM, Azuryy Yu azury...@gmail.com wrote:
hi,
put all mahou jars under hadoop_home/lib, then restart cluster.
On Jul 11, 2013 8:45 PM, Margusja mar...@roo.ee wrote:
Hi
I have two nodes:
n1 (master, slave) and n2
you didn't set yarn.nodemanager.address in your yarn-site.xml
On Wed, Jul 10, 2013 at 4:33 PM, Francis.Hu francis...@reachjunction.com wrote:
Hi,All
** **
I have a hadoop- 2.0.5-alpha cluster with 3 data nodes . I have Resource
Manager and all data nodes started and can access web
It should be like this:
Configuration conf = new Configuration();
Job job = new Job(conf, "test");
job.setJarByClass(Test.class);
DistributedCache.addCacheFile(new Path("your hdfs path").toUri(),
    job.getConfiguration());
but the best example is the test cases:
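And on the read side, a minimal sketch of picking the cached file up inside the mapper, assuming the Hadoop 1.x DistributedCache API (the class name here is illustrative):

  import java.io.IOException;
  import org.apache.hadoop.filecache.DistributedCache;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;

  public class CacheAwareMapper extends Mapper<LongWritable, Text, Text, Text> {
    private Path cachedFile;

    @Override
    protected void setup(Context context) throws IOException {
      // Files registered with addCacheFile() are localized on each node.
      Path[] localFiles =
          DistributedCache.getLocalCacheFiles(context.getConfiguration());
      if (localFiles != null && localFiles.length > 0) {
        cachedFile = localFiles[0]; // open and parse it here as needed
      }
    }
  }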
Thanks Harsh, always detailed answers each time.
Yes, this is an unsupported scenario. The load balancer is very slow even
after I set bandwidthPerSec to a large value, so I want to take this way
to solve the problem quickly.
On Mon, Jul 8, 2013 at 1:46 PM, Viral Bajaria
can remember
all blocks' structure locally, so the block file ownership would be
confirmed at startup,
and even if some pieces of blk_ files are lost, then the NN can find they are
under-replicated, am I right? Thanks.
On Mon, Jul 8, 2013 at 2:07 PM, Azuryy Yu azury...@gmail.com wrote
, 2013 at 2:07 PM, Azuryy Yu azury...@gmail.com wrote:
Thanks Harsh, always detailed answers each time.
Yes, this is an unsupported scenario. The load balancer is very slow even
after I set bandwidthPerSec to a large value, so I want to take this
way to
solve the problem quickly
\staging\Sudhir1731506911\.staging to 0700
java.io.IOException: Failed to set permissions of path:
\tmp\hadoop-Sudhir\mapred\staging\Sudhir1731506911\.staging to 0700
Thanks
*Sudhir *
--
From: Azuryy Yu azury...@gmail.com
To: user@hadoop.apache.org; sudhir543
Use an InputFormat under the mapreduce package; the mapred package is the very
old package. Generally, you can extend FileInputFormat under the
o.a.h.mapreduce package.
On Fri, Jul 5, 2013 at 1:23 PM, Devaraj k devara...@huawei.com wrote:
Hi Ahmed,
** **
Hadoop 0.20.0 included
I filed this issue at :
https://issues.apache.org/jira/browse/HDFS-4959
On Fri, Jul 5, 2013 at 1:06 PM, Azuryy Yu azury...@gmail.com wrote:
The client doesn't have any connection problem.
On Fri, Jul 5, 2013 at 12:46 PM, Devaraj k devara...@huawei.com wrote:
And also could you check whether
hadoop fs -chmod -R 755 \tmp\hadoop-Sudhir\mapred\staging
Then it should work.
On Sat, Jul 6, 2013 at 1:27 PM, sudhir543-...@yahoo.com
sudhir543-...@yahoo.com wrote:
I am new to hadoop, just started reading 'hadoop the definitive guide'.
I downloaded hadoop 1.1.2 and tried to run a sample
Hi Bing,
HA does not conflict with HDFS federation.
For example, if you have two name services, cluster1 and cluster2, then:
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://n1.com:8485;n2.com:8485/cluster1</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://n1.com:8485;n2.com:8485/cluster2</value>
</property>
This is because you don't use the same clusterID. All data nodes and
namenodes should use the same clusterID.
On Thu, Jul 4, 2013 at 3:12 PM, Bing Jiang jiangbinglo...@gmail.com wrote:
Hi, all
We try to use hadoop-2.0.5-alpha, using two namespaces, one is for hbase
cluster, and the other
Additionally,
if these are two new clusters, then on each namenode use: hdfs namenode
-format -clusterID yourID
But if you want to upgrade these two clusters from non-HA to HA, then use:
bin/start-dfs.sh -upgrade -clusterID yourID
On Thu, Jul 4, 2013 at 3:14 PM, Azuryy Yu azury...@gmail.com
It's random.
On Jul 4, 2013 3:33 PM, Bing Jiang jiangbinglo...@gmail.com wrote:
If the cluster id is not set when formatting the Namenode, is there a policy in
hdfs to guarantee an even distribution of DataNodes into different
namespaces, or is it just random?
2013/7/4 Azuryy Yu azury...@gmail.com
Hi,
I am using hadoop-2.0.5-alpha, and I added 5 datanodes into dfs_exclude.
hdfs-site.xml:
<property>
  <name>dfs.hosts.exclude</name>
  <value>/usr/local/hadoop/conf/dfs_exclude</value>
</property>
then:
hdfs dfsadmin -refreshNodes
but there are no decommissioning nodes shown on the webUI, and not any
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: 05 July 2013 08:12
To: user@hadoop.apache.org
Subject: Decommission datanode - no response
Hi,
I am using hadoop-2.0.5-alpha, and I added 5 datanodes into dfs_exclude.
hdfs-site.xml:
<property>
for switching
datanodes from live to dead.
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: Friday, July 05, 2013 10:42
To: user@hadoop.apache.org
Subject: Decommission datanode - no response
Hi,
I am using hadoop-2.0.5-alpha, and I added 5 datanodes into dfs_exclude
the file with new hosts, and the refreshNodes command can be issued;
then the newly updated DNs will be decommissioned.
Thanks
Devaraj k
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: 05 July 2013 08:48
To: user@hadoop.apache.org
Subject: Re: Decommission
The client doesn't have any connection problem.
On Fri, Jul 5, 2013 at 12:46 PM, Devaraj k devara...@huawei.com wrote:
Could you also check whether the client is connecting to the NameNode, or
whether there is any failure in connecting to the NN?
Thanks
Devaraj k
From: Azuryy Yu
Hi Manuel,
2013-07-03 15:03:16,427 WARN
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place
enough replicas, still in need of 3
2013-07-03 15:03:16,427 ERROR
org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
as:root cause:java.io.IOException: File
Hi Uma,
I think there is minimal performance degradation if you set
dfs.datanode.synconclose to true.
On Tue, Jul 2, 2013 at 3:31 PM, Uma Maheswara Rao G mahesw...@huawei.com wrote:
Hi Dave,
Looks like your analysis is correct. I have faced a similar issue some time
back.
See the discussion
Hi Dear all,
I just found this by chance; maybe you all know it already, but I'll just share it
here again.
Yet Another Resource Negotiator: YARN
from:
http://adtmag.com/blogs/watersworks/2012/08/apache-yarn-promotion.aspx
It's not an HDFS issue.
dfs.replication is a client-side configuration, not server-side, so you
need to set it to '2' on the client side (where your application runs),
THEN execute a command such as hdfs dfs -put, or call the HDFS API in your Java
application.
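As a minimal sketch of that client-side setting (the paths are illustrative):

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class ReplicationDemo {
    public static void main(String[] args) throws IOException {
      Configuration conf = new Configuration();
      conf.set("dfs.replication", "2"); // client-side: applies to files this client creates
      FileSystem fs = FileSystem.get(conf);
      fs.copyFromLocalFile(new Path("/local/data.txt"), new Path("/user/me/data.txt"));
      // For a file that already exists, replication can be changed explicitly:
      fs.setReplication(new Path("/user/me/data.txt"), (short) 2);
    }
  }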
On Tue, Jul 2, 2013 at 12:25 PM, Francis.Hu
From the log: libGfarmFSNative.so: libgfarm.so.1: cannot open shared
object file: No such file or directory
I don't think you put libgfarm.* under
$HADOOP_HOME/lib/native/Linux-amd64-64 (Linux-i386-32 if running on a 32-bit
OS) on all nodes.
On Thu, Jun 27, 2013 at 10:44 AM, Harsh J
There is no MN; NM is the node manager.
--Send from my Sony mobile.
On Jun 26, 2013 6:31 AM, yuhe justlo...@gmail.com wrote:
I plan to use CDH3u4, and what is MN?
--
Sent via 语盒 @2013-06-25 22:36
http://www.yuchs.com
-- Original message --
user@hadoop.apache.org @2013-06-25 15:12
What version of Hadoop
Can you paste some error logs here? You can find them on the JT or TT. And
tell us the Hadoop version.
On Sun, Jun 23, 2013 at 9:20 PM, Pavan Kumar Polineni
smartsunny...@gmail.com wrote:
Hi all,
First I have a machine with all the daemons running on it. After that I
added two data
I advise the community version of Hadoop-1.1.2, which is a stable release;
Hadoop 2 has no stable release currently, even though all alpha releases were
extensively tested.
But personally, I think HDFS2 is stable now (no?), and MR1 is also stable, but
YARN still needs extensive tests (at least I think so),
so our
You'd have to write a JSONInputFormat, or Google first to find an existing one.
--Send from my Sony mobile.
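If each input line holds one complete JSON document, a minimal sketch of such an InputFormat could be as simple as reusing LineRecordReader and parsing in the mapper (the class name is invented):

  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.InputSplit;
  import org.apache.hadoop.mapreduce.RecordReader;
  import org.apache.hadoop.mapreduce.TaskAttemptContext;
  import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;
  import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

  // Assumes one JSON object per line; the mapper receives the raw line and
  // parses it with whatever JSON library is on the classpath.
  public class JsonLineInputFormat extends TextInputFormat {
    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
        InputSplit split, TaskAttemptContext context) {
      return new LineRecordReader();
    }
  }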
On Jun 23, 2013 7:06 AM, jamal sasha jamalsha...@gmail.com wrote:
Then how should I approach this issue?
On Fri, Jun 21, 2013 at 4:25 PM, Niels Basjes ni...@basjes.nl wrote:
If you try to hammer in
$HADOOP_HOME/bin/hadoop-daemon.sh stop namenode
On Wed, Jun 19, 2013 at 2:38 PM, Pavan Kumar Polineni
smartsunny...@gmail.com wrote:
For testing NameNode crashes and failures, i.e., the single point of failure.
--
Pavan Kumar Polineni
ps aux | grep java, then you can find the pid, and just 'kill -9' it to stop the
Hadoop process.
On Mon, Jun 17, 2013 at 4:34 PM, Harsh J ha...@cloudera.com wrote:
Just send the processes a SIGTERM signal (regular kill). Its what the
script does anyway. Ensure to change the PID directory before the next
From the log, there is no room on HDFS.
--Send from my Sony mobile.
On Jun 16, 2013 5:12 AM, sumit piparsania sumitpiparsa...@yahoo.com
wrote:
Hi,
I am getting the below error while executing the command. Kindly assist me
in resolving this issue.
$ bin/hadoop fs -put conf input
You need to add the Lucene index tar.gz to the distributed cache as an archive,
then create an index reader in the mapper's setup.
--Send from my Sony mobile.
On Jun 12, 2013 12:50 AM, parnab kumar parnab.2...@gmail.com wrote:
Hi,
I need to read an existing Lucene index in a map. Can someone
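A rough sketch of the approach suggested above, assuming the Hadoop 1.x DistributedCache and a Lucene 3.x-style API (paths and class names are illustrative):

  import java.io.File;
  import java.io.IOException;
  import org.apache.hadoop.filecache.DistributedCache;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.store.FSDirectory;

  public class IndexLookupMapper extends Mapper<LongWritable, Text, Text, Text> {
    private IndexReader reader;

    @Override
    protected void setup(Context context) throws IOException {
      // Archives registered with addCacheArchive() are unpacked locally;
      // this assumes the tar.gz unpacks to a directory of index files.
      Path[] archives =
          DistributedCache.getLocalCacheArchives(context.getConfiguration());
      reader = IndexReader.open(FSDirectory.open(new File(archives[0].toString())));
    }

    @Override
    protected void cleanup(Context context) throws IOException {
      if (reader != null) {
        reader.close();
      }
    }
  }

On the submitting side, the archive would go in with something like DistributedCache.addCacheArchive(new URI("hdfs:///path/index.tar.gz"), job.getConfiguration()).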
If you want to work with HA, yes, all of these configurations are needed.
--Send from my Sony mobile.
On Jun 11, 2013 8:05 AM, Praveen M lefthandma...@gmail.com wrote:
Hello,
I'm a hadoop n00b, and I had recently upgraded from hadoop 0.20.2 to
hadoop 2 (chd-4.2.1)
For a client configuration to
not finding it compiled with the global native compile option?
Do you face a specific error?
Per the pom.xml of hadoop-hdfs, it will build fuse-dfs if native
profile is turned on, and you can assert for fuse-requirement with
-Drequire.fuse=true.
On Sun, Jun 9, 2013 at 11:03 AM, Azuryy Yu azury
hi,
Can anybody tell me how to compile hdfs-fuse based on Hadoop-2.0-*?
Thanks.
ClientProtocol namenode = DFSClient.createNamenode(conf);
HdfsFileStatus hfs = namenode.getFileInfo(your_hdfs_file_name);
LocatedBlocks lbs = namenode.getBlockLocations(your_hdfs_file_name, 0,
    hfs.getLen());
for (LocatedBlock lb : lbs.getLocatedBlocks()) {
  DatanodeInfo[] info = lb.getLocations();  // datanodes holding this block
}
Can you upgrade to 1.1.2, which is also a stable release and fixes the bug
you are facing now?
--Send from my Sony mobile.
On Jun 2, 2013 3:23 AM, Shahab Yunus shahab.yu...@gmail.com wrote:
Thanks Harsh for the reply. I was confused too as to why security is
causing this.
Regards,
Shahab
On
PM, Azuryy Yu azury...@gmail.com wrote:
Can you upgrade to 1.1.2, which is also a stable release and fixes the
bug you are facing now?
--Send from my Sony mobile.
On Jun 2, 2013 3:23 AM, Shahab Yunus shahab.yu...@gmail.com wrote:
Thanks Harsh for the reply. I was confused too as to why
that on the JIRA?
On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu azury...@gmail.com wrote:
Yes, hadoop-1.1.2 was released on Jan. 31st; just download it.
On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo matteo.lan...@lrz.de
wrote:
Hi Azuryy,
thanks for the update. Sorry for the silly
This should be fixed in the hadoop-1.1.2 stable release.
If we determine completedMapsInputSize is zero, then the job's map tasks MUST
be zero, so the estimated output size is zero.
below is the code:
long getEstimatedMapOutputSize() {
  long estimate = 0L;
  if (job.desiredMaps() > 0) {
    ... "total map output will be " + estimate);
  }
  return estimate;
}
}
On Sun, Jun 2, 2013 at 12:34 AM, Azuryy Yu azury...@gmail.com wrote:
This should be fixed in the hadoop-1.1.2 stable release.
If we determine completedMapsInputSize is zero, then the job's map tasks MUST
be zero, so
Maybe a network issue: the datanode received an incomplete packet.
--Send from my Sony mobile.
On May 24, 2013 1:39 PM, Stephen Boesch java...@gmail.com wrote:
On a smallish (10 node) cluster with only 2 mappers per node after a few
minutes EOFExceptions are cropping up on the datanodes: an example
nohup ./your_bash 1>temp.log 2>&1
--Send from my Sony mobile.
On May 21, 2013 6:32 PM, zheyi rong zheyi.r...@gmail.com wrote:
Hi all,
I would like to run my hadoop job from a bash file several times, e.g.
#!/usr/bin/env bash
for i in {1..10}
do
my-hadoop-job
done
You could look at the ChainReducer javadoc, which meets your requirement; a sketch follows below.
--Send from my Sony mobile.
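For reference, a hedged sketch of that chaining with the old mapred API, where the reducer's output flows straight into a second mapper so you don't manage the intermediate file yourself (AMap, BReduce, CMap and the paths are placeholders):

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.FileInputFormat;
  import org.apache.hadoop.mapred.FileOutputFormat;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.lib.ChainMapper;
  import org.apache.hadoop.mapred.lib.ChainReducer;

  public class ChainDemo {
    public static void main(String[] args) throws Exception {
      JobConf job = new JobConf(ChainDemo.class);
      job.setJobName("chain demo");
      FileInputFormat.setInputPaths(job, new Path("in"));
      FileOutputFormat.setOutputPath(job, new Path("out"));
      JobConf empty = new JobConf(false);
      // map AMap -> reduce BReduce -> map CMap, all inside one MR job
      ChainMapper.addMapper(job, AMap.class, LongWritable.class, Text.class,
          Text.class, Text.class, true, empty);
      ChainReducer.setReducer(job, BReduce.class, Text.class, Text.class,
          Text.class, Text.class, true, empty);
      ChainReducer.addMapper(job, CMap.class, Text.class, Text.class,
          Text.class, Text.class, true, empty);
      JobClient.runJob(job);
    }
  }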
On Apr 20, 2013 11:43 PM, Vikas Jadhav vikascjadha...@gmail.com wrote:
Hello,
Can anyone help me with the following issue:
writing intermediate key,value pairs to a file and reading them again.
Let us say I
I don't think this is easy to answer.
Maybe it's not decided yet. If so, can you tell me what important features are
still being developed, or if there are other reasons?
I'd appreciate it.
You can use it even if it's deprecated.
I can find, in
org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.java:
@Override
public void initialize(InputSplit split,
                       TaskAttemptContext context
                       ) throws IOException, InterruptedException {
Changing data node names or IPs cannot cause data loss. Just keep the
fsimage (under the namenode data dir) and all block data on the data nodes;
then everything can be recovered when you start the cluster.
On Thu, Apr 18, 2013 at 1:20 AM, Tom Brown tombrow...@gmail.com wrote:
We have a
I suppose you run start-mapred as user mapred.
Then: hadoop fs -chown -R mapred:mapred /home/jbu/hadoop_local_install/
hadoop-1.0.4/tmp/mapred/system
This is caused by the fair scheduler; please see
MAPREDUCE-4398: https://issues.apache.org/jira/browse/MAPREDUCE-4398
On Mon, Apr 15, 2013 at 6:43 PM,
This is a ZooKeeper issue.
Please paste the ZooKeeper log here. Thanks.
On Tue, Apr 16, 2013 at 9:58 AM, dylan dwld0...@gmail.com wrote:
It is hbase-0.94.2-cdh4.2.0.
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: April 16, 2013 9:55
To: u...@hbase.apache.org
Subject: Re: Region has
16, 2013 at 10:37 AM, dylan dwld0...@gmail.com wrote:
How do I check the ZooKeeper log? It is binary; how do I transform it
into a normal log? I found
org.apache.zookeeper.server.LogFormatter; how do I run it?
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: 2013
And paste the ZK configuration from zookeeper_home/conf/zoo.cfg.
On Tue, Apr 16, 2013 at 10:42 AM, Azuryy Yu azury...@gmail.com wrote:
It is located under hbase-home/logs/ if your ZooKeeper is managed by HBase.
But I noticed you configured QJM; do your QJM and HBase share the
same ZK
is initializing
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: April 16, 2013 10:59
To: user@hadoop.apache.org
Subject: Re: Re: Re: Re: Region has been CLOSING for too long, this should
eventually complete or the server will expire, send RPC again
did your hbase
] - Expiring session
0x23e0dc5a333000b, timeout of 4ms exceeded
2013-04-16 11:03:44,001 [myid:1] - INFO [ProcessThread(sid:1
cport:-1)::PrepRequestProcessor@476] - Processed session termination for
sessionid: 0x23e0dc5a333000b
From: Azuryy Yu [mailto:azury
they use the same journal nodes and ZK
nodes?
Thanks.
On Mon, Apr 8, 2013 at 2:57 PM, Azuryy Yu azury...@gmail.com wrote:
Thank you very much, Harsh.
No further questions for now.
--Send from my Sony mobile.
On Apr 8, 2013 2:51 PM, Harsh J ha...@cloudera.com wrote:
Hi Azuryy,
QJM:
Yes, multiple
Agreed. Just check your app, or paste the map code here.
--Send from my Sony mobile.
On Apr 14, 2013 4:08 AM, Edward Capriolo edlinuxg...@gmail.com wrote:
Your application logic is likely stuck in a loop.
On Sat, Apr 13, 2013 at 12:47 PM, Chris Hokamp chris.hok...@gmail.com wrote:
When you say
partitioned on
various data nodes?
On Wed, Apr 10, 2013 at 6:30 PM, Azuryy Yu azury...@gmail.com wrote:
The cp command is not parallel; it just calls FileSystem, even though DFSClient
has multiple threads.
DistCp can work well on the same cluster.
On Thu, Apr 11, 2013 at 8:17 AM, KayVajj vajjalak
PM, Alexander Pivovarov apivova...@gmail.com
wrote:
If the cluster is busy with other jobs, distcp will wait for free map slots.
Regular cp is more reliable and predictable, especially if you need to copy
just several GB.
On Apr 10, 2013 6:31 PM, Azuryy Yu azury...@gmail.com wrote:
CP command
Yes, you can start a job directly from a job.xml.
Try hadoop job -submit JOB_FILE, replacing JOB_FILE with your job.xml.
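The programmatic equivalent is a small sketch like this, assuming job.xml is a Configuration-format job description (the path is illustrative; old mapred API):

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;

  public class SubmitFromXml {
    public static void main(String[] args) throws Exception {
      // JobConf can be constructed straight from a job.xml description file.
      JobConf conf = new JobConf(new Path("job.xml"));
      JobClient.runJob(conf); // submits and waits for completion
    }
  }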
On Wed, Apr 10, 2013 at 12:25 AM, Jay Vyas jayunit...@gmail.com wrote:
Hi guys: I can't find much info about the life cycle of the job.xml file
in hadoop.
My thoughts are:
Hi Harsh,
Do you mean BackupNameNode is Secondary NameNode in Hadoop1.x?
On Sun, Apr 7, 2013 at 4:05 PM, Harsh J ha...@cloudera.com wrote:
Yes, it need not keep an edits (transactions) stream locally because
those are passed synchronously to the BackupNameNode, which persists
them on its behalf.
in 2.x today if
you wish to.
On Sun, Apr 7, 2013 at 3:12 PM, Azuryy Yu azury...@gmail.com wrote:
Hi Harsh,
Do you mean BackupNameNode is Secondary NameNode in Hadoop1.x?
On Sun, Apr 7, 2013 at 4:05 PM, Harsh J ha...@cloudera.com wrote:
Yes, it need not keep an edits (transactions
SNN=secondary name node in my last mail.
--Send from my Sony mobile.
On Apr 7, 2013 10:01 PM, Azuryy Yu azury...@gmail.com wrote:
I am confused. Hadoop v2 has NN, SNN, DN, JN (journal node), so what's a
Standby Namenode?
--Send from my Sony mobile.
On Apr 7, 2013 9:03 PM, Harsh J ha
. it just runs the NameNode service),
just a naming convention.
On Sun, Apr 7, 2013 at 7:31 PM, Azuryy Yu azury...@gmail.com wrote:
I am confused. Hadoop v2 has NN, SNN, DN, JN (journal node), so what's a
Standby Namenode?
--Send from my Sony mobile.
On Apr 7, 2013 9:03 PM, Harsh J ha
Hi dears,
I deployed Hadoop v2 with HA enabled using QJM, so my question is:
1) if we also configured HDFS federation, such as:
NN1 is active, NN2 is standby
NN3 is active, NN4 is standby
they are configured as HDFS federation, then,
Can these four NNs use the same journal nodes and
Download hadoop-0.20.203; it includes the hadoop-eclipse plugin, which also
supports hadoop-1.0.4.
Send from my Sony mobile.
On Apr 5, 2013 11:14 PM, sahil soni whitepaper2...@gmail.com wrote:
Hi All,
I have installed Hadoop 1.0.4 on Red Hat Linux 5. I want to
install Eclipse (any version)
hi,
do you think trunk is as stable as a released stable version?
--Send from my Sony mobile.
On Apr 7, 2013 5:01 AM, Harsh J ha...@cloudera.com wrote:
I don't think we publish nightly or rolling jars anywhere on maven
central from trunk builds.
On Sun, Apr 7, 2013 at 2:17 AM, Jay Vyas
namespace, disk space. ns means a limit on the number of blocks; ds is a limit on
total file size.
On Apr 4, 2013 3:12 PM, Bert Yuan bert.y...@gmail.com wrote:
Below is the JSON format of a namenode entry:
{
  inode: {
    inodepath: '/anotherDir/biggerfile',
    replication: 3,
not at all. so don't worry about that.
On Wed, Apr 3, 2013 at 2:04 PM, Yanbo Liang yanboha...@gmail.com wrote:
Does it mean that some replicas may stay in an under-replicated state?
2013/4/3 Azuryy Yu azury...@gmail.com
bq. then namenode start to copy block replicates on DN-2 to another
.
from 15 to 8, the total time spent is almost 4 days. ;(
Someone mentioned that I don't need to decommission node by node.
In this case, are there no problems if I decommission 7 nodes at the
same time?
On Apr 2, 2013, at 12:14 PM, Azuryy Yu azury...@gmail.com wrote:
I can translate
bq. then namenode start to copy block replicates on DN-2 to another DN,
supposed DN-2.
Sorry for the typo. Correction:
then the namenode starts to copy block replicas on DN-1 to another DN,
say DN-2.
On Wed, Apr 3, 2013 at 9:51 AM, Azuryy Yu azury...@gmail.com wrote:
It's different