l usage of the reserved
> container? thx.
uestions/21933937/hadoop-2-2-0-streaming-memory-limitation).
> no answer yet. If there should be any answers in one of the forums, we will
> sync the answers.
other ways besides environment
> variables or command line arguments for passing data from the Client to the
> ApplicationMaster?
>
> Thanks,
> Brian
>
>
>
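Besides environment variables and command-line arguments, data can also be staged in HDFS and handed to the ApplicationMaster as a LocalResource, which the NodeManager localizes before the AM starts. A minimal sketch against the 2.x client API; the path and resource names are illustrative, not from the thread:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.api.records.LocalResource;
    import org.apache.hadoop.yarn.api.records.LocalResourceType;
    import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
    import org.apache.hadoop.yarn.util.ConverterUtils;
    import org.apache.hadoop.yarn.util.Records;

    Configuration conf = new Configuration();

    // (a) environment variable, read in the AM via System.getenv("MY_APP_ARG")
    Map<String, String> env = new HashMap<String, String>();
    env.put("MY_APP_ARG", "some-value");

    // (b) a file the client staged in HDFS, localized into the AM's working dir
    Path hdfsPath = new Path("/tmp/app-config.properties");   // hypothetical path
    FileStatus stat = FileSystem.get(conf).getFileStatus(hdfsPath);
    LocalResource res = Records.newRecord(LocalResource.class);
    res.setResource(ConverterUtils.getYarnUrlFromPath(hdfsPath));
    res.setSize(stat.getLen());
    res.setTimestamp(stat.getModificationTime());
    res.setType(LocalResourceType.FILE);
    res.setVisibility(LocalResourceVisibility.APPLICATION);

    ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
    amContainer.setEnvironment(env);
    amContainer.setLocalResources(Collections.singletonMap("app-config.properties", res));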
d display a proper web page.
>
> Has anyone had this problem? Do you know if and how it can be fixed?
> > is said about what they mean or which values they should be set to. It seems
> > to assume prior knowledge of everything about hadoop.
> >
> > Does anyone know a site with proper documentation about hadoop, or is it
> > hopeless and this whole thing is just a piece o
/hadoop-common/blob/50f0de14e377091c308c3a74ed089a7e4a7f0bfe/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
>
>
> Sincerely,
> Robert
ubmit a bunch of big MR jobs to the Hadoop cluster (each MR job will run
> and NOT in uber mode):
> - Each map task and reduce task of application_1383815949546_0006 will be
> executed in its own container. That means application_1383815949546_0006
> will have lots of containers.
>
> I
re are some ongoing enhancements to make
it better.
hth,
Arun
> Thanks
> John
>
> to a namenode isn't it true? If it does not know where the data reside,
> does a MapReduce application master specify the resource name as "*" which
> means data locality might not be preserved at all? thx,
>
> r
>
>
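Yes; the MR ApplicationMaster takes the host lists from the InputSplits (which the client computed against the NameNode) and asks for containers on those hosts, and "*" (ResourceRequest.ANY) is what a request relaxes to when locality cannot be met. A sketch with the 2.x AMRMClient; the host name is illustrative:

    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

    Resource capability = Resource.newInstance(1024, 1);   // 1 GB, 1 vcore
    Priority pri = Priority.newInstance(0);

    // locality-aware: node list derived from a split's block locations
    ContainerRequest onHost = new ContainerRequest(
        capability, new String[] { "datanode1.example.com" }, null, pri);

    // no preference: equivalent to requesting ResourceRequest.ANY ("*")
    ContainerRequest anywhere = new ContainerRequest(capability, null, null, pri);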
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoop
> OPERATION=Container Finished - Failed TARGET=ContainerImpl
> RESULT=FAILURE DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE
> APPID=application_1382415258498_0002
> CONTAINERID=con
never uses the specified
> partitioner TeraSort#TotalOrderPartitioner at all during job execution.
>
> Can anyone help explain why?
>
> Thanks very much!
>
the 2.2.0 release available?
>
> Thanks,
> Justine
>
>
>
**
> SHUTDOWN_MSG: Shutting down ResourceManager at node1/192.168.147.101
>
>
>
> **
> Cheers !!!
> Siddharth Tiwari
> Have a refreshing day !!!
> "Every duty is holy, and devotion to duty is the highest form of worship of
.lang.ProcessImpl.start(ProcessImpl.java:65)
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
> ... 21 more
>
> it doesn't matter if it is a pure hadoop job or an Oozie-submitted job. There
> seems to be something wrong in the basic configuration. Anyone an id
unning in all nodes.
>
> Have set the jobtracker default memory size in hadoop-env.sh
>
> HADOOP_HEAPSIZE="1024"
>
Have set the mapred.child.java.opts value in mapred-site.xml as,
>
> <property>
>   <name>mapred.child.java.opts</name>
>   <value>-Xmx2048m</value>
> </property>
>
hanks,
> John Lilley
> Chief Architect, RedPoint Global Inc.
> 1515 Walnut Street | Suite 200 | Boulder, CO 80302
> T: +1 303 541 1516 | M: +1 720 938 5761 | F: +1 781-705-2077
> Skype: jlilley.redpoint | john.lil...@redpoint.net | www.redpoint.net
>
ission
> timestamp and the job end timestamp. However, I would like to know the
> breakdown of the execution time, such as the time spent actually on reading
> the input files and on writing output files. How can I get the read and write
> time?
>
> Thanks,
> Yong
>
>
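There is no built-in wall-clock timer for reads and writes, but the job counters give the closest available breakdown: bytes per filesystem and CPU time per task. A sketch that reads them from a completed job (counter group and names as they appear in 2.x history output):

    import org.apache.hadoop.mapreduce.Counters;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.TaskCounter;

    // job is a completed org.apache.hadoop.mapreduce.Job handle
    Counters counters = job.getCounters();
    long cpuMs = counters.findCounter(TaskCounter.CPU_MILLISECONDS).getValue();
    long hdfsRead = counters.findCounter(
        "org.apache.hadoop.mapreduce.FileSystemCounter", "HDFS_BYTES_READ").getValue();
    long hdfsWritten = counters.findCounter(
        "org.apache.hadoop.mapreduce.FileSystemCounter", "HDFS_BYTES_WRITTEN").getValue();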
it have any wrong implications?
>
> Thanks,
> Kishore
since the blocks of my input file are distributed over all nodes.
>
> Maybe someone can give me a hint.
>
> Thanks,
> André Hacker, TU Berlin
to the job, the job fails:
>
> -Dmapred.job.queue.name=
> -D mapreduce.job.acl-view-job=*
>
>
> Is that correct? If yes, is there a way to set a default queue per user and,
> similarly, a better way to set the acl-view property?
>
>
> Thanks,
> Anurag Tangri
getting is the error I'm
> expecting. I've attached the nodemanger and resourcemanager logs for your
> reference as well.
>
> How can I get started on writing YARN applications beyond the initial
> tutorial?
>
> Thanks for any help/pointers!
>
> Prad
>>
>>> 100 tasks are scheduled
>>>
>>> 10 tasks are complete
>>>
>>> 4 tasks are running and they are (4%, 10%, 50%, 70%) complete
>>>
>>> But, given that YARN tasks are simply executables, how can the AM
>>> even
samples in Yarn.
> I have been searching for them but am unable to find any. Please help me.
>
>
>
> Thanks,
> Manickam P
Please ask the CDH lists. Thanks.
On Aug 22, 2013, at 1:39 AM, ch huang wrote:
> hi,all:
> Hadoop 2.0.0-cdh4.3.0 has no hadoop-env.sh; where can I tune JVM options?
start the namenode and data
> nodes?
>
> thanks,
> Rajesh
>
>
>
>
nagiri
> wrote:
> Hi,
>
> Can someone please point to some example code of how to use the whitelist
> feature of YARN, I have recently got RC1 for hadoop-2.1.0-beta and want to
> use this feature.
>
> It would be great if you could point me to a description of what this
> whitelisting feature is. I have gone through some JIRA logs related to it,
> but a more concrete explanation would be helpful.
>
> Thanks,
> Kishore
> at java.lang.Thread.start(Thread.java:887)
>
> at java.lang.ProcessInputStream.<init>(UNIXProcess.java:472)
>
> at java.lang.UNIXProcess$1$1$1.run(UNIXProcess.java:157)
>
> at
> java.security.AccessController.doPrivileged(AccessController.java:202)
>
>
so much with cdh I guess.
>
> Or is it possible to implement this using a different scheduler?
>
> Best & thanks,
> Hans-Peter
> giving me containers on the other node? How can I make sure I get a container
> on the node I want?
>
> Note: I am using the default scheduler, i.e. Capacity Scheduler.
>
> Thanks,
> Kishore
>
>
> On Fri, Jun 21, 2013 at 7:25 PM, Arun C Murthy wrote:
> Check if th
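For reference, the stock schedulers treat a plain node-level request as a preference and may fall back to rack-local or off-rack containers; from hadoop-2.1 onwards AMRMClient.ContainerRequest also takes a relaxLocality flag that makes the node binding hard. A hedged sketch, with an illustrative node name:

    // relaxLocality=false: only the named node may satisfy this request
    ContainerRequest strict = new ContainerRequest(
        capability,
        new String[] { "node1.example.com" },
        null,                        // no rack-level fallback
        Priority.newInstance(0),
        false);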
xception at this time, the process would simply terminate (via
> System.exit(1) )
>
> appMaster.start() however rightly uses the JobFinishEventHandler and things
> work fine.
>
> Shouldn't a failure on init(..) also send a callback suggesting the job
> failed?
>
> Thanks,
> Prashant
>
>
>
>
> 512
> mapred-site.xml
>
> <property>
>   <name>mapreduce.reduce.memory.mb</name>
>   <value>512</value>
>   <source>mapred-site.xml</source>
> </property>
>
> <property>
>   <name>mapred.child.java.opts</name>
>   <value>-Xmx512m -Djava.net.preferIPv4Stack=true -XX:+UseCompressedOops
>     -XX:+HeapDumpOnOutOfMemoryError
>     -XX:HeapDumpPath=/home/sfdc/logs/hadoop/userlogs/@taskid@/</value>
>   <source>mapred-site.xml</source>
> </property>
>
> <property>
>   <name>yarn.app.mapreduce.am.resource.mb</name>
>   <value>1024</value>
>   <source>mapred-site.xml</source>
> </property>
>
>
> Regards,
> Siddhi
>
>
Start with LinuxResourceCalculatorPlugin.
Arun
On Jun 21, 2013, at 1:30 PM, Yuzhang Han wrote:
> Thank you Arun.
>
> I am trying to study how YARN works. Can someone tell me which class(es)
> monitors containers w.r.t. memory usage?
>
>
> On 6/18/2013 4:07 PM
ppreciate any help on this. Thanks.
>
hich just
> works fine when I change the node name to "*". I am working on a single node
> cluster, and I am giving the name of the single node I have in my cluster.
>
> Is there any specific format that I need to give for setHostName()? Why is
> it not working...
>
>
Yes, I did tests and found that the MRv1 jobs could run against YARN
> directly, without recompiling
>
> #2, do you mean that old versions of HBase/Hive cannot run against YARN, and
> only some special versions of them can? If yes, how can I
> get the versions for YARN
obs,
hive queries, pig scripts etc.)
Arun
>
> Thanks in advance!
>
>
>
> 2013/6/19 Rahul Bhattacharjee
> Thanks Arun and Devraj, good to know.
>
>
>
> On Wed, Jun 19, 2013 at 11:24 AM, Arun C Murthy wrote:
> Not true, the CapacityScheduler has supp
>
>
> On Wed, Jun 19, 2013 at 7:59 AM, Arun C Murthy wrote:
> What version of MapReduce are you using? At t
the general process for ApplicationMaster of Yarn to execute a job?
>
> 2. In Hadoop 1.x, we can set the map/reduce slots by setting
> 'mapred.tasktracker.map.tasks.maximum' and
> 'mapred.tasktracker.reduce.tasks.maximum'.
> - For Yarn, the above two parameters no longer work, as Yarn uses containers
> instead, right?
> - For Yarn, we can set the total physical memory for a NodeManager using
> 'yarn.nodemanager.resource.memory-mb'. But how do we set the default physical
> memory size of a container?
> - How do we set the maximum physical memory of a container? Via the
> 'mapred.child.java.opts' parameter?
>
> Thanks!
>
>
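For the memory questions, the 2.x knobs look roughly like the sketch below; the yarn.* settings belong in yarn-site.xml, the mapreduce.* ones in mapred-site.xml or the job configuration, and the values are only examples. mapred.child.java.opts sizes the JVM heap only; a container's physical limit comes from the *.memory.mb requests, bounded by the scheduler minimum/maximum.

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    conf.setInt("yarn.nodemanager.resource.memory-mb", 8192);   // RAM a NodeManager may hand out
    conf.setInt("yarn.scheduler.minimum-allocation-mb", 1024);  // floor/granularity of a container
    conf.setInt("yarn.scheduler.maximum-allocation-mb", 4096);  // ceiling for one container
    conf.setInt("mapreduce.map.memory.mb", 1024);               // container size for a map task
    conf.setInt("mapreduce.reduce.memory.mb", 2048);            // container size for a reduce task
    conf.set("mapreduce.map.java.opts", "-Xmx800m");            // heap inside the map container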
ed via -Xmx arguments in the scripts generated by YARN to
> invoke the MR program in the containers?
>
> Thank you.
and that node is busy, will the request
> fail, or will it give me a container on a different node? In other words, is
> the node name a requirement or a hint?
> Thanks
> John
>
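By default the node name is a hint: the scheduler tries the node, then its rack, then anywhere. A sketch of both behaviors with the 2.x AMRMClient (the five-argument constructor with relaxLocality appeared around 2.1; names are illustrative):

    // hint (default): may fall back to rack-local or off-rack containers
    ContainerRequest hint = new ContainerRequest(
        capability, new String[] { "node1" }, null, pri);

    // requirement: relaxLocality=false pins the request to the named node
    ContainerRequest hard = new ContainerRequest(
        capability, new String[] { "node1" }, null, pri, false);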
tart dfs of hadoop-2.0.0+ version
> and start mapred of mr1-0.20.2+ or something else.
>
> Kindly help me on setting up.
>
> Thanks
> Selva
is something to do with the heartbeat as the allocate call must return
> within a predictable time period?
>
> Thanks,
> Rahul
can collect all of
> the logs at the end of the run and see what happened to each part. Is there
> a convention for doing this? Does each task get a well-defined and unique
> directory where I can leave log files at the end of the run?
> Thanks,
> john
>
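Each container does get a well-defined log directory: <yarn.nodemanager.log-dirs>/<application_id>/<container_id>. In a launch command the <LOG_DIR> token is expanded by the NodeManager, so per-task files are usually dropped there; a sketch, with a hypothetical task executable:

    import org.apache.hadoop.yarn.api.ApplicationConstants;

    String command = "./run_task.sh"
        + " 1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout"
        + " 2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stderr";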
t working directory? Does
> stdout/stderr get captured anywhere by default?
> I ask because I am setting up tests with the “distributed shell” AM and want
> to know if basic commands (e.g. ls) will send stdout/stderr somewhere I can
> see at the end.
> Thanks
> John
>
John,
On Jun 1, 2013, at 7:02 AM, John Lilley wrote:
>
> · Algorithms that are not well-suited to the MR model, such as
> transitive closure. They are more naturally expressed as MPI-like algorithms.
You might be interested in MPICH2 on YARN:
https://github.com/clarkyzl/mpich2-yarn
not sure how
> to use it or even if I should.
> Any pointers would be helpful.
>
> Thanks,
rly soon. Are there notable differences in
> YARN's classes, interfaces, or semantics between 0.23 and 2.0? It seems to be
> supported on both versions.
> Thanks,
> John
>
ation is used.
> I have yet to see anyone comment on the features/benefits of either set of
> methods. Could someone comment on their preferred method for starting a
> MapReduce job from a Java program?
>
>
>
> Thank you.
>
>
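For what it's worth, the commonly recommended route on 2.x is the new org.apache.hadoop.mapreduce.Job API rather than the old JobClient; a minimal driver sketch (class and job names are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "example-job");
    job.setJarByClass(ExampleDriver.class);     // ExampleDriver is illustrative
    // set mapper/reducer classes, key/value types and input/output paths here
    boolean ok = job.waitForCompletion(true);   // submits, blocks, prints progress
    // job.submit() instead returns immediately; poll with job.isComplete()
    System.exit(ok ? 0 : 1);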
>>
>>LOG.info("Trying to request node: " + containerNodes[0]);
>>ContainerRequest request = new ContainerRequest(capability,
>> containerNodes, null,
>>pri, numContainers);
>>
>> And while the log output shows this code is being execute
> hasn't resolved.
> Does anyone have any idea what the problem can be?
>
> Thanks,
> Farrokh
>
PM, wrote:
> Hi,
>
> I am using the hadoop that is packaged within hbase-0.94.1. It is hadoop 1.0.3.
> There is some mapreduce job running on my server. After some time, I found
> that my folder /tmp/hadoop-root/mapred/local/archive has grown to 14 GB.
>
> How to configure this and limit the size? I do not want to waste my space
> for archive.
>
> Thanks,
>
> Xia
>
>
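On hadoop-1.x that directory is the DistributedCache's local store; its size is capped by local.cache.size (in bytes, 10 GB by default), and older entries are purged once the cap is exceeded. A sketch of lowering the cap (the value is only an example):

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // cap the local DistributedCache store at 2 GB (default is 10 GB)
    conf.setLong("local.cache.size", 2L * 1024 * 1024 * 1024);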
ge read
> operations)(0)][(HDFS_WRITE_OPS)(HDFS: Number of write
> operations)(1)]}{(org\.apache\.hadoop\.mapreduce\.TaskCounter)(Map-Reduce
> Framework)[(SPILLED_RECORDS)(Spilled Records)(0)][(CPU_MILLISECONDS)(CPU time
> spent \\(ms\\))(80)][(PHYSICAL_MEMORY_BYTES)(Physical memory \\(bytes\\)
> snapshot)(91693056)][(VIRTUAL_MEMORY_BYTES)(Virtual memory \\(bytes\\)
> snapshot)(575086592)][(COMMITTED_HEAP_BYTES)(Total committed heap usage
> \\(bytes\\))(62324736)]}nullnullnullnullnullnullnullnullnullnullnullnullnull"
>
> ...
>
>
> Best Regards,
> Christian.
rg.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:71)
>> at
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:556)
>> at
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:549)
>> at
>> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130)
>> at
>> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
>> at java.lang.Thread.run(Thread.java:662)
>>
>> Where am I making the mistake?
>>
>> regards
>> tmp
>
>
>
> --
> Harsh J
so has changed with the new
> version; it now has the Client extending YarnClientImpl, among many
> other changes.
>
> Is there any guide as to how should I modify my old application to work
> against the new version?
>
> Thanks,
> Kishore
tributedCache.
>>>
>>> I saw that with DistributedCache you can give an hdfs path and the
>>> task nodes will get the data on the local file system. But what
>>> advantages does it have compared with a simple HDFS read using the
>>> FSDataInputStream.open() method?
>>>
>>> Thank you very much,
>>> Alberto
>>>
>>>
>>> --
>>> Alberto Cordioli
>>
>>
>>
>> --
>> Harsh J
>
>
>
> --
> Alberto Cordioli
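The practical advantage over FSDataInputStream.open() in each task: a cached file is copied to a node's local disk once per job and shared by every task on that node, instead of being pulled over the network by each task separately. A sketch with the classic API (the path is illustrative):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;

    Configuration conf = new Configuration();
    // in the driver: register an HDFS file (the #lookup fragment names a symlink)
    DistributedCache.addCacheFile(new URI("/user/me/lookup.dat#lookup"), conf);

    // in a task: read the localized copy from the node's local disk
    Path[] localFiles = DistributedCache.getLocalCacheFiles(conf);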
ce errors
> warning after cluster start and beyond the run interval. I assume I'm
> missing something but I can't seem to find any good docs on the matter.
>
> --
>
> --tucker
nerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:556)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:549)
> at
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(Asy
would be responsible
>> for ensuring the correct set of containers are allocated to the container
>> based on resource usage limits, priorities, etc. [ Again to clarify, OS type
>> scheduling is currently not supported ]. If a script fails, the container's
>> exit code and completion status would be fed back to the controller which
>> would then have to handle retries ( may require asking the RM for a new
>> container ).
>>
>>
>>
>>> Thank you in advance for your support,
>>> Ioan Zeng
>>
oop but I don't know of the details. Anyone know how they work?
>
> Also, I believe there are tools in Linux that can kill processes in case of
> memory issues and otherwise restrict what a certain user can do. These seem
> like a more flexible solution although they won't cover
submit to 24 machines; the other machines are only for research users.
>
> This will prevent the memory leak problem.
>
>
> -Dhanasekaran.
> Did I learn something today? If not, I wasted it.
> org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcPayloadHeaderProtos.java:985)
> at
> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:882)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:813)
>
>
>
> Regards,
> samir
>
>
>
>
> --
> Nitin Pawar
>
>
> smoothly?
> 2. If using MRv1, does our m/r code based on CDH3 need to be changed?
> 3. Is MRv1 stable enough to be used in production?
>
>
> Thanks & best regards,
> Jing Wang
>
>
>
ote a stub to unzip it manually. I was
> positively unable to get the archive unzipped *to a local directory* in any
> way.
>
> Unfortunately it works in local but not on the cluster. I have still to
> discover why. :(
>
> Ciao,
>
>
>
you response
>
> Note: It is a little bit urgent; does anyone have experience with that?
> Thanks,
> samir
>
emanager
> how Containers will be allocated according to these settings.
> What about mapred.reduce.child.java.opts?
>
> regards
> Youpeng Yang
job
>3) completed job
>
> under job tracker log from Web-UI
> through http://localhost:50030
>
> Note: Is any other configuration required for that?
>
>
>
> --
> Harsh J
bTracker
> 2732 RunJar
> 2504 StatePusher
> 31902 instance-controller.jar
> 23553 Jps
> 22444 RunJar
> 2077 NameNode
>
> I am not sure how I can impose the CapacityScheduler on ec2/emr machines.
> -Shaojun
>
> On Fri, Jan 18, 2013 at 1:18 PM, Arun C Murthy wrote:
>
, but not
> proportional to the number of nodes; the processing capacity achieved with 20
> nodes is not double the processing capacity achieved with 10 nodes. Is
> there any evaluation of this?
>
> Thank you!
>
> --
> Thiago Vieira
(I use streaming, and I pass the config
> in the command line)
>
> -Dmapred.child.java.opts=-Xmx8000m <-- did not bring down the number of
> mappers
>
> -Dmapred.cluster.map.memory.mb=32000 <-- did not bring down the number
> of mappers
>
> Am I missing something here?
> I use
development will merge or is it increasingly likely the
> subteams will continue their separate routes?
>
> Thanks,
> Glen
> --
> Glen Mazza
> Talend Community Coders - coders.talend.com
> blog: www.jroller.com/gmazza
status
> changes (can be done via a daemon thread too) such that the framework does
> not think it has died or gone unresponsive.
>
> But ideally, you'd want to leverage YARN for this. Libraries such as Kitten
> [2] help along in this task.
>
> [1] -
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1/src/examples/org/apache/hadoop/examples/SleepJob.java
> [2] - https://github.com/cloudera/kitten/
>
> --
> Harsh J
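The heartbeat idea above comes down to calling progress() from the long-running code so the framework does not declare the task dead after mapred.task.timeout; a sketch of the pattern in a 2.x mapper (the work items are hypothetical):

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class LongRunningMapper
        extends Mapper<LongWritable, Text, NullWritable, NullWritable> {
      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        for (int step = 0; step < 1000; step++) {  // illustrative amount of work
          doExpensiveWork(step);                   // hypothetical long-running unit
          context.progress();                      // heartbeat: tells the framework we are alive
        }
      }
      private void doExpensiveWork(int step) { /* hypothetical computation */ }
    }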
at ApplicationMaster.readMessage(ApplicationMaster.java:241)
> at
> ApplicationMaster$SectionLeaderRunnable.run(ApplicationMaster.java:825)
> at java.lang.Thread.run(Thread.java:736)
>
>
> Thanks,
> Kishore
0.2.
>
> How can I configure Eclipse to run Map-Reduce jobs?
>
> Please suggest it step by step.
>
> Thanks & regards
> Yogesh Kumar
de.NameNode.loadNamesystem(NameNode.java:398)
>>
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:432)
>>
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>>
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
: mode '+t' does not match the expected pattern.
>
> This disagrees with the docs here:
> https://ccp.cloudera.com/display/CDH4DOC/Deploying+MapReduce+v1+%28MRv1%29+on+a+Cluster#DeployingMapReducev1%28MRv1%29onaCluster-Step7
>
> Has anyone else encountered this? Let me kn
f jobs ?
>
> Regards,
> Marco
>
>
>
>
> --
> Have a Nice Day!
> Lohit
est 4 GB
> of heap space.
>
> Is it possible to limit the number of tasks (mappers) per computer to 1 or 2
> for these kinds of jobs?
>
> Regards,
> Marco
>
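With MR1 there is no per-job cap on slots, but the CapacityScheduler's high-RAM-job support gets the same effect: request more memory per task than one slot provides and each task occupies several slots. A hedged sketch, assuming slots sized at 2 GB via mapred.cluster.map.memory.mb:

    import org.apache.hadoop.mapred.JobConf;

    JobConf conf = new JobConf();
    // each map asks for 4 GB; with 2 GB slots it occupies two slots,
    // roughly halving the concurrent maps per node for this job
    conf.setLong("mapred.job.map.memory.mb", 4096);
    conf.setLong("mapred.job.reduce.memory.mb", 4096);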
he problems, you can resume the build with the
> command
> [ERROR] mvn -rf :hadoop-common
>
>
> Thanks
> Yu
>
>
On Sep 20, 2012, at 11:26 AM, Andy Isaacson wrote:
> If, OTOH, someone is experiencing a problem that is probably down to
> core Hadoop and just happens to be using a particular distro, I think
> user@hadoop is a perfectly reasonable place to discuss that, and
> there's no particular need to hide
/api/records/ApplicationSubmissionContext.html#setPriority(org.apache.hadoop.yarn.api.records.Priority).
> Is this what you're asking about?
>
> On Wed, Sep 19, 2012 at 4:32 PM, 娄东风 wrote:
>> Hi,list
>> Does hadoop2.0 support application priority when scheduling?
>>
Please don't cross-post. Please ask Cloudera lists.
Arun
On Aug 30, 2012, at 3:45 AM, Sujit Dhamale wrote:
> Hi ,
> I have the Ubuntu operating system on my computer, and Apache Hadoop 1.0.2 is
> installed.
>
> Now I need to install Cloudera's Hadoop version, which has YARN.
>
> Can you please
e.org/common/releases.html#23+May%2C+2012%3A+Release+2.0.0-alpha+available
>
> Are there limitations on cdh packages?
>
>
> Any advice will be appreciated!
>
>
> Thanks & Best Regards
> Jing Wang
>
-2.0.1-alpha]# hadoop dfsadmin -report
>> DEPRECATED: Use of this script to execute hdfs command is deprecated.
>> Instead use the hdfs command for it.
>>
>> report: FileSystem file:/// is not a distributed file system
>> Usage: java DFSAdmin [-report]
>>
>> Thanks
>> Chandra
>
>
>
>
r these and a number of other mapreduce.* (e.g.,
>>mapreduce.job.reduces) properties are observed by the MR2
>>ApplicationMaster (and how), or not.
>>
>>
>>Can anyone clarify or point to respective documentation?
>>
>>Thanks,
>>Martin
>>
>>
>>
>>
>> --
>> Thanks & Regards,
>> Anil Gupta
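As far as I know, mapreduce.job.reduces is written into the submitted job configuration by the client and read from there by the MR2 ApplicationMaster; it is the property behind the typed setter on the job object, so either form below works (the count is an example):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    Configuration conf = new Configuration();
    conf.setInt("mapreduce.job.reduces", 10);   // property form

    Job job = Job.getInstance(conf);
    job.setNumReduceTasks(10);                  // equivalent typed API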
for
> such integration.
>
> Thanks,
> --
> Dhruv
>
nal hard drives. I
> understand HDFS has a configurable redundancy feature but what happens if an
> entire drive crashes (physically) for whatever reason? How does Hadoop
> recover, if it can, from this situation? What else should I know before
> setting up my cluster this way? Thanks
Apologies (again) for the cross-post, I've filed
https://issues.apache.org/jira/browse/INFRA-5123 to close down (common, hdfs,
mapreduce)-user@ since user@ is functional now.
thanks,
Arun
On Aug 4, 2012, at 9:59 PM, Arun C Murthy wrote:
> All,
>
> Given our recent di
All,
Given our recent discussion (http://s.apache.org/hv), the new
user@hadoop.apache.org mailing list has been created and all existing users in
(common,hdfs,mapreduce)-user@ have been migrated over.
I'm in the process of changing the website to reflect this (HADOOP-8652).
Henceforth, ple