--
Nan Zhu
On Sunday, January 5, 2014 at 11:10 AM, Prashant Sharma wrote:
Are you sure you did what is said here?
http://spark.incubator.apache.org/docs/latest/spark-standalone.html In case
yes, please give more details on which env variables you have set.
On Sun, Jan 5, 2014 at 7:15 PM, Nan Zhu
cannot access from Montreal, Canada
Best,
--
Nan Zhu
/.gitignore)
3. push files to git repo
4. pull files in the desktop
5. sbt/sbt assembly/assembly, failed with the same error as my last email
any further comments?
Best,
--
Nan Zhu
On Monday, December 23, 2013 at 12:22 PM, Patrick Wendell wrote:
Hey Nan,
You shouldn't copy lib_managed
….
Best,
--
Nan Zhu
On Monday, December 23, 2013 at 4:12 PM, Nan Zhu wrote:
Hi, Patrick
Thanks for the reply
I still failed to compile the code, even though I made the following attempts:
1. download spark-0.8.1.tgz,
2. decompress, and copy the files to the github local repo directory
Hi, Azuryy,
I’m working on macbook pro
so it is indeed “Users”
Best,
--
Nan Zhu
On Saturday, December 21, 2013 at 9:31 AM, Azuryy wrote:
Hi Nan
I think there is a typo here:
file:///Users/nanzhu/.m2/repository”),
It should be lowercase.
Sent from my iPhone
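(For reference, the resolver entry under discussion would look something like the following hypothetical SparkBuild.scala fragment; the repository name is a placeholder. On Mac OS X, user home directories really do live under /Users with a capital "U", so the capitalized path is correct there:)

```scala
// Hypothetical sbt resolver entry; path capitalization matters on
// case-sensitive filesystems, but /Users is correct on Mac OS X.
resolvers += "Local Maven Repository" at "file:///Users/nanzhu/.m2/repository"
```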
”)) doesn’t work….
b. in 4, the client.jar dependency cannot download core.jar automatically
(why?); I have to add an explicit dependency on core.jar
Best,
--
Nan Zhu
On Monday, December 16, 2013 at 2:41 PM, Gary Malouf wrote:
Check out the dependencies for the version of hadoop-client you
-core.jar,
but I didn’t find any line specifying hadoop-core-1.0.4.jar in pom.xml and
SparkBuild.scala,
Can you explain a bit to me?
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Monday, December 16, 2013 at 3:58 AM, Azuryy Yu wrote:
Hi Nan,
I am also using our
Hi, Gary,
The page says Spark uses hadoop-client.jar to interact with HDFS, but why does
it also download hadoop-core?
Do I just need to change the dependency on hadoop-client to my local repo?
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Monday, December 16, 2013
finally understand it
solved
--
Nan Zhu
School of Computer Science,
McGill University
On Sunday, December 15, 2013 at 1:43 AM, Nan Zhu wrote:
Hi, all
I’m trying to run Spark on EC2 and using S3 as the data storage service,
I set fs.default.name
daemons (? but what's that, the default port should be 7070?)
I didn’t step into the details of spark-ec2, just manually setup a cluster in
ec2, and directly pass s3n:// as the input and output path
everything works now
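For anyone setting this up by hand: rather than embedding the id/key in the URI, the s3n credentials usually go into core-site.xml. A hedged fragment (property names as in Hadoop 1.x; the values are placeholders):

```xml
<!-- core-site.xml: credentials for s3n:// paths (placeholder values) -->
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>YOUR_SECRET_ACCESS_KEY</value>
</property>
```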
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Sunday
lines about yarn jars in
ScalaBuild.scala,
Can you tell me which line I should modify to achieve the goal?
Best,
--
Nan Zhu
School of Computer Science,
McGill University
Hi, all
I’m trying to run Spark on EC2 and using S3 as the data storage service,
I set fs.default.name to s3://myaccessid:mysecreteid@bucketid, and I tried to
load a local file with textFile
I found that Spark still tries to find http://mymasterip:9000
I also tried to load a file stored in
] import org.apache.hadoop.hdfs.DFSClient;
What does this mean?
Does it mean that core is compiled before hdfs, so I cannot do this?
Thank you very much!
Best,
--
Nan Zhu
School of Computer Science,
McGill University
Solved by declaring an empty somemethod() in FSInputStream and overriding it in
DFSInputStream
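The fix described above is just the base-class-default-plus-override pattern: give the abstract base a no-op default method and override it in the concrete subclass. A minimal, hypothetical sketch (the class names only mirror the message, not the real Hadoop classes):

```java
// Hypothetical sketch: a no-op default on the base stream class,
// overridden in the subclass, so code holding only the base type
// can still reach subclass-specific behavior.
abstract class BaseInputStream {
    // empty default, mirroring the empty somemethod() mentioned above
    public long somemethod() {
        return -1; // "not supported" default
    }
}

class DerivedInputStream extends BaseInputStream {
    @Override
    public long somemethod() {
        return 42; // subclass-specific result
    }
}
```

Callers that only see the base type (as FSInputStream users do) then pick up the subclass behavior transparently.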
--
Nan Zhu
School of Computer Science,
McGill University
On Saturday, December 14, 2013 at 7:53 PM, Nan Zhu wrote:
Hi, all
I’m modifying FSDataInputStream for some project,
I would
Hi,
I'm not sure if it is the right place to talk about this, if not, I'm very
sorry about that
- 9-9:30am The State of Spark, and Where We’re Going
http://spark-summit.org/talk/zaharia-the-state-of-spark-and-where-were-going/
– pptx
OK, ignore it, I found that I did not set wildcard value correctly
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Monday, December 2, 2013 at 8:27 AM, Nan Zhu wrote:
Hi, all
I’m going to build a match object which only cares about nw_src and nw_dst
Hi, all
I saw there is a commit about using epoll, but how can I enable that through a
cmd argument?
Best,
--
Nan Zhu
School of Computer Science,
McGill University
Yes, just found that with grep
I’m testing it
Thank you so much
--
Nan Zhu
School of Computer Science,
McGill University
On Monday, December 2, 2013 at 5:12 PM, Murphy McCauley wrote:
STS
BTW, what do you mean by STS?
--
Nan Zhu
School of Computer Science,
McGill University
On Monday, December 2, 2013 at 5:27 PM, Nan Zhu wrote:
Yes, just found that with grep
I’m testing it
Thank you so much
--
Nan Zhu
School of Computer Science,
McGill University
per second….
I will look at this issue by using pox software switch
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Monday, December 2, 2013 at 5:18 PM, Murphy McCauley wrote:
http://ucb-sts.github.io/sts/
On Dec 2, 2013, at 2:28 PM, Nan Zhu zhunanmcg...@gmail.com
Hi, Murphy,
See my inlined answers
--
Nan Zhu
School of Computer Science,
McGill University
On Monday, December 2, 2013 at 11:03 PM, Murphy McCauley wrote:
On Dec 2, 2013, at 6:40 PM, Nan Zhu zhunanmcg...@gmail.com
(mailto:zhunanmcg...@gmail.com) wrote:
Hi, Murphy
disconnecting (or being disconnected due to idle)...
which might be obvious from reading the log.
At any rate, aborting the OpenFlow loop when out of file descriptors is
probably just not the right thing to do. I've pushed a fix to dart.
-- Murphy
On Nov 30, 2013, at 12:31 PM, Nan Zhu zhunanmcg
Thank you very much for replying and sorry for posting on the wrong list
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Thursday, October 24, 2013 at 1:06 AM, Jun Ping Du wrote:
Move to @user alias.
- Original Message -
From: Jun Ping Du j...@vmware.com
Great!!!
On Wed, Oct 23, 2013 at 9:21 PM, Matei Zaharia matei.zaha...@gmail.com wrote:
Yes, take a look at
http://spark.incubator.apache.org/docs/latest/ec2-scripts.html#accessing-data-in-s3
Matei
On Oct 23, 2013, at 6:17 PM, Nan Zhu zhunanmcg...@gmail.com wrote:
Hi, all
Is there any
on there,
what will happen in Spark? Will it recover those tasks with something like
speculative execution, or will the job unfortunately fail?
Best,
--
Nan Zhu
School of Computer Science,
McGill University
running on them, can I rely on
the speculative execution to re-run them on other nodes?)
I cannot use EMR, since I’m running a customized version of Hadoop
Best,
--
Nan Zhu
School of Computer Science,
McGill University
Hi, all
I would like to know if YARN supports managing bandwidth resources for
containers, like a minimum bandwidth guarantee?
Maybe not in trunk, but is there any under-review patch?
I searched but returned nothing,
Can anyone give some hints?
Best,
Nan
implement this as a NOX/floodlight app?
--
Nan Zhu
School of Computer Science,
McGill University
On Tuesday, 23 July, 2013 at 10:27 PM, Karthik Sharma wrote:
Well I would like to see that programmatically. For example I would like to
trap the control channel and do either one
is not swift:swift?
Best,
--
Nan Zhu
School of Computer Science,
McGill University
NS3 provides an openflow module based on the reference implementation (?), but
I never used it… slow packet-level simulation is one of the reasons
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Monday, 11 March, 2013 at 11:27 PM, viral parmar wrote:
Hello,
I have been
/nsdi10/tech/full_papers/al-fares.pdf)
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Monday, 11 March, 2013 at 11:39 PM, viral parmar wrote:
Thanks Nan,
I am checking the same; let me know when you come across another
tool for the same purpose...
Date: Mon
http://onrc.stanford.edu/research_openradio.html
it might be useful for you
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Tuesday, 5 March, 2013 at 11:17 PM, Mohammed Balfaqih wrote:
Dear all,
I am trying to implement openflow in a wireless network. What
I'm also maintaining an experimental Hadoop cluster, and I need to modify the
Hadoop source code and test it,
so I just use NFS to deploy the latest version of the code; no problem found yet
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Monday, 18 February, 2013 at 1:09 PM
I think setting tasktracker.reduce.tasks.maximum to 1 may meet your requirement
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Friday, 8 February, 2013 at 10:54 PM, David Parks wrote:
I have a cluster of boxes with 3 reducers per node. I want to limit a
particular
to do
that, you can modify mapred-site.xml to change it from 3 to 1
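The knob being discussed is, I believe, the per-tasktracker reduce-slot limit; a hedged mapred-site.xml fragment (Hadoop 1.x property name):

```xml
<!-- mapred-site.xml: limit each tasktracker to a single reduce slot -->
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>1</value>
</property>
```

Note this caps concurrent reduce tasks per node, not the number of reducers a job declares.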
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Friday, 8 February, 2013 at 11:24 PM, David Parks wrote:
Hmm, odd, I’m using AWS Mapreduce, and this property is already set to 1 on
my cluster by default
this….
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Friday, 8 February, 2013 at 11:46 PM, David Parks wrote:
Looking at the Job File for my job I see that this property is set to 1,
however I have 3 reducers per node (I’m not clear what configuration is
causing
it correctly
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Wednesday, 30 January, 2013 at 11:39 AM, Harsh J wrote:
Yes, if there are missing blocks (i.e. all replicas lost), and the
block availability threshold is set to its default of 0.999f (99.9%
availability required
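That 0.999f threshold is configurable; a hedged hdfs-site.xml fragment (Hadoop 1.x property name):

```xml
<!-- hdfs-site.xml: fraction of blocks that must meet minimal replication
     before the namenode leaves safemode (0.999f is the default) -->
<property>
  <name>dfs.safemode.threshold.pct</name>
  <value>0.999f</value>
</property>
```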
Hi, all
I'm wondering: if HDFS is stopped and some of the machines of the cluster are
moved, some of the block replicas are definitely lost with the moved machines.
When I restart the system, will the namenode recalculate the data distribution?
Best,
--
Nan Zhu
School of Computer Science
So, we can assume that all blocks are fully replicated at the start point of
HDFS?
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Tuesday, 29 January, 2013 at 10:50 PM, Chen He wrote:
Hi Nan
Namenode will stay in safemode until all blocks are replicated. During
have you enabled task preemption?
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Wednesday, 16 January, 2013 at 10:45 AM, Justin Workman wrote:
Looks like weight for both pools is equal and all map slots are used.
Therefore I don't believe anyone has priority
I think you should do that, so that when the allocation is inconsistent with the
fair share, tasks in the queue that occupies more than its fair share
will be killed, and the available slots will be assigned to the other one
(assuming their weights are the same)
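The killing behavior described above is, as far as I know, fair-scheduler preemption; a hedged sketch of the Hadoop 1.x knobs (the timeout value is an example, not a recommendation):

```xml
<!-- mapred-site.xml: allow the fair scheduler to kill tasks of pools
     running over their share -->
<property>
  <name>mapred.fairscheduler.preemption</name>
  <value>true</value>
</property>

<!-- fair-scheduler allocation file: preempt if a pool has been below
     its fair share for this many seconds -->
<fairSharePreemptionTimeout>60</fairSharePreemptionTimeout>
```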
Best,
--
Nan Zhu
!
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Wednesday, 16 January, 2013 at 11:43 AM, Nan Zhu wrote:
I think you should do that, so that when the allocation is inconsistent with the
fair share, tasks in the queue that occupies more than its fair share
will be killed
when the task is failed,
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Monday, 12 November, 2012 at 6:14 PM, Samaneh Shokuhi wrote:
Hi,
I need to do some experiments on the hadoop source code.
I have modified part of it in the MapTask.java class and need to compare the
response
You set maxMaps to 200,
so the maximum running mappers should be no more than 200
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Thursday, 8 November, 2012 at 6:12 PM, Matt Goeke wrote:
Pretty straight forward question but can the fair share factor actually
impact
Hi, Harsh
thanks for your reply,
yes, I'm modifying the source code of MR,
I have dug it out, just as
jip.getMapCounters().getCounter(FileInputFormat.Counter.BYTES_READ);
Thx
--
Nan Zhu
School of Computer Science,
McGill University
On Tuesday, October 30, 2012 at 12:51 AM, Harsh J
Hi, all
When I tried to compile Hadoop 1.0.3, it tells me that
src/core/org/apache/hadoop/fs/kfs/KFSImpl.java:30: package
org.kosmix.kosmosfs.access does not exist
Can anyone tell me why this issue happens?
Best,
--
Nan Zhu
School of Computer Science,
McGill University
, :-(
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Tuesday, October 16, 2012 at 5:31 PM, Charles Woerner wrote:
I've seen this happen when the native kfs libs aren't in your java library
path. Add them to both LD_LIBRARY_PATH and -Djava.library.path
Sent from my
:474:
/Users/zhunan/codes/hadoop/src/test/lib does not exist.
the version on my side is 1.0.3
Can anyone help me get out of this issue?
Best,
--
Nan Zhu
School of Computer Science,
McGill University
I added that directory manually, but failed when compiling
compile-hdfs-classes:
build.xml:556: java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
Is the jetty library missing? How do I add it?
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Saturday, October 13, 2012 at 11:36 AM
I built hadoop.jar
--
Nan Zhu
School of Computer Science,
McGill University
On Saturday, October 13, 2012 at 12:15 PM, Chen He wrote:
Hi Nan
you can uncheck the test package and have a try.
Chen
On Sat, Oct 13, 2012 at 10:49 AM, Nan Zhu zhunans...@gmail.com
(mailto:zhunans...@gmail.com) wrote
I just checked the jar option
the only part related to the web interface that I modified is
fairschedulerservlet, but in the past days I modified it many times and no
problem happened, really weird...
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Saturday, October 13, 2012 at 12:17 PM
1.0.3 stable release
--
Nan Zhu
School of Computer Science,
McGill University
On Saturday, October 13, 2012 at 12:34 PM, Chen He wrote:
hmmm... Interesting. I used to modify the fair-sharing scheduler in hadoop
0.21, and there was no problem. Which version are you working on?
Chen
On Sat, Oct 13, 2012
and
generate the C++ code again,
how should I merge the new version of the generated code with my implemented
version? Do I need to copy the related parts manually (e.g. a new callee
declaration)?
Thank you!
Best,
--
Nan Zhu
School of Computer Science,
McGill University
as a header file, and then
implement the functions in a cpp?
Best
--
Nan Zhu
School of Computer Science,
McGill University
On Friday, September 28, 2012 at 1:13 PM, Ted Dunning wrote:
You should never modify the generated code by hand. Implement them in a
separate file.
On Fri, Sep 28, 2012 at 1:07 PM, Nan
Wow, thank you Patrick,
I finally know which file I should refer to
Best,
--
Nan Zhu
School of Computer Science,
McGill University
On Friday, September 28, 2012 at 1:38 PM, Patrick Scott wrote:
The c++ generator does generate interfaces. I think the file you are
referring
, Nan Zhu zhunans...@gmail.com wrote:
Hi, Chen,
Thank you for your reply,
but in its README there is no value larger than 100%, which means
that the size of the intermediate results will never be larger than the input
size;
that will not be the case, because the input data
+ jobs, including Pig workloads
Can anyone tell me what “keep 10% map, 40% reduce” means here?
Best,
--
Nan Zhu
School of Electronic, Information and Electrical Engineering,229
Shanghai Jiao Tong University
800,Dongchuan Road,Shanghai,China
E-Mail: zhunans...@gmail.com
: Many user workloads are implemented as pipelined
map/reduce
+ jobs, including Pig workloads
Can anyone tell me what “keep 10% map, 40% reduce” means here?
Best,
--
Nan Zhu
School of Electronic, Information and Electrical Engineering,229
Shanghai Jiao Tong
11:12 AM, Lingfeng Zhuang wrote:
Hi all,
Does thrift support sending files in messages? I want to share files between
two computers using thrift.
Regards,
zlf
--
Nan Zhu
School of Electronic, Information and Electrical Engineering,229
Shanghai Jiao Tong University
800,Dongchuan Road
I think you can send files via certain types,
however, I don't think thrift is optimized for large data transfer
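One way to move files over thrift is to chunk them using the binary type; a hypothetical IDL sketch (the struct, service, and field names are made up for illustration):

```thrift
// Hypothetical IDL: stream a file as named chunks using the binary type.
struct FileChunk {
  1: string name,   // file being transferred
  2: i64 offset,    // position of this chunk in the file
  3: binary data,   // raw bytes of the chunk
}

service FileTransfer {
  void sendChunk(1: FileChunk chunk),
}
```

Chunking keeps individual messages small, which matters since thrift buffers whole messages in memory.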
Best
On Fri, May 4, 2012 at 11:22 AM, Nan Zhu zhunans...@gmail.com wrote:
I think you can send files via certain types,
however, I don't think thrift is optimized for large
from: /usr/local/lib/libthrift-0.9.0-dev.dylib
can anyone give some instructions to solve them?
Thank you
--
Nan Zhu
School of Electronic, Information and Electrical Engineering,229
Shanghai Jiao Tong University
800,Dongchuan Road,Shanghai,China
E-Mail: zhunans...@gmail.com
?
If anyone knows it, please let me know.
Thanks Regards,
Mohmmadanis Moulavi
Student,
MTech (Computer Sci. Engg.)
Walchand college of Engg. Sangli (M.S.) India
--
Nan Zhu
School of Electronic, Information and Electrical Engineering,229
Shanghai Jiao Tong University
800,Dongchuan
commented out all the references to this package in the source files to
see what would happen, and I found another error:
in the class org.apache.hadoop.mapred.jobfailures_jsp, it cannot find the
symbol JobTracker
Any ideas to solve this situation?
Thx
--
Nan Zhu
School of Electronic, Information
are in the shuffle stage while there are
mappers which are still running,
So how can I tell which phase the job is in?
Thanks
--
Nan Zhu
School of Electronic, Information and Electrical Engineering,229
Shanghai Jiao Tong University
800,Dongchuan Road,Shanghai,China
E-Mail: zhunans
completion. There is a 'reduce slowstart' feature to control
this - by default, reduces aren't started until 5% of maps are complete.
Users can set this higher.
Arun
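The slowstart knob Arun mentions is a mapred-site.xml property; a hedged fragment (Hadoop 1.x name; the value here is raised from the 5% default as an example):

```xml
<!-- mapred-site.xml: fraction of maps that must complete before
     reduces are launched; the default is 0.05 (5%) -->
<property>
  <name>mapred.reduce.slowstart.completed.maps</name>
  <value>0.50</value>
</property>
```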
On Sep 18, 2011, at 7:24 PM, Nan Zhu wrote:
Hi, all
Recently I was hit by a question: how is a hadoop job divided into 2
2569
Email: gkous...@mail.ntua.gr
Site: http://users.ntua.gr/gkousiou/
National Technical University of Athens
9 Heroon Polytechniou str., 157 73 Zografou, Athens, Greece
--
Nan Zhu
School of Electronic, Information and Electrical Engineering,229
Shanghai Jiao Tong University
800,Dongchuan
and
the topic looks very interesting. Suggestions and presentations by guests are
welcome.
If you are interested to attend, please reply to this thread or contact me
directly.
Regards,
Michael
--
Nan Zhu
School of Electronic, Information and Electrical Engineering,229
Shanghai Jiao Tong University
centers?
Best,
--
Nan Zhu
School of Electronic, Information and Electrical Engineering,229
Shanghai Jiao Tong University
800,Dongchuan Road,Shanghai,China
E-Mail: zhunans...@gmail.com
and slaves ?
Mark
--
Nan Zhu
School of Electronic, Information and Electrical Engineering,229
Shanghai Jiao Tong University
800,Dongchuan Road,Shanghai,China
E-Mail: zhunans...@gmail.com
involve this issue, but just introduces data replicas when talking
about blocks in HDFS,
can anyone give me some instructions?
Thanks
Nan
--
Nan Zhu
School of Software,5501
Shanghai Jiao Tong University
800,Dongchuan Road,Shanghai,China
E-Mail: zhunans...@gmail.com
Yes, that's it
Thanks
Nan
On Mon, Apr 18, 2011 at 10:16 PM, Harsh J ha...@cloudera.com wrote:
Hello,
On Mon, Apr 18, 2011 at 7:16 PM, Nan Zhu zhunans...@gmail.com wrote:
Hi, all
I'm confused by a question: how does HDFS decide where to put the
data blocks?
I mean
zhutao
--
Nan Zhu
School of Software,5501
Shanghai Jiao Tong University
800,Dongchuan Road,Shanghai,China
E-Mail: zhunans...@gmail.com
I look at, and after the above mentioned
modification, how to collect the statistics?
Thanks,
Bikash
--
Nan Zhu
School of Software,5501
Shanghai Jiao Tong University
800,Dongchuan Road,Shanghai,China
E-Mail: zhunans...@gmail.com
Hi, Chen
How is it going recently?
Actually I think you misunderstand the code in assignTasks() in
JobQueueTaskScheduler.java, see the following structure of the interesting
codes:
//I'm sorry, I hacked the code so much, the name of the variables may be
different from the original version
for
obtainNewLocalMapTask(..) method
call.
Bests
Chen
On Mon, Jan 17, 2011 at 8:28 AM, Nan Zhu zhunans...@gmail.com wrote:
Hi, Chen
How is it going recently?
Actually I think you misunderstand the code in assignTasks() in
JobQueueTaskScheduler.java, see the following structure
local tasks it can afford in one obtainNewLocalMapTask(..) method
call.
Bests
Chen
On Mon, Jan 17, 2011 at 8:28 AM, Nan Zhu zhunans...@gmail.com wrote:
Hi, Chen
How is it going recently?
Actually I think you misunderstand the code in assignTasks
Hi, all
I have a question about the file transmission between the Map and Reduce stages:
in the current implementation, the Reducers get the results generated by Mappers
through HTTP GET. I don't understand why HTTP was selected; why not FTP, or a
self-developed protocol?
Just because HTTP is simple?
thanks
Why would you like to *simulate* network delay? I haven't got your point,
Bests,
Nan
On Sun, Dec 26, 2010 at 8:25 PM, yipeng yip...@gmail.com wrote:
Hi everyone,
I would like to simulate network delay on 1 node in my cluster, perhaps by
putting the thread to sleep every time it transfers
:56 PM, yipeng yip...@gmail.com wrote:
I'm trying to explore how Hadoop performs certain tasks (data deduplication
actually) under such conditions.
Cheers,
Yipeng
On Sun, Dec 26, 2010 at 8:31 PM, Nan Zhu zhunans...@gmail.com wrote:
Why would you like to *simulate* network delay? I
Hi, all
I'm attempting to invoke the gridmix2 benchmark on hadoop-0.20.2. I have
modified the environment variables in the scripts accordingly, but after I
typed sh rungridmix_2, the system gives an error like the following:
Exception in thread main java.lang.NoClassDefFoundError:
attachment missing?
Nan
On Thu, Nov 4, 2010 at 11:35 PM, Rafael Braga rafaeltelemat...@gmail.com wrote:
Hi everybody,
I follow the tutorial:
http://wiki.apache.org/hadoop/EclipseEnvironment and
saw the screencast: http://vimeo.com/4193623. The build.xml ran without
problems
but
:17, Nan Zhu wrote:
Hi, all
I'm trapped in a strange problem this evening.
I have been working on hadoop for several months, including modifying the
source code and re-compiling it, and I have never met any problem, but when I
re-compiled hadoop this evening, it showed that ivy cannot resolve
I see...
Thank you very much, Steve
Nan
On Wed, Nov 3, 2010 at 1:08 AM, Steve Loughran ste...@apache.org wrote:
On 02/11/10 14:47, Nan Zhu wrote:
Hi, Steve
I haven't added jms and jmxtools, but I added a jxl library to generate
some statistical results in excel format, but it seems
Hi, all
I'm trapped in a strange problem this evening.
I have been working on hadoop for several months, including modifying the
source code and re-compiling it, and I have never met any problem, but when I
re-compiled hadoop this evening, it showed that ivy cannot resolve the
dependencies, the
are behind a proxy, then it might create
these problems (as said by steve). You can try this ..
Thanks
Bharath.V
4th year undergraduate,
IIIT Hyderabad.
On Tue, Nov 2, 2010 at 8:47 AM, Nan Zhu zhunans...@gmail.com wrote:
Hi, all
I'm trapping in a strange problem in this evening
believe. Downgrade the version to 1.1.1 and
everything
should be fine.
On Tue, Nov 02, 2010 at 11:56AM, Nan Zhu wrote:
Hi, Bharath
Thank you for the reply
I removed the entire ivy cache and changed the library.properties file,
but the error is still there,
why I run it smoothly
, 2010 at 1:25 PM, Nan Zhu zhunans...@gmail.com wrote:
I checked the library.properties file, the version of my commons-logging is
1.1.1,
I'm still trapped at that point...
Nan
On Tue, Nov 2, 2010 at 1:16 PM, Konstantin Boudnik c...@apache.org wrote:
Your issue is likely to be caused
Hi, all
I've been working with Mumak recently, but I found that the code hasn't been
updated for a long while. I would like to know whether Mumak is still
active.
Thank you
Nan
Hi, Chen
I think it's due to the disk/network performance, I mean the speed of
reading the content on disk/network into the local memory
if job3 doesn't have complete data to start mappers, but job4 does, the
scheduler would select the tasks of job4 from the list to run first,
I think the so
Hi, all
I'm a student at the University of Nebraska, Lincoln. I'm interested in the
mumak project, and I truly need a simulator of hadoop for my work.
I have read the source code of hadoop-mapred as well as mumak itself,
and I find that in the slides at http://www.slideshare.net/hadoopusergroup/mumak,
Hi, all
I just want to run hadoop in pseudo-distributed mode on my macbook with
OS X 10.6.4.
I once downloaded hadoop-0.20.2 and could run it successfully, but for
the latest version, hadoop 0.21, although I can start all of hadoop's
daemons, when I submit a job there are some
Hi, all
I would like to use Mumak for testing my work,
but I can find only a few docs on the internet, and I don't even know how to
invoke the processes;
can anyone tell me where to find a tutorial for it?
Thank you
Nan
OK, solved it
Something may have happened when I checked out the source code, since I
hadn't got mumak's bin directory.
Thank you
Nan
On Tue, Oct 5, 2010 at 12:41 AM, Nan Zhu zhunans...@gmail.com wrote:
Hi, all
I would like to use Mumak for testing my work
but I can find only few
by sshing to every node, killing the current
hadoop processes, and restarting them. The previous problem will also be
solved (that's my opinion). But I really want to know why HDFS reports the
previous errors to me.
On Sat, Sep 25, 2010 at 11:20 PM, Nan Zhu zhunans...@gmail.com wrote:
Hi Chen
Hi, all
I'm not sure which mailing list I should send my question to; sorry for any
inconvenience.
I'm interested in how hadoop currently handles the loss of intermediate data
generated by map tasks. As some papers suggest, for the situation
where the data needed by reducers are
Hi Chen,
It seems that you have a bad datanode? Maybe you should reformat it?
Nan
On Sun, Sep 26, 2010 at 10:42 AM, He Chen airb...@gmail.com wrote:
Hello Neil
No matter how big the file is, it always reports this to me. The file size
is from 10KB to 100MB.
On Sat, Sep 25, 2010 at 6:08
Hi, Arv,
Actually, several days ago, I deployed a system which is similar to your
requirements.
In our cluster environment, since I have to run modified hadoop, we invoked
two namenodes, two jobtrackers, two tasktrackers on each node, and, as you
mentioned, two datanodes on a single host.
What you