Hi,
I checked out the Hadoop Common 2.2.0 project from SVN and built it using
Maven. Now I can't find any conf folder or configuration files to set before
starting Hadoop and using its HDFS. How can I do so?
--
Best Regards,
Karim Ahmed Awara
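For context, in the 2.2.0 layout the old top-level conf/ directory was replaced by etc/hadoop/ inside the built distribution (e.g. under hadoop-dist/target/hadoop-2.2.0/ after a dist build). A minimal core-site.xml there might look like the sketch below; the NameNode host and port are placeholders, not values from this thread:

```xml
<!-- etc/hadoop/core-site.xml: minimal client configuration pointing at HDFS.
     hdfs://localhost:9000 is a placeholder for your NameNode address. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```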
I am emitting an 'A' value and a 'B' value from the reducer.
I need to do further calculations as well.
Which is the better way?
1. Do all remaining computations within the reducer, after emitting,
or
2. Do the remaining computation in the driver: read the A and B values from the
part file and do further computations.
Please suggest an approach.
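Option 2 can be sketched roughly as follows. This is a minimal, self-contained illustration, not code from the thread: it assumes the reducer's output uses TextOutputFormat's default "key TAB value" lines, and the class and method names are made up for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical driver-side helper: parse reducer output lines of the form
// "KEY\tVALUE" (TextOutputFormat's default separator) into a map, so the
// driver can do the remaining computation after the job finishes.
public class DriverPostProcess {
    public static Map<String, Double> parsePartLines(String[] lines) {
        Map<String, Double> values = new HashMap<>();
        for (String line : lines) {
            String[] kv = line.split("\t", 2);
            if (kv.length == 2) {
                values.put(kv[0], Double.parseDouble(kv[1]));
            }
        }
        return values;
    }

    public static void main(String[] args) {
        // In a real driver these lines would be read from the part-r-* files
        // on HDFS via FileSystem.open(); hard-coded here to stay runnable.
        String[] partFileLines = { "A\t3.5", "B\t2.0" };
        Map<String, Double> v = parsePartLines(partFileLines);
        // Further computation in the driver, e.g. a ratio of the two values:
        System.out.println(v.get("A") / v.get("B"));
    }
}
```

Whether this beats doing the work in the reducer depends mainly on how much data survives the reduce phase, as discussed below.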
Hi,
I have solved the problem! Have downloaded the source, compiled it on the
machine and now can successfully link to the library.
Thank you everyone for you quick responses.
Regards..
Salman.
Salman Toor, PhD
salman.t...@it.uu.se
On Nov 4, 2013, at 3:21 PM, Andre Kelpe wrote:
No.
If you have multiple reducers, you are doing it in parallel, while in the
driver it is surely single-threaded, so my bet would be on the reducers.
Chris
On 11/5/2013 6:15 AM, unmesha sreeveni wrote:
I am emitting an 'A' value and a 'B' value from the reducer.
I need to do further calculations as well.
Which is the better way?
Hi,
Can anyone kindly assist with this?
Regards,
Indrashish
On Mon, 04 Nov 2013 10:23:23 -0500, Basu,Indrashish wrote:
Hi All,
Any update on the post below?
I came across an old post regarding the same issue. It describes the
solution as: "The *nopipe* example needs more documentation."
Hi,
I have a cluster of 7 nodes. Every node has 2 map slots and 1 reduce slot.
Is it possible to force the JobTracker to execute only 2 map tasks or 1
reduce task at a time? I have found this configuration option:
mapred.reduce.slowstart.completed.maps. I think this will do exactly what
I want if I
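As an aside, mapred.reduce.slowstart.completed.maps only delays when reducers launch; the usual knobs for concurrency are the per-TaskTracker slot limits, which cap tasks per node rather than cluster-wide. A sketch for mapred-site.xml on each node, assuming classic MRv1 (the values mirror the question):

```xml
<!-- mapred-site.xml on each node: limit concurrent tasks per TaskTracker.
     These are MRv1 properties; they cap per-node, not cluster-wide, slots. -->
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
</configuration>
```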
Why do you want to do this?
+Vinod
On Nov 5, 2013, at 9:17 AM, John wrote:
Is it possible to force the JobTracker to execute only 2 map tasks or 1 reduce
task at a time?
It seems like your pipes mapper is exiting before consuming all the input. Did
you check the task-logs on the web UI?
Thanks,
+Vinod
On Nov 5, 2013, at 7:25 AM, Basu,Indrashish wrote:
Hi,
Can anyone kindly assist with this?
Regards,
Indrashish
On Mon, 04 Nov 2013 10:23:23 -0500,
I noticed that HDP 2.0 is available for download here:
http://hortonworks.com/products/hdp-2/?b=1#install
Is this the final GA version that tracks Apache Hadoop 2.2?
Sorry, I am just a little confused by the different numbering schemes.
Thanks
John
HDP 2.0.6 is the GA version that matches Apache Hadoop 2.2.
From: John Lilley john.lil...@redpoint.net
Sent: Tuesday, November 05, 2013 12:34 PM
To: user@hadoop.apache.org
Subject: HDP 2.0 GA?
I noticed that HDP 2.0 is available for download here:
Because my node swaps memory if the 2 map slots + 1 reduce slot are occupied
by my job. Sure, I can minimize the max memory for the map/reduce processes.
I tried this already, but I got an out-of-memory exception if I set the max
heap size for the map/reduce processes too low for my MR job.
Kind regards
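For reference, the per-task heap described above is usually set through mapred.child.java.opts in mapred-site.xml (an MRv1 property). A sketch; the 512 MB value is only an example to tune against the node's physical memory:

```xml
<!-- mapred-site.xml: JVM options for each spawned map/reduce task.
     -Xmx512m is an example value; too low triggers OutOfMemoryError,
     too high causes the swapping described above. -->
<configuration>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>
</configuration>
```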
Please send questions related to a vendor-specific distro to the vendor's
mailing list. In this case: http://hortonworks.com/community/forums/.
On Tue, Nov 5, 2013 at 10:49 AM, Jim Falgout jim.falg...@actian.com wrote:
HDP 2.0.6 is the GA version that matches Apache Hadoop 2.2.
Hi all,
I am reading the source code for DataNode startup. When the DataNode
starts, it starts a streaming server and an info server. I do not know the
difference between the two servers.
I am dealing with multiple mappers and 1 reducer, so which is best?
On Tue, Nov 5, 2013 at 6:28 PM, Chris Mawata chris.maw...@gmail.com wrote:
If you have multiple reducers you are doing it in parallel while in the
driver it is surely single threaded so my bet would be on the reducers.
Hi all,
How do I write MapReduce code to read tag info from XML files?
Thanks,
Mallik.
Hi Mallik Arjun,
You can use the XmlInputFormat class, which is provided by Apache Mahout and
not by Hadoop itself.
Here is the link for the code.
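The core idea behind Mahout's XmlInputFormat is to split input into records by scanning for a configured start tag and end tag. A minimal, self-contained sketch of that record-extraction step follows; the class name, tag names, and input document are illustrative, not taken from Mahout's actual source:

```java
// Minimal sketch of the record-splitting idea behind XmlInputFormat:
// extract everything between a start tag and its matching end tag,
// including the tags themselves, as one record.
public class XmlTagExtract {
    public static String extractBetween(String xml, String startTag, String endTag) {
        int start = xml.indexOf(startTag);
        int end = xml.indexOf(endTag, start);
        if (start < 0 || end < 0) return null;  // no complete record found
        return xml.substring(start, end + endTag.length());
    }

    public static void main(String[] args) {
        String doc = "<docs><doc><id>1</id></doc><doc><id>2</id></doc></docs>";
        // Each <doc>...</doc> span would become one mapper input record.
        System.out.println(extractBetween(doc, "<doc>", "</doc>"));
    }
}
```

In the real input format this scan runs over the raw byte stream of each split, and the mapper then parses the extracted fragment with an XML parser.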