How to add a *csv* file to HDFS using MapReduce code?
Using hadoop fs -copyFromLocal /local/path /hdfs/location I am able to do it,
but I would like to write MapReduce code instead.
--
*Thanks & Regards*
Unmesha Sreeveni U.B
Junior Developer
http://www.unmeshasreeveni.blogspot.in/
Read the file from the local file system and write it to a file in HDFS using
*FSDataOutputStream*:
FSDataOutputStream outStream = fs.create(new Path("demo.csv"));
outStream.writeUTF(data);
outStream.close();
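A fuller sketch of the same idea, using the FileSystem API's own copy helper instead of writing bytes by hand (the paths and class name here are placeholders, and this assumes core-site.xml is on the classpath so fs.defaultFS points at the cluster):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CsvToHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Resolves fs.defaultFS from the core-site.xml on the classpath
        FileSystem fs = FileSystem.get(conf);
        // Source and destination paths are placeholders -- adjust as needed
        fs.copyFromLocalFile(new Path("/local/path/demo.csv"),
                             new Path("/hdfs/location/demo.csv"));
        fs.close();
    }
}
```

Note this is a plain client program, not a MapReduce job; ingesting a single file does not need the MapReduce framework at all.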
On Tue, Jan 14, 2014 at 2:04 PM, unmesha sreeveni
Thank you sudhakar
On Tue, Jan 14, 2014 at 2:51 PM, sudhakara st sudhakara...@gmail.comwrote:
Hello Ashish
It seems the job is running in the local job runner (LocalJobRunner), reading the
local file system. Can you try giving the full URI of the input and
output paths?
like
$hadoop jar program.jar ProgramName -Dmapreduce.framework.name=yarn
file:///home/input/ file:///home/output/
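For a run against the cluster, the input and output would instead be fully qualified HDFS URIs; a sketch (namenode host and port are placeholders for your fs.defaultFS value):

```shell
# Placeholders: namenode:8020 should match fs.defaultFS in core-site.xml
hadoop jar program.jar ProgramName -Dmapreduce.framework.name=yarn \
    hdfs://namenode:8020/home/input/ hdfs://namenode:8020/home/output/
```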
Hello Pedro,
No, Hadoop doesn't have any command to list the amount of CPU and memory
used. You can use Linux commands such as 'top', 'iostat', or 'nmon', or a
monitoring tool like Nagios, to analyze the CPU, memory, and I/O of the Hadoop
processes at the time of your job submission.
On Thu, Jan 9, 2014 at 7:53 PM,
Any help would be appreciated!
2013/12/25 wzc wzc1...@gmail.com
Hi all,
To access a Kerberos-protected cluster, our Hadoop clients need to get a
Kerberos ticket (kinit user@realm) before submitting jobs. We want our
clients to be free of the Kerberos password prompt, so we would like to use keytabs
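The keytab-based login described above can be sketched with Hadoop's UserGroupInformation API (the principal and keytab path below are placeholders, and this assumes hadoop.security.authentication is set to kerberos in core-site.xml):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLogin {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Requires hadoop.security.authentication=kerberos in core-site.xml
        UserGroupInformation.setConfiguration(conf);
        // Principal and keytab path are placeholders -- adjust for your realm
        UserGroupInformation.loginUserFromKeytab(
            "user@EXAMPLE.REALM", "/etc/security/keytabs/user.keytab");
        // Subsequent FileSystem and job-submission calls run as this user,
        // with no interactive kinit needed
    }
}
```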
Hi,
I just want to throw out a discussion topic on federation.
Reading *The Definitive Guide* on HDFS, it sounds like when federating,
every distinct namespace needs a distinct namenode machine instance.
This means if a company wanted three namespaces, say retail, commercial, and
government, they
I tried to copy a 2.5 GB file to HDFS; it took 3-4 minutes.
Are we able to reduce that time?
On Tue, Jan 14, 2014 at 3:07 PM, unmesha sreeveni unmeshab...@gmail.comwrote:
Thank you sudhakar
On Tue, Jan 14, 2014 at 2:51 PM, sudhakara st sudhakara...@gmail.comwrote:
This is my understanding and I can be wrong: :)
You do not really need a different hardware instance unless each of your
namespaces is as busy as a single-namespace HDFS cluster.
You can set up multiple namenodes on a single machine with different configs
and different namenode directories and
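The multi-nameservice setup described above can be sketched in hdfs-site.xml (nameservice names, host, ports, and directories below are illustrative examples, not a tested configuration):

```xml
<!-- Sketch: two federated nameservices on one box; values are examples -->
<property>
  <name>dfs.nameservices</name>
  <value>retail,commercial</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.retail</name>
  <value>host1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.commercial</name>
  <value>host1:8021</value>
</property>
```

Each namenode process would then be started against its own nameservice id and its own dfs.namenode.name.dir.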
Hi,
I just wanted to share with you guys a post that I've just published about
ways of collecting (BIG) data. I believe this might be of interest to you.
http://goo.gl/Et2KRf
Enjoy,
--
*Moty Michaely*
VP R&D
Cell: +972 (52) 631-1019
Email: m...@xplenty.com
By the way, it seems that this ended up being a hard-coded environment variable
name, LOCAL_DIRS, instead of ApplicationConstants.LOCAL_DIR_ENV, which we can't
see defined anywhere.
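In practice that means container code reads the comma-separated list from the literal "LOCAL_DIRS" environment variable; a minimal sketch (the class and helper names are mine, not from any Hadoop API):

```java
// Minimal sketch: inside a YARN container, the local dirs arrive as a
// comma-separated list under the hard-coded env var name "LOCAL_DIRS".
public class LocalDirs {

    // Split the raw env value into individual directory paths;
    // an absent/empty variable yields an empty array
    public static String[] parseLocalDirs(String raw) {
        if (raw == null || raw.isEmpty()) {
            return new String[0];
        }
        return raw.split(",");
    }

    public static void main(String[] args) {
        String raw = System.getenv("LOCAL_DIRS"); // null outside a container
        for (String dir : parseLocalDirs(raw)) {
            System.out.println(dir);
        }
    }
}
```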
John
-Original Message-
From: Harsh J [mailto:ha...@cloudera.com]
Sent: Monday, October 21, 2013 12:11 PM
Hi,
We are evaluating CDH5 and YARN. Some of our Hadoop jobs relied on the value of
mapreduce.task.classpath.user.precedence being set to true. Now it seems it no longer
has any effect. The jars that should be loaded first are packaged into the lib
folder of the job jar.
Can you please suggest how to load
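For context, under MRv2 the analogous knob appears to be mapreduce.job.user.classpath.first, though that should be verified against the specific CDH5 release; a hedged sketch:

```shell
# Sketch: prefer the job's own jars on the task classpath under YARN.
# The property name should be verified against the Hadoop 2 / CDH5 docs;
# jar, class, and paths are placeholders.
hadoop jar myjob.jar MyMain -Dmapreduce.job.user.classpath.first=true \
    /input /output
```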
Dear All,
Thanks for the support.
Thanks Regards,
Aijas Mohammed
Please send email to user-unsubscr...@hadoop.apache.org
See http://hadoop.apache.org/mailing_lists.html#User
On Tue, Jan 14, 2014 at 6:42 PM, Aijas Mohammed
aijas.moham...@infotech-enterprises.com wrote: