Re: Unsubscribe

2015-12-02 Thread Rich Haase
Sath, Please read the linked page for instructions to properly unsubscribe from the list. https://hadoop.apache.org/mailing_lists.html Sent from my iPhone On Dec 1, 2015, at 7:48 PM, Sath wrote: Namikaze It's not professional to use

Re: 2 items - RE: Unsubscribe & A better ListSrv for beginners.

2015-12-02 Thread Rich Haase
Here’s a link to the most recent edition: http://www.amazon.com/Hadoop-Definitive-Guide-Tom-White/dp/1491901632/ref=dp_ob_title_bk I didn’t realize the 4th edition was out. :) On Dec 2, 2015, at 1:13 PM, Rich Haase <rha...@pandora.com> wrote: Hi Ca

Re: unsubscribe

2015-10-13 Thread Rich Haase
Please see https://hadoop.apache.org/mailing_lists.html for unsubscribe instructions. On Oct 13, 2015, at 3:55 AM, shanthi k wrote: Who is this? On 13 Oct 2015 15:18, "MANISH SINGLA"

Re: println in MapReduce job

2015-09-24 Thread Rich Haase
To unsubscribe from this list send an email to user-unsubscribe@hadoop.apache.org. https://hadoop.apache.org/mailing_lists.html On Sep 24, 2015, at 9:40 AM, sukesh kumar wrote: unsubscribe On Thu, Sep

Re: Move blocks between Nodes

2015-07-01 Thread Rich Haase
Rich Haase | Sr. Software Engineer | Pandora m (303) 887-1146 | rha...@pandora.com

Re: Jr. to Mid Level Big Data jobs in Bay Area

2015-05-18 Thread Rich Haase
just looking for an opportunity where I can work all day or most of the day with big data technologies, and contribute to and learn from the project at hand. Thanks if anyone can share any information, Adam Rich Haase | Sr. Software Engineer | Pandora m (303) 887-1146 | rha

Re:

2015-05-07 Thread Rich Haase
I’m not a Sqoop user, but it looks like you have an error in your SQL. - Caused by: java.sql.SQLSyntaxErrorException: ORA-00907: missing right parenthesis On May 7, 2015, at 11:34 PM, Kumar Jayapal kjayapa...@gmail.com wrote: ORA-00907 Rich Haase | Sr. Software

Re:

2015-05-07 Thread Rich Haase
If Sqoop is generating the SQL for your import then you may have hit a bug in the way the SQL for Oracle is being generated. I’d recommend emailing the Sqoop user mailing list: user@sqoop.apache.org. On May 7, 2015, at 11:45 PM, Rich Haase rha
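
For reference, ORA-00907 almost always means an unbalanced parenthesis in the SQL Oracle actually receives. A minimal sketch of a free-form Sqoop import with a fully parenthesized query (connection string, credentials and paths are made up for illustration):

$ sqoop import \
    --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
    --username scott -P \
    --query 'SELECT id, name FROM emp WHERE $CONDITIONS' \
    -m 1 \
    --target-dir /user/example/emp

If a query this simple still triggers ORA-00907, that points at the generated SQL rather than your own query text.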

Re: Cloudera monitoring Services not starting

2015-03-05 Thread Rich Haase
Please ask Cloudera-related questions on Cloudera’s forums. community.cloudera.com On Mar 5, 2015, at 11:56 AM, Krish Donald gotomyp...@gmail.com wrote: Hi, I have set up a 4 node cluster, 1 namenode and 3 datanodes, using Cloudera Manager 5.2. But it is not starting Cloudera Monitoring

Re: Cloudera Manager Installation is failing

2015-03-02 Thread Rich Haase
Try posting this question on the Cloudera forum. http://community.cloudera.com/ On Mar 2, 2015, at 3:21 PM, Krish Donald gotomyp...@gmail.com wrote: Hi, I am trying to install Cloudera Manager but it is failing and below is the log file: I have uninstalled postgres

Re: Enable symlinks

2015-01-22 Thread Rich Haase
Hadoop does not currently support symlinks. Hence the “Symlinks not supported” exception message. You can follow progress on making symlinks production-ready via this JIRA: https://issues.apache.org/jira/browse/HADOOP-10019 Cheers, Rich From: Tang

Re: Copying files to hadoop.

2014-12-17 Thread Rich Haase
sharing a file system between the VM and host. This is probably the easiest solution since it will allow you to see the files you have on OS X in your Linux VM, and then you can use the hdfs/hadoop/yarn commands on Linux (which you already have configured). Cheers, Rich Rich Haase | Sr
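
A rough sketch of that workflow, assuming the shared folder appears in the VM at /mnt/shared (the mount point is hypothetical and varies by VM software):

$ ls /mnt/shared                          # files created on OS X, visible in the VM
$ hadoop fs -mkdir -p /user/anil/data
$ hadoop fs -put /mnt/shared/sample.csv /user/anil/data/
$ hadoop fs -ls /user/anil/data           # confirm the copy landed in HDFS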

Re: Copying files to hadoop.

2014-12-17 Thread Rich Haase
Anil, Happy to help! Cheers, Rich Rich Haase | Sr. Software Engineer | Pandora m 303.887.1146 | rha...@pandora.com From: Anil Jagtap anil.jag...@gmail.com Reply-To: user@hadoop.apache.org

Re: What happens to data nodes when name node has failed for long time?

2014-12-12 Thread Rich Haase
The remaining cluster services will continue to run. That way, when the namenode (or other failed process) is restored, the cluster will resume healthy operation. This is part of Hadoop’s ability to handle network partition events. Rich Haase | Sr. Software Engineer | Pandora m 303.887.1146

Re: to all this unsubscribe sender

2014-12-05 Thread Rich Haase
+1 Sent from my iPhone On Dec 5, 2014, at 08:29, Ted Yu yuzhih...@gmail.com wrote: +1 On Fri, Dec 5, 2014 at 7:22 AM, Aleks Laz al-userhad...@none.at wrote: +1 On 05-12-2014 16:12, mark charts wrote: I concur. Good idea. On Friday,

Re: UNSUBSCRIBE

2014-12-02 Thread Rich Haase
Email user-unsubscribe@hadoop.apache.org to unsubscribe. Rich Haase | Sr. Software Engineer | Pandora m 303.887.1146 | rha...@pandora.com From: Naveen teja J N V naveen.teja.j@gmail.com Reply-To: user@hadoop.apache.org

Re: Regular expressions in fs paths?

2014-09-10 Thread Rich Haase
HDFS doesn't support the full range of glob matching you will find in Linux. If you want to exclude all files from a directory listing that meet a certain criterion, try doing your listing and using grep -v to exclude the matching records.
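
For example (path and pattern are made up), to drop .tmp files from a listing:

$ hdfs dfs -ls /data/logs | grep -v '\.tmp$'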

Re: Hadoop Smoke Test: TERASORT

2014-09-10 Thread Rich Haase
You can set the number of reducers used in any hadoop job from the command line by using -Dmapred.reduce.tasks=XX. e.g. hadoop jar hadoop-mapreduce-examples.jar terasort -Dmapred.reduce.tasks=10 /terasort-input /terasort-output
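
A slightly fuller sketch, assuming the examples jar is in the current directory (its actual location varies by distribution; on Hadoop 2 the same setting also goes by mapreduce.job.reduces):

$ hadoop jar hadoop-mapreduce-examples.jar teragen 1000000 /terasort-input    # generate sample rows
$ hadoop jar hadoop-mapreduce-examples.jar terasort -Dmapred.reduce.tasks=10 /terasort-input /terasort-output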

Re: Writing output from streaming task without dealing with key/value

2014-09-10 Thread Rich Haase
In Python, or any streaming program, just set the output value to the empty string and you will get something like key\t. On Wed, Sep 10, 2014 at 12:03 PM, Susheel Kumar Gadalay skgada...@gmail.com wrote: If you don't want key in the final output, you can set like this in Java.

Re: Writing output from streaming task without dealing with key/value

2014-09-10 Thread Rich Haase
character if you don't want it during the post-processing steps you want to perform with *nix commands. On Wed, Sep 10, 2014 at 12:12 PM, Dmitry Sivachenko trtrmi...@gmail.com wrote: On Sept 10, 2014, at 22:05, Rich Haase rdha...@gmail.com wrote: In python, or any streaming program just set
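
A sketch of both points as a map-only streaming job; the jar location and paths are assumptions, the mapper keeps the key and emits an empty value, and GNU sed strips the trailing tab afterwards:

$ hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -input /in -output /out \
    -mapper "sed 's/\t.*/\t/'" \
    -reducer NONE
$ hdfs dfs -cat /out/part-* | sed 's/\t$//'    # post-process: remove the trailing tab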

Re: Map job not finishing

2014-09-05 Thread Rich Haase
How many tasktrackers do you have set up for your single-node cluster? Oozie runs each action as a Java program on an arbitrary cluster node, so running a workflow requires a minimum of two tasktrackers. On Fri, Sep 5, 2014 at 7:33 AM, Charles Robertson charles.robert...@gmail.com wrote: Hi

Re: Datanode can not start with error Error creating plugin: org.apache.hadoop.metrics2.sink.FileSink

2014-09-04 Thread Rich Haase
The reason you can't launch your datanode is: 2014-09-04 10:20:01,677 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.net.BindException: Port in use: 0.0.0.0:50075 It appears that you already have a datanode instance listening on port
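
To confirm what is holding the port before restarting (assumes standard Linux tooling and that Hadoop's sbin scripts are on your PATH):

$ jps                            # is a DataNode JVM already running?
$ netstat -tlnp | grep 50075     # or: lsof -i :50075
$ hadoop-daemon.sh stop datanode   # stop the stale instance, then start it again
$ hadoop-daemon.sh start datanode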

Re: No such file or directory

2014-07-29 Thread Rich Haase
Try the same commands, but set the config path. e.g. $ hadoop --config /path/to/hdfs/config/dir dfs ... On Tue, Jul 29, 2014 at 12:16 PM, Bhupathi, Ramakrishna rama.kr...@hp.com wrote: Folks, Can you help me with this? I am not sure why I am getting this “No such file or directory”
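
For instance, if the client configs live in /etc/hadoop/conf (an assumed location; substitute your own directory):

$ hadoop --config /etc/hadoop/conf fs -ls /user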

Re: MR JOB

2014-07-18 Thread Rich Haase
File copy operations do not run as MapReduce jobs. All hadoop fs commands run as operations against HDFS and do not use MapReduce. On Fri, Jul 18, 2014 at 10:49 AM, Ashish Dobhal dobhalashish...@gmail.com wrote: Do the normal operations of hadoop such as uploading and downloading a

Re: MR JOB

2014-07-18 Thread Rich Haase
HDFS handles the splitting of files into multiple blocks. It's a file system operation that is transparent to the user. On Fri, Jul 18, 2014 at 11:07 AM, Ashish Dobhal dobhalashish...@gmail.com wrote: Rich Haase Thanks, But if the copy ops do not occur as an MR job then how does the splitting
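
You can watch this happen after a plain copy (file name is hypothetical):

$ hadoop fs -put bigfile.dat /user/ashish/bigfile.dat
$ hdfs fsck /user/ashish/bigfile.dat -files -blocks    # lists each block HDFS created for the file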

Re: unsubscribe

2014-07-18 Thread Rich Haase
To unsubscribe from this list send an email to user-unsubscribe@hadoop.apache.org On Fri, Jul 18, 2014 at 10:54 AM, Zilong Tan z...@rocketfuelinc.com wrote: -- *Kernighan's Law* Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as

Upgrading from 1.1.2 to 2.2.0

2014-07-17 Thread Rich Haase
Has anyone upgraded directly from 1.1.2 to 2.2.0? If so, is there anything I should be concerned about? Thanks, Rich -- *Kernighan's Law* Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not

Re: ListWritable In Hadoop

2014-07-10 Thread Rich Haase
No, but Hadoop 2.2 has ArrayWritable. https://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/io/ArrayWritable.html Could you provide more information about the use case you have in mind? There may be an alternative that will meet your needs. On Thu, Jul 10, 2014 at 12:31 AM, unmesha

MapReduce Streaming on Solaris

2014-06-25 Thread Rich Haase
Hi all, I have a 20 node cluster that is running on Solaris x86 (OpenIndiana). I'm not really familiar with OpenIndiana, having moved from Solaris to Linux many years ago, but it's the OS of choice for the systems administrator at my company. Each worker has 24 x 700 GB drives, 24 cores and 96 GB