Dear All,
Could someone please help me with this Sqoop issue? I was able to get the list
of tables, but when I try to import I am getting the error below.
risldev@rislhdpmaster001:~> sqoop import --connect
jdbc:oracle:thin:@10.68.231.64:1521/emitrapreprod --username BGEMITRA
--password bgemitra123# --table SR
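One way to confirm the JDBC connection independently of the import (a hedged suggestion, reusing the connect string above; the test query is hypothetical):
sqoop eval --connect jdbc:oracle:thin:@10.68.231.64:1521/emitrapreprod --username BGEMITRA --password bgemitra123# --query "SELECT 1 FROM dual"
If eval works but import fails, the problem is likely in the table mapping rather than connectivity.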
Hi,
I have three CentOS laptops at home and I have decided to build a Hadoop
cluster on them. I have a switch and a router to set up.
I am planning to set up a DNS server for a FQDN. How do I go about it? Can
anyone share their experiences?
Regards,
Raj
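A lighter-weight alternative for a three-node home cluster (a sketch; hostnames and addresses are hypothetical) is static name resolution via /etc/hosts on every node:
192.168.1.101   node1.hadoop.local   node1
192.168.1.102   node2.hadoop.local   node2
192.168.1.103   node3.hadoop.local   node3
A real DNS server (e.g. BIND or dnsmasq) only becomes necessary once the cluster grows beyond a handful of machines.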
We are facing the error mentioned below when storing a dataset using
HCatStorer. Can someone please help us?
STORE F INTO 'default.CONTENT_SVC_USED' using
org.apache.hive.hcatalog.pig.HCatStorer();
ERROR hive.log - Got exception: java.net.URISyntaxException Malformed
escape pair at index 9: thrift://%H
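The "Malformed escape pair" message points at a literal %H in the Thrift URI. One place to check (an assumption, not confirmed in the thread) is hive.metastore.uris in hive-site.xml, where an unsubstituted hostname placeholder would produce exactly this URI:
<property>
  <name>hive.metastore.uris</name>
  <!-- replace any %H placeholder with the real metastore host -->
  <value>thrift://metastore-host.example.com:9083</value>
</property>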
Hi,
I set up a four-node VM cluster for Hadoop 2.2.0 using CentOS. On one machine
(named Monkey), when I start the NodeManager I am getting the following
error -
org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
java.net.NoRouteToHostException: No Route to Host from monkey/192.168.
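"No Route to Host" usually means a firewall or routing problem rather than a Hadoop one. A quick check from the affected node (hostname hypothetical; 8031 as the default ResourceManager resource-tracker port in Hadoop 2 is an assumption here):
ping resourcemanager-host
telnet resourcemanager-host 8031
service iptables status    # on CentOS, iptables is a common culprit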
hi all,
I am trying to find documentation relevant to 'rhadoop' on CDH4. If anyone in
the group has experience with 'rhadoop', can you provide me some details, like
1) the installation procedure for rhadoop on CDH 4.4?
regards,
raj
Hi,
I have a CDH4 cluster. How can one perform Hadoop administration without root
access? Basically, an account like 'john1' on the cluster wants to have access
to hdfs, mapred, etc.
Should 'john1' be included in the 'sudoers' file?
What instructions should I ask System Admin team to have 'john1' acc
give some clue.
Thanks,
Szehon
On Thu, Mar 20, 2014 at 12:29 PM, Raj Hadoop wrote:
I am struggling with this one. Can anyone throw some pointers on how to
troubleshoot this issue, please?
On Thursday, March 20, 2014 3:09 PM, Raj Hadoop wrote:
Hello everyone,
The Hive Thrift service was started successfully.
netstat -nl | grep 1
tcp 0 0 0.0.0.0:1 0.0.0.0:* LISTEN
I am able to read tables from Hive through Tableau. When executing queries
through Tableau I am getting the followi
All,
I have a 3-node Hadoop cluster on CDH 4.4, and every few days, whenever I load
some data through Sqoop or query through Hive, I sometimes get the following
error -
Call From <> to <> failed on connection exception:
java.net.ConnectException: Connection refused
This has become so freque
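When "Connection refused" shows up intermittently, a first check (generic commands, not from the thread; 8020 as the usual NameNode IPC port on CDH is an assumption) is whether the daemons are actually up:
jps                          # should list NameNode, DataNode, etc.
telnet namenode-host 8020    # hostname hypothetical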
You should set the
> number of map slots per node:
>
> mapred.tasktracker.map.tasks.maximum=6
>
> Regards,
> Dieter
>
>
> 2014-02-24 11:08 GMT+01:00 Raj hadoop :
>
> Hi All
>>
>> In our Map reduce code, when we are giving more tha
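For reference, the property Dieter mentions belongs in mapred-site.xml on each TaskTracker node (MRv1; the value is the one suggested above):
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>6</value>
</property>
A TaskTracker restart is needed for the change to take effect.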
Hi,
My requirement is a typical data warehouse and ETL requirement. I need to
accomplish:
1) Daily insert of transaction records into a Hive table or an HDFS file. This
table or file is not big (approximately 10 records per day). I don't want to
partition the table/file.
I am reading a
Thanks
On Sunday, February 9, 2014 11:56 AM, Ted Yu wrote:
For Hive, you can use:
bin/hive --version
Cheers
On Sun, Feb 9, 2014 at 8:48 AM, Raj Hadoop wrote:
>Thanks Ted.
>
>Also - I am looking to find the Hive version
>
>On Sunday, Februa
All,
Is there any way from the command prompt I can find which hive version I am
using and Hadoop version too?
Thanks in advance.
Regards,
Raj
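Both versions can be checked from the shell (the Hive command is the one Ted suggests above; hadoop version is the standard counterpart):
hadoop version
bin/hive --version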
Hi Raj,
>
>The 2nd column of any "hadoop fs -ls" output, and the "%r" option of
>the "hadoop fs -stat" command, both reveal the replication factor
>of a given file.
>
>On Sat, Feb 8, 2014 at 11:03 PM, Raj Hadoop wrote:
Hi,
Is there a Hadoop command to determine the replication factor of an HDFS file?
Please advise.
I know that "fs setrep" only changes the replication factor.
Regards,
Raj
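Per Harsh's reply above, either command works (the path is hypothetical):
hadoop fs -ls /user/raj/data.txt          # 2nd column is the replication factor
hadoop fs -stat %r /user/raj/data.txt     # prints just the replication factor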
Hi Shalish -
This is really wonderful work. Let me go through it.
And one more thing - can we use this setup on a Mac computer? Is it OS
dependent? Please advise.
Thanks,
Raj
On Tuesday, February 4, 2014 3:47 PM, VJ Shalish wrote:
Hi All,
Based on my research and experience
will affect the load and query time.
4. Think about compression beforehand as well, as that will govern the data
split and the performance of your queries.
Regards,
Manish
Sent from my T-Mobile 4G LTE Device
Original message
From: Raj Hadoop
Date: 11/03/2013 7:
Hi,
I am sending this to the three dist-lists of Hadoop, Hive and Sqoop, as this
question is closely related to all three areas.
I have this requirement.
I have a big table in Oracle (about 60 million rows - primary key Customer Id).
I want to bring this into HDFS and then create a Hive exter
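For a table this size, a parallel Sqoop import split on the primary key is the usual approach. A sketch (host, credentials and paths are hypothetical; the flags are standard Sqoop options):
sqoop import --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username raj -P --table RAJ.CUSTOMERS \
  --split-by CUSTOMER_ID --num-mappers 8 \
  --target-dir /user/raj/customers
An external Hive table can then be pointed at the target directory.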
All,
I have a CentOS VM image and want to replicate it four times on my Mac
computer. How can I set it up so that I can have 4 individual machines that
can be used as nodes in my Hadoop cluster.
Please advise.
Thanks,
Raj
Is this an actual group in Linux at the OS level, or just Hadoop-specific?
From: kun yan
To: "user@hadoop.apache.org" ; Raj Hadoop
Sent: Wednesday, September 11, 2013 9:19 PM
Subject: Re: 'supergroup' in Hadoop
the user group
Hello All,
When we install Hadoop, when does the user group 'supergroup' get created?
What is its significance? Do we have any other groups apart from this group?
Thanks,
Raj
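For context (an addition, not from the thread): the name comes from the dfs.permissions.supergroup property in hdfs-site.xml; HDFS treats members of that group as superusers, and no OS-level group is created automatically - it matches whatever group of that name exists on the NameNode host.
<property>
  <name>dfs.permissions.supergroup</name>
  <value>supergroup</value>
</property>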
Hi,
I am trying to set up a multi-node Hadoop cluster. I am trying to understand
where Hadoop clients like Hive, Pig and Sqoop would be installed in the
cluster.
Say - I have three Linux machines-
Node 1- Master - (Name Node , Job Tracker and Secondary Name Node)
Node 2 - Slave (
Thanks Harsh. That is a very good explanation.
I am trying to understand how, in a production cluster, Hadoop users and
Hadoop clients would be set up.
What users should exist on the NN, JT and DNs?
Regards,
Rajendra
From: Harsh J
To: ""
Sent: Thursday, August 29, 201
Hello all,
I am getting an error while using sqoop export (loading an HDFS file into
Oracle). I am not sure whether the issue is a Sqoop or a Hadoop one, so I am
sending it to both dist lists.
I am using -
sqoop export --connect jdbc:oracle:thin:@//dbserv:9876/OKI --table
RAJ.CUSTOMERS -
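The command above is cut off; for reference, a typical complete invocation looks like this (directory and credentials hypothetical; --export-dir is the required flag naming the HDFS data to load):
sqoop export --connect jdbc:oracle:thin:@//dbserv:9876/OKI \
  --username raj -P --table RAJ.CUSTOMERS \
  --export-dir /user/raj/customers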
What about Amazon EC2 and Cloudera CDH4? You might want to research these a
bit more.
Regards,
Pavan
On Aug 15, 2013 9:26 PM, "Raj Hadoop" wrote:
Hello All,
We were planning to rent hardware for a Hadoop project in our company. Can
anyone suggest some good companies that offer this type of service? Also, any
suggestions or best practices to follow when we go for a "Lease or Rent"
option?
Regards,
Raj
Hello Hadoop Community,
I am looking to buy a PDF version of the following book.
Hadoop Essentials: a Quantitative Approach by Henry Liu
I can only find a hard copy. Can anyone tell me where I can buy the PDF
version of this book?
Thanks,
Raj
From: Mohammad Tariq
To: "user@hadoop.apache.org" ; Raj Hadoop
Sent: Friday, June 14, 2013 1:44 PM
Subject: Re: HDFS to a different location other than HADOOP HOME
Change the permissions of /SD1/hadoop_data to 755 and restart the process.
Warm Regards,
Ta
>Determines where on the local filesystem a DFS data node should store its
>blocks. If this is a comma-delimited list of directories, then data will be
>stored in all named directories, typically on different devices. Directories
>that do not exist are ignored.
>On Tue, Jun 11, 2013 at 1:08 PM, Raj Hadoop wrote:
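The description quoted above allows a comma-delimited value; a sketch of what that looks like in hdfs-site.xml (/SD1/hadoop_data is from Tariq's reply above, the second path is hypothetical):
<property>
  <name>dfs.data.dir</name>
  <value>/SD1/hadoop_data/dfs/data,/SD2/hadoop_data/dfs/data</value>
</property>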
From: Raj Hadoop [hadoop...@yahoo.com]
Sent: 06/13/2013 04:04 PM MST
To: User
Subject: TeaLeaf WebLogs
Hi -
I wanted to know about TeaLeaf weblog files / databases. Is the data from
TeaLeaf proprietary, or is it in a format readable by other tools? Can anyone
who has experience with this product advise?
Thanks,
Raj
Hi Tariq,
What is the default value of dfs.data.dir? My hdfs-site.xml doesn't have this
value defined, so what is the default?
Thanks,
Raj
From: Mohammad Tariq
To: "user@hadoop.apache.org" ; Raj Hadoop
Sent: Tuesday, June 11, 2013 12:49
Thanks Mustaqeem.
hadoop.tmp.dir - can this store HDFS files? I mean, is there any difference
from the files we create under HADOOP_HOME?
From: Mohammad Mustaqeem <3m.mustaq...@gmail.com>
To: user ; Raj Hadoop
Sent: Tuesday, June 11, 2013
Hi,
I have a one-node Hadoop cluster for my POC. The HADOOP_HOME is under the
directory
/usr/home/hadoop
I don't have much space on /usr and want to use other accounts'/applications'
storage, located at /app/software/app1 etc.
How can I use the /app/software/app1 location for my HDFS
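One way to do this (a sketch; the exact subdirectories are hypothetical) is to point the HDFS storage directories in hdfs-site.xml at the larger volume and restart the daemons:
<property>
  <name>dfs.name.dir</name>
  <value>/app/software/app1/dfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/app/software/app1/dfs/data</value>
</property>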
To: u...@hive.apache.org; Raj Hadoop
Sent: Friday, May 24, 2013 6:32 PM
Subject: Re: Apache Flume Properties File
so you spammed three big lists there, eh? with a general question for somebody
to serve up a solution on a silver platter for you -- all before you even read
any documentation on the subject m
Hi,
I just installed Apache Flume 1.3.1 and am trying to run a small example to
test it. Can anyone suggest how I can do this? I am going through the
documentation right now.
Thanks,
Raj
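The Flume user guide's standard smoke test is a netcat source wired to a logger sink; a minimal conf file (example.conf, agent name a1) looks like this:
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sinks.k1.type = logger
a1.channels.c1.type = memory
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Start it with flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console and then telnet localhost 44444 to send test events.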
Hi,
With all due respect to the senior members of this site, I wanted to first
congratulate Lokesh for his interest in Hadoop. I want to know how many fresh
graduates are interested in this technology; I guess not many. So we have to
welcome Lokesh to the Hadoop world.
I agree with the seniors.
Hi,
I just finished setting up Apache Sqoop 1.4.3. I am trying to test a basic
Sqoop import from Oracle.
sqoop import --connect jdbc:oracle:thin:@//intelli.dmn.com:1521/DBT --table
usr1.testonetwo --username usr123 --password passwd123
I am getting the error as
13/05/22 17:18:16 INFO manager
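The error is cut off above, but one known Oracle quirk worth checking (an assumption about the cause, not confirmed by the thread): Oracle stores table and user names in upper case and Sqoop matches them literally, so the table is usually given as:
sqoop import --connect jdbc:oracle:thin:@//intelli.dmn.com:1521/DBT \
  --table USR1.TESTONETWO --username usr123 --password passwd123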
Hi,
My Hive job logs are being written to the /tmp/hadoop directory. I want to
change this to a different location, i.e. a subdirectory somewhere under the
'hadoop' user's home directory.
How do I change it?
Thanks,
Ra
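The location is controlled by hive.log.dir in conf/hive-log4j.properties; it can also be overridden per session (target path hypothetical):
hive --hiveconf hive.log.dir=/home/hadoop/hive-logs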
I am setting up a metastore on Oracle for Hive. I executed the
hive-schema-0.9.0 SQL script successfully too.
When I ran this:
hive > show tables;
I got the following error:
ORA-01950: no privileges on tablespace
What kind of Oracle privileges (quota-wise) are required for Hive?
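ORA-01950 generally means the schema owner has no quota on its default tablespace; the usual fix, run as a DBA (user and tablespace names hypothetical):
ALTER USER hiveuser QUOTA UNLIMITED ON users;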
Sanjay -
This is the first location I tried, but Apache Hive 0.9.0 doesn't have an
oracle folder. It only has mysql and derby.
Thanks,
Raj
From: Sanjay Subramanian
To: "user@hadoop.apache.org" ; Raj Hadoop
; Hive
Sent: Tuesday, May 21, 20
I got it. This is the link.
http://svn.apache.org/viewvc/hive/trunk/metastore/scripts/upgrade/oracle/hive-schema-0.9.0.oracle.sql?revision=1329416&view=co&pathrev=1329416
____
From: Raj Hadoop
To: Hive ; User
Sent: Tuesday, May 21, 2013 3:08 PM
Subject:
I am trying to get Oracle scripts for Hive Metastore.
http://mail-archives.apache.org/mod_mbox/hive-commits/201204.mbox/%3c20120423201303.9742b2388...@eris.apache.org%3E
The scripts in the above link have a + at the beginning of each line. How am I
supposed to execute scripts like this thro
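The leading + signs are diff markup from the commit mail, not part of the SQL. One way to strip them before running the script (filenames hypothetical):
sed 's/^+//' hive-schema-0.9.0.oracle.sql > hive-schema-0.9.0.clean.sql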
Thanks Sanjay
From: Sanjay Subramanian
To: bharath vissapragada ;
"u...@hive.apache.org" ; Raj Hadoop
Cc: User
Sent: Tuesday, May 21, 2013 2:27 PM
Subject: Re: hive.metastore.warehouse.dir - Should it point to a physical
directory
Hi
So that means I need to create an HDFS directory (not a physical OS directory)
under Hadoop that is then used in the Hive config file for this property.
Right?
From: Dean Wampler
To: Raj Hadoop
Cc: Sanjay Subramanian ;
"u...@hive.apache.org&qu
Yes, that's what I meant - a local physical directory. Thanks.
From: bharath vissapragada
To: u...@hive.apache.org; Raj Hadoop
Cc: User
Sent: Tuesday, May 21, 2013 1:59 PM
Subject: Re: hive.metastore.warehouse.dir - Should it point to a physical
directory
Hi
"u...@hive.apache.org" ; Raj Hadoop
; Dean Wampler
Cc: User
Sent: Tuesday, May 21, 2013 1:53 PM
Subject: Re: hive.metastore.warehouse.dir - Should it point to a physical
directory
Notes below
From: Raj Hadoop
Reply-To: "u...@hive.apache.org" , Raj Hadoop
Date: Tuesday, May
OK, I got it. My questions -
1) Should a local physical directory be created before using this property?
2) Should a HDFS file directory be created from Hadoop before using this
property?
From: Dean Wampler
To: u...@hive.apache.org; Raj Hadoop
Cc: User
Can someone help me with this? I am stuck installing and configuring Hive with
Oracle. Your timely help is really appreciated.
From: Raj Hadoop
To: Hive ; User
Sent: Tuesday, May 21, 2013 1:08 PM
Subject: hive.metastore.warehouse.dir - Should it point to a
Hi,
I am configuring Hive. I have a question on the property
hive.metastore.warehouse.dir.
Should this point to a physical directory? I am guessing it is a logical
directory under Hadoop fs.default.name. Please advise whether I need to create
any directory for the variable hive.metastore.wa
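For reference, the property names an HDFS path (the default is shown below), and the directory is typically created with the HDFS shell before first use:
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
hadoop fs -mkdir /user/hive/warehouse
hadoop fs -chmod g+w /user/hive/warehouse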
Hi,
I had to do a kill -9. I am very surprised that even the 'Programming Hive'
book has not given details on how to stop the Thrift service; it only
mentions how to start it.
Thanks,
Raj
From: Jie Zhou (周杰)
To: "user@hadoop.apache.org" ;
Hi Sanjay,
I am using version 0.9.
I do not have sudo access. Is there any other command to stop the service?
Thanks,
Raj
From: Sanjay Subramanian
To: "u...@hive.apache.org" ; Raj Hadoop
; User
Sent: Monday, May 20, 2013 5:11 PM
Subject: Re:
Hi,
I was not able to stop the Thrift Server after performing the following steps.
$ bin/hive --service hiveserver &
Starting Hive Thrift Server
$ netstat -nl | grep 1
tcp 0 0 :::1 :::* LISTEN
I tried the following to stop it, but it is not working:
hive --service hiveserver --action stop 1
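Since the stop action above did not work, the usual way out (matching the kill -9 mentioned later in the thread) is to find the process and signal it:
ps -ef | grep -i hiveserver    # note the PID
kill <pid>                     # try a plain TERM before resorting to kill -9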
Hi Chris,
Thanks for the explanation.
Regards,
Raj
From: Chris Embree
To: user@hadoop.apache.org; Raj Hadoop
Sent: Monday, May 20, 2013 1:51 PM
Subject: Re: Low latency data access Vs High throughput of data
I'll take a swing at this one.
Hi,
I have a basic question on HDFS. I was reading that HDFS doesn't work well
with low-latency data access; rather, it is designed for high throughput of
data. Can you please explain in simple words the difference between
"low-latency data access" and "high throughput of data"?
Thanks,
Raj
will definitely look at the website you recommended. But a few working scripts
from experts like you would get me jump-started on this.
Thanks,
Raj
From: David Ritch
To: user@hadoop.apache.org
Cc: Raj Hadoop
Sent: Sunday, May 19, 2013 1:21 PM
Subject: R
Hi,
I want to explore R for Hadoop. Where can I get the download? Any suggested
material on the website to explore? Please advise.
Thanks,
Raj
From: Amal G Jose
To: user@hadoop.apache.org
Sent: Saturday, May 18, 2013 11:47 PM
Subject: Re: java heap space
Hi,
I wanted to know whether anyone has used Hive with an Oracle metastore. Can
you please share your experiences?
Thanks,
Raj
Thanks for the reply.
Can you specify which jar file needs to be used? Where can I get the jar file?
Does Oracle provide one for free? Please let me know.
Thanks,
Raj
From: "bejoy...@yahoo.com"
To: u...@hive.apache.org; Raj Had
Hi,
I am planning to install Hive and want to set up the metastore on Oracle. What
is the procedure? Which JDBC driver do I need to use?
Thanks,
Raj
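The metastore connection is configured through the standard javax.jdo properties in hive-site.xml, with the Oracle JDBC driver jar (e.g. ojdbc6.jar, downloadable from Oracle) dropped into Hive's lib directory. A sketch (host, SID and credentials hypothetical):
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:oracle:thin:@//dbhost:1521/ORCL</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>oracle.jdbc.OracleDriver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepass</value>
</property>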
Hi,
I am looking for suggestions from the hadoop and hive user community on the
following -
1) How good is the choice of choosing Oracle for Hive Metastore ?
In my organization, we only use Oracle database and so we wanted to know
whether there are any known issues with Oracle Hive Metastore.
Thanks. It is working on 50070 and 50030.
From: Sandy Ryza
To: user@hadoop.apache.org; Raj Hadoop
Sent: Thursday, May 16, 2013 6:02 PM
Subject: Re: Installed Hadoop on Linux server - not able to see web UI
Hi Raj,
The web UIs are located on different
Hi,
I have installed Hadoop on a Linux server in pseudo-distributed mode. The
MapReduce word count example also ran successfully. But I was not able to
access the UI from my local Windows browser machine when I use
http://intelliserver:54310/ or http://intelliserver:54311/
$ cat core-sit
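As the follow-up in this thread confirms, 54310/54311 are the RPC ports from the config files, not the web UIs; the UIs live on their own HTTP ports:
http://intelliserver:50070/    (NameNode web UI)
http://intelliserver:50030/    (JobTracker web UI)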
/projects/wordcount/input/ error
Did not notice you are trying to do an HDFS action... So just check that the
namenode master service is up and able to connect.
Sent from iPhone: Be the change
On May 16, 2013, at 12:32 PM, Raj Hadoop wrote:
>
>I am getting the following error. Can
I am getting the following error. Can anyone please advise?
$ hadoop fs -mkdir /projects/wordcount/input/
13/05/16 15:28:47 INFO ipc.Client: Retrying connect to server:
intelliserver/172.25.181.117:54310. Already tried 0 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=
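Per the reply above, the retry loop means the client cannot reach the NameNode at intelliserver:54310; a quick check on the server:
jps    # the list should include NameNode - if not, start-dfs.sh hasn't brought it up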
From: Mohammad Tariq
To: "user@hadoop.apache.org" ; Raj Hadoop
Sent: Thursday, May 16, 2013 12:02 PM
Subject: Re: Configuring SSH - is it required? for a psedo distriburted mode?
Hello Raj,
ssh is actually 2 things :
1- ssh : The command
Cc: Raj Hadoop
Sent: Thursday, May 16, 2013 11:34 AM
Subject: Re: Configuring SSH - is it required? for a psedo distriburted mode?
Actually, I should amend my statement -- SSH is required, but passwordless SSH
(I guess) you can live without if you are willing to enter your password
Hi,
I have a dedicated user for Hadoop on a Linux server. I am installing it in
pseudo-distributed mode on this box and want to test my programs on this
machine. But I see in the installation steps that SSH needs to be configured.
If it is a single node, I don't require it ...
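If passwordless SSH is wanted after all, the standard single-node setup is (commands generic):
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost    # should now log in without prompting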
Hi,
Can anyone suggest how to configure Eclipse on a Mac for Hadoop? Hadoop is
running in pseudo-distributed mode. Please provide any reference articles or
other best practices that should be followed in this case.
Thanks,
Raj
between a person
who is installing Hadoop and the actual unix admin guy. Please advise.
From: Nitin Pawar
To: user@hadoop.apache.org; Raj Hadoop
Cc: Mohammad Tariq
Sent: Monday, May 13, 2013 10:56 AM
Subject: Re: Install Hadoop on Linux Pseudo Distributed Mode
I am thinking of installing both the CDH and Apache versions. So are you
saying that if I install CDH I require root privileges?
From: Mohammad Tariq
To: "user@hadoop.apache.org" ; Raj Hadoop
Sent: Monday, May 13, 2013 10:47 AM
Subject: Re: Install Hadoo
Hi,
I am planning to install Hadoop on Linux in pseudo-distributed mode (one
machine). Do I require 'root' privileges to install? Please advise.
Thanks,
Raj
Hi,
This is an interesting article on Hadoop, Hive & ETL:
http://dbtr.cs.aau.dk/DBPublications/DBTR-31.pdf
Has anyone used this kind of framework? Please advise.
Thanks,
Raj
Subject: Re: Hardware Selection for Hadoop
2 x Quad cores Intel
2-3 TB x 6 SATA
64GB mem
2 NICs teaming
my 2 cents
On Apr 29, 2013, at 9:24 AM, Raj Hadoop wrote:
Hi,
I have to propose some hardware requirements in my company for a proof of
concept with Hadoop. I was reading Hadoop Operations and also saw the Cloudera
website. But I just wanted to know from the group - what are the requirements
if I have to plan for a 5-node cluster? I don't know at this time, t
Sandeep,
Java is also free.
Thanks,
Raj
From: Sandeep Jain
To: "user@hadoop.apache.org"
Sent: Wednesday, April 24, 2013 2:17 AM
Subject: Query on Cost estimates on Hadoop and Java
Dear Hadoopers,
As per my knowledge Hadoop is free but we need to hav
/job_201304201653_0004_conf.xml
-rw-r--r-- 1 hadoop supergroup 60 2013-04-20 18:06
/user/hadoop/output1/part-r-0
From: Mohammad Tariq
To: "user@hadoop.apache.org" ; Raj Hadoop
Sent: Saturday, April 20, 2013 7:40 PM
Subject: Re: Very basi
-rwx-- 1 hadoop staff 66 Apr 20 18:05 run_raj2.sh
drwxr-xr-x 26 hadoop staff 884 Apr 20 18:05 logs
From: Mohammad Tariq
To: "user@hadoop.apache.org" ; Raj Hadoop
Sent: Saturday, April 20, 2013 7:27 PM
Subject: Re: Very basic questi
bin/hadoop dfs -mkdir input1
From: Mohammad Tariq
To: "user@hadoop.apache.org" ; Raj Hadoop
Sent: Saturday, April 20, 2013 7:22 PM
Subject: Re: Very basic question
ok..do u remember the command?
Warm Regards,
Tariq
https://mtar
Thanks,
Raj
From: Mohammad Tariq
To: "user@hadoop.apache.org" ; Raj Hadoop
Sent: Saturday, April 20, 2013 6:30 PM
Subject: Re: Very basic question
Hello Raj,
Could you show me the lines where you have set the i/o paths?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfron
Hello All,
I am very new to Hadoop. I just installed and ran the WordCount program, and it
ran successfully. I created the input directory and placed the input file
there through hadoop commands.
Where do the output files and the input file exist? I am searching all folders
but could not find them.
Than
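The files live in HDFS rather than on the local filesystem, so they won't show up in a normal directory search; they can be listed with the HDFS shell (paths hypothetical, output1 matching the directory seen elsewhere in this digest):
hadoop fs -ls .                        # your HDFS home directory, e.g. /user/hadoop
hadoop fs -cat output1/part-r-00000    # a reducer output file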
Hi,
I am new to Hadoop. I started reading the standard WordCount program and have
this basic question about Hadoop.
After the map-reduce is done, where is the output generated? Does the reducer
output sit on individual DataNodes? Please advise.
Thanks,
Raj
Thank you. I am looking into it now.
On Fri, Jan 25, 2013 at 7:27 AM, Mohit Anchlia wrote:
> Have you looked at distcp?
>
>
> On Thu, Jan 24, 2013 at 5:55 PM, Raj hadoop wrote:
>
>> Hi,
>>
>> Can you please suggest me what is the good way to move 1 peta by
Hi,
Can you please suggest a good way to move 1 petabyte of data from one cluster
to another?
Thanks
Raj
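Per Mohit's pointer, DistCp is the standard tool for this; a sketch (cluster addresses hypothetical):
hadoop distcp hdfs://source-nn:8020/data hdfs://dest-nn:8020/data
For a petabyte, it is worth running in batches and verifying with -update on reruns.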