I'm using Hadoop 2.5.1 and Sqoop 1.4.6.
I am using sqoop import to import a table from a MySQL database to be used
with Hadoop. It is showing the following error:
Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.hadoop.fs.FSOutputSummer
How do I handle the Oracle RAW data type in a Sqoop import?
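One commonly suggested approach (a sketch, not confirmed by this thread; the connection string, table, and column names are placeholders) is to map the RAW column to a Java String with Sqoop's --map-column-java option:

```shell
sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username scott -P \
  --table SCOTT.MYTABLE \
  --map-column-java RAW_COL=String
```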
d on your NameNode server.
>
> useradd -G hdfs root
>
> On Wed, Oct 5, 2016 at 2:07 PM, Raj hadoop <raj.had...@gmail.com> wrote:
I'm getting it when I'm trying to start Hive.
hdpmaster001:~ # hive
WARNING: Use "yarn jar" to launch YARN applications.
How can I execute the same?
Thanks,
Raj.
On Wed, Oct 5, 2016 at 1:56 PM, Raj hadoop <raj.had...@gmail.com> wrote:
Hi All,
Could someone help me solve this issue?
Logging initialized using configuration in
file:/etc/hive/2.4.2.0-258/0/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=root, access=WRITE,
Thanks everyone.
We are raising a case with Hortonworks.
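For the "Permission denied: user=root, access=WRITE" error, a common fix (a sketch, assuming the failure is on root's missing HDFS home/scratch directory) is to create it as the hdfs superuser:

```shell
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root:root /user/root
```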
On Wed, Aug 3, 2016 at 6:44 PM, Raj hadoop <raj.had...@gmail.com> wrote:
Dear All,
I need your help.
We have a Hortonworks 4-node cluster, and the problem is Hive is allowing
only one user at a time.
If a second user needs to log in, Hive is not working.
Could someone please help me with this?
Thanks,
Rajesh
parameters for hive.metastore?
HTH
Dr Mich Talebzadeh LinkedIn
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
http://talebzadehmich.wordpress.com
On 4 April 2016 at 18:25, Raj Hadoop <hadoop...@yahoo.com> wrote:
Sorry for the typo in your name - Mich.
On Monday, April 4, 2016 12:01 PM, Raj Hadoop <hadoop...@yahoo.com> wrote:
Thanks Mike. If Hive 2.0 is stable - I would definitely go for it. But let me
troubleshoot the 1.1.1 issues I am facing now.
Here is my hive-site.xml. Can you please
Hi,
I have downloaded Apache Hive 1.1.1 and am trying to set up the Hive environment
in my Hadoop cluster.
On one of the nodes I installed Hive, and when I set all the variables and
environment I am getting the following error. Please advise.
[hadoop@z1 bin]$ hive
2016-04-04 10:12:45,686 WARN [main]
We are facing the below-mentioned error on storing a dataset using HCatStorer.
Can someone please help us?
STORE F INTO 'default.CONTENT_SVC_USED' using
org.apache.hive.hcatalog.pig.HCatStorer();
ERROR hive.log - Got exception: java.net.URISyntaxException Malformed
escape pair at index 9:
I am able to see the data in the table for all the columns when I issue the
following -
SELECT * FROM t1 WHERE dt1='2013-11-20'
But I am unable to see the column data when I issue the following -
SELECT cust_num FROM t1 WHERE dt1='2013-11-20'
The above shows null values.
How should I
) A;
There are better ways of doing this, but this one's quick and dirty :)
Best Regards,
Nishant Kelkar
On Wed, Sep 10, 2014 at 12:48 PM, Raj Hadoop hadoop...@yahoo.com wrote:
sort_array returns in ascending order, so the first element cannot be the
largest date; the last element is the largest date.
Hi,
I have a requirement in Hive to remove duplicate records (they differ only by
one column, i.e. a date column) and keep the latest-date record.
Sample :
Hive Table:
cno sqno date
100 1 1-oct-2013
101 2 1-oct-2013
100 1 2-oct-2013
102 2 2-oct-2013
Output needed:
100 1
.
Hope this helps.
Best Regards,
Nishant Kelkar
The
SORT_ARRAY(COLLECT_SET(date))[0] AS latest_date
is returning the lowest date. I need the largest date.
On Wed, 9/10/14, Raj Hadoop hadoop...@yahoo.com wrote:
Subject: Re: Remove duplicate records in Hive
To: user@hive.apache.org
Date
Convert the dates to YYYY-MM-DD format. For
example, for 2-oct-2013 it will be 2013-10-02.
Best Regards,
Nishant Kelkar
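Since sort_array is ascending, take the last element instead of [0]. A sketch (assuming the dates are stored as YYYY-MM-DD strings, so lexicographic order matches chronological order; table and column names are placeholders):

```sql
SELECT cno, sqno,
       sort_array(collect_set(dt))[size(collect_set(dt)) - 1] AS latest_date
FROM t
GROUP BY cno, sqno;
```

If only the latest value is needed, a plain MAX(dt) with the same GROUP BY is simpler.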
Can I update (delete-and-insert style) just one row, keeping the remaining
rows intact, in a Hive table using Hive INSERT OVERWRITE? There is no partition in
the Hive table.
INSERT OVERWRITE TABLE tablename SELECT col1,col2,col3 from tabx where
col2='abc';
Does the above work? Please
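As posted, that statement would overwrite the table with only the rows matching col2='abc' and discard everything else. To emulate a single-row update with INSERT OVERWRITE, every row has to be re-emitted; a sketch (the CASE values are illustrative):

```sql
-- rewrite the whole (unpartitioned) table, substituting the new value
-- only for the row(s) being "updated"
INSERT OVERWRITE TABLE tablename
SELECT col1,
       CASE WHEN col2 = 'abc' THEN 'xyz' ELSE col2 END AS col2,
       col3
FROM tablename;
```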
Hello everyone,
The Hive Thrift service was started successfully.
netstat -nl | grep 1
tcp 0 0 0.0.0.0:1 0.0.0.0:*
LISTEN
I am able to read tables from Hive through Tableau. When executing queries
through Tableau I am getting the
fails, those might
give some clue.
Thanks,
Szehon
On Thu, Mar 20, 2014 at 12:29 PM, Raj Hadoop hadoop...@yahoo.com wrote:
I am struggling with this one. Can anyone throw some pointers on how to
troubleshoot this issue, please?
On Thursday, March 20, 2014 3:09 PM, Raj Hadoop hadoop...@yahoo.com
, 2014 at 1:59 PM, Raj hadoop raj.had...@gmail.com wrote:
Query in HIVE
I tried a merge-type operation in Hive, to retain the existing records
and append the new records instead of dropping the table and populating it
again.
If anyone can help with any other approach other than
Hi,
Help required to merge data in Hive.
Ex:
Today's file:
Empno ename
1 abc
2 def
3 ghi
Tomorrow's file:
Empno ename
5 abcd
6 defg
7 ghij
Reg: should not drop the
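One way to merge without dropping the table can be sketched as follows (emp and emp_delta are hypothetical names for the existing table and the new day's data; writing to a staging table first is safer than overwriting in place):

```sql
INSERT OVERWRITE TABLE emp
SELECT u.empno, u.ename
FROM (
  -- existing rows that have no replacement in the new file
  SELECT e.empno, e.ename
  FROM emp e
  LEFT OUTER JOIN emp_delta d ON e.empno = d.empno
  WHERE d.empno IS NULL
  UNION ALL
  -- all rows from the new file
  SELECT d2.empno, d2.ename
  FROM emp_delta d2
) u;
```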
All,
I loaded data from an Oracle query through Sqoop to an HDFS file. These are
bzip-compressed files partitioned by one date column.
I created a Hive table to point to the above location.
After loading a lot of data, I realized the data type of one of the columns was
wrongly given.
When I changed
All,
I have a 3-node Hadoop cluster (CDH 4.4), and every few days, or whenever I load
some data through Sqoop or query through Hive, I sometimes get the following
error -
Call From server 1 to server 2 failed on connection exception:
java.net.ConnectException: Connection refused
This has
I am trying to create a Hive partition like 'tr_date=2014-01-01'
FAILED: ParseException line 1:58 mismatched input '-' expecting ) near '2014'
in add partition statement
hive_ret_val: 64
Errors while executing Hive for bksd table for 2014-01-01
Are hyphens not allowed in the partition
Thanks. Will try it.
On Tuesday, February 25, 2014 8:23 PM, Kuldeep Dhole kuldeepr...@gmail.com
wrote:
Probably you should use tr_date='2014-01-01'
Considering tr_date partition is there
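In other words, quote the value so the parser treats the hyphenated date as a string; a sketch against the table named in the error:

```sql
ALTER TABLE bksd ADD PARTITION (tr_date='2014-01-01');
```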
On Tuesday, February 25, 2014, Raj Hadoop hadoop...@yahoo.com wrote:
I am trying to create a Hive
Hi,
I am loading data to HDFS files through sqoop and creating a Hive table to
point to these files.
The mapper files through sqoop example are generated like this below.
part-m-0
part-m-1
part-m-2
My question is -
1) For Hive query performance , how important or significant is
Thanks for the detailed explanation Yong. It helps.
Regards,
Raj
On Tuesday, February 25, 2014 9:18 PM, java8964 java8...@hotmail.com wrote:
Yes, it is good that the file sizes are close to even, but it is not very
important, unless some files are very small (compared to the block size).
The
All,
Is there any way, from the command prompt, to find out which Hive version I am
using, and the Hadoop version too?
Thanks in advance.
Regards,
Raj
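A sketch of the usual commands (hive --version requires a reasonably recent Hive release):

```shell
hive --version
hadoop version
```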
Hi,
My requirement is a typical Datawarehouse and ETL requirement. I need to
accomplish
1) Daily Insert transaction records to a Hive table or a HDFS file. This table
or file is not a big table ( approximately 10 records per day). I don't want to
Partition the table / file.
I am reading
Hi,
How can I find out the physical location of a partitioned table in Hive?
SHOW PARTITIONS tab_name
gives me just the partition column info.
I want the location of the hdfs directory / files where the table is created.
Please advise.
Thanks,
Raj
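DESCRIBE FORMATTED prints a Location: field with the HDFS path; a sketch (table, partition column, and value are placeholders):

```sql
DESCRIBE FORMATTED tab_name;
DESCRIBE FORMATTED tab_name PARTITION (dt='2014-01-01');
```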
I am trying to create a Hive sequence file from another table by running the
following -
Your query has the following error(s):
OK
FAILED: ParseException line 5:0 cannot recognize input near 'STORED' 'STORED'
'AS' in constant click the Error Log tab above for details
1
CREATE TABLE temp_xyz as
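The parse error usually means STORED AS was placed after AS SELECT; in a CTAS the storage clause comes before the query. A sketch with a hypothetical source table:

```sql
CREATE TABLE temp_xyz
STORED AS SEQUENCEFILE
AS SELECT * FROM source_table;
```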
How to test a Hive GenericUDF which accepts two parameters (List<T>, T)?
Can List<T> be the output of a collect_set? Please advise.
I have a generic UDF which takes (List<T>, T). I want to test how it works
through Hive.
On Monday, January 20, 2014 5:19 PM, Raj Hadoop hadoop...@yahoo.com
I want to do a simple test like this - but not working -
select ComplexUDFExample(List(a, b, c), b) from table1 limit 10;
FAILED: SemanticException [Error 10011]: Line 1:25 Invalid function 'List'
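Hive's array constructor is array(), not List(), and string literals need quotes, so the test query would look like:

```sql
SELECT ComplexUDFExample(array('a', 'b', 'c'), 'b') FROM table1 LIMIT 10;
```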
On Tuesday, February 4, 2014 2:34 PM, Raj Hadoop hadoop...@yahoo.com wrote:
How to test
Hi,
I have the following requirement from a Hive table below.
CustNum | ActivityDates | Rates
100 | 10-Aug-13,12-Aug-13,20-Aug-13 | 10,15,20
The data above says that
From 10 Aug to 11 Aug the rate is 10.
From 12 Aug to 19 Aug the rate is 15.
From 20-Aug to till date the rate is 20.
Note : The order is
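One way to unpack the parallel lists can be sketched with posexplode (Hive 0.13+; the column names custnum, activitydates, and rates are assumptions about the table layout):

```sql
-- explode both comma-separated lists with their positions,
-- then keep only the aligned pairs
SELECT custnum, dt, rate
FROM rates_table
LATERAL VIEW posexplode(split(activitydates, ',')) a AS pos1, dt
LATERAL VIEW posexplode(split(rates, ',')) b AS pos2, rate
WHERE pos1 = pos2;
```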
Hi,
Can someone help me with how to delete duplicate records in a Hive table?
I know that delete and update are not supported by Hive, but still,
if someone knows an alternative it would help me.
Thanks,
Raj.
Hi,
I am trying to compile a basic Hive UDF Java file. I am including all the jar files
in my classpath, but I am not able to compile it, and I get the following
error. I am using CDH4. Can anyone advise, please?
$ javac HelloWorld.java
HelloWorld.java:3: package org.apache.hadoop.hive.ql.exec
OK, I just figured it out. I have to set the classpath with export. It's working now.
On Friday, January 17, 2014 3:37 PM, Raj Hadoop hadoop...@yahoo.com wrote:
Hi,
I am trying to compile a basic hive UDF java file. I am using all the jar files
in my classpath but I am not able to compile
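The export fix might look like this (a sketch; the hive-exec jar path is a CDH4-style assumption):

```shell
# put the Hadoop and Hive jars on the compile classpath
export CLASSPATH=$(hadoop classpath):/usr/lib/hive/lib/hive-exec.jar
javac HelloWorld.java
```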
-tweets-part-two-loading-hive-sql-queries/
https://github.com/kevinweil/elephant-bird
On Mon, Jan 6, 2014 at 9:36 AM, Raj Hadoop hadoop...@yahoo.com wrote:
Hi,
I am trying to load data that is in JSON format into a Hive table. Can anyone
suggest the method I need to follow?
Thanks,
Raj
wrote:
It looks like you're essentially doing a pivot function. Your best bet is to
write a custom UDAF or look at the windowing functions available in recent
releases.
Matt
On Dec 28, 2013 12:57 PM, Raj Hadoop hadoop...@yahoo.com wrote:
Dear All Hive Group Members,
I have the following requirement.
Input:
Ticket#|Date of booking|Price
100|20-Oct-13|54
100|21-Oct-13|56
100|22-Oct-13|54
100|23-Oct-13|55
100|27-Oct-13|60
100|30-Oct-13|47
101|10-Sep-13|12
101|13-Sep-13|14
101|20-Oct-13|6
Expected Output:
Hi,
How do I find the number of elements in an array in a Hive table?
Thanks,
Raj
Thanks Brad
On Monday, December 2, 2013 5:09 PM, Brad Ruderman bruder...@radiumone.com
wrote:
Check out
size
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF
Thanks,
Brad
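A minimal use of size() (table and column names are placeholders):

```sql
SELECT size(my_array_col) FROM my_table;
```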
Hi ,
1) My requirement is to load a file (a tar.gz file which has multiple
tab-separated-values files, and one main file which has huge data – about
10 GB per day) to an externally partitioned Hive table.
2) What I am doing is I have automated the process by extracting
wrote:
Hello
You can use the concat function or a case expression to do this, like:
concat('/data1/customer/', id)
.
where id < 1000
Etc..
Hope this helps you ;)
Le 3 nov. 2013 23:51, Raj Hadoop hadoop...@yahoo.com a écrit :
All,
I want to create partitions like the below and create a hive external table
/', id) as path_xxx from your_table
where id < 1000
..
Cdt.
2013/11/4 Raj Hadoop hadoop...@yahoo.com
How can I use the concat function? I did not get it. Can you please elaborate?
My requirement is to create a HDFS directory like
(cust_id1000 and cust_id2000)
and map this to a Hive
Hi,
I am sending this to the three dist-lists of Hadoop, Hive and Sqoop as this
question is closely related to all the three areas.
I have this requirement.
I have a big table in Oracle (about 60 million rows - Primary Key Customer Id).
I want to bring this to HDFS and then create
a Hive
file format also, as that will affect the load and query time.
4. Think about compression as well before hand, as that will govern the data
split, and performance of your queries as well.
Regards,
Manish
Sent from my T-Mobile 4G LTE Device
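For the Oracle-to-HDFS step, a parallel Sqoop import split on the primary key might look like this (host, credentials, and mapper count are illustrative):

```shell
sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username scott -P \
  --table SCOTT.CUSTOMERS \
  --split-by CUSTOMER_ID -m 8 \
  --target-dir /data/customers
```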
Original message
From: Raj Hadoop
Hi,
I am planning for a Hive External Partition Table based on a date.
Which one of the below yields a better performance or both have the same
performance?
1) Partition based on one folder per day
LIKE date INT
2) Partition based on one folder per year / month / day ( So it has three
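The two layouts can be sketched as (column names illustrative):

```sql
-- 1) one folder per day: a single partition column
CREATE EXTERNAL TABLE t_daily (val STRING)
PARTITIONED BY (dt STRING)
LOCATION '/data/t_daily';

-- 2) one folder per year/month/day: three partition columns
CREATE EXTERNAL TABLE t_nested (val STRING)
PARTITIONED BY (yr INT, mth INT, dy INT)
LOCATION '/data/t_nested';
```

The single-column form keeps date-range predicates simple; the nested form helps when queries routinely prune at the month or year level.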
Thanks. It worked for me now when I use it as an empty string.
From: Krishnan K kkrishna...@gmail.com
To: user@hive.apache.org user@hive.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Thursday, October 17, 2013 11:11 AM
Subject: Re: Hive Query Questions
All,
When a query is executed like the below
select field1 from table1 where field1 is null;
I am getting results which have empty values or NULLs in field1. How does
IS NULL work in Hive queries?
Thanks,
Raj
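NULL and the empty string are distinct values in Hive; to match both explicitly:

```sql
SELECT field1 FROM table1 WHERE field1 IS NULL OR field1 = '';
```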
Yes, I have it.
Thanks,
Raj
From: Sonal Goyal sonalgoy...@gmail.com
To: user@hive.apache.org user@hive.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Monday, October 7, 2013 1:38 AM
Subject: Re: How to load /t /n file to Hive
Do you have the option
Hi,
I have a file which is delimited by a tab. Also, there are some fields in the
file which have a tab \t character and a newline \n character in them.
Is there any way to load this file using the Hive load command? Or do I have to
use a custom MapReduce input format with Java?
Please note that there is an escape character in the fields where the \t and \n
are present.
From: Raj Hadoop hadoop...@yahoo.com
To: Hive user@hive.apache.org
Sent: Friday, September 20, 2013 3:04 PM
Subject: How to load /t /n file to Hive
Hi,
I have a file
From: Nitin Pawar nitinpawar...@gmail.com
To: user@hive.apache.org user@hive.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Friday, September 20, 2013 3:15 PM
Subject: Re: How to load /t /n file to Hive
If your data contains newline chars, it's better you write
...@gmail.com
To: user@hive.apache.org user@hive.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Friday, September 20, 2013 4:43 PM
Subject: Re: How to load /t /n file to Hive
Hi
One way we used to solve that problem is to transform the data when you
are creating/loading it; for example, we've
Hi,
The hive thrift service is not running continuously. I had to execute the
command (hive --service hiveserver) very frequently. Can anyone help me with
this?
Thanks,
Raj
All,
I am trying to determine visits per customer from an Omniture weblog file using
Hive.
Table: omniture_web_data
Columns: visid_high,visid_low,evar23,visit_page_num
Sample Data:
visid_high,visid_low,evar23,visit_page_num
999,888,1003,10
999,888,1003,14
999,888,1003,6
999,777,1003,12
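A visit is identified by the (visid_high, visid_low) pair, so visits per customer (evar23) can be sketched as:

```sql
SELECT evar23 AS customer,
       COUNT(DISTINCT concat(cast(visid_high AS STRING), '_',
                             cast(visid_low AS STRING))) AS visits
FROM omniture_web_data
GROUP BY evar23;
```

With the sample rows above, customer 1003 has two distinct visits: (999,888) and (999,777).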
mapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzopCodec;
SET mapreduce.output.fileoutputformat.compress=true;
Thanks
Sanjay
From: Raj Hadoop hadoop...@yahoo.com
Reply-To: user@hive.apache.org user@hive.apache.org, Raj Hadoop
hadoop...@yahoo.com
Date: Thursday, July 25, 2013 5:00 AM
To: Hive user
Hi ,
The log file that I am trying to load through Hive has some special characters.
The field is shown below and the special characters ¿¿ are also shown.
Shockwave Flash
in;Motive ManagementPlug-in;Google Update;Java(TM)Platform SE 7U21;McAfee
SiteAdvisor;McAfee Virtual
hive> set hive.io.output.fileformat=CSVTextFile;
hive> insert overwrite local directory '/usr/home/hadoop/da1/' select * from
customers;
*** customers is a Hive table
From: Edward Capriolo edlinuxg...@gmail.com
To: user@hive.apache.org user@hive.apache.org
Adding to that
- Multiple files can be concatenated from the directory like
Example: cat 0-0 00-1 0-2 > final
From: Raj Hadoop hadoop...@yahoo.com
To: user@hive.apache.org user@hive.apache.org; matouk.iftis...@ysance.com
matouk.iftis
Hi,
When I installed Hive earlier on my machine, I used an Oracle Hive metastore script.
Please find attached the script. Hive worked fine for me on this box with no
issues.
I am trying to install Hive on another machine with a different Oracle metastore.
I executed the metastore script but I am having
Hi,
My requirement is to load data from a (one-column) Hive view to a CSV file.
After loading it, I don't see any file generated.
I used the following commands to load data to a file from a view v_june1:
hive> set hive.io.output.fileformat=CSVTextFile;
hive> insert overwrite local directory
on the same box you ran hive?
Hi,
I have a Hive metastore created in an Oracle database.
But when I execute my Hive queries, I see the following directory and file created:
TempStatsStore (directory)
derby.log
What are these? Can someone suggest why a Derby log is created even though my
javax.jdo.option.ConnectionURL is
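A likely cause (an assumption, not confirmed in the thread): older Hive releases gather per-query statistics into a local Derby database regardless of which metastore is configured, which is what creates TempStatsStore and derby.log. Disabling automatic stats gathering avoids it:

```sql
SET hive.stats.autogather=false;
```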
Hi,
I just installed Apache Flume 1.3.1 and am trying to run a small example to test
it. Can anyone suggest how I can do this? I am going through the documentation
right now.
Thanks,
Raj
Hi,
My hive job logs are being written to the /tmp/hadoop directory. I want to change
this to a different location, i.e. a subdirectory somewhere under the 'hadoop'
user home directory.
How do I change it?
Thanks,
Ra
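One way (a sketch; the underlying property is hive.log.dir from hive-log4j.properties, and the target path is a placeholder) is to override it per invocation:

```shell
hive --hiveconf hive.log.dir=/home/hadoop/hive-logs
```

Editing hive.log.dir in hive-log4j.properties makes the change permanent.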
Hi,
I just finished setting up Apache Sqoop 1.4.3. I am trying to test a basic Sqoop
import from Oracle.
sqoop import --connect jdbc:oracle:thin:@//intelli.dmn.com:1521/DBT --table
usr1.testonetwo --username usr123 --password passwd123
I am getting the error as
13/05/22 17:18:16 INFO
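The actual error is cut off above, but one common gotcha with the Oracle connector (an assumption here) is that the table and user names must be given in upper case:

```shell
sqoop import --connect jdbc:oracle:thin:@//intelli.dmn.com:1521/DBT \
  --table USR1.TESTONETWO --username USR123 --password passwd123
```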
Hi,
I am configuring Hive. I have a question on the property
hive.metastore.warehouse.dir.
Should this point to a physical directory? I am guessing it is a logical
directory under Hadoop fs.default.name. Please advise whether I need to create
any directory for the variable
Can someone help me with this? I am stuck installing and configuring Hive with
Oracle. Your timely help is really appreciated.
From: Raj Hadoop hadoop...@yahoo.com
To: Hive user@hive.apache.org; User u...@hadoop.apache.org
Sent: Tuesday, May 21, 2013 1:08 PM
create the HDFS directory ?
From: Sanjay Subramanian sanjay.subraman...@wizecommerce.com
To: user@hive.apache.org user@hive.apache.org; Raj Hadoop
hadoop...@yahoo.com; Dean Wampler deanwamp...@gmail.com
Cc: User u...@hadoop.apache.org
Sent: Tuesday, May 21
Yes, that's what I meant - a local physical directory. Thanks.
From: bharath vissapragada bharathvissapragada1...@gmail.com
To: user@hive.apache.org; Raj Hadoop hadoop...@yahoo.com
Cc: User u...@hadoop.apache.org
Sent: Tuesday, May 21, 2013 1:59 PM
Subject: Re
So that means I need to create a HDFS ( Not an OS physical directory )
directory under Hadoop that need to be used in the Hive config file for this
property. Right?
From: Dean Wampler deanwamp...@gmail.com
To: Raj Hadoop hadoop...@yahoo.com
Cc: Sanjay
Thanks Sanjay
From: Sanjay Subramanian sanjay.subraman...@wizecommerce.com
To: bharath vissapragada bharathvissapragada1...@gmail.com;
user@hive.apache.org user@hive.apache.org; Raj Hadoop hadoop...@yahoo.com
Cc: User u...@hadoop.apache.org
Sent: Tuesday, May
I am trying to get Oracle scripts for Hive Metastore.
http://mail-archives.apache.org/mod_mbox/hive-commits/201204.mbox/%3c20120423201303.9742b2388...@eris.apache.org%3E
The scripts in the above link have a + at the beginning of each line. How am I
supposed to execute scripts like this
I got it. This is the link.
http://svn.apache.org/viewvc/hive/trunk/metastore/scripts/upgrade/oracle/hive-schema-0.9.0.oracle.sql?revision=1329416&view=co&pathrev=1329416
From: Raj Hadoop hadoop...@yahoo.com
To: Hive user@hive.apache.org; User u
a file in this directory called hive-schema-0.9.0.oracle.sql
Use this
sanjay
From: Raj Hadoop hadoop...@yahoo.com
Reply-To: u...@hadoop.apache.org u...@hadoop.apache.org, Raj Hadoop
hadoop...@yahoo.com
Date: Tuesday, May 21, 2013 12:08 PM
To: Hive user@hive.apache.org, User u
I am setting up a metastore on Oracle for Hive. I executed the
hive-schema-0.9.0.oracle.sql script successfully too.
When I ran this:
hive> show tables;
I am getting the following error.
ORA-01950: no privileges on tablespace
What kind of Oracle privileges (quota-wise) are required for Hive?
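The usual fix is to grant the metastore user quota on its default tablespace, run as a DBA (the user and tablespace names here are assumptions):

```sql
ALTER USER hiveuser QUOTA UNLIMITED ON users;
```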
Hi,
I was not able to stop the Thrift Server after performing the following steps.
$ bin/hive --service hiveserver
Starting Hive Thrift Server
$ netstat -nl | grep 1
tcp 0 0 :::1 :::* LISTEN
I gave the following to stop it, but it is not working.
hive --service hiveserver --action stop 1
Hi Sanjay,
I am using 0.9 version.
I do not have sudo access. Is there any other command to stop the service?
thanks,
raj
From: Sanjay Subramanian sanjay.subraman...@wizecommerce.com
To: user@hive.apache.org user@hive.apache.org; Raj Hadoop
hadoop
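HiveServer1 has no built-in stop action (the --action flag tried above is not a real option); the usual approach, which needs no sudo if the service runs as your own user, is to kill the process:

```shell
ps -ef | grep -i '[h]iveserver'
kill <pid>   # pid from the previous command
```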
Hi,
I wanted to know whether anyone has used Hive with an Oracle metastore. Can you
please share your experiences?
Thanks,
Raj
Hi,
I am planning to install Hive and want to set up the metastore on Oracle. What is
the procedure? Which JDBC driver do I need to use?
Thanks,
Raj
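A minimal sketch of the relevant hive-site.xml properties for an Oracle metastore, assuming the Oracle thin JDBC driver (the ojdbc jar must be on Hive's classpath; host, SID, and credentials are placeholders):

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:oracle:thin:@//dbhost:1521/ORCL</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>oracle.jdbc.OracleDriver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepass</value>
</property>
```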
Bejoy KS
Sent from remote device, Please excuse typos
From: Raj Hadoop hadoop...@yahoo.com
Date: Fri, 17 May 2013 17:10:07 -0700 (PDT)
To: Hiveuser@hive.apache.org; Useru...@hadoop.apache.org
ReplyTo: user@hive.apache.org
Subject: Hive on Oracle
Hi,
I am