Hi
I would actually do it like this, so that the set on the left of the JOIN
becomes smaller:
SELECT a.item_id, a.create_dt
FROM (
  SELECT item_id, create_dt
  FROM A
  WHERE item_id = 'I001'
    AND category_n
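The query above is cut off; a completed sketch of the pattern, assuming the trailing predicate is the category filter from the question being answered and that `b` is joined on `item_id` as in that question:

```sql
SELECT a.item_id, a.create_dt
FROM (
  SELECT item_id, create_dt
  FROM A
  WHERE item_id = 'I001'
    AND category_name = 'C001'
) a
JOIN b ON (a.item_id = b.item_id);
```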
Nitin,
Hive does not compile with JDK 7. You have to use JDK 6 for compiling.
On Wed, Jun 12, 2013 at 9:42 PM, Nitin Pawar wrote:
> I tried the build on trunk
>
> i did not hit the issue of make-pom but i hit the issue of jdbc with jdk7.
> I will apply the patch and try again
>
>
> On Wed, Jun 12,
Adding hcatalog-dev for help on this.
Thanks
Amareshwari
On Wed, Jun 12, 2013 at 4:48 PM, amareshwari sriramdasu <
amareshw...@gmail.com> wrote:
> Hello,
>
> ant maven-build -Dmvn.publish.repo=local fails to build hcatalog with
> following error :
>
>
> /home/amareshwaris/hive/build.xml:121:
Hi,
Which of the two query options is better?
SELECT a.item_id, a.create_dt
FROM a JOIN b
ON (a.item_id = b.item_id)
WHERE a.item_id = 'I001'
AND a.category_name = 'C001';
- or -
SELECT a.item_id, a.create_dt
FROM a JOIN b
ON (a.item_id = b.item_id AND a.item_id = 'I001')
WHERE
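The second option is cut off above; assuming its WHERE clause carries the remaining category filter from the first option, it would read in full:

```sql
SELECT a.item_id, a.create_dt
FROM a JOIN b
ON (a.item_id = b.item_id AND a.item_id = 'I001')
WHERE a.category_name = 'C001';
```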
Alter table rename partition column does not recognize the column name. It's
OK: I dropped the table, created a new one, executed ADD PARTITION, and
provided a LOCATION.
Sanjay
Sent from my iPhone
On Jun 12, 2013, at 9:53 AM, "Stephen Sprague"
mailto:sprag...@gmail.com>> wrote:
all you have
Begin forwarded message:
> From: Eric Baldeschwieler
> Date: June 11, 2013 10:46:25 AM PDT
> To: "common-...@hadoop.apache.org"
> Subject: DesignLounge @ HadoopSummit
> Reply-To: common-...@hadoop.apache.org
>
> Hi Folks,
>
> We thought we'd try something new at Hadoop Summit this year to bu
Hi Mark,
I'm running v0.80 too, and multiple '%'s work as expected for me. So we're
gonna need to see a definitive test case from you.
Show your full string and show where the LIKE clause fails to match.
thanks,
Stephen.
PS here's my test:
hisql>select city from junk;
+--+
| city
We are using Hive 0.80.
---
Mark E. Sunderlin
Solutions Architect |AOL Networks BDM
P: 703-265-6935 |C: 540-327-6222 | AIM: MESunderlin
22000 AOL Way | Dulles, VA | 20166
From: Stephen Sprague [mailto:sprag...@gmail.com]
Sent: Wednesday, June 12, 2013 1:00 PM
To: user@hive.apache.org
Subject:
that seems pretty hard to believe. what version of hive are you using?
On Wed, Jun 12, 2013 at 6:27 AM, Sunderlin, Mark wrote:
> This seems to work just fine in other SQLs, but doesn't seem to work in Hive.
>
> I need to have several wild card characters in my 'like' clause as follows.
>
> In othe
all you have to do is create a partitioned test table and run an alter
table command to rename the partition column(s) - and see what happens.
That's about as simple as it gets. It either works or it doesn't. :)
On Tue, Jun 11, 2013 at 9:50 PM, Nitin Pawar wrote:
> currently hive partitions on
I tried the build on trunk
I did not hit the make-pom issue, but I hit the JDBC issue with JDK 7.
I will apply the patch and try again.
On Wed, Jun 12, 2013 at 4:48 PM, amareshwari sriramdasu <
amareshw...@gmail.com> wrote:
> Hello,
>
> ant maven-build -Dmvn.publish.repo=local fails to build
Hi All,
I have tried multiple ways to create the Hive table and retrieve data
using a JSON SerDe. But here are the errors I encounter:
hive> select * from jobs;
OK
Failed with exception
java.io.IOException: org.apache.hadoop.hive.serde2.SerDeException: java.io.EOFException: No content to map to O
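One common cause of Jackson's "no content to map" error is a blank line (or an empty file) under the table's location. A sketch of a JSON-backed table, where the SerDe class name, jar path, and columns are all assumptions, not taken from the message:

```sql
ADD JAR /path/to/hcatalog-core.jar;        -- hypothetical jar path

CREATE EXTERNAL TABLE jobs (
  title  string,                           -- hypothetical columns
  posted string
)
ROW FORMAT SERDE 'org.apache.hcatalog.data.JsonSerDe'
LOCATION '/user/hive/jobs';                -- one JSON object per line, no blank lines
```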
You did not create a partitioned table; you just created a bucketed table.
Refer to a table created with a partition clause,
something like
partitioned by (event_date string)
On Wed, Jun 12, 2013 at 7:17 PM, Hamza Asad wrote:
> i have created table after enabling dynamic partition. i partitioned it on
> date b
I have created a table after enabling dynamic partitioning. I partitioned it
on date, but it is not splitting the data datewise. Below is the query for
the table creation and data insert:
CREATE TABLE rc_partition_cluster_table(
id int,
event_id int,
user_id BIGINT,
event_date string,
intval_1 int )
CLUST
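A sketch of what a partitioned (rather than bucketed) version of this table could look like, together with a dynamic-partition insert; the source table name is hypothetical, and note that the partition column must come last in the SELECT list:

```sql
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

CREATE TABLE rc_partitioned_table (
  id int,
  event_id int,
  user_id BIGINT,
  intval_1 int
)
PARTITIONED BY (event_date string);

INSERT OVERWRITE TABLE rc_partitioned_table PARTITION (event_date)
SELECT id, event_id, user_id, intval_1, event_date
FROM source_table;   -- hypothetical source table
```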
This seems to work just fine in other SQLs, but doesn't seem to work in Hive.
I need to have several wild card characters in my 'like' clause as follows.
In other SQLs, I want: where page_url_query like '%?icid=main%dl%'
But in Hive that doesn't match. I have several work arounds. I can w
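For anyone reproducing this, a minimal sketch of the failing pattern; the table name is hypothetical, while the column and pattern are taken from the message:

```sql
-- '%' in Hive's LIKE should match any sequence of characters, so this
-- pattern should match URLs containing '?icid=main' followed later by 'dl'.
SELECT page_url_query
FROM page_views                 -- hypothetical table name
WHERE page_url_query LIKE '%?icid=main%dl%';
```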
Hello All,
For some reason hive is calling getRecordReader before the job has been
instantiated. jobConf.get("mapred.task.id") is returning null. Any ideas?
Andrew
Hi
I am working on Hive version hive-service-0.10.0-cdh4.2.1
From the manual it is clear that the Date datatype is not yet supported, but
timestamps are supported.
I am having trouble populating timestamp fields.
Tried 2 scenarios
1. Created a table with 1 column as timestamp field and loaded data
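A minimal sketch of scenario 1, with a hypothetical table name and file path; Hive expects text timestamps in yyyy-MM-dd HH:mm:ss[.fffffffff] form when loading from delimited files:

```sql
CREATE TABLE ts_test (ev_time timestamp)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- /tmp/ts_data.txt would contain lines such as:
--   2013-06-12 09:53:00
LOAD DATA LOCAL INPATH '/tmp/ts_data.txt' INTO TABLE ts_test;
```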
Hello,
ant maven-build -Dmvn.publish.repo=local fails to build hcatalog with the
following error:
/home/amareshwaris/hive/build.xml:121: The following error occurred while executing this line:
/home/amareshwaris/hive/build.xml:123: The following error occurred while
executing this line:
Target "ma
Given that I started the original thread it seems appropriate that I should
point out that I also have a bought and paid for (personal) digital copy.
It's a good book.
Peter Marron
Trillium Software UK Limited
Tel : +44 (0) 118 940 7609
Fax : +44 (0) 118 940 7699
E: peter.mar...@trilliumsoftware
Export it locally via Hive; if Hive is not working, install it and then
export your data locally ;)
2013/6/12 Hamza Asad
> basically data resides in dfs folder and to repair hadoop, i have to
> remove dfs folder. Now i have the data in dfs-backup folder but how can i
> access it?
>
>
Basically the data resides in the dfs folder, and to repair Hadoop I have to
remove the dfs folder. Now I have the data in the dfs-backup folder, but how
can I access it?
On Wed, Jun 12, 2013 at 1:29 PM, Hamza Asad wrote:
> I repaired my hadoop only, and my tables also shown in hive terminal but
> when i physic
I repaired my Hadoop only, and my tables are also shown in the Hive terminal,
but when I physically check via the browser (hdfs://location/to/my/table), I
don't find them there. Now tell me, what can I do?
On Wed, Jun 12, 2013 at 1:12 PM, Matouk IFTISSEN wrote:
> Hello,
> See your mMetastore to know where yo
Hello,
Check your metastore to find out where your data is stored in HDFS, and you
can recover it directly; otherwise, don't change your Hive metastore, and
repair your Hadoop system ;)
Matouk
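A sketch of the lookup being suggested (the table name is hypothetical); the "Location:" line in the output shows where the table's data lives in HDFS:

```sql
DESCRIBE FORMATTED my_table;
```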
2013/6/12 Hamza Asad
> My hadoop crashes suddenly and not coming out from safe mode. i take back
> up of my data,
My Hadoop crashed suddenly and was not coming out of safe mode. I took a
backup of my data, formatted it, and made my Hadoop cluster come out of safe
mode, but now I have no tables in the Hive warehouse. How can I
recover/transfer the Hive data-warehouse data?
--
*Muhammad Hamza Asad*
I'm not an expert in this, but I do see a Broken pipe writing to a local
file system on your task tracker. Is it possible that you're out of disk
space, or your EBS volume is failing? S3 doesn't appear to be part of that
stack trace.
On Wednesday, June 12, 2013, Ravi Shetye wrote:
> In last 4-5 o
In the last 4-5 days, the task tracker on one of my slave machines has gone
down a couple of times. It had been working fine for the past 4-5 months.
The cluster configuration is
4 machine cluster on AWS
1 m2.xlarge master
3 m2.xlarge slaves
The cluster is dedicated to run hive queries, with the data r