[ https://issues.apache.org/jira/browse/HADOOP-3601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12626494#action_12626494 ]
YoungWoo Kim commented on HADOOP-3601:
--------------------------------------
Hi Ashish,
First, thank you to all the Hive developers for this great contribution.
I'm testing Hive with Hadoop (0.19-dev).
This is what I found:
1. MySQL as a metastore does not work properly. Logs below:
ERROR JPOX.Datastore (Log4JLogger.java:error(117)) - Error thrown executing
CREATE TABLE `SD_PARAMS`
(
`STORAGE_DESC_ID_OID` BIGINT NOT NULL,
`PARAM_KEY` VARCHAR(256) BINARY NOT NULL,
`PARAM_VALUE` VARCHAR(1024) BINARY NULL,
PRIMARY KEY (`STORAGE_DESC_ID_OID`,`PARAM_KEY`)
) ENGINE=INNODB : Specified key was too long; max key length is 767 bytes
com.mysql.jdbc.exceptions.MySQLSyntaxErrorException: Specified key was too
long; max key length is 767 bytes
This is not a Hive bug but MySQL's own limitation (MySQL 5.0.x, UTF-8 with InnoDB).
I've switched the RDBMS to PostgreSQL and it works fine.
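For anyone who wants to stick with MySQL, one possible workaround (just a sketch, I
have not verified it against the Hive metastore schema) is to create the metastore
database with a single-byte character set so that the composite key stays under
InnoDB's 767-byte limit, e.g.:
CREATE DATABASE metastore DEFAULT CHARACTER SET latin1;
With UTF-8, MySQL 5.0 reserves up to 3 bytes per character, so the 256-character
PARAM_KEY column alone can take 768 bytes before the 8-byte BIGINT is added; with
latin1 the same key needs only 264 bytes. The database name "metastore" above is
just an example, it should match whatever is configured in the JDO connection URL.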
2. "DESCRIBE TABLE 'table name'" statement does not work but "DESCRIBE 'table
name'" statement works.
3. The CLI can't handle non-English characters for now.
hive> select a.* from test a where a.b='김영우';
Total MapReduce jobs = 1
Starting Job = job_200808271709_0004, Tracking URL =
http://localhost:50030/jobdetails.jsp?jobid=job_200808271709_0004
Kill Command = /usr/local/hadoop/bin/hadoop job
-Dmapred.job.tracker=localhost:54311 -kill job_200808271709_0004
map = 0%, reduce =0%
map = 50%, reduce =0%
map = 100%, reduce =100%
Ended Job = job_200808271709_0004
Moving data to: /tmp/hive-hadoop/5847592.10000
OK
1 ���
hive>
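I suspect this is a character-encoding problem on the client side rather than in the
stored data. A guess (not verified) is that forcing the CLI's JVM to UTF-8 before
starting the shell might help, for example:
export HADOOP_OPTS="-Dfile.encoding=UTF-8"
but I have not confirmed whether this actually fixes the garbled output above.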
thanks.
-yw kim
> Hive as a contrib project
> -------------------------
>
> Key: HADOOP-3601
> URL: https://issues.apache.org/jira/browse/HADOOP-3601
> Project: Hadoop Core
> Issue Type: Wish
> Affects Versions: 0.17.2
> Environment: N/A
> Reporter: Joydeep Sen Sarma
> Priority: Minor
> Attachments: hive.tgz, hive.tgz, HiveTutorial.pdf
>
> Original Estimate: 1080h
> Remaining Estimate: 1080h
>
> Hive is a data warehouse built on top of flat files (stored primarily in
> HDFS). It includes:
> - Data Organization into Tables with logical and hash partitioning
> - A Metastore to store metadata about Tables/Partitions, etc.
> - A SQL-like query language over object data stored in Tables
> - DDL commands to define and load external data into tables
> Hive's query language is executed using Hadoop map-reduce as the execution
> engine. Queries can use either single-stage or multi-stage map-reduce. Hive
> has a native format for tables, but can handle any data set (for example
> JSON/Thrift/XML) using an IO library framework.
> Hive uses Antlr for query parsing, Apache JEXL for expression evaluation, and
> may use Apache Derby as an embedded database for the MetaStore. Antlr has a BSD
> license and should be compatible with the Apache license.
> We are currently thinking of contributing to the 0.17 branch as a contrib
> project (since that is the version under which it will get tested internally),
> but are looking for advice on the best release path.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.