[jira] [Created] (HIVE-8712) Cross schema query fails when table has index

2014-11-03 Thread Bill Oliver (JIRA)
Bill Oliver created HIVE-8712:
-

 Summary: Cross schema query fails when table has index
 Key: HIVE-8712
 URL: https://issues.apache.org/jira/browse/HIVE-8712
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Bill Oliver


I have two schemas, default and accesstesting.

I create a table with an index in the accesstesting schema.

When I query that table from the default schema using a WHERE clause, the query
fails:

use default;
drop table salary_hive;

use accesstesting;
drop table salary_hive;

use accesstesting;
create table salary_hive (idnum int, salary int, startdate timestamp, enddate 
timestamp, jobcode char(20));
create index salary_hive_idnum_index on table salary_hive(idnum) as 'compact' 
with deferred rebuild;
select * from accesstesting.salary_hive where 0=1;

use default;
select * from accesstesting.salary_hive where 0=1;

FAILED: SemanticException 
org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found 
accesstesting__salary_hive_salary_hive_idnum_index__
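For reference, the table name in the error message suggests how Hive derives the hidden index table's name: the database name, the base table name, and the index name joined with underscores. A minimal sketch of that (assumed, reconstructed from the error text) naming rule, where buildIndexTableName is a hypothetical helper:

```java
public class IndexTableName {
    // Hypothetical reconstruction of the index-table naming scheme implied by
    // the error message above: <db>__<table>_<index>__
    static String buildIndexTableName(String db, String table, String index) {
        return db + "__" + table + "_" + index + "__";
    }

    public static void main(String[] args) {
        // Matches the name HiveServer2 failed to resolve in this report.
        System.out.println(buildIndexTableName(
            "accesstesting", "salary_hive", "salary_hive_idnum_index"));
        // prints accesstesting__salary_hive_salary_hive_idnum_index__
    }
}
```

The failure pattern is consistent with the index table name being resolved against the current database (default) instead of the base table's database (accesstesting).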




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-7134) Implement ResultSet.getBytes()

2014-05-28 Thread Bill Oliver (JIRA)
Bill Oliver created HIVE-7134:
-

 Summary: Implement ResultSet.getBytes()
 Key: HIVE-7134
 URL: https://issues.apache.org/jira/browse/HIVE-7134
 Project: Hive
  Issue Type: Improvement
Reporter: Bill Oliver


I'd like to see an implementation of ResultSet.getBytes().

Here is my (untested) implementation. This could certainly be improved upon.

  // Requires: java.io.ByteArrayOutputStream, java.io.ObjectOutputStream,
  // java.io.IOException
  public byte[] getBytes(int columnIndex) throws SQLException {
    Object value = getColumnValue(columnIndex);
    if (wasNull) {
      return null;
    }
    if (value instanceof byte[]) {
      return (byte[]) value;
    }
    try {
      // Fall back to Java serialization; this works for any value that
      // implements java.io.Serializable (Number, Date, Timestamp, String, ...).
      ByteArrayOutputStream b = new ByteArrayOutputStream();
      ObjectOutputStream o = new ObjectOutputStream(b);
      o.writeObject(value);
      o.flush(); // ensure the object stream is fully written before reading the buffer
      return b.toByteArray();
    } catch (IOException e) {
      throw new SQLException(e);
    }
  }

  public byte[] getBytes(String columnName) throws SQLException {
    return getBytes(findColumn(columnName));
  }
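The serialization fallback above can be exercised on its own. Here is a self-contained round-trip check (not Hive-specific) showing that a Serializable value survives the ObjectOutputStream cycle the proposed implementation relies on:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class SerializationRoundTrip {
    // Serialize a value the same way the proposed getBytes() does.
    static byte[] toBytes(Object value) throws IOException {
        ByteArrayOutputStream b = new ByteArrayOutputStream();
        ObjectOutputStream o = new ObjectOutputStream(b);
        o.writeObject(value);
        o.flush();
        return b.toByteArray();
    }

    // Inverse operation, used here only to verify the bytes are well-formed.
    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        return new ObjectInputStream(new ByteArrayInputStream(bytes)).readObject();
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = toBytes(Integer.valueOf(42));
        System.out.println(fromBytes(bytes)); // prints 42
    }
}
```

Note that a consumer of these bytes must deserialize with ObjectInputStream; the bytes are the Java serialization form, not a raw encoding of the value.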





[jira] [Created] (HIVE-6992) Implement PreparedStatement.getMetaData(), getParmeterMetaData()

2014-04-30 Thread Bill Oliver (JIRA)
Bill Oliver created HIVE-6992:
-

 Summary: Implement PreparedStatement.getMetaData(), 
getParmeterMetaData()
 Key: HIVE-6992
 URL: https://issues.apache.org/jira/browse/HIVE-6992
 Project: Hive
  Issue Type: New Feature
  Components: JDBC
Reporter: Bill Oliver


It would be very helpful to have methods PreparedStatement.getMetaData() and 
also PreparedStatement.getParameterMetaData() implemented.

I especially would like PreparedStatement.getMetaData() implemented, as I could 
prepare a SQL statement and then get information about the result set, as well 
as confirmation that the query is valid.

I am pretty sure this information is available in some form. When you do an 
"EXPLAIN query", the explain operation shows information about the result set 
including the column name/aliases and the column types.
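Until getMetaData() exists, a client could approximate it by parsing EXPLAIN output. A rough sketch, assuming plan text with expr:/type: lines as in the plans Hive 0.12 prints (the textual plan format is version-dependent and not a stable API, so this is a heuristic, not a recommended implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExplainTypes {
    // Pull "type: <t>" entries out of an EXPLAIN plan's Select Operator
    // section. Assumes the textual plan layout shown in HIVE-6969 below;
    // Hive does not guarantee this format across versions.
    static List<String> columnTypes(String plan) {
        List<String> types = new ArrayList<>();
        Matcher m = Pattern.compile("type:\\s*(\\S+)").matcher(plan);
        while (m.find()) {
            types.add(m.group(1));
        }
        return types;
    }

    public static void main(String[] args) {
        String plan = "Select Operator\n"
                + "  expressions:\n"
                + "    expr: 'abc'\n"
                + "    type: string\n";
        System.out.println(columnTypes(plan)); // prints [string]
    }
}
```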

Thank you

-bill





[jira] [Created] (HIVE-6969) SQL literal should be a fixed length, not string

2014-04-24 Thread Bill Oliver (JIRA)
Bill Oliver created HIVE-6969:
-

 Summary: SQL literal should be a fixed length, not string
 Key: HIVE-6969
 URL: https://issues.apache.org/jira/browse/HIVE-6969
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
 Environment: I tried this with Hive 0.12 on CDH 5.0.
Reporter: Bill Oliver


I checked to see if this was already reported, but could not find it.

Consider this simple query with a SQL literal:

hive> explain select 'abc' from pplsqlb0;
OK
ABSTRACT SYNTAX TREE:
  (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME pplsqlb0))) (TOK_INSERT 
(TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR 'abc'

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
Map Reduce
  Alias -> Map Operator Tree:
pplsqlb0
  TableScan
alias: pplsqlb0
Select Operator
  expressions:
expr: 'abc'
type: string
  outputColumnNames: _col0
  File Output Operator
compressed: false
GlobalTableId: 0
table:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
Fetch Operator
  limit: -1


'abc' is treated as a STRING type. It would be better if it were handled as a 
CHAR(3) type; if the length exceeds 255, I would like to see it as a VARCHAR(n) 
type.

Also, string functions applied to the literal, such as TRIM('abc'), should 
reflect the correct length, and SUBSTRING('abc', 1, 1) should return CHAR(1).
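The requested rule can be stated precisely. A sketch of the proposed (not current) literal-typing logic, where inferLiteralType is a hypothetical helper and 255 is the CHAR length limit; lengths beyond the VARCHAR maximum would presumably still fall back to STRING, which is omitted here:

```java
public class LiteralType {
    // Proposed rule from this report: a string literal of length n becomes
    // CHAR(n) when n <= 255 (the CHAR limit), otherwise VARCHAR(n).
    static String inferLiteralType(String literal) {
        int n = literal.length();
        return n <= 255 ? "char(" + n + ")" : "varchar(" + n + ")";
    }

    public static void main(String[] args) {
        System.out.println(inferLiteralType("abc"));           // prints char(3)
        System.out.println(inferLiteralType("a".repeat(300))); // prints varchar(300)
    }
}
```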

Thanks for any help on this!

-bill


