[jira] [Created] (HIVE-21151) Fix support of dot in quoted identifiers

2019-01-22 Thread Zoltan Haindrich (JIRA)
Zoltan Haindrich created HIVE-21151:
---

 Summary: Fix support of dot in quoted identifiers
 Key: HIVE-21151
 URL: https://issues.apache.org/jira/browse/HIVE-21151
 Project: Hive
  Issue Type: Improvement
Reporter: Zoltan Haindrich


Dots should be allowed in quoted identifiers; but because there are some methods 
which rely on splitting the "dbtable" string on the dot, HIVE-16907 has removed 
that option.

https://github.com/apache/hive/blob/dfd63d97902b359e1643e955a4d070ac983debd5/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L2180
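
A small illustration of why splitting on the dot is ambiguous (not the actual 
Hive code; the linked Utilities method is where the restriction lives):

{code:java}
// Table named `my.table` in database `default`:
String dbTable = "default.my.table";
String[] parts = dbTable.split("\\.");
// parts = ["default", "my", "table"] -- three tokens, so the db/table boundary is lost
{code}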




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 69367: Query based compactor for full CRUD Acid tables

2019-01-22 Thread Eugene Koifman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/69367/#review212198
---




common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
Lines 2702 (patched)


I think this needs a better description



itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestAcidOnTez.java
Lines 848 (patched)


Why is this needed?  Shouldn't the compiler set this?



itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCrudCompactorOnTez.java
Lines 132 (patched)


What is 'orc.rows.between.memory.checks'='1' for?



itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCrudCompactorOnTez.java
Lines 196 (patched)


I don't understand the logic here.  Since major compaction was done above, 
there should only be base/bucket0 and base/bucket1 so there is nothing for this 
query to group.  Also, I would think SPLIT_GROUPING_MODE should be "query" 
here...  if it's not, where is it set?



itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCrudCompactorOnTez.java
Lines 255 (patched)


nit: this could just do ShowCompactions to see if anything got queued up



itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCrudCompactorOnTez.java
Lines 309 (patched)


How does (3,3,x) end up in bucket0?  With bucketing_version=1 it should be 
(val mod num_buckets) = bucketId.
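(For example, assuming the table has 2 buckets, 3 mod 2 = 1 under version 1 
bucketing, so (3,3,x) would be expected in bucket1 rather than bucket0.)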



itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCrudCompactorOnTez.java
Lines 312 (patched)


And similarly, (4,4) is in bucket1...



itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCrudCompactorOnTez.java
Lines 315 (patched)


Since you just ran a major compaction, there is only 1 file per bucket, so 
would the split grouper do anything?  Would there be > 1 split?



itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCrudCompactorOnTez.java
Lines 325 (patched)


It would be useful to add a test of a table w/o buckets.  Ideally one that 
has > 1 reducer during Insert so that there is > 1 output file.  I think there 
is some property to specify the number of reducers...  not sure if Tez respects it.



itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCrudCompactorOnTez.java
Lines 328 (patched)


Should this be set in setUp()?
Alternatively, should conf be cloned?  It seems error-prone as it modifies 
state outside the method.



ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HiveSplitGenerator.java
Lines 189 (patched)


What is this for?  It seems fragile since it forces some behavior on all 
tests.  Do any newly added tests rely on this?



ql/src/java/org/apache/hadoop/hive/ql/exec/tez/SplitGrouper.java
Line 168 (original), 172 (patched)


nit: are empty param decls needed?



ql/src/java/org/apache/hadoop/hive/ql/exec/tez/SplitGrouper.java
Lines 202 (patched)


It should be rowIdOffset or splitStart.  For 'original' splits (without acid 
meta columns in the file) SyntheticBucketProperties should always be there, and 
so rowIdOffset is there.  For 'native' acid files, OrcSplit doesn't have the 
first rowId in the split, so splitStart is used to sort.
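
A self-contained illustration of that ordering (these field names are 
stand-ins, not the real OrcSplit/SplitGrouper API): the sort key is rowIdOffset 
when the synthetic acid metadata is present, and splitStart otherwise.

    import java.util.Comparator;

    // Stand-in for the per-split information the comment describes.
    final class AcidSplitKey {
      final Long rowIdOffset;  // set for 'original' splits via SyntheticBucketProperties
      final long splitStart;   // byte offset in the file; used for native acid splits

      AcidSplitKey(Long rowIdOffset, long splitStart) {
        this.rowIdOffset = rowIdOffset;
        this.splitStart = splitStart;
      }

      // Sort by rowIdOffset when available, otherwise fall back to splitStart.
      static final Comparator<AcidSplitKey> ACID_ORDER =
          Comparator.comparingLong(k -> k.rowIdOffset != null ? k.rowIdOffset : k.splitStart);
    }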



ql/src/java/org/apache/hadoop/hive/ql/exec/tez/SplitGrouper.java
Lines 203 (patched)


Would be useful to describe what that invariant is.



ql/src/java/org/apache/hadoop/hive/ql/exec/tez/SplitGrouper.java
Lines 241 (patched)


This is important to add



ql/src/java/org/apache/hadoop/hive/ql/exec/tez/SplitGrouper.java
Lines 281 (patched)


what is this TODO for?



ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcRawRecordMerger.java
Line 1233 (original)


It seems that the class-level JavaDoc is now out of sync.



ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcRecordUpdater.java
Lines 637 (patched)


What throws the IAE?  Above I see
if (!reader.hasMetadataValue(OrcRecordUpdater.ACID_KEY_INDEX_NAME)) {


[jira] [Created] (HIVE-21150) Don't block Cleaner at the end of the loop

2019-01-22 Thread Jaume M (JIRA)
Jaume M created HIVE-21150:
--

 Summary: Don't block Cleaner at the end of the loop
 Key: HIVE-21150
 URL: https://issues.apache.org/jira/browse/HIVE-21150
 Project: Hive
  Issue Type: Improvement
  Components: Transactions
Affects Versions: 3.1.1
Reporter: Jaume M


After HIVE-21052 the Cleaner gets blocked at the end of the loop waiting for 
all the clean tasks to finish; once this happens it can start again and submit 
new clean tasks.
The problem is that if a clean task takes very long, the Cleaner gets blocked 
waiting for that one task when it could be submitting new tasks.
Some ideas about how to implement this are in [this comment| 
https://issues.apache.org/jira/browse/HIVE-21052?focusedCommentId=16749161=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16749161]
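
A minimal sketch of one possible approach (class and method names here are made 
up for illustration, not the actual Cleaner code): keep the submitted futures in 
a set, reap the completed ones at the start of each cycle, and never block on 
the slow ones.

{code:java}
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class NonBlockingCleanerSketch {
  private final ExecutorService pool = Executors.newFixedThreadPool(4);
  private final Set<Future<?>> inFlight = ConcurrentHashMap.newKeySet();

  void runOneCycle(List<Runnable> readyToClean) {
    // Drop whatever finished since the last cycle; leave long-running tasks alone.
    inFlight.removeIf(Future::isDone);
    // Submit new clean tasks immediately instead of waiting for the old ones.
    for (Runnable cleanTask : readyToClean) {
      inFlight.add(pool.submit(cleanTask));
    }
  }
}
{code}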
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 69808: Refactor LlapServiceDriver

2019-01-22 Thread Slim Bouguerra

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/69808/#review212207
---




llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCopyAuxJars.java
Lines 83 (patched)


Are you trying to optimize for the compiler?  I think you can inline that string.



llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCopyAuxJars.java
Lines 84 (patched)


Please do not print to the System output stream; the Logger is enough.



llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCopyAuxJars.java
Lines 89 (patched)


final field ?



llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCopyAuxJars.java
Lines 130 (patched)


Do you mean if (hasException)?
I am not sure I am getting the logic.


- Slim Bouguerra


On Jan. 22, 2019, 11:22 p.m., Miklos Gergely wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/69808/
> ---
> 
> (Updated Jan. 22, 2019, 11:22 p.m.)
> 
> 
> Review request for hive and Ashutosh Chauhan.
> 
> 
> Bugs: HIVE-21149
> https://issues.apache.org/jira/browse/HIVE-21149
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> LlapServiceDriver is one monolithic class doing several things; it needs to be 
> refactored in order to make it clearer how it works.
> 
> 
> Diffs
> -
> 
>   bin/ext/llap.sh 91a54b3 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapOptionsProcessor.java
>  2445075 
>   llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java 
> ffdd340 
>   llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapSliderUtils.java 
> bdec1c1 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCopyAuxJars.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCopyConfigs.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCopyLocalJars.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCreateUdfFile.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskDownloadTezJars.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/LlapConfigJsonCreator.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/LlapServiceCommandLine.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/LlapServiceDriver.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/LlapTarComponentGatherer.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/package-info.java
>  PRE-CREATION 
>   
> llap-server/src/test/org/apache/hadoop/hive/llap/cli/service/TestLlapServiceCommandLine.java
>  PRE-CREATION 
>   
> llap-server/src/test/org/apache/hadoop/hive/llap/cli/service/package-info.java
>  PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/69808/diff/1/
> 
> 
> Testing
> ---
> 
> Tested on actual cluster, llap still starts up fine.
> 
> 
> Thanks,
> 
> Miklos Gergely
> 
>



Review Request 69808: Refactor LlapServiceDriver

2019-01-22 Thread Miklos Gergely

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/69808/
---

Review request for hive and Ashutosh Chauhan.


Bugs: HIVE-21149
https://issues.apache.org/jira/browse/HIVE-21149


Repository: hive-git


Description
---

LlapServiceDriver is one monolithic class doing several things; it needs to be 
refactored in order to make it clearer how it works.


Diffs
-

  bin/ext/llap.sh 91a54b3 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapOptionsProcessor.java 
2445075 
  llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java 
ffdd340 
  llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapSliderUtils.java 
bdec1c1 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCopyAuxJars.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCopyConfigs.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCopyLocalJars.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCreateUdfFile.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskDownloadTezJars.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/LlapConfigJsonCreator.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/LlapServiceCommandLine.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/LlapServiceDriver.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/LlapTarComponentGatherer.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/package-info.java 
PRE-CREATION 
  
llap-server/src/test/org/apache/hadoop/hive/llap/cli/service/TestLlapServiceCommandLine.java
 PRE-CREATION 
  
llap-server/src/test/org/apache/hadoop/hive/llap/cli/service/package-info.java 
PRE-CREATION 


Diff: https://reviews.apache.org/r/69808/diff/1/


Testing
---

Tested on actual cluster, llap still starts up fine.


Thanks,

Miklos Gergely



[jira] [Created] (HIVE-21149) Refactor LlapServiceDriver

2019-01-22 Thread Miklos Gergely (JIRA)
Miklos Gergely created HIVE-21149:
-

 Summary: Refactor LlapServiceDriver
 Key: HIVE-21149
 URL: https://issues.apache.org/jira/browse/HIVE-21149
 Project: Hive
  Issue Type: Improvement
  Components: Hive
Affects Versions: 3.1.2
Reporter: Miklos Gergely
Assignee: Miklos Gergely
 Fix For: 3.1.2


LlapServiceDriver is one monolithic class doing several things; it needs to be 
refactored in order to make it clearer how it works.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-21148) Remove Use StandardCharsets Where Possible

2019-01-22 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HIVE-21148:
--

 Summary: Remove Use StandardCharsets Where Possible
 Key: HIVE-21148
 URL: https://issues.apache.org/jira/browse/HIVE-21148
 Project: Hive
  Issue Type: Improvement
Affects Versions: 4.0.0
Reporter: BELUGA BEHR
 Fix For: 4.0.0


Starting with Java 1.7, JDKs must support a set of standard charsets.  When 
this facility is used instead of passing the name (string) of the character 
set, there is no need to catch an {{UnsupportedEncodingException}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-21147) Remove Contrib RegexSerDe

2019-01-22 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HIVE-21147:
--

 Summary: Remove Contrib RegexSerDe
 Key: HIVE-21147
 URL: https://issues.apache.org/jira/browse/HIVE-21147
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Affects Versions: 4.0.0
Reporter: BELUGA BEHR
 Fix For: 4.0.0


https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/contrib/src/java/org/apache/hadoop/hive/contrib/serde2/RegexSerDe.java

https://github.com/apache/hive/blob/ae008b79b5d52ed6a38875b73025a505725828eb/serde/src/java/org/apache/hadoop/hive/serde2/RegexSerDe.java

Merge any differences in functionality and remove the version in the 'contrib' 
library.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-21146) Enforce TransactionBatch size=1 for blob stores

2019-01-22 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-21146:
-

 Summary: Enforce TransactionBatch size=1 for blob stores
 Key: HIVE-21146
 URL: https://issues.apache.org/jira/browse/HIVE-21146
 Project: Hive
  Issue Type: Bug
  Components: Streaming, Transactions
Affects Versions: 3.0.0
Reporter: Eugene Koifman


Streaming Ingest API supports a concept of {{TransactionBatch}} where N 
transactions can be opened at once and the data in all of them will be written 
to the same delta_x_y directory, where each transaction in the batch can be 
committed/aborted independently.  The implementation relies on 
{{FSDataOutputStream.hflush()}} (called from {{OrcRecordUpdater}}), which is 
available on HDFS but is often implemented as a no-op in blob-store-backed 
{{FileSystem}} objects.

Need to add a check to the {{HiveStreamingConnection()}} constructor to raise an 
error if {{builder.transactionBatchSize > 1}} and the target table/partitions 
are backed by something that doesn't support {{hflush()}}.
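
A rough sketch of what such a check could look like (the probe-file approach 
and the helper name are illustrative assumptions, not the actual patch):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StreamCapabilities;

public final class HflushProbe {
  private HflushProbe() {}

  /** Returns true if a stream created under 'dir' reports hflush support. */
  public static boolean supportsHflush(Configuration conf, Path dir) throws IOException {
    FileSystem fs = dir.getFileSystem(conf);
    Path probe = new Path(dir, "_hflush_probe_" + System.nanoTime());
    try (FSDataOutputStream out = fs.create(probe, true)) {
      return out.hasCapability(StreamCapabilities.HFLUSH);
    } finally {
      fs.delete(probe, false);
    }
  }
}
{code}

The constructor check would then be along the lines of: if 
{{builder.transactionBatchSize > 1}} and the probe returns false, throw an 
error instead of accepting the connection.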



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-21145) Enable cbo to use runtime statistics during re-optimization

2019-01-22 Thread Zoltan Haindrich (JIRA)
Zoltan Haindrich created HIVE-21145:
---

 Summary: Enable cbo to use runtime statistics during 
re-optimization
 Key: HIVE-21145
 URL: https://issues.apache.org/jira/browse/HIVE-21145
 Project: Hive
  Issue Type: Improvement
  Components: CBO, Statistics
Reporter: Zoltan Haindrich
Assignee: Zoltan Haindrich


This could enable reordering joins according to runtime row counts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-21144) ODBC with prepared statement fails inside a CTE

2019-01-22 Thread Guillaume (JIRA)
Guillaume created HIVE-21144:


 Summary: ODBC with prepared statement fails inside a CTE
 Key: HIVE-21144
 URL: https://issues.apache.org/jira/browse/HIVE-21144
 Project: Hive
  Issue Type: Bug
  Components: ODBC
Affects Versions: 3.1.0
Reporter: Guillaume


I am trying to execute a very simple query, using python/pyodbc on Windows 
(with a working system-wide odbc DSN: HiveProd):
{code:python}
import pyodbc

cnxn = pyodbc.connect('DSN=HiveProd', autocommit=True)
cursor = cnxn.cursor()

# works
q = "select ? as lic, ? as cpg"
# fails
q = "with init as (select ? as lic, ? as cpg) select * from init"

cursor.execute(q, '1', 'some string')
for row in cursor:
    print(row.lic, row.cpg)
{code}
Basically, create an odbc connection, run a query with a prepared statement and 
print the result.

A basic query works fine. If I put this query inside a CTE, I get:

{{    cursor.execute("with init as (select ? as lic, ? as cpg) select * from 
init", '1', 'some string') pyodbc.ProgrammingError: ('42000', "[42000] 
[Hortonworks][Hardy] (80) Syntax or semantic analysis error thrown in server  
while executing query. Error message from server: Error while compiling 
statement: FAILED: ParseException line 1:21 can not recognize input near '?' 
'as' 'lic' in select clause (80) (SQLPrepare)")}}

This is not specific to python as I get the same issue with .Net. 

Trying the same with JDBC works fine.

Testing on Hive 3.1.0 from HDP 3.1.0.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-21143) Add rewrite rules to open/close Between operators

2019-01-22 Thread Zoltan Haindrich (JIRA)
Zoltan Haindrich created HIVE-21143:
---

 Summary: Add rewrite rules to open/close Between operators
 Key: HIVE-21143
 URL: https://issues.apache.org/jira/browse/HIVE-21143
 Project: Hive
  Issue Type: Improvement
Reporter: Zoltan Haindrich
Assignee: Zoltan Haindrich


During query compilation it's better to have BETWEEN statements in open form, 
as Calcite currently does not consider them during simplification.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-21142) Druid handler may miss results when time constrained by and/ors

2019-01-22 Thread Zoltan Haindrich (JIRA)
Zoltan Haindrich created HIVE-21142:
---

 Summary: Druid handler may miss results when time constrained by 
and/ors
 Key: HIVE-21142
 URL: https://issues.apache.org/jira/browse/HIVE-21142
 Project: Hive
  Issue Type: Bug
Reporter: Zoltan Haindrich


For the following query:

{code}
FROM druid_table_alltypesorc
WHERE ('1968-01-01 00:00:00' <= `__time` AND `__time` <= '1970-01-01 00:00:00')
OR ('1968-02-01 00:00:00' <= `__time` AND `__time` <= '1970-04-01 
00:00:00') ORDER BY `__time` ASC LIMIT 10;
{code}

the druid query is:
{code}
druid.query.json 
{"queryType":"scan","dataSource":"default.druid_table_alltypesorc","intervals":["1900-01-01T00:00:00.000Z/1968-02-01T08:00:00.001Z"],"virtualColumns":[{"type":"expression","name":"vc","expression":"\"__time\"","outputType":"LONG"}],"columns":["vc"],"resultFormat":"compactedList"}
{code}

which has an invalid interval: 
{{"intervals":["1900-01-01T00:00:00.000Z/1968-02-01T08:00:00.001Z"]}}, which 
prevents valid results from 1969 from appearing.

Note: when using BETWEEN, the interval is handled correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] hive pull request #520: OpenSession

2019-01-22 Thread gsralex
GitHub user gsralex opened a pull request:

https://github.com/apache/hive/pull/520

OpenSession 

    TTransport transport = new TSocket("m1", 1);
    transport.open();
    TCLIService.Client client = new TCLIService.Client(new TBinaryProtocol(transport));
    TOpenSessionReq openSessionReq = new TOpenSessionReq();
    TOpenSessionResp resp = client.OpenSession(openSessionReq); // this line throws the inner exception


Exception in thread "main" org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:380)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:230)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:168)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.OpenSession(TCLIService.java:155)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/hive HIVE-4115

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/520.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #520


commit 6b6ae79cefa1b85fae9c60c8ea609e2d9a788326
Author: Amareshwari Sri Ramadasu 
Date:   2013-03-14T06:14:47Z

Branching for HIVE-4115

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/HIVE-4115@1456340 
13f79535-47bb-0310-9956-ffa450edef68

commit 38852a48c048eb11f8ecd702697dbf12db8850e7
Author: Amareshwari Sri Ramadasu 
Date:   2013-03-14T08:09:31Z

Add cube metastore

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/HIVE-4115@1456360 
13f79535-47bb-0310-9956-ffa450edef68

commit bcd556dcae6c44155f9401bbca78529be372a592
Author: Amareshwari Sri Ramadasu 
Date:   2013-03-14T08:16:24Z

Add cube query processing

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/HIVE-4115@1456361 
13f79535-47bb-0310-9956-ffa450edef68

commit 60b6e53916e62b58811fcb760e30ac7d2772c570
Author: Amareshwari Sri Ramadasu 
Date:   2013-03-25T10:30:41Z

Make CubeMeasure and CubeDimension abstract classes, add 
HierarchicalDimension which extends CubeDimnsion

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/HIVE-4115@1460595 
13f79535-47bb-0310-9956-ffa450edef68

commit 03bb834886d20395902bb5738262c61ca29debe4
Author: Amareshwari Sri Ramadasu 
Date:   2013-04-01T07:00:00Z

Merging r1456340 through r1463086 into HIVE-4115 branch

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/HIVE-4115@1463091 
13f79535-47bb-0310-9956-ffa450edef68

commit d02e5db417ed4c261685e85dc3f7a55ea59769d0
Author: Amareshwari Sri Ramadasu 
Date:   2013-04-02T07:00:22Z

Add partition resolver

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/HIVE-4115@1463407 
13f79535-47bb-0310-9956-ffa450edef68

commit 7fcc1387033353882b8dff288712e615261e0a13
Author: Amareshwari Sri Ramadasu 
Date:   2013-04-03T05:06:47Z

Add storage table resolver

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/HIVE-4115@1463825 
13f79535-47bb-0310-9956-ffa450edef68

commit e68f0401242b3d031783e740c3646f4296aa3a12
Author: Amareshwari Sri Ramadasu 
Date:   2013-04-05T10:34:08Z

Merging r1463087 through r1464904 into branch HIVE-4115

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/HIVE-4115@1464915 
13f79535-47bb-0310-9956-ffa450edef68

commit 82e26730291de424b8aff6ae7bead85e24e9d8ae
Author: Amareshwari Sri Ramadasu 
Date:   2013-04-05T10:39:27Z

Add support for dimension only queries

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/HIVE-4115@1464916 
13f79535-47bb-0310-9956-ffa450edef68

commit 5c4de8ff30b4ceb01a568c4d8432fbba5e7615c4
Author: Amareshwari Sri Ramadasu 
Date:   2013-04-05T10:44:14Z

Add supported storages configuration

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/HIVE-4115@1464920 
13f79535-47bb-0310-9956-ffa450edef68

commit 0f090ac7d799ef696602eaa13209f1a0e25a62cb
Author: Amareshwari Sri Ramadasu 
Date:   2013-04-17T07:29:38Z

Merging r1464905 through r1468761 into HIVE-4115

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/HIVE-4115@1468783 
13f79535-47bb-0310-9956-ffa450edef68

commit 79fd560943f9eb5480416765a120c71928f8ed5a
Author: Amareshwari Sri Ramadasu 
Date:   2013-04-18T10:53:14Z

Add test with monthly partition

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/HIVE-4115@1469273