[ https://issues.apache.org/jira/browse/HIVE-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15692034#comment-15692034 ]

Hive QA commented on HIVE-15277:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12840348/file.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2275/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2275/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2275/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2016-11-24 03:06:11.758
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-2275/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2016-11-24 03:06:11.760
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 3dd28fb HIVE-15180: Extend JSONMessageFactory to store additional information about metadata objects on different table events (Sushanth Sowmyan, Vaibhav Gumashta reviewed by Thejas Nair)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 3dd28fb HIVE-15180: Extend JSONMessageFactory to store additional information about metadata objects on different table events (Sushanth Sowmyan, Vaibhav Gumashta reviewed by Thejas Nair)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2016-11-24 03:06:12.703
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
Going to apply patch with: patch -p1
patching file common/src/java/org/apache/hadoop/hive/common/JvmPauseMonitor.java
patching file common/src/java/org/apache/hadoop/hive/conf/Constants.java
patching file common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
Hunk #1 succeeded at 1929 (offset 4 lines).
Hunk #2 succeeded at 1943 (offset 4 lines).
patching file druid-handler/README.md
patching file druid-handler/pom.xml
patching file druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java
patching file druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandlerUtils.java
patching file druid-handler/src/java/org/apache/hadoop/hive/druid/HiveDruidOutputFormat.java
patching file druid-handler/src/java/org/apache/hadoop/hive/druid/HiveDruidQueryBasedInputFormat.java
patching file druid-handler/src/java/org/apache/hadoop/hive/druid/HiveDruidSplit.java
patching file druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidOutputFormat.java
patching file druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidQueryBasedInputFormat.java
patching file druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidRecordWriter.java
patching file druid-handler/src/java/org/apache/hadoop/hive/druid/io/HiveDruidSplit.java
patching file druid-handler/src/java/org/apache/hadoop/hive/druid/serde/DruidQueryRecordReader.java
patching file druid-handler/src/java/org/apache/hadoop/hive/druid/serde/DruidSerDe.java
patching file druid-handler/src/java/org/apache/hadoop/hive/druid/serde/DruidSerDeUtils.java
patching file druid-handler/src/test/org/apache/hadoop/hive/druid/DruidStorageHandlerTest.java
patching file druid-handler/src/test/org/apache/hadoop/hive/druid/TestDerbyConnector.java
patching file druid-handler/src/test/org/apache/hadoop/hive/druid/TestHiveDruidQueryBasedInputFormat.java
patching file druid-handler/src/test/org/apache/hadoop/hive/ql/io/DruidRecordWriterTest.java
patching file llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/TaskRunnerCallable.java
patching file pom.xml
patching file ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
patching file ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
patching file ql/src/java/org/apache/hadoop/hive/ql/hooks/LineageLogger.java
patching file ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java
patching file ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedDynPartitionTimeGranularityOptimizer.java
patching file ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
patching file ql/src/java/org/apache/hadoop/hive/ql/plan/PlanUtils.java
patching file ql/src/java/org/apache/hadoop/hive/ql/plan/TableDesc.java
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven
ANTLR Parser Generator  Version 3.5.2
Output file /data/hiveptest/working/apache-github-source-source/metastore/target/generated-sources/antlr3/org/apache/hadoop/hive/metastore/parser/FilterParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
DataNucleus Enhancer (version 4.1.6) for API "JDO"
DataNucleus Enhancer : Classpath
>>  /usr/share/maven/boot/plexus-classworlds-2.x.jar
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MDatabase
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MFieldSchema
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MType
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MTable
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MConstraint
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MSerDeInfo
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MOrder
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MColumnDescriptor
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MStringList
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MStorageDescriptor
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MPartition
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MIndex
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MRole
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MRoleMap
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MGlobalPrivilege
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MDBPrivilege
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MTablePrivilege
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MPartitionPrivilege
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MTableColumnPrivilege
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MPartitionColumnPrivilege
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MPartitionEvent
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MMasterKey
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MDelegationToken
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MTableColumnStatistics
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MVersionTable
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MResourceUri
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MFunction
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MNotificationLog
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MNotificationNextId
DataNucleus Enhancer completed with success for 30 classes. Timings : input=169 ms, enhance=204 ms, total=373 ms. Consult the log for full details
ANTLR Parser Generator  Version 3.5.2
Output file /data/hiveptest/working/apache-github-source-source/ql/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveLexer.java does not exist: must build /data/hiveptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g
org/apache/hadoop/hive/ql/parse/HiveLexer.g
Output file /data/hiveptest/working/apache-github-source-source/ql/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
org/apache/hadoop/hive/ql/parse/HiveParser.g
Generating vector expression code
Generating vector expression test code
[ERROR] Failed to execute goal on project hive-druid-handler: Could not resolve dependencies for project org.apache.hive:hive-druid-handler:jar:2.2.0-SNAPSHOT: The following artifacts could not be resolved: io.druid:druid-server:jar:0.9.3-SNAPSHOT, io.druid:java-util:jar:0.9.3-SNAPSHOT, io.druid:druid-processing:jar:0.9.3-SNAPSHOT, io.druid.extensions:druid-hdfs-storage:jar:0.9.3-SNAPSHOT, io.druid.extensions:mysql-metadata-storage:jar:0.9.3-SNAPSHOT, io.druid.extensions:postgresql-metadata-storage:jar:0.9.3-SNAPSHOT, io.druid:druid-indexing-hadoop:jar:0.9.3-SNAPSHOT: Could not find artifact io.druid:druid-server:jar:0.9.3-SNAPSHOT in apache.snapshots (http://repository.apache.org/snapshots) -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hive-druid-handler
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12840348 - PreCommit-HIVE-Build

> Teach Hive how to create/delete Druid segments 
> -----------------------------------------------
>
>                 Key: HIVE-15277
>                 URL: https://issues.apache.org/jira/browse/HIVE-15277
>             Project: Hive
>          Issue Type: Bug
>          Components: Druid integration
>    Affects Versions: 2.2.0
>            Reporter: slim bouguerra
>            Assignee: slim bouguerra
>         Attachments: file.patch
>
>
> We want to extend the DruidStorageHandler to support CTAS queries.
> In this implementation Hive will generate Druid segment files and insert the 
> metadata to signal the handoff to Druid.
> The syntax will be as follows:
> CREATE TABLE druid_table_1
> STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
> TBLPROPERTIES ("druid.datasource" = "datasourcename")
> AS <select `timecolumn` as `__time`, `dimension1`, `dimension2`, `metric1`, 
> `metric2`, ...>;
> This statement stores the results of the query <input_query> in a Druid 
> datasource named 'datasourcename'. One of the columns of the query needs to 
> be the time dimension, which is mandatory in Druid. In particular, we use the 
> same convention used by Druid: there needs to be a column named '__time' in 
> the result of the executed query, which will act as the time dimension 
> column in Druid. Currently, the time dimension column needs to be of 
> 'timestamp' type.
> Metrics can be of type long, double, or float, while dimensions are strings. 
> Keep in mind that Druid has a clear separation between dimensions and 
> metrics; therefore, if you have a Hive column that contains numbers and needs 
> to be presented as a dimension, use the cast operator to cast it to string. 
> This initial implementation interacts with the Druid metadata storage to 
> add/remove the table in Druid; the user needs to supply the metadata config as 
> --hiveconf hive.druid.metadata.password=XXX --hiveconf 
> hive.druid.metadata.username=druid --hiveconf 
> hive.druid.metadata.uri=jdbc:mysql://host/druid
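
For illustration, the following is a minimal sketch of the usage described in the quoted text above. The column names (timecolumn, numeric_col, dimension1-3, metric1, metric2), the source table, and the MySQL host are hypothetical placeholders; only the --hiveconf properties and the CTAS shape follow the description.

{noformat}
# Minimal sketch (hypothetical names/host): pass the Druid metadata-storage settings
# via --hiveconf as described above, and run a CTAS whose result contains the mandatory
# `__time` timestamp column; the numeric column intended as a dimension is cast to STRING,
# since numeric columns otherwise become Druid metrics.
hive --hiveconf hive.druid.metadata.username=druid \
     --hiveconf hive.druid.metadata.password=XXX \
     --hiveconf hive.druid.metadata.uri=jdbc:mysql://host/druid \
     -e '
CREATE TABLE druid_table_1
STORED BY "org.apache.hadoop.hive.druid.DruidStorageHandler"
TBLPROPERTIES ("druid.datasource" = "datasourcename")
AS
SELECT
  timecolumn AS `__time`,
  dimension1,
  dimension2,
  CAST(numeric_col AS STRING) AS dimension3,
  metric1,
  metric2
FROM source_table;
'
{noformat}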



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
