[ https://issues.apache.org/jira/browse/HIVE-10242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490923#comment-14490923 ]

Hive QA commented on HIVE-10242:
--------------------------------



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12724718/HIVE-10242.2.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3375/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3375/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3375/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-3375/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'ql/src/test/results/clientpositive/list_bucket_dml_5.q.java1.7.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_query_oneskew_2.q.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_query_multiskew_2.q.out'
Reverted 'ql/src/test/results/clientpositive/input_part9.q.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_dml_12.q.java1.7.out'
Reverted 'ql/src/test/results/clientpositive/index_stale_partitioned.q.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_dml_1.q.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_dml_4.q.java1.7.out'
Reverted 'ql/src/test/results/clientpositive/rand_partitionpruner3.q.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_dml_3.q.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_query_oneskew_1.q.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_dml_11.q.java1.7.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_query_multiskew_1.q.out'
Reverted 'ql/src/test/results/clientpositive/input42.q.out'
Reverted 'ql/src/test/results/clientpositive/union_view.q.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_query_oneskew_3.q.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_query_multiskew_3.q.out'
Reverted 'ql/src/test/results/clientpositive/annotate_stats_part.q.out'
Reverted 'ql/src/test/results/clientpositive/truncate_column_list_bucket.q.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_dml_2.q.java1.7.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_dml_9.q.java1.7.out'
Reverted 'ql/src/test/results/clientpositive/list_bucket_dml_13.q.java1.7.out'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20S/target shims/0.23/target shims/aggregator/target shims/common/target shims/scheduler/target packaging/target hbase-handler/target testutils/target jdbc/target metastore/target itests/target itests/thirdparty itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target itests/hive-unit-hadoop2/target itests/hive-minikdc/target itests/hive-jmh/target itests/hive-unit/target itests/custom-serde/target itests/util/target itests/qtest-spark/target hcatalog/target hcatalog/core/target hcatalog/streaming/target hcatalog/server-extensions/target hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target accumulo-handler/target hwi/target common/target common/src/gen spark-client/target service/target contrib/target serde/target beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1672859.

At revision 1672859.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12724718 - PreCommit-HIVE-TRUNK-Build

> ACID: insert overwrite prevents create table command
> ----------------------------------------------------
>
>                 Key: HIVE-10242
>                 URL: https://issues.apache.org/jira/browse/HIVE-10242
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 1.0.0
>            Reporter: Eugene Koifman
>            Assignee: Eugene Koifman
>         Attachments: HIVE-10242.2.patch, HIVE-10242.patch
>
>
> 1. insert overwrite table DB.T1 select ... from T2: this takes an X (exclusive) lock on DB.T1 and an S (shared) lock on T2.
> The X lock makes sense because we don't want anyone reading T1 while it is being overwritten. The S lock on T2 prevents it from being dropped while the query is in progress.
> 2. create table DB.T3: this takes an S lock on DB.
> This S lock gets blocked by the X lock on T1. The S lock prevents the DB from being dropped while the create table statement executes.
> If the insert statement is long running, it blocks DDL operations on the same database; this is a usability issue (the two sessions are sketched below).
> There is no good reason why an X lock on a table within a DB and an S lock on the DB should conflict.
> (This is different from the situation where the X lock is on a partition and the S lock is on the table to which that partition belongs; there the conflict makes sense. Basically, there is no SQL way to address all tables in a DB, but you can easily refer to all partitions of a table.)
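
For illustration, a minimal HiveQL sketch of the two concurrent sessions described above. It assumes a database named db with transactional locking enabled (e.g. via DbTxnManager); the table names follow the description and the column in T3 is made up:

{noformat}
-- Session 1: long-running insert overwrite.
-- Acquires an X (exclusive) lock on db.t1 and an S (shared) lock on db.t2.
INSERT OVERWRITE TABLE db.t1 SELECT * FROM db.t2;

-- Session 2: issued while session 1 is still running.
-- Needs an S lock on the database db, which is blocked by the X lock above,
-- so this unrelated DDL waits until the insert overwrite finishes.
CREATE TABLE db.t3 (id INT);

-- From a third session, SHOW LOCKS displays the granted and waiting locks.
SHOW LOCKS;
{noformat}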



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
