[jira] [Commented] (IMPALA-9664) Insert events on transactional tables need to call addWriteNotificationLog API

2020-09-10 Thread Vihang Karajgaonkar (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193950#comment-17193950
 ] 

Vihang Karajgaonkar commented on IMPALA-9664:
-

Posted a preliminary patch at http://gerrit.cloudera.org:8080/16439. I am still 
testing the changes.

> Insert events on transactional tables need to call addWriteNotificationLog API
> --
>
> Key: IMPALA-9664
> URL: https://issues.apache.org/jira/browse/IMPALA-9664
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Critical
>
> According to what we see in the Hive source code, for transactional tables, the 
> insert events are fired with a different API, {{addWriteNotificationLog}}. 
> Currently Impala fires {{fireListenerEvent}} for both transactional and 
> non-transactional tables. We should look at the difference between the two APIs 
> and see whether we need to handle transactional tables differently.
> References:
> https://github.com/apache/hive/blob/c3afb57bdb1041f566fbbd896f625328fc9656a0/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L2402
> https://github.com/apache/hive/blob/c3afb57bdb1041f566fbbd896f625328fc9656a0/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L2236
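
For context, a rough Java sketch of the branching described above. The 
non-transactional path mirrors the usual {{fireListenerEvent}} flow; the 
transactional path is left as a comment because the exact parameters of 
{{addWriteNotificationLog}} (transaction id, write id) depend on the Hive 
version and are not taken from the actual patch:

{code:java}
import java.util.List;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.FireEventRequest;
import org.apache.hadoop.hive.metastore.api.FireEventRequestData;
import org.apache.hadoop.hive.metastore.api.InsertEventRequestData;

final class InsertEventSketch {
  static void fireInsertEvent(IMetaStoreClient client, String db, String tbl,
      boolean isTransactional, List<String> newFiles) throws Exception {
    InsertEventRequestData insertData = new InsertEventRequestData();
    insertData.setFilesAdded(newFiles);

    if (isTransactional) {
      // Transactional (ACID) tables: Hive records the insert through the
      // write-notification-log API instead of a generic listener event.
      // The exact request fields (txnId, writeId, db, table, insertData) are
      // an assumption here; see the Hive references above.
      // client.addWriteNotificationLog(...);
    } else {
      // Non-transactional tables: the flow Impala uses today for all tables.
      FireEventRequestData data = new FireEventRequestData();
      data.setInsertData(insertData);
      FireEventRequest rqst = new FireEventRequest(true /*successful*/, data);
      rqst.setDbName(db);
      rqst.setTableName(tbl);
      client.fireListenerEvent(rqst);
    }
  }
}
{code}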






[jira] [Commented] (IMPALA-9470) Use Parquet bloom filters

2020-09-10 Thread Shant Hovsepian (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193941#comment-17193941
 ] 

Shant Hovsepian commented on IMPALA-9470:
-

It would be nice, though not required, to support pushing runtime bloom filters 
down to Parquet bloom filters if the bloom filter bitmaps can be made compatible.

> Use Parquet bloom filters
> -
>
> Key: IMPALA-9470
> URL: https://issues.apache.org/jira/browse/IMPALA-9470
> Project: IMPALA
>  Issue Type: New Feature
>Reporter: Zoltán Borók-Nagy
>Priority: Major
>
> PARQUET-41 has been closed recently. This means Parquet-MR is capable of 
> writing and reading bloom filters.
> Currently bloom filters are per-column-chunk entries, i.e. with their help we 
> can filter out entire row groups.
> We already filter row groups in HdfsParquetScanner::NextRowGroup() based on 
> column chunk statistics and dictionaries. Skipping row groups based on bloom 
> filters could also be added to this function.
> Impala could also write bloom filters.
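
For reference, a minimal, self-contained sketch of the split-block Bloom filter 
probe that Parquet bloom filters are based on (salt constants as published in 
the Parquet bloom filter spec); the xxHash64 step and any Impala integration 
are omitted:

{code:java}
/** Split-block Bloom filter probe: 256-bit blocks, i.e. 8 words of 32 bits. */
final class SbbfProbe {
  // Salt constants from the Parquet bloom filter specification.
  private static final int[] SALT = {
      0x47b6137b, 0x44974d91, 0x8824ad5b, 0xa2b7289d,
      0x705495c7, 0x2df1424b, 0x9efc4947, 0x5c6bfb31};

  /**
   * @param blocks the filter bitset; length must be a multiple of 8 ints
   * @param hash   64-bit hash of the probed value (Parquet uses xxHash64)
   * @return false if the value is definitely absent, true if it may be present
   */
  static boolean mightContain(int[] blocks, long hash) {
    long numBlocks = blocks.length / 8;
    // Map the upper 32 bits of the hash onto a block index without a modulo.
    int blockIdx = (int) (((hash >>> 32) * numBlocks) >>> 32);
    int key = (int) hash;  // lower 32 bits choose one bit in each of the 8 words
    for (int i = 0; i < 8; i++) {
      int bit = (key * SALT[i]) >>> 27;
      if ((blocks[blockIdx * 8 + i] & (1 << bit)) == 0) return false;
    }
    return true;
  }
}
{code}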






[jira] [Commented] (IMPALA-9351) AnalyzeDDLTest.TestCreateTableLikeFileOrc failed due to non-existing path

2020-09-10 Thread Tim Armstrong (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193928#comment-17193928
 ] 

Tim Armstrong commented on IMPALA-9351:
---

Hit this again here - 
https://jenkins.impala.io/job/ubuntu-16.04-from-scratch/12023/testReport/junit/org.apache.impala.analysis/AnalyzeDDLTest/TestCreateTableLikeFileOrc/

> AnalyzeDDLTest.TestCreateTableLikeFileOrc failed due to non-existing path
> -
>
> Key: IMPALA-9351
> URL: https://issues.apache.org/jira/browse/IMPALA-9351
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Fang-Yu Rao
>Assignee: Quanlong Huang
>Priority: Blocker
>  Labels: broken-build, flaky-test
> Fix For: Impala 3.4.0
>
>
> AnalyzeDDLTest.TestCreateTableLikeFileOrc failed due to a non-existing path. 
> Specifically, we see the following error message.
> {code:java}
> Error Message
> Error during analysis:
> org.apache.impala.common.AnalysisException: Cannot infer schema, path does 
> not exist: 
> hdfs://localhost:20500/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0
> sql:
> create table if not exists newtbl_DNE like orc 
> '/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0'
> {code}
> The stack trace is provided in the following.
> {code:java}
> Stacktrace
> java.lang.AssertionError: 
> Error during analysis:
> org.apache.impala.common.AnalysisException: Cannot infer schema, path does 
> not exist: 
> hdfs://localhost:20500/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0
> sql:
> create table if not exists newtbl_DNE like orc 
> '/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0'
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.impala.common.FrontendFixture.analyzeStmt(FrontendFixture.java:397)
>   at 
> org.apache.impala.common.FrontendTestBase.AnalyzesOk(FrontendTestBase.java:244)
>   at 
> org.apache.impala.common.FrontendTestBase.AnalyzesOk(FrontendTestBase.java:185)
>   at 
> org.apache.impala.analysis.AnalyzeDDLTest.TestCreateTableLikeFileOrc(AnalyzeDDLTest.java:2045)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143)
> {code}
> This test was recently added by [~norbertluksa], and [~boroknagyz] gave a +2; 
> maybe [~boroknagyz] could provide some insight into this? Thanks!





[jira] [Resolved] (IMPALA-9740) TSAN data race in hdfs-bulk-ops

2020-09-10 Thread Sahil Takiar (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar resolved IMPALA-9740.
--
Fix Version/s: Impala 4.0
   Resolution: Fixed

> TSAN data race in hdfs-bulk-ops
> ---
>
> Key: IMPALA-9740
> URL: https://issues.apache.org/jira/browse/IMPALA-9740
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: Impala 4.0
>
>
> hdfs-bulk-ops usage of a local connection cache (HdfsFsCache::HdfsFsMap) has 
> a data race:
> {code:java}
>  WARNING: ThreadSanitizer: data race (pid=23205)
>   Write of size 8 at 0x7b24005642d8 by thread T47:
> #0 
> boost::unordered::detail::table_impl  const, hdfs_internal*> >, std::string, hdfs_internal*, 
> boost::hash, std::equal_to > 
> >::add_node(boost::unordered::detail::node_constructor  const, hdfs_internal*> > > >&, unsigned long) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/detail/unique.hpp:329:26
>  (impalad+0x1f93832)
> #1 
> std::pair  const, hdfs_internal*> > >, bool> 
> boost::unordered::detail::table_impl  const, hdfs_internal*> >, std::string, hdfs_internal*, 
> boost::hash, std::equal_to > 
> >::emplace_impl >(std::string 
> const&, std::pair&&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/detail/unique.hpp:420:41
>  (impalad+0x1f933ed)
> #2 
> std::pair  const, hdfs_internal*> > >, bool> 
> boost::unordered::detail::table_impl  const, hdfs_internal*> >, std::string, hdfs_internal*, 
> boost::hash, std::equal_to > 
> >::emplace 
> >(std::pair&&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/detail/unique.hpp:384:20
>  (impalad+0x1f932d1)
> #3 
> std::pair  const, hdfs_internal*> > >, bool> 
> boost::unordered::unordered_map boost::hash, std::equal_to, 
> std::allocator > 
> >::emplace 
> >(std::pair&&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/unordered_map.hpp:241:27
>  (impalad+0x1f93238)
> #4 boost::unordered::unordered_map boost::hash, std::equal_to, 
> std::allocator > 
> >::insert(std::pair&&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/unordered_map.hpp:390:26
>  (impalad+0x1f92038)
> #5 impala::HdfsFsCache::GetConnection(std::string const&, 
> hdfs_internal**, boost::unordered::unordered_map boost::hash, std::equal_to, 
> std::allocator > >*) 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/runtime/hdfs-fs-cache.cc:115:18
>  (impalad+0x1f916b3)
> #6 impala::HdfsOp::Execute() const 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/util/hdfs-bulk-ops.cc:84:55
>  (impalad+0x23444d5)
> #7 HdfsThreadPoolHelper(int, impala::HdfsOp const&) 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/util/hdfs-bulk-ops.cc:137:6
>  (impalad+0x2344ea9)
> #8 boost::detail::function::void_function_invoker2 impala::HdfsOp const&), void, int, impala::HdfsOp 
> const&>::invoke(boost::detail::function::function_buffer&, int, 
> impala::HdfsOp const&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/function/function_template.hpp:118:11
>  (impalad+0x2345e80)
> #9 boost::function2::operator()(int, 
> impala::HdfsOp const&) const 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/function/function_template.hpp:770:14
>  (impalad+0x1f883be)
> #10 impala::ThreadPool::WorkerThread(int) 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/util/thread-pool.h:166:9
>  (impalad+0x1f874e5)
> #11 boost::_mfi::mf1, 
> int>::operator()(impala::ThreadPool*, int) const 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/bind/mem_fn_template.hpp:165:29
>  (impalad+0x1f87b7d)
> #12 void 
> boost::_bi::list2*>, 
> boost::_bi::value >::operator() impala::ThreadPool, int>, 
> boost::_bi::list0>(boost::_bi::type, boost::_mfi::mf1 impala::ThreadPool, int>&, boost::_bi::list0&, int) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/bind/bind.hpp:319:9
>  (impalad+0x1f87abc)
> #13 boost::_bi::bind_t impala::ThreadPool, int>, 
> boost::_bi::list2*>, 
> boost::_bi::value > >::operator()() 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/bind/bind.hpp:1222:16
>  (impalad+0x1f87a23)
> #14 
> 

[jira] [Commented] (IMPALA-9403) Allow TSAN to be set on codegen

2020-09-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193847#comment-17193847
 ] 

ASF subversion and git services commented on IMPALA-9403:
-

Commit f7dbd4939903b1dbb1994423f24a2f4159daf48a in impala's branch 
refs/heads/master from Sahil Takiar
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=f7dbd49 ]

IMPALA-9740, IMPALA-9403: Fix remaining custom cluster TSAN errors

This patch fixes the remaining TSAN errors reported while running custom
cluster tests. After this patch, TSAN can be enabled for custom cluster
tests (currently it is only run for be tests).

Adds a data race suppression for
HdfsColumnarScanner::ProcessScratchBatchCodegenOrInterpret, which
usually calls a codegen function. TSAN currently does not support
codegen functions, so this warning needs to be suppressed. The call
stack of this warning is:

#0 kudu::BlockBloomFilter::Find(unsigned int) const 
kudu/util/block_bloom_filter.cc:257:7
#1   (0x7f19af1c74cd)
#2 
impala::HdfsColumnarScanner::ProcessScratchBatchCodegenOrInterpret(impala::RowBatch*)
 exec/hdfs-columnar-scanner.cc:106:10
#3 impala::HdfsColumnarScanner::TransferScratchTuples(impala::RowBatch*) 
exec/hdfs-columnar-scanner.cc:66:34

Fixes a data race in DmlExecState::FinalizeHdfsInsert where a local
HdfsFsCache::HdfsFsMap is unsafely passed between threads of a
HdfsOperationSet. HdfsOperationSet instances are run in a
HdfsOpThreadPool and each operation is run in one of the threads from
the pool. Each operation uses HdfsFsCache::GetConnection to get a hdfsFs
instance. GetConnection can take in a 'local_cache' of hdfsFs instances
before using the global map. The race condition is that the same local
cache is used for all operations in HdfsOperationSet.

Testing:
* Re-ran TSAN tests and confirmed the data races have disappeared

Change-Id: If1658a9b56d220e2cfd1f8b958604edcdf7757f4
Reviewed-on: http://gerrit.cloudera.org:8080/16426
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
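
The race pattern is easier to see stripped down. The following is a minimal 
Java analogy, not Impala's C++ code: one unsynchronized cache handed to every 
task in a pool, next to the straightforward fix of using a thread-safe (or 
per-task) cache:

{code:java}
import java.util.*;
import java.util.concurrent.*;

public class LocalCacheRace {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(8);

    // Racy: one unsynchronized map shared by every task in the pool,
    // analogous to passing the same local HdfsFsMap to all ops in a set.
    Map<String, String> sharedCache = new HashMap<>();
    for (int i = 0; i < 100; i++) {
      final String key = "path-" + i;
      pool.submit(() -> sharedCache.computeIfAbsent(key, k -> "conn-" + k));
    }

    // Fix: use a thread-safe map for the shared cache (or give each task
    // its own cache so nothing is shared at all).
    Map<String, String> safeCache = new ConcurrentHashMap<>();
    for (int i = 0; i < 100; i++) {
      final String key = "path-" + i;
      pool.submit(() -> safeCache.computeIfAbsent(key, k -> "conn-" + k));
    }

    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
  }
}
{code}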


> Allow TSAN to be set on codegen
> ---
>
> Key: IMPALA-9403
> URL: https://issues.apache.org/jira/browse/IMPALA-9403
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Reporter: Sahil Takiar
>Priority: Major
>
> Similar to this commit, but for TSAN. Requires adding the 
> {{-fsanitize=thread}} flag to {{CLANG_IR_CXX_FLAGS}}.






[jira] [Commented] (IMPALA-9740) TSAN data race in hdfs-bulk-ops

2020-09-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193846#comment-17193846
 ] 

ASF subversion and git services commented on IMPALA-9740:
-

Commit f7dbd4939903b1dbb1994423f24a2f4159daf48a in impala's branch 
refs/heads/master from Sahil Takiar
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=f7dbd49 ]

IMPALA-9740, IMPALA-9403: Fix remaining custom cluster TSAN errors

This patch fixes the remaining TSAN errors reported while running custom
cluster tests. After this patch, TSAN can be enabled for custom cluster
tests (currently it is only run for be tests).

Adds a data race suppression for
HdfsColumnarScanner::ProcessScratchBatchCodegenOrInterpret, which
usually calls a codegen function. TSAN currently does not support
codegen functions, so this warning needs to be suppressed. The call
stack of this warning is:

#0 kudu::BlockBloomFilter::Find(unsigned int) const 
kudu/util/block_bloom_filter.cc:257:7
#1   (0x7f19af1c74cd)
#2 
impala::HdfsColumnarScanner::ProcessScratchBatchCodegenOrInterpret(impala::RowBatch*)
 exec/hdfs-columnar-scanner.cc:106:10
#3 impala::HdfsColumnarScanner::TransferScratchTuples(impala::RowBatch*) 
exec/hdfs-columnar-scanner.cc:66:34

Fixes a data race in DmlExecState::FinalizeHdfsInsert where a local
HdfsFsCache::HdfsFsMap is unsafely passed between threads of a
HdfsOperationSet. HdfsOperationSet instances are run in a
HdfsOpThreadPool and each operation is run in one of the threads from
the pool. Each operation uses HdfsFsCache::GetConnection to get a hdfsFs
instance. GetConnection can take in a 'local_cache' of hdfsFs instances
before using the global map. The race condition is that the same local
cache is used for all operations in HdfsOperationSet.

Testing:
* Re-ran TSAN tests and confirmed the data races have disappeared

Change-Id: If1658a9b56d220e2cfd1f8b958604edcdf7757f4
Reviewed-on: http://gerrit.cloudera.org:8080/16426
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> TSAN data race in hdfs-bulk-ops
> ---
>
> Key: IMPALA-9740
> URL: https://issues.apache.org/jira/browse/IMPALA-9740
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Reporter: Sahil Takiar
>Priority: Major
>
> hdfs-bulk-ops usage of a local connection cache (HdfsFsCache::HdfsFsMap) has 
> a data race:
> {code:java}
>  WARNING: ThreadSanitizer: data race (pid=23205)
>   Write of size 8 at 0x7b24005642d8 by thread T47:
> #0 
> boost::unordered::detail::table_impl  const, hdfs_internal*> >, std::string, hdfs_internal*, 
> boost::hash, std::equal_to > 
> >::add_node(boost::unordered::detail::node_constructor  const, hdfs_internal*> > > >&, unsigned long) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/detail/unique.hpp:329:26
>  (impalad+0x1f93832)
> #1 
> std::pair  const, hdfs_internal*> > >, bool> 
> boost::unordered::detail::table_impl  const, hdfs_internal*> >, std::string, hdfs_internal*, 
> boost::hash, std::equal_to > 
> >::emplace_impl >(std::string 
> const&, std::pair&&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/detail/unique.hpp:420:41
>  (impalad+0x1f933ed)
> #2 
> std::pair  const, hdfs_internal*> > >, bool> 
> boost::unordered::detail::table_impl  const, hdfs_internal*> >, std::string, hdfs_internal*, 
> boost::hash, std::equal_to > 
> >::emplace 
> >(std::pair&&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/detail/unique.hpp:384:20
>  (impalad+0x1f932d1)
> #3 
> std::pair  const, hdfs_internal*> > >, bool> 
> boost::unordered::unordered_map boost::hash, std::equal_to, 
> std::allocator > 
> >::emplace 
> >(std::pair&&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/unordered_map.hpp:241:27
>  (impalad+0x1f93238)
> #4 boost::unordered::unordered_map boost::hash, std::equal_to, 
> std::allocator > 
> >::insert(std::pair&&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/unordered_map.hpp:390:26
>  (impalad+0x1f92038)
> #5 impala::HdfsFsCache::GetConnection(std::string const&, 
> hdfs_internal**, boost::unordered::unordered_map boost::hash, std::equal_to, 
> std::allocator > >*) 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/runtime/hdfs-fs-cache.cc:115:18
>  (impalad+0x1f916b3)
> #6 impala::HdfsOp::Execute() const 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/util/hdfs-bulk-ops.cc:84:55
>  (impalad+0x23444d5)
> #7 HdfsThreadPoolHelper(int, impala::HdfsOp const&) 
> 

[jira] [Assigned] (IMPALA-9740) TSAN data race in hdfs-bulk-ops

2020-09-10 Thread Sahil Takiar (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar reassigned IMPALA-9740:


Assignee: Sahil Takiar

> TSAN data race in hdfs-bulk-ops
> ---
>
> Key: IMPALA-9740
> URL: https://issues.apache.org/jira/browse/IMPALA-9740
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> hdfs-bulk-ops usage of a local connection cache (HdfsFsCache::HdfsFsMap) has 
> a data race:
> {code:java}
>  WARNING: ThreadSanitizer: data race (pid=23205)
>   Write of size 8 at 0x7b24005642d8 by thread T47:
> #0 
> boost::unordered::detail::table_impl  const, hdfs_internal*> >, std::string, hdfs_internal*, 
> boost::hash, std::equal_to > 
> >::add_node(boost::unordered::detail::node_constructor  const, hdfs_internal*> > > >&, unsigned long) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/detail/unique.hpp:329:26
>  (impalad+0x1f93832)
> #1 
> std::pair  const, hdfs_internal*> > >, bool> 
> boost::unordered::detail::table_impl  const, hdfs_internal*> >, std::string, hdfs_internal*, 
> boost::hash, std::equal_to > 
> >::emplace_impl >(std::string 
> const&, std::pair&&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/detail/unique.hpp:420:41
>  (impalad+0x1f933ed)
> #2 
> std::pair  const, hdfs_internal*> > >, bool> 
> boost::unordered::detail::table_impl  const, hdfs_internal*> >, std::string, hdfs_internal*, 
> boost::hash, std::equal_to > 
> >::emplace 
> >(std::pair&&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/detail/unique.hpp:384:20
>  (impalad+0x1f932d1)
> #3 
> std::pair  const, hdfs_internal*> > >, bool> 
> boost::unordered::unordered_map boost::hash, std::equal_to, 
> std::allocator > 
> >::emplace 
> >(std::pair&&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/unordered_map.hpp:241:27
>  (impalad+0x1f93238)
> #4 boost::unordered::unordered_map boost::hash, std::equal_to, 
> std::allocator > 
> >::insert(std::pair&&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/unordered/unordered_map.hpp:390:26
>  (impalad+0x1f92038)
> #5 impala::HdfsFsCache::GetConnection(std::string const&, 
> hdfs_internal**, boost::unordered::unordered_map boost::hash, std::equal_to, 
> std::allocator > >*) 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/runtime/hdfs-fs-cache.cc:115:18
>  (impalad+0x1f916b3)
> #6 impala::HdfsOp::Execute() const 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/util/hdfs-bulk-ops.cc:84:55
>  (impalad+0x23444d5)
> #7 HdfsThreadPoolHelper(int, impala::HdfsOp const&) 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/util/hdfs-bulk-ops.cc:137:6
>  (impalad+0x2344ea9)
> #8 boost::detail::function::void_function_invoker2 impala::HdfsOp const&), void, int, impala::HdfsOp 
> const&>::invoke(boost::detail::function::function_buffer&, int, 
> impala::HdfsOp const&) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/function/function_template.hpp:118:11
>  (impalad+0x2345e80)
> #9 boost::function2::operator()(int, 
> impala::HdfsOp const&) const 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/function/function_template.hpp:770:14
>  (impalad+0x1f883be)
> #10 impala::ThreadPool::WorkerThread(int) 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/util/thread-pool.h:166:9
>  (impalad+0x1f874e5)
> #11 boost::_mfi::mf1, 
> int>::operator()(impala::ThreadPool*, int) const 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/bind/mem_fn_template.hpp:165:29
>  (impalad+0x1f87b7d)
> #12 void 
> boost::_bi::list2*>, 
> boost::_bi::value >::operator() impala::ThreadPool, int>, 
> boost::_bi::list0>(boost::_bi::type, boost::_mfi::mf1 impala::ThreadPool, int>&, boost::_bi::list0&, int) 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/bind/bind.hpp:319:9
>  (impalad+0x1f87abc)
> #13 boost::_bi::bind_t impala::ThreadPool, int>, 
> boost::_bi::list2*>, 
> boost::_bi::value > >::operator()() 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.61.0-p2/include/boost/bind/bind.hpp:1222:16
>  (impalad+0x1f87a23)
> #14 
> boost::detail::function::void_function_obj_invoker0 boost::_mfi::mf1, int>, 
> 

[jira] [Commented] (IMPALA-10122) Allow view authorization to be deferred until selection time

2020-09-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-10122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193702#comment-17193702
 ] 

ASF subversion and git services commented on IMPALA-10122:
--

Commit e8251bb09316d1cea04502b5de8516bc879fd7d3 in impala's branch 
refs/heads/master from Fang-Yu Rao
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=e8251bb ]

IMPALA-10122 (Part 1): Deny access to views not authorized at creation

After HIVE-24026, a non-superuser is allowed to create, alter, and drop
a view directly in the HiveMetaStore via a Spark client without the
Impala FE or the HiveServer2 being involved to perform the corresponding
authorization checks to see if the non-superuser possesses the required
privileges on the underlying tables. This opens up the possibility that
a non-superuser is able to replace the underlying tables referenced in a
view with some other tables even though this non-superuser does not
possess the necessary privileges on those tables substituting for the
tables originally referenced in the view.

Recall that currently when a user is requesting to select a view in
Impala, the Impala FE only requires that there is a Ranger policy
granting the requesting user the SELECT privilege on the view but not
the SELECT privileges on the underlying tables of the view. Therefore,
with the change of HIVE-24026, a non-superuser is able to access the
data in tables for which the permission was not granted through either
i) an ALTER VIEW statement, or ii) a DROP VIEW statement followed by a
CREATE VIEW statement given that there is already a Ranger policy
allowing this user to select this view.

To prevent a user from accessing the data in tables on which the user
does not possess the required privileges, we could employ the Boolean
table property of 'Authorized' that was introduced in HIVE-24026.
Specifically, after HIVE-24026, if a view was created without the
corresponding privileges on the underlying tables being checked, the
HiveMetaStore sets this property to false; for backward compatibility,
the property is not added at all if the view was authorized at creation
time. Based on this table property, the Impala FE could determine at
selection time whether it should additionally check the requesting
user's privileges on the underlying tables of a view, but that would
require a more thorough investigation of how the Impala FE registers
the authorization requests for a given query.

To mitigate this potential security breach before we figure out how to
perform authorization for a view whose creation was not authorized, in
this patch, we introduce a temporary field of 'viewCreatedWithoutAuthz_'
in the class of AuthorizableTable that indicates whether or not a given
table corresponds to a view that was not authorized at creation time,
allowing the Impala FE to deny the SELECT, ALTER, and DESCRIBE access to
a view whose creation was not authorized.

Testing:
 - Manually verified that after using beeline to set to false the table
   property of 'Authorized' corresponding to a view, no user is able to
   select data from this view, or to alter or describe this view. Recall
   that currently Impala does not support the ALTER VIEW SET
   TBLPROPERTIES statement and thus we need to use beeline to create
   such a view.
 - Verified that the patch could pass the exhaustive tests in the DEBUG
   build.

Change-Id: I73965e05586771de85fa6f73c452e3de4f312034
Reviewed-on: http://gerrit.cloudera.org:8080/16423
Reviewed-by: Quanlong Huang 
Tested-by: Impala Public Jenkins 
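
As an illustration of the mechanism, a minimal sketch of reading the
'Authorized' table property from HMS metadata; only Table.getParameters() is
the real HMS API, the class and method names are hypothetical and this is not
the actual patch:

{code:java}
import java.util.Map;
import org.apache.hadoop.hive.metastore.api.Table;

final class ViewAuthzSketch {
  /**
   * Returns true if the view's creation was NOT authorized, i.e. HMS recorded
   * the table property 'Authorized' = 'false'. Per HIVE-24026 the property is
   * absent for views that were authorized at creation time.
   */
  static boolean createdWithoutAuthz(Table msView) {
    Map<String, String> params = msView.getParameters();
    if (params == null) return false;
    return "false".equalsIgnoreCase(params.get("Authorized"));
  }
}
{code}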


> Allow view authorization to be deferred until selection time
> 
>
> Key: IMPALA-10122
> URL: https://issues.apache.org/jira/browse/IMPALA-10122
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Frontend
>Reporter: Fang-Yu Rao
>Assignee: Fang-Yu Rao
>Priority: Major
>
> Recall that currently Impala performs authorization with Ranger to check 
> whether the requesting user is granted the privilege of {{SELECT}} for the 
> underlying tables when a view is created and thus does not check whether the 
> requesting user is granted the {{SELECT}} privilege on the underlying tables 
> when the view is selected.
> On the other hand, currently a Spark user is not allowed to directly create a 
> view in HMS without involving the Impala frontend, because Spark clients are 
> normal users (vs. superusers). To relax this restriction, it would be good 
> to allow a Spark user to directly create a view in HMS without involving the 
> Impala frontend. However, the authorization check is skipped for views created 
> in this manner, since HMS currently does not have the capability to perform 
> the authorization.

[jira] [Closed] (IMPALA-10163) TestIceberg.test_iceberg_query and TestIceberg.test_iceberg_profile are flaky

2020-09-10 Thread Quanlong Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Quanlong Huang closed IMPALA-10163.
---
Resolution: Duplicate

Duplicate of IMPALA-10158

> TestIceberg.test_iceberg_query and TestIceberg.test_iceberg_profile are flaky
> -
>
> Key: IMPALA-10163
> URL: https://issues.apache.org/jira/browse/IMPALA-10163
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Quanlong Huang
>Priority: Blocker
>
> Saw these failures in jobs run in the PDT timezone (America/Los_Angeles).
> {code:java}
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: 
> parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: 
> parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: 
> parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: 
> parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
> 'exec_single_node_rows_threshold': 0} | table_format: 
> parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
> 'exec_single_node_rows_threshold': 0} | table_format: 
> parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> 'HDFS_SCANNER_THREAD_CHECK_SOFT_MEM_LIMIT:FAIL@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: 
> parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> 'HDFS_SCANNER_THREAD_CHECK_SOFT_MEM_LIMIT:FAIL@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: 
> parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: 
> parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: 
> parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: 
> parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: 
> parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 

[jira] [Updated] (IMPALA-10163) TestIceberg.test_iceberg_query and TestIceberg.test_iceberg_profile are flaky

2020-09-10 Thread Quanlong Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Quanlong Huang updated IMPALA-10163:

Description: 
Saw these failures in jobs run in the PDT timezone (America/Los_Angeles).
{code:java}
query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': None, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': 
'-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': 
'-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': 'HDFS_SCANNER_THREAD_CHECK_SOFT_MEM_LIMIT:FAIL@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': 
'HDFS_SCANNER_THREAD_CHECK_SOFT_MEM_LIMIT:FAIL@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': None, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': 
'-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': 
'-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
'exec_single_node_rows_threshold': 0} | table_format: 

[jira] [Created] (IMPALA-10166) ALTER TABLE for Iceberg tables

2020-09-10 Thread Jira
Zoltán Borók-Nagy created IMPALA-10166:
--

 Summary: ALTER TABLE for Iceberg tables
 Key: IMPALA-10166
 URL: https://issues.apache.org/jira/browse/IMPALA-10166
 Project: IMPALA
  Issue Type: New Feature
Reporter: Zoltán Borók-Nagy


Add support for ALTER TABLE operations for Iceberg tables.







[jira] [Updated] (IMPALA-10165) Support all partition transforms for Iceberg

2020-09-10 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IMPALA-10165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltán Borók-Nagy updated IMPALA-10165:
---
Issue Type: Improvement  (was: Bug)

> Support all partition transforms for Iceberg
> 
>
> Key: IMPALA-10165
> URL: https://issues.apache.org/jira/browse/IMPALA-10165
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: Zoltán Borók-Nagy
>Priority: Major
>  Labels: impala-iceberg
>
> Currently the identity and datetime (year, month, day, hour) Iceberg 
> partition transformations are supported by Impala.
> There are also TRUNCATE and BUCKET partition transformations in Iceberg that 
> need to be supported. They also take parameters, i.e. the truncation width 
> and the number of buckets.






[jira] [Created] (IMPALA-10165) Support all partition transforms for Iceberg

2020-09-10 Thread Jira
Zoltán Borók-Nagy created IMPALA-10165:
--

 Summary: Support all partition transforms for Iceberg
 Key: IMPALA-10165
 URL: https://issues.apache.org/jira/browse/IMPALA-10165
 Project: IMPALA
  Issue Type: Bug
Reporter: Zoltán Borók-Nagy


Currently the identity and datetime (year, month, day, hour) Iceberg partition 
transformations are supported by Impala.

There are also TRUNCATE and BUCKET partition transformations in Iceberg that 
need to be supported. They also take parameters, i.e. the truncation width and 
the number of buckets.
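
For illustration, this is roughly how the parameterized transforms look in the 
Iceberg Java API (the schema and column names below are made up); Impala would 
need equivalent syntax and metadata handling for the bucket/truncate lines:

{code:java}
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

final class IcebergTransformSketch {
  static PartitionSpec exampleSpec() {
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.required(2, "name", Types.StringType.get()),
        Types.NestedField.required(3, "event_time", Types.TimestampType.withZone()));
    return PartitionSpec.builderFor(schema)
        .day("event_time")     // datetime transform, already supported by Impala
        .bucket("id", 16)      // BUCKET: the number of buckets is a parameter
        .truncate("name", 10)  // TRUNCATE: the truncation width is a parameter
        .build();
  }
}
{code}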







[jira] [Updated] (IMPALA-10164) Support HadoopCatalog for Iceberg table

2020-09-10 Thread WangSheng (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

WangSheng updated IMPALA-10164:
---
Issue Type: Improvement  (was: New Feature)

> Support HadoopCatalog for Iceberg table
> ---
>
> Key: IMPALA-10164
> URL: https://issues.apache.org/jira/browse/IMPALA-10164
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: WangSheng
>Assignee: WangSheng
>Priority: Minor
>  Labels: impala-iceberg
>
> Impala currently only supports the HadoopTables API for creating Iceberg 
> tables, which is not enough, so we are preparing to support HadoopCatalog as 
> well. The main design is to add a new table property named 'iceberg.catalog' 
> with a default value of 'hadoop.tables'; a value of 'hadoop.catalog' selects 
> the HadoopCatalog API. We may also support 'hive.catalog' in the future.






[jira] [Updated] (IMPALA-10164) Support HadoopCatalog for Iceberg table

2020-09-10 Thread WangSheng (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

WangSheng updated IMPALA-10164:
---
Parent: (was: IMPALA-9621)
Issue Type: New Feature  (was: Sub-task)

> Support HadoopCatalog for Iceberg table
> ---
>
> Key: IMPALA-10164
> URL: https://issues.apache.org/jira/browse/IMPALA-10164
> Project: IMPALA
>  Issue Type: New Feature
>Reporter: WangSheng
>Assignee: WangSheng
>Priority: Minor
>  Labels: impala-iceberg
>
> Impala currently only supports the HadoopTables API for creating Iceberg 
> tables, which is not enough, so we are preparing to support HadoopCatalog as 
> well. The main design is to add a new table property named 'iceberg.catalog' 
> with a default value of 'hadoop.tables'; a value of 'hadoop.catalog' selects 
> the HadoopCatalog API. We may also support 'hive.catalog' in the future.






[jira] [Created] (IMPALA-10164) Support HadoopCatalog for Iceberg table

2020-09-10 Thread WangSheng (Jira)
WangSheng created IMPALA-10164:
--

 Summary: Support HadoopCatalog for Iceberg table
 Key: IMPALA-10164
 URL: https://issues.apache.org/jira/browse/IMPALA-10164
 Project: IMPALA
  Issue Type: Sub-task
Reporter: WangSheng
Assignee: WangSheng


Impala currently only supports the HadoopTables API for creating Iceberg 
tables, which is not enough, so we are preparing to support HadoopCatalog as 
well. The main design is to add a new table property named 'iceberg.catalog' 
with a default value of 'hadoop.tables'; a value of 'hadoop.catalog' selects 
the HadoopCatalog API. We may also support 'hive.catalog' in the future.
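
For illustration, a minimal sketch of the difference between the two APIs in 
the Iceberg Java library (the paths and names below are made up):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.hadoop.HadoopTables;

final class IcebergCatalogSketch {
  static void loadBothWays(Configuration conf) {
    // 'hadoop.tables' (what Impala supports today): a table is addressed
    // directly by its filesystem location.
    Table byLocation = new HadoopTables(conf)
        .load("hdfs://nameservice/warehouse/iceberg_db.db/events");

    // 'hadoop.catalog' (what this issue adds): tables live under a catalog
    // warehouse root and are addressed by database/table name.
    HadoopCatalog catalog =
        new HadoopCatalog(conf, "hdfs://nameservice/iceberg_warehouse");
    Table byName = catalog.loadTable(TableIdentifier.of("iceberg_db", "events"));
  }
}
{code}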







[jira] [Assigned] (IMPALA-9351) AnalyzeDDLTest.TestCreateTableLikeFileOrc failed due to non-existing path

2020-09-10 Thread Norbert Luksa (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norbert Luksa reassigned IMPALA-9351:
-

Assignee: Quanlong Huang  (was: Norbert Luksa)

> AnalyzeDDLTest.TestCreateTableLikeFileOrc failed due to non-existing path
> -
>
> Key: IMPALA-9351
> URL: https://issues.apache.org/jira/browse/IMPALA-9351
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Fang-Yu Rao
>Assignee: Quanlong Huang
>Priority: Blocker
>  Labels: broken-build, flaky-test
> Fix For: Impala 3.4.0
>
>
> AnalyzeDDLTest.TestCreateTableLikeFileOrc failed due to a non-existing path. 
> Specifically, we see the following error message.
> {code:java}
> Error Message
> Error during analysis:
> org.apache.impala.common.AnalysisException: Cannot infer schema, path does 
> not exist: 
> hdfs://localhost:20500/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0
> sql:
> create table if not exists newtbl_DNE like orc 
> '/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0'
> {code}
> The stack trace is provided in the following.
> {code:java}
> Stacktrace
> java.lang.AssertionError: 
> Error during analysis:
> org.apache.impala.common.AnalysisException: Cannot infer schema, path does 
> not exist: 
> hdfs://localhost:20500/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0
> sql:
> create table if not exists newtbl_DNE like orc 
> '/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0'
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.impala.common.FrontendFixture.analyzeStmt(FrontendFixture.java:397)
>   at 
> org.apache.impala.common.FrontendTestBase.AnalyzesOk(FrontendTestBase.java:244)
>   at 
> org.apache.impala.common.FrontendTestBase.AnalyzesOk(FrontendTestBase.java:185)
>   at 
> org.apache.impala.analysis.AnalyzeDDLTest.TestCreateTableLikeFileOrc(AnalyzeDDLTest.java:2045)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143)
> {code}
> This test was recently added by [~norbertluksa], and [~boroknagyz] gave a +2; 
> maybe [~boroknagyz] could provide some insight into this? Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org

[jira] [Commented] (IMPALA-9351) AnalyzeDDLTest.TestCreateTableLikeFileOrc failed due to non-existing path

2020-09-10 Thread Norbert Luksa (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193532#comment-17193532
 ] 

Norbert Luksa commented on IMPALA-9351:
---

Thanks [~stigahuang], I've reassigned the jira.

> AnalyzeDDLTest.TestCreateTableLikeFileOrc failed due to non-existing path
> -
>
> Key: IMPALA-9351
> URL: https://issues.apache.org/jira/browse/IMPALA-9351
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Fang-Yu Rao
>Assignee: Norbert Luksa
>Priority: Blocker
>  Labels: broken-build, flaky-test
> Fix For: Impala 3.4.0
>
>
> AnalyzeDDLTest.TestCreateTableLikeFileOrc failed due to a non-existing path. 
> Specifically, we see the following error message.
> {code:java}
> Error Message
> Error during analysis:
> org.apache.impala.common.AnalysisException: Cannot infer schema, path does 
> not exist: 
> hdfs://localhost:20500/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0
> sql:
> create table if not exists newtbl_DNE like orc 
> '/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0'
> {code}
> The stack trace is provided in the following.
> {code:java}
> Stacktrace
> java.lang.AssertionError: 
> Error during analysis:
> org.apache.impala.common.AnalysisException: Cannot infer schema, path does 
> not exist: 
> hdfs://localhost:20500/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0
> sql:
> create table if not exists newtbl_DNE like orc 
> '/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0'
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.impala.common.FrontendFixture.analyzeStmt(FrontendFixture.java:397)
>   at 
> org.apache.impala.common.FrontendTestBase.AnalyzesOk(FrontendTestBase.java:244)
>   at 
> org.apache.impala.common.FrontendTestBase.AnalyzesOk(FrontendTestBase.java:185)
>   at 
> org.apache.impala.analysis.AnalyzeDDLTest.TestCreateTableLikeFileOrc(AnalyzeDDLTest.java:2045)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143)
> {code}
> This test was recently added by [~norbertluksa], and [~boroknagyz] gave a +2; 
> maybe [~boroknagyz] could provide some insight into this? Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org

[jira] [Created] (IMPALA-10163) TestIceberg.test_iceberg_query and TestIceberg.test_iceberg_profile fail in PDT timezone

2020-09-10 Thread Quanlong Huang (Jira)
Quanlong Huang created IMPALA-10163:
---

 Summary: TestIceberg.test_iceberg_query and 
TestIceberg.test_iceberg_profile fail in PDT timezone
 Key: IMPALA-10163
 URL: https://issues.apache.org/jira/browse/IMPALA-10163
 Project: IMPALA
  Issue Type: Bug
Reporter: Quanlong Huang


Consistently saw these failures in jobs run in the PDT timezone 
(America/Los_Angeles).
{code:java}
query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': None, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': 
'-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': 
'-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': 'HDFS_SCANNER_THREAD_CHECK_SOFT_MEM_LIMIT:FAIL@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': 
'HDFS_SCANNER_THREAD_CHECK_SOFT_MEM_LIMIT:FAIL@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': None, 'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': 
'-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 
1, 'debug_action': '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
'exec_single_node_rows_threshold': 0} | table_format: 
parquet/none]query_test.test_scanners.TestIceberg.test_iceberg_profile[protocol:
 beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 

[jira] [Resolved] (IMPALA-7658) Proper codegen for HiveUdfCall

2020-09-10 Thread Daniel Becker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Becker resolved IMPALA-7658.
---
Resolution: Implemented

> Proper codegen for HiveUdfCall
> --
>
> Key: IMPALA-7658
> URL: https://issues.apache.org/jira/browse/IMPALA-7658
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Reporter: Tim Armstrong
>Assignee: Daniel Becker
>Priority: Major
>  Labels: codegen, performance
>
> This function uses GetCodegendComputeFnWrapper() to call the interpreted path 
> but instead we could codegen the Evaluate() function to reduce the overhead. 
> I think this is likely to be a little involved since there's a loop to 
> unroll, so the solution might end up looking like IMPALA-5168



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org




[jira] [Commented] (IMPALA-9351) AnalyzeDDLTest.TestCreateTableLikeFileOrc failed due to non-existing path

2020-09-10 Thread Quanlong Huang (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193434#comment-17193434
 ] 

Quanlong Huang commented on IMPALA-9351:


Hi [~norbertluksa], are you still looking into this issue? I'd like to 
investigate it if you don't have time.

> AnalyzeDDLTest.TestCreateTableLikeFileOrc failed due to non-existing path
> -
>
> Key: IMPALA-9351
> URL: https://issues.apache.org/jira/browse/IMPALA-9351
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Fang-Yu Rao
>Assignee: Norbert Luksa
>Priority: Blocker
>  Labels: broken-build, flaky-test
> Fix For: Impala 3.4.0
>
>
> AnalyzeDDLTest.TestCreateTableLikeFileOrc failed due to a non-existing path. 
> Specifically, we see the following error message.
> {code:java}
> Error Message
> Error during analysis:
> org.apache.impala.common.AnalysisException: Cannot infer schema, path does 
> not exist: 
> hdfs://localhost:20500/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0
> sql:
> create table if not exists newtbl_DNE like orc 
> '/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0'
> {code}
> The stack trace is provided in the following.
> {code:java}
> Stacktrace
> java.lang.AssertionError: 
> Error during analysis:
> org.apache.impala.common.AnalysisException: Cannot infer schema, path does 
> not exist: 
> hdfs://localhost:20500/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0
> sql:
> create table if not exists newtbl_DNE like orc 
> '/test-warehouse/functional_orc_def.db/complextypes_fileformat/00_0'
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.impala.common.FrontendFixture.analyzeStmt(FrontendFixture.java:397)
>   at 
> org.apache.impala.common.FrontendTestBase.AnalyzesOk(FrontendTestBase.java:244)
>   at 
> org.apache.impala.common.FrontendTestBase.AnalyzesOk(FrontendTestBase.java:185)
>   at 
> org.apache.impala.analysis.AnalyzeDDLTest.TestCreateTableLikeFileOrc(AnalyzeDDLTest.java:2045)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143)
> {code}
> This test was recently added by [~norbertluksa], and [~boroknagyz] gave a +2; 
> maybe [~boroknagyz] could provide some insight into this? Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org