Hi,

Sorry, this will not be cherry-picked to 1.12: it is a feature, and it is only supported in 1.14 and later versions.
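To make the failure mode concrete, here is a minimal, self-contained sketch of the dispatch shape in AbstractOrcColumnVector#createFlinkVector. The class and the stand-in vector types below are illustrative only (they are not Flink's or Hive's actual classes): the point is that in 1.12 the instanceof chain covers primitive vectors but has no branch for list vectors, so an ARRAY column falls through to the UnsupportedOperationException quoted further down in this thread.

```java
// Illustrative sketch only: the nested classes stand in for Hive's
// org.apache.hadoop.hive.ql.exec.vector hierarchy.
class CreateFlinkVectorSketch {
    static class ColumnVector {}                           // stand-in for Hive's base vector
    static class LongColumnVector extends ColumnVector {}  // a primitive vector, handled in 1.12
    static class ListColumnVector extends ColumnVector {}  // NOT handled in 1.12

    // Mirrors the shape of the 1.12 dispatch: wrap known primitive vectors,
    // throw for everything else (which is where ARRAY columns end up).
    static String createFlinkVector(ColumnVector vector) {
        if (vector instanceof LongColumnVector) {
            return "OrcLongColumnVector";
        }
        // No ListColumnVector branch before the master-branch fix (1.14+).
        throw new UnsupportedOperationException(
                "Unsupport vector: " + vector.getClass().getName());
    }

    public static void main(String[] args) {
        System.out.println(createFlinkVector(new LongColumnVector())); // wrapped fine
        try {
            createFlinkVector(new ListColumnVector());                 // ARRAY column case
        } catch (UnsupportedOperationException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The Hive class name appears in the error message because Flink's ORC reader deserializes files into Hive's vectorized batch classes before wrapping them as Flink column vectors.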

Best,
Jingsong

On Fri, Nov 12, 2021 at 3:06 PM 陈卓宇 <[email protected]> wrote:
>
> Hello community, I have located the problem by debugging the code:
>
>
> In Flink 1.12.5, in org/apache/flink/orc/vector/AbstractOrcColumnVector.java of the flink-orc_2.11 module,
> createFlinkVector has no implementation for ListColumnVector. Looking at Flink's master branch, this was implemented in a PR submitted by wangwei1025 on 2021/5/12. Does the community plan to patch this problem in 1.12.5 based on wangwei1025's commit?
>
>
> Table fields:
>
>    string_tag    string
>    number_tag    number
>    boolean_tag   boolean
>    datetime_tag  datetime
>    arr_tag       array<string>
>
> I converted these fields and generated the SQL below; I found that reading a table that contains an array<string> column always fails.
> SQL:
> CREATE TABLE smarttag_base_table_5 (
>   distinct_id BIGINT,
>   xwho VARCHAR,
>   string_tag string,
>   number_tag decimal,
>   boolean_tag integer,
>   datetime_tag bigint,
>   arr_tag ARRAY<STRING>,
>   ds INTEGER
> ) WITH (
>   'connector' = 'filesystem',  -- required: the connector type
>   'path' = 'hdfs://ark1:8020/tmp/usertag/20211029/db_31abd9593e9983ec/orcfile/smarttag_base_table_5/',  -- required: path to the directory
>   'format' = 'orc'  -- required: the filesystem connector needs a format; see the "Table Formats" docs for details
> )
>
>
>
>
>
> Error: Unsupport vector: org.apache.hadoop.hive.ql.exec.vector.ListColumnVector
> As far as I can tell this is caused by the unsupported array<string> field, but why is a Hive exception being thrown?
> The source is an ORC table on HDFS.
>
> 陈卓宇
>
> ------------------ Original Message ------------------
> From: "user-zh" <[email protected]>;
> Sent: Friday, November 12, 2021, 10:59 AM
> To: "flink中文邮件组" <[email protected]>;
>
> Subject: Re: FlinkSQL full join across multiple tables throws an exception
>
>
>
> Hi!
>
> Thanks for reporting the problem. This actually looks unrelated to the join; it should be related to the source. If convenient, could you share in this thread the DDL of the source tables (including the type of every field; feel free to rename fields if the names are sensitive) and other information (for example, the format the source tables are stored in)?
>
> 陈卓宇 <[email protected]> wrote on Thu, Nov 11, 2021 at 9:44 PM:
>
> > Scenario: a full join across multiple tables fails
> >
> >
> > Error:
> > java.lang.RuntimeException: Failed to fetch next result
> >     at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
> >     at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> >     at org.apache.flink.table.planner.sinks.SelectTableSinkBase$RowIteratorWrapper.hasNext(SelectTableSinkBase.java:117)
> >     at org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:350)
> >     at org.apache.flink.table.utils.PrintUtils.printAsTableauForm(PrintUtils.java:149)
> >     at org.apache.flink.table.api.internal.TableResultImpl.print(TableResultImpl.java:154)
> >     at TableAPI.envReadFileSysteam(TableAPI.java:441)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:498)
> >     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> >     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> >     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> >     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> >     at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> >     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> >     at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> >     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> >     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> >     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> >     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> >     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> >     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> >     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> >     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> >     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> >     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> >     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> >     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> >     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> >     at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
> >     at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
> >     at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:221)
> >     at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:498)
> >     at com.intellij.rt.execution.CommandLineWrapper.main(CommandLineWrapper.java:64)
> > Caused by: java.io.IOException: Failed to fetch job execution result
> >     at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:169)
> >     at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:118)
> >     at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> >     ... 39 more
> > Caused by: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> >     at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> >     at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
> >     at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:167)
> >     ... 41 more
> > Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> >     at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> >     at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:117)
> >     at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> >     at java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:614)
> >     at java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:1983)
> >     at org.apache.flink.runtime.minicluster.MiniClusterJobClient.getJobExecutionResult(MiniClusterJobClient.java:114)
> >     at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:166)
> >     ... 41 more
> > Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
> >     at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:118)
> >     at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:80)
> >     at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:233)
> >     at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:224)
> >     at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:215)
> >     at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:666)
> >     at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:89)
> >     at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:446)
> >     at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:498)
> >     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:305)
> >     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212)
> >     at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
> >     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
> >     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> >     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> >     at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> >     at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> >     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> >     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> >     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> >     at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> >     at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> >     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> >     at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> >     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> >     at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> >     at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> >     at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> >     at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> >     at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> >     at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> > Caused by: java.lang.RuntimeException: One or more fetchers have encountered exception
> >     at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199)
> >     at org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
> >     at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
> >     at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:275)
> >     at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67)
> >     at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
> >     at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:398)
> >     at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191)
> >     at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619)
> >     at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583)
> >     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
> >     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
> >     at java.lang.Thread.run(Thread.java:748)
> > Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received unexpected exception while polling the records
> >     at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146)
> >     at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101)
> >     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >     ... 1 more
> > Caused by: java.lang.UnsupportedOperationException: Unsupport vector: org.apache.hadoop.hive.ql.exec.vector.ListColumnVector
> >     at org.apache.flink.orc.vector.AbstractOrcColumnVector.createFlinkVector(AbstractOrcColumnVector.java:73)
> >     at org.apache.flink.orc.OrcColumnarRowFileInputFormat.lambda$createPartitionedFormat$84717d21$1(OrcColumnarRowFileInputFormat.java:161)
> >     at org.apache.flink.orc.OrcColumnarRowFileInputFormat.createReaderBatch(OrcColumnarRowFileInputFormat.java:88)
> >     at org.apache.flink.orc.AbstractOrcFileInputFormat.createPoolOfBatches(AbstractOrcFileInputFormat.java:157)
> >     at org.apache.flink.orc.AbstractOrcFileInputFormat.createReader(AbstractOrcFileInputFormat.java:103)
> >     at org.apache.flink.orc.AbstractOrcFileInputFormat.createReader(AbstractOrcFileInputFormat.java:52)
> >     at org.apache.flink.connector.file.src.impl.FileSourceSplitReader.checkSplitOrStartNext(FileSourceSplitReader.java:112)
> >     at org.apache.flink.connector.file.src.impl.FileSourceSplitReader.fetch(FileSourceSplitReader.java:65)
> >     at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:56)
> >     at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:138)
> >     ... 6 more
> >
> > 2021-11-11 21:30:32.431|INFO|org.apache.flink.runtime.blob.AbstractBlobCache|TransientBlobCache shutdown hook|close|240|Shutting down BLOB cache
> > 2021-11-11 21:30:32.433|INFO|org.apache.flink.runtime.blob.AbstractBlobCache|PermanentBlobCache shutdown hook|close|240|Shutting down BLOB cache
> > 2021-11-11 21:30:32.447|INFO|org.apache.flink.runtime.blob.BlobServer|BlobServer shutdown hook|close|345|Stopped BLOB server at 0.0.0.0:60726
> >
> > Process finished with exit code -1
> >
> >
> > SQL:
> > select * from smarttag_base_table_3 FULL JOIN smarttag_base_table_2 on
> > smarttag_base_table_3.distinct_id=smarttag_base_table_2.distinct_id
> > FULL JOIN smarttag_derived_table_4 on
> > smarttag_base_table_2.distinct_id=smarttag_derived_table_4.distinct_id
> > FULL JOIN smarttag_derived_table_1 on
> > smarttag_derived_table_4.distinct_id=smarttag_derived_table_1.distinct_id
> > FULL JOIN smarttag_base_table_5 on
> > smarttag_derived_table_1.distinct_id=smarttag_base_table_5.distinct_id
> >
> > 陈



-- 
Best, Jingsong Lee
