@Bowen @jark Could you take a look when you have time?
Thanks!

From: bigdatayunzhongyan
Sent: 2019-03-25 19:17:22
To: user-zh-help
Subject: Blink SQL errors
Hi all,
        Details are in the attachment.
Environment:
        Hadoop 2.7.2, Blink, Hive 1.2.1
        Launch commands:
        ./bin/yarn-session.sh -n 50 -s 2 -jm 3072 -tm 4096 -d
        ./bin/sql-client.sh embedded -s application_xxxx
        Data: TPC-DS, 500 GB total

Many of the SQL queries fail to execute, and not only because of SQL compatibility issues. Could someone from Alibaba provide a detailed test report?
1. SQL:
        SELECT s_store_name, s_store_id,
                SUM(CASE WHEN d_day_name = 'Sunday' THEN ss_sales_price ELSE NULL END) AS sun_sales,
                SUM(CASE WHEN d_day_name = 'Monday' THEN ss_sales_price ELSE NULL END) AS mon_sales,
                SUM(CASE WHEN d_day_name = 'Tuesday' THEN ss_sales_price ELSE NULL END) AS tue_sales,
                SUM(CASE WHEN d_day_name = 'Wednesday' THEN ss_sales_price ELSE NULL END) AS wed_sales,
                SUM(CASE WHEN d_day_name = 'Thursday' THEN ss_sales_price ELSE NULL END) AS thu_sales,
                SUM(CASE WHEN d_day_name = 'Friday' THEN ss_sales_price ELSE NULL END) AS fri_sales,
                SUM(CASE WHEN d_day_name = 'Saturday' THEN ss_sales_price ELSE NULL END) AS sat_sales
        FROM date_dim, store_sales, store
        WHERE d_date_sk = ss_sold_date_sk
                AND s_store_sk = ss_store_sk
                AND s_gmt_offset = -5
                AND d_year = 2000
        GROUP BY s_store_name, s_store_id
        ORDER BY s_store_name, s_store_id, sun_sales, mon_sales, tue_sales,
                wed_sales, thu_sales, fri_sales, sat_sales
        LIMIT 10;
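For reference, the conditional-aggregation pivot used above (SUM over CASE WHEN per day name) is standard SQL, so the failure below is presumably a runtime issue rather than a problem with the pattern itself. A minimal sanity check of the same pattern against SQLite (table name and data are hypothetical, not the TPC-DS schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical miniature stand-in for store_sales joined with date_dim.
conn.executescript("""
CREATE TABLE sales (day_name TEXT, price REAL);
INSERT INTO sales VALUES ('Sunday', 10.0), ('Monday', 5.0), ('Sunday', 2.5);
""")
# Same SUM(CASE WHEN ...) pivot shape as the TPC-DS query above.
row = conn.execute("""
SELECT SUM(CASE WHEN day_name = 'Sunday' THEN price ELSE NULL END) AS sun_sales,
       SUM(CASE WHEN day_name = 'Monday' THEN price ELSE NULL END) AS mon_sales
FROM sales
""").fetchone()
print(row)  # (12.5, 5.0)
```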


--------------------------------------------------------------------------------------------------------------------------
Error log:
2019-03-25 15:39:58,975 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor            - Discarding the 
results produced by task execution 6ecea47557a5e146f7813b3101663a8d.
2019-03-25 15:39:58,986 ERROR 
org.apache.flink.streaming.runtime.tasks.StreamTask           - Could not 
execute the task [NestedLoopJoin(where: (ss_hdemo_sk = hd_demo_sk), buildRight) 
-> Calc(select: (ss_sold_time_sk, ss_store_sk)) -> NestedLoopJoin(where: 
(ss_sold_time_sk = t_time_sk), buildRight) -> Calc(select: (ss_store_sk)), 
NestedLoopJoin(where: (ss_sold_time_sk = t_time_sk), buildRight) -> 
Calc(select: (ss_store_sk))] (86/100), aborting the execution
org.apache.flink.runtime.io.network.partition.DataConsumptionException: 
java.nio.channels.ClosedChannelException
        at 
org.apache.flink.runtime.io.network.partition.SpilledSubpartitionView.getNextBuffer(SpilledSubpartitionView.java:150)
        at 
org.apache.flink.runtime.io.network.partition.consumer.LocalInputChannel.getNextBuffer(LocalInputChannel.java:189)
        at 
org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:616)
        at 
org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.pollNextBufferOrEvent(SingleInputGate.java:573)
        at 
org.apache.flink.runtime.io.network.partition.consumer.UnionInputGate.waitAndGetNextInputGate(UnionInputGate.java:250)
        at 
org.apache.flink.runtime.io.network.partition.consumer.UnionInputGate.getNextBufferOrEvent(UnionInputGate.java:205)
        at 
org.apache.flink.runtime.io.network.partition.consumer.UnionInputGate.pollNextBufferOrEvent(UnionInputGate.java:183)
        at 
org.apache.flink.streaming.runtime.io.BarrierTracker.getNext(BarrierTracker.java:171)
        at 
org.apache.flink.streaming.runtime.io.BarrierTracker.pollNext(BarrierTracker.java:118)
        at 
org.apache.flink.streaming.runtime.io.InputGateFetcher.getNextBufferOrEvent(InputGateFetcher.java:221)
        at 
org.apache.flink.streaming.runtime.io.InputGateFetcher.getNextResult(InputGateFetcher.java:195)
        at 
org.apache.flink.streaming.runtime.io.InputGateFetcher.fetchAndProcess(InputGateFetcher.java:151)
        at 
org.apache.flink.streaming.runtime.io.StreamArbitraryInputProcessor.process(StreamArbitraryInputProcessor.java:134)
        at 
org.apache.flink.streaming.runtime.tasks.ArbitraryInputStreamTask.run(ArbitraryInputStreamTask.java:183)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:324)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:727)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException
        at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110)
        at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:300)
        at 
org.apache.flink.runtime.io.disk.iomanager.SynchronousBufferFileReader.readInto(SynchronousBufferFileReader.java:55)
        at 
org.apache.flink.runtime.io.network.partition.SpilledSubpartitionView.requestAndFillBuffer(SpilledSubpartitionView.java:164)
        at 
org.apache.flink.runtime.io.network.partition.SpilledSubpartitionView.getNextBuffer(SpilledSubpartitionView.java:134)
        ... 16 more
2019-03-25 15:39:58,986 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor            - Discarding the 
results produced by task execution 36af2e9f2416dbf2058b51e27be75812.
2019-03-25 15:39:58,986 INFO  
org.apache.flink.runtime.io.network.netty.ZeroCopyNettyMessageDecoder  - 
Channel get inactive, currentNettyMessage = null
2019-03-25 15:39:58,995 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor            - Discarding the 
results produced by task execution 1544fc9abe16995030af524c69ebc8e2.
2019-03-25 15:39:58,995 INFO  org.apache.flink.runtime.taskmanager.Task         
            - [NestedLoopJoin(where: (ss_hdemo_sk = hd_demo_sk), buildRight) -> 
Calc(select: (ss_sold_time_sk, ss_store_sk)) -> NestedLoopJoin(where: 
(ss_sold_time_sk = t_time_sk), buildRight) -> Calc(select: (ss_store_sk)), 
NestedLoopJoin(where: (ss_sold_time_sk = t_time_sk), buildRight) -> 
Calc(select: (ss_store_sk))] (86/100) (f7cb3cd73b320e1ff752558927dd8e64) 
switched from RUNNING to FAILED.
org.apache.flink.runtime.io.network.partition.DataConsumptionException: 
java.nio.channels.ClosedChannelException
        [stack trace identical to the DataConsumptionException trace above]
2019-03-25 15:39:58,996 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor            - Discarding the 
results produced by task execution f4b624f91b9b64bf4996c8487952bbac.
2019-03-25 15:39:58,996 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor            - Discarding the 
results produced by task execution 896dd91a47b2ddb365d49198de8973ca.
2019-03-25 15:39:58,996 INFO  org.apache.flink.runtime.taskmanager.Task         
            - Attempting to cancel task [NestedLoopJoin(where: (ss_hdemo_sk = 
hd_demo_sk), buildRight) -> Calc(select: (ss_sold_time_sk, ss_store_sk)) -> 
NestedLoopJoin(where: (ss_sold_time_sk = t_time_sk), buildRight) -> 
Calc(select: (ss_store_sk)), NestedLoopJoin(where: (ss_sold_time_sk = 
t_time_sk), buildRight) -> Calc(select: (ss_store_sk))] (86/100) 
(f7cb3cd73b320e1ff752558927dd8e64).
2019-03-25 15:39:58,997 INFO  org.apache.flink.runtime.taskmanager.Task         
            - Task [NestedLoopJoin(where: (ss_hdemo_sk = hd_demo_sk), 
buildRight) -> Calc(select: (ss_sold_time_sk, ss_store_sk)) -> 
NestedLoopJoin(where: (ss_sold_time_sk = t_time_sk), buildRight) -> 
Calc(select: (ss_store_sk)), NestedLoopJoin(where: (ss_sold_time_sk = 
t_time_sk), buildRight) -> Calc(select: (ss_store_sk))] (86/100) is already in 
state FAILED
2019-03-25 15:39:58,997 INFO  org.apache.flink.runtime.taskmanager.Task         
            - Attempting to cancel task NestedLoopJoin(where: (ss_store_sk = 
s_store_sk), buildLeft) -> Calc(select: (0 AS $f0)) (52/100) 
(8614bc26847665b716dd44a0d6f7e666).
2019-03-25 15:39:58,997 INFO  org.apache.flink.runtime.taskmanager.Task         
            - NestedLoopJoin(where: (ss_store_sk = s_store_sk), buildLeft) -> 
Calc(select: (0 AS $f0)) (52/100) (8614bc26847665b716dd44a0d6f7e666) switched 
from RUNNING to CANCELING.
2019-03-25 15:39:59,000 INFO  org.apache.flink.runtime.taskmanager.Task         
            - Triggering cancellation of task code NestedLoopJoin(where: 
(ss_store_sk = s_store_sk), buildLeft) -> Calc(select: (0 AS $f0)) (52/100) 
(8614bc26847665b716dd44a0d6f7e666).
2019-03-25 15:39:59,000 INFO  org.apache.flink.runtime.taskmanager.Task         
            - Freeing task resources for [NestedLoopJoin(where: (ss_hdemo_sk = 
hd_demo_sk), buildRight) -> Calc(select: (ss_sold_time_sk, ss_store_sk)) -> 
NestedLoopJoin(where: (ss_sold_time_sk = t_time_sk), buildRight) -> 
Calc(select: (ss_store_sk)), NestedLoopJoin(where: (ss_sold_time_sk = 
t_time_sk), buildRight) -> Calc(select: (ss_store_sk))] (86/100) 
(f7cb3cd73b320e1ff752558927dd8e64).
2019-03-25 15:39:59,001 WARN  org.apache.flink.runtime.filecache.FileCache      
            - improper use of releaseJob() without a matching number of 
createTmpFiles() calls for jobId f338e27ce7eeb37bb0518b73c5aae8ce
2019-03-25 15:39:59,001 INFO  org.apache.flink.runtime.taskmanager.Task         
            - Ensuring all FileSystem streams are closed for task 
[NestedLoopJoin(where: (ss_hdemo_sk = hd_demo_sk), buildRight) -> Calc(select: 
(ss_sold_time_sk, ss_store_sk)) -> NestedLoopJoin(where: (ss_sold_time_sk = 
t_time_sk), buildRight) -> Calc(select: (ss_store_sk)), NestedLoopJoin(where: 
(ss_sold_time_sk = t_time_sk), buildRight) -> Calc(select: (ss_store_sk))] 
(86/100) (f7cb3cd73b320e1ff752558927dd8e64) [FAILED]
2019-03-25 15:39:59,003 ERROR 
org.apache.flink.streaming.runtime.tasks.StreamTask           - Could not 
execute the task NestedLoopJoin(where: (ss_store_sk = s_store_sk), buildLeft) 
-> Calc(select: (0 AS $f0)) (52/100), aborting the execution
java.lang.IllegalStateException: Released
        at 
org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:600)
        at 
org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:563)
        at 
org.apache.flink.runtime.io.network.partition.consumer.UnionInputGate.waitAndGetNextInputGate(UnionInputGate.java:250)
        at 
org.apache.flink.runtime.io.network.partition.consumer.UnionInputGate.getNextBufferOrEvent(UnionInputGate.java:205)
        at 
org.apache.flink.runtime.io.network.partition.consumer.UnionInputGate.getNextBufferOrEvent(UnionInputGate.java:188)
        at 
org.apache.flink.streaming.runtime.io.BarrierTracker.getNext(BarrierTracker.java:171)
        at 
org.apache.flink.streaming.runtime.io.BarrierTracker.getNextNonBlocked(BarrierTracker.java:112)
        at 
org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.processInput(StreamTwoInputProcessor.java:381)
        at 
org.apache.flink.streaming.runtime.tasks.TwoInputStreamTask.run(TwoInputStreamTask.java:125)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:324)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:727)
        at java.lang.Thread.run(Thread.java:745)
2019-03-25 15:39:59,005 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor            - Un-registering 
task and sending final execution state FAILED to JobManager for task 
[NestedLoopJoin(where: (ss_hdemo_sk = hd_demo_sk), buildRight) -> Calc(select: 
(ss_sold_time_sk, ss_store_sk)) -> NestedLoopJoin(where: (ss_sold_time_sk = 
t_time_sk), buildRight) -> Calc(select: (ss_store_sk)), NestedLoopJoin(where: 
(ss_sold_time_sk = t_time_sk), buildRight) -> Calc(select: (ss_store_sk))] 
f7cb3cd73b320e1ff752558927dd8e64.
2019-03-25 15:39:59,005 INFO  
org.apache.flink.runtime.io.network.netty.ZeroCopyNettyMessageDecoder  - 
Channel get inactive, currentNettyMessage = null
2019-03-25 15:39:59,005 INFO  org.apache.flink.runtime.taskmanager.Task         
            - NestedLoopJoin(where: (ss_store_sk = s_store_sk), buildLeft) -> 
Calc(select: (0 AS $f0)) (52/100) (8614bc26847665b716dd44a0d6f7e666) switched 
from CANCELING to CANCELED.

2019-03-25 17:58:24,071 INFO  
org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction  - do 
work for a split, elapsed time: 54052
2019-03-25 17:58:24,072 ERROR 
org.apache.flink.streaming.runtime.tasks.StreamTask           - Could not 
execute the task Source: hive-table-source: isPartitionPrune:false 
isFilterPushDown:false -> Calc(select: (ss_sold_date_sk, ss_store_sk, 
ss_sales_price)) (41/100), aborting the execution
org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: 
Could not forward element to next operator
        at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingWithOneInputStreamOperatorOutput.pushToOperator(OperatorChain.java:638)
        at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingWithOneInputStreamOperatorOutput.collect(OperatorChain.java:612)
        at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingWithOneInputStreamOperatorOutput.collect(OperatorChain.java:575)
        at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:763)
        at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:741)
        at 
org.apache.flink.streaming.api.operators.StreamSource$1.collect(StreamSource.java:192)
        at 
org.apache.flink.streaming.api.operators.StreamSource$1.collect(StreamSource.java:173)
        at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
        at 
org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:106)
        at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
        at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:70)
        at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:106)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:324)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:727)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException
        at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:96)
        at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:76)
        at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:42)
        at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:763)
        at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:741)
        at BatchExecCalcRule$10.processElement(Unknown Source)
        at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingWithOneInputStreamOperatorOutput.pushToOperator(OperatorChain.java:635)
        ... 14 more
Caused by: java.lang.InterruptedException
        at java.lang.Object.wait(Native Method)
        at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:250)
        at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:207)
        at 
org.apache.flink.runtime.io.network.partition.InternalResultPartition.requestNewBufferBuilder(InternalResultPartition.java:448)
        at 
org.apache.flink.runtime.io.network.partition.InternalResultPartition.copyFromSerializerToTargetChannel(InternalResultPartition.java:526)
        at 
org.apache.flink.runtime.io.network.partition.InternalResultPartition.emitRecord(InternalResultPartition.java:206)
        at 
org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:86)
        at 
org.apache.flink.streaming.runtime.io.StreamRecordWriter.emit(StreamRecordWriter.java:81)
        at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:93)
        ... 20 more
2019-03-25 17:58:24,073 INFO  org.apache.flink.runtime.taskmanager.Task         
            - Source: hive-table-source: isPartitionPrune:false 
isFilterPushDown:false -> Calc(select: (ss_sold_date_sk, ss_store_sk, 
ss_sales_price)) (41/100) (d7f49173e3c064de695168ce12038cc1) switched from 
CANCELING to CANCELED.
2019-03-25 17:58:24,073 INFO  org.apache.flink.runtime.taskmanager.Task         
            - Freeing task resources for Source: hive-table-source: 
isPartitionPrune:false isFilterPushDown:false -> Calc(select: (ss_sold_date_sk, 
ss_store_sk, ss_sales_price)) (41/100) (d7f49173e3c064de695168ce12038cc1).
2019-03-25 17:58:24,072 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor            - Discarding the 
results produced by task execution cf6e86b2eee81710241c8201466bd905.
2019-03-25 17:58:24,072 INFO  
org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction  - 
close a split, cost: 1
2019-03-25 17:58:24,073 ERROR 
org.apache.flink.streaming.runtime.tasks.StreamTask           - Could not 
execute the task Source: hive-table-source: isPartitionPrune:false 
isFilterPushDown:false -> Calc(select: (ss_sold_date_sk, ss_store_sk, 
ss_sales_price)) (55/100), aborting the execution
org.apache.flink.runtime.execution.CancelTaskException
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:329)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:727)
        at java.lang.Thread.run(Thread.java:745)
2. SQL syntax not supported. How did your Flink-vs-Spark comparison tests pass? Could you provide a copy of your detailed test report?
WITH inv AS (
    SELECT w_warehouse_name, w_warehouse_sk, i_item_sk, d_moy, stdev
      , mean
      , CASE mean
        WHEN 0 THEN NULL
        ELSE stdev / mean
      END AS cov
    FROM (
      SELECT w_warehouse_name, w_warehouse_sk, i_item_sk, d_moy
        , stddev_samp(inv_quantity_on_hand) AS stdev, AVG(inv_quantity_on_hand) 
AS mean
      FROM inventory, item, warehouse, date_dim
      WHERE inv_item_sk = i_item_sk
        AND inv_warehouse_sk = w_warehouse_sk
        AND inv_date_sk = d_date_sk
        AND d_year = 2001
      GROUP BY w_warehouse_name, w_warehouse_sk, i_item_sk, d_moy
    ) foo
    WHERE CASE mean
      WHEN 0 THEN 0
      ELSE stdev / mean
    END > 1
  )
SELECT inv1.w_warehouse_sk, inv1.i_item_sk, inv1.d_moy, inv1.mean, inv1.cov
  , inv2.w_warehouse_sk, inv2.i_item_sk, inv2.d_moy, inv2.mean, inv2.cov
FROM inv inv1, inv inv2
WHERE inv1.i_item_sk = inv2.i_item_sk
  AND inv1.w_warehouse_sk = inv2.w_warehouse_sk
  AND inv1.d_moy = 1
  AND inv2.d_moy = 1 + 1
ORDER BY inv1.w_warehouse_sk, inv1.i_item_sk, inv1.d_moy, inv1.mean, inv1.cov, 
inv2.d_moy, inv2.mean, inv2.cov
LIMIT 20;
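My guess (an assumption, not confirmed by the error output) is that the rejected syntax is either `stddev_samp` or the `WITH` clause whose CTE `inv` is referenced twice in the self-join. The CTE self-join itself is ordinary SQL, as this SQLite sketch with hypothetical data shows; if Blink's parser rejects it, inlining `inv` as a subquery on each side of the join is a possible workaround:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical miniature stand-in for the aggregated inventory table.
conn.executescript("""
CREATE TABLE inv_raw (wh INTEGER, item INTEGER, moy INTEGER, qty REAL);
INSERT INTO inv_raw VALUES (1, 7, 1, 4.0), (1, 7, 2, 6.0);
""")
# One CTE, referenced twice (inv1/inv2), mirroring the query's self-join.
rows = conn.execute("""
WITH inv AS (
  SELECT wh, item, moy, AVG(qty) AS mean
  FROM inv_raw
  GROUP BY wh, item, moy
)
SELECT inv1.mean, inv2.mean
FROM inv inv1, inv inv2
WHERE inv1.item = inv2.item
  AND inv1.wh = inv2.wh
  AND inv1.moy = 1
  AND inv2.moy = 1 + 1
""").fetchall()
print(rows)  # [(4.0, 6.0)]
```

If `stddev_samp` is the culprit instead, it can be rewritten in terms of SUM/COUNT, but seeing the actual parser error message would settle which construct is unsupported.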
