Question 1:

For org.apache.hadoop.hdfs.BlockMissingException, you can use the hadoop fs command to check whether that datanode can actually be reached.
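For example, using the file path from your error message, something like the following should tell you whether the block is readable from HDFS at all (fsck will also report which datanodes are supposed to hold the block):

    hadoop fs -cat /user/hive/warehouse/pokes/kv1.txt
    hdfs fsck /user/hive/warehouse/pokes/kv1.txt -files -blocks -locations

If these commands also fail, the problem is on the HDFS side (the datanode is not reachable from the client), not in Flink itself.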


Question 2:
For writing to Hive, you need to use batch mode: set execution.type=batch;
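For example, in the SQL CLI session (same insert statement as in your mail, just with the execution type switched first), something like this should work instead of failing with the AppendStreamTableSink error:

    Flink SQL> set execution.type=batch;
    Flink SQL> insert into pokes select 12,'tom';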







On 2020-05-26 16:42:12, "Enzo wang" <[email protected]> wrote:

Hi Flink group,


Today, while going through the Flink-Hive integration docs, I ran into a few problems. Could you please take a look?
Reference URL: https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/hive_catalog.html


Version and table schema information can be found here: https://gist.github.com/r0c/e244622d66447dfc85a512e75fc2159b


Question 1: Flink SQL fails to read the Hive table pokes


Flink SQL> select * from pokes;
2020-05-26 16:12:11,439 INFO  org.apache.hadoop.mapred.FileInputFormat          
            - Total input paths to process : 4
[ERROR] Could not execute SQL statement. Reason:
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: 
BP-138389591-172.20.0.4-1590471581703:blk_1073741825_1001 
file=/user/hive/warehouse/pokes/kv1.txt







Question 2: Flink SQL fails to write to the Hive table pokes


Flink SQL> insert into pokes select 12,'tom';
[INFO] Submitting SQL update statement to the cluster...
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.TableException: Stream Tables can only be emitted by 
AppendStreamTableSink, RetractStreamTableSink, or UpsertStreamTableSink.







Cheers,
Enzo
