[ 
https://issues.apache.org/jira/browse/FLINK-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-21279:
----------------------------
    Priority: Minor  (was: Major)

> Flink on YARN sinks nothing
> ---------------------------
>
>                 Key: FLINK-21279
>                 URL: https://issues.apache.org/jira/browse/FLINK-21279
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / API
>    Affects Versions: 1.12.1
>         Environment: flink: 1.12.1
> hive: 3.1.2
>            Reporter: Spongebob
>            Priority: Minor
>         Attachments: yarn.log
>
>
> Here's the data chain of the Flink application:
>  # read an HDFS file via ExecutionEnvironment to get a DataSet
>  # collect the DataSet into a Seq object.
>  # transform the Seq object into multiple ArrayBuffer[Expression] objects.
>  # create Table objects from the ArrayBuffers using `fromValues`
>  # register catalog views for the Table objects
>  # sink the catalog views into Hive tables (see the sketch below).
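> A minimal sketch of this chain, assuming a single-column schema and
> hypothetical paths and table names (only the call pattern follows the
> steps above):
> {code:scala}
> import org.apache.flink.api.scala._
> import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}
> import org.apache.flink.table.api.Expressions.row
> import org.apache.flink.table.expressions.Expression
> import scala.collection.mutable.ArrayBuffer
>
> object SinkRepro {
>   def main(args: Array[String]): Unit = {
>     // 1. read an HDFS file with the batch ExecutionEnvironment
>     val env = ExecutionEnvironment.getExecutionEnvironment
>     val ds: DataSet[String] = env.readTextFile("hdfs:///path/to/input")
>
>     // 2. collect() is an eager execution function -- on YARN in detached
>     //    mode this is what the reported exception complains about
>     val lines: Seq[String] = ds.collect()
>
>     // 3. turn the collected data into Expression rows
>     val buf = ArrayBuffer[Expression]()
>     lines.foreach(l => buf += row(l))
>
>     // 4. + 5. build a Table with fromValues and register it as a view
>     val tEnv = TableEnvironment.create(
>       EnvironmentSettings.newInstance().inBatchMode().build())
>     tEnv.createTemporaryView("v", tEnv.fromValues(buf: _*))
>
>     // 6. sink the view into a Hive table (HiveCatalog assumed registered)
>     tEnv.executeSql("INSERT INTO hive_table SELECT * FROM v")
>   }
> }
> {code}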
> Everything is normal until step 6, but the application behaves
> inconsistently between the local IDE and the YARN cluster. When I run it
> in the local IDE, it exhausts all network buffer memory and then fails
> (the HDFS file is actually less than 2 MB, and I had set the parallelism
> of the TableEnvironment to 1; if I run only one of the sinks, it completes
> normally). On the YARN cluster it throws the exception `Job was submitted
> in detached mode. Results of job execution, such as accumulators, runtime,
> etc. are not available. Please make sure your program doesn't call an eager
> execution function [collect, print, printToErr, count]`; the application
> then appears to run successfully but sinks nothing to Hive, and I find it
> never requests any slot while running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)