>>> ...n and inject the required jar files? Has that been done by anyone?
>>>
>>> On Fri, Feb 15, 2019 at 2:33 AM Yun Tang wrote:
>>>
>>>> Hi
>>>>
>>>> When 'RollingSink' restores, it would first check whether the file system
>>>> supported the truncate method. If the file system did not support it, it
>>>> would use another work-around solution, which means you should not meet the
>>>> problem. Otherwise 'RollingSink' thought truncate was supported and found
>>>> the reflection method of 'truncate' while the file system actually did not
>>>> support it. The work-around writes a marker file with suffix '.valid-length'
>>>> and prefix '_' to specify how many bytes in a bucket are valid.
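The '.valid-length' work-around described above can be sketched as follows. This is an illustrative stand-alone simulation, not Flink's actual reader code: the file names and the plain-text marker format are assumptions, though they follow the prefix/suffix convention the email describes.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class ValidLengthDemo {

    // Reads a bucket part file, honoring an accompanying '_<name>.valid-length'
    // marker that records how many leading bytes of the part file are valid.
    static byte[] readValidBytes(Path partFile) throws IOException {
        Path marker = partFile.resolveSibling(
                "_" + partFile.getFileName() + ".valid-length");
        byte[] all = Files.readAllBytes(partFile);
        if (!Files.exists(marker)) {
            return all; // no marker: the whole file is valid
        }
        long validLength = Long.parseLong(Files.readString(marker).trim());
        return Arrays.copyOf(all, (int) validLength);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("bucket");
        Path part = dir.resolve("part-0-0");
        // 11 bytes were written, but only the first 5 were committed
        // before the (simulated) crash.
        Files.write(part, "hello world".getBytes(StandardCharsets.UTF_8));
        Files.write(dir.resolve("_part-0-0.valid-length"),
                "5".getBytes(StandardCharsets.UTF_8));
        System.out.println(
                new String(readValidBytes(part), StandardCharsets.UTF_8)); // prints "hello"
    }
}
```

Downstream consumers that truncate each part file to its marker value see exactly the checkpointed data, which is how a non-truncating file system can still give exactly-once results.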
>>
>> However, from your second email, the more serious problem should be using
>> 'Buckets' with Hadoop-2.6. From what I know, the `RecoverableWriter` within
>> 'Buckets' can only support Hadoop-2.7+; I'm not sure whether a work-around
>> solution exists.
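The reflective probe for 'truncate' mentioned above can be sketched like this. Note the file system classes here are hypothetical stand-ins, not Hadoop's (the real method is `truncate(Path, long)` on `org.apache.hadoop.fs.FileSystem`, added in 2.7); only the reflection pattern is the point.

```java
import java.lang.reflect.Method;

public class TruncateCheck {

    // Stand-in for a pre-2.7 file system: no truncate method at all.
    static class LegacyFileSystem {
        void rename(String src, String dst) {}
    }

    // Stand-in for a 2.7+ file system exposing truncate(path, newLength).
    static class ModernFileSystem extends LegacyFileSystem {
        boolean truncate(String path, long newLength) { return true; }
    }

    // Mirrors the reflective lookup: returns the Method if present, else null,
    // so callers can fall back to a work-around instead of failing later.
    static Method findTruncate(Class<?> fsClass) {
        try {
            return fsClass.getDeclaredMethod("truncate", String.class, long.class);
        } catch (NoSuchMethodException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(findTruncate(LegacyFileSystem.class) != null); // false
        System.out.println(findTruncate(ModernFileSystem.class) != null); // true
    }
}
```

Probing once at initialization (rather than calling truncate blindly on restore) is what lets the sink pick the '.valid-length' fallback up front on older Hadoop versions.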
>
> Best
> Yun Tang
From: Vishal Santoshi
Sent: Friday, February 15, 2019 3:43
To: user
Subject: Re: StandAlone job on k8s fails with "Unknown method truncate" on
restore
And yes, cannot work with RollingFileSink for the hadoop 2.6 release of 1.7.1
b'coz of this.
java.lang.UnsupportedOperationException: Recoverable writers on Hadoop
are only supported for HDFS and for Hadoop version 2.7 or newer
at
org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriter.&lt;init&gt;(Hadoo
The job uses a RollingFileSink to push data to hdfs. Run an HA standalone
cluster on k8s,
* get the job running
* kill the pod.
The k8s deployment relaunches the pod but fails with
java.io.IOException: Missing data in tmp file:
hdfs://nn-crunchy:8020/tmp/kafka-to-hdfs/ls_kraken_events/dt=2019-0