[ 
https://issues.apache.org/jira/browse/FLINK-29242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian Zheng updated FLINK-29242:
-------------------------------
    Description: 
h2. Detail

See the following code in GSBlobStorageImpl:
{code:java}
@Override
public int write(byte[] content, int start, int length) throws IOException {
    LOGGER.trace("Writing {} bytes to blob {}", length, blobIdentifier);
    Preconditions.checkNotNull(content);
    Preconditions.checkArgument(start >= 0);
    Preconditions.checkArgument(length >= 0);

    ByteBuffer byteBuffer = ByteBuffer.wrap(content, start, length);
    int written = writeChannel.write(byteBuffer);
    LOGGER.trace("Wrote {} bytes to blob {}", written, blobIdentifier);
    return written;
}

@Override
public void close() throws IOException {
    LOGGER.trace("Closing write channel to blob {}", blobIdentifier);
    writeChannel.close();
} {code}
This happens when I write data into Google Cloud Storage via flink-gs-fs-hadoop.

The job consistently hits read time out exceptions, which can be reproduced within a 
very short time of task execution.
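
For context, a minimal sketch of the kind of job that hits this (this is an assumption about the setup, not the exact job: a FileSink writing to a gs:// path with checkpointing enabled; the bucket and path below are placeholders):
{code:java}
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class GcsWriteRepro {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpointing is enabled; the checkpoint fails once the close() timeout hits.
        env.enableCheckpointing(60_000);

        env.fromElements("a", "b", "c")
                .sinkTo(
                        // "gs://my-bucket/output" is a placeholder path.
                        FileSink.forRowFormat(
                                        new Path("gs://my-bucket/output"),
                                        new SimpleStringEncoder<String>("UTF-8"))
                                .build());

        env.execute("gcs-write-repro");
    }
} {code}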
Tracing the code shows that the exception always occurs when writeChannel.close() is 
executed. I tried adding a retry by modifying the source code, but it did not solve 
the problem; the timeout is 20s, and the checkpoint fails whenever this problem 
occurs.
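
For reference, the retry I tried was roughly along these lines (a sketch only, not the exact patch; the helper name and attempt count are illustrative). It wrapped the plain writeChannel.close() call, and the read time out still surfaced on every attempt:
{code:java}
import java.io.Closeable;
import java.io.IOException;

final class CloseRetry {
    // Illustrative attempt count, not from the actual patch.
    private static final int MAX_ATTEMPTS = 3;

    // Replaces the plain writeChannel.close() call with a bounded retry loop.
    static void closeWithRetry(Closeable channel) throws IOException {
        IOException last = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                channel.close();
                return;
            } catch (IOException e) {
                last = e; // the read time out is thrown here on every attempt
            }
        }
        throw last;
    }
} {code}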

With this component, I cannot write data to GCS via Flink.

  was:
h2. Detail

 

 


> [flink-filesystems] flink-gs-fs-hadoop Read time out when close write channel
> -----------------------------------------------------------------------------
>
>                 Key: FLINK-29242
>                 URL: https://issues.apache.org/jira/browse/FLINK-29242
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystems
>    Affects Versions: 1.15.0
>         Environment: flink version: 1.15
> jdk: 1.8
>  
>            Reporter: Jian Zheng
>            Priority: Major
>
> h2. Detail
> See the following code in GSBlobStorageImpl:
> {code:java}
> @Override
> public int write(byte[] content, int start, int length) throws IOException {
>     LOGGER.trace("Writing {} bytes to blob {}", length, blobIdentifier);
>     Preconditions.checkNotNull(content);
>     Preconditions.checkArgument(start >= 0);
>     Preconditions.checkArgument(length >= 0);
>     ByteBuffer byteBuffer = ByteBuffer.wrap(content, start, length);
>     int written = writeChannel.write(byteBuffer);
>     LOGGER.trace("Wrote {} bytes to blob {}", written, blobIdentifier);
>     return written;
> }
> @Override
> public void close() throws IOException {
>     LOGGER.trace("Closing write channel to blob {}", blobIdentifier);
>     writeChannel.close();
> } {code}
> This happens when I write data into Google Cloud Storage via flink-gs-fs-hadoop.
> The job consistently hits read time out exceptions, which can be reproduced within a 
> very short time of task execution.
> Tracing the code shows that the exception always occurs when writeChannel.close() is 
> executed. I tried adding a retry by modifying the source code, but it did not solve 
> the problem; the timeout is 20s, and the checkpoint fails whenever this problem 
> occurs.
> With this component, I cannot write data to GCS via Flink.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
