On Sat, Feb 17, 2024 at 3:04 AM Рамик И wrote:
>
> Hi,
> I'm using Spark Streaming to read from Kafka and write to S3. Sometimes I
> get errors when writing: org.apache.hadoop.fs.FileAlreadyExistsException.
>
> Spark version: 3.5.0
> Scala version: 2.13.8
> Cluster: k8s
>
> libraryDep
At a bare minimum, you will need to add some error trapping and exception
handling!
scala> import org.apache.hadoop.fs.FileAlreadyExistsException
import org.apache.hadoop.fs.FileAlreadyExistsException

and wrap your write in a try/catch, for example:

try {
  df.coalesce(1)
    .write
    .option("fs.s3a.committer.require.uuid", "true") // "uuid" is an assumed completion; the option name was truncated in the original message
    .mode("overwrite")
    .parquet("s3a://my-bucket/output/") // hypothetical target path
} catch {
  case e: FileAlreadyExistsException =>
    println(s"Write failed, target already exists: ${e.getMessage}")
}
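Since FileAlreadyExistsException on S3 usually means stale output left behind
by a previous failed or retried attempt, another option is to check for and
remove the target before writing. A minimal sketch, assuming a hypothetical
s3a path and the spark-shell's built-in SparkSession (spark):

import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

val target = "s3a://my-bucket/output/" // hypothetical path
val targetPath = new Path(target)
// Resolve the FileSystem for the s3a URI using the job's Hadoop configuration
val fs = FileSystem.get(new URI(target), spark.sparkContext.hadoopConfiguration)
if (fs.exists(targetPath)) {
  fs.delete(targetPath, true) // recursively remove stale output from a failed run
}

If you are writing from Structured Streaming, the same check can be run per
micro-batch inside foreachBatch, since each batch produces its own files.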