[ 
https://issues.apache.org/jira/browse/BEAM-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rogan Morrow updated BEAM-12875:
--------------------------------
    Description: 
I am new to this codebase, so apologies if I have any misunderstandings, but 
from what I can tell, when {{SparkExecutableStageFunction}} is called, an 
{{ArtifactRetrievalService}} is created (if the job bundle factory's 
environment cache is cold) to be called by the worker harness.

The issue is that {{FileSystems.setDefaultPipelineOptions}} is not called 
before this, so no file systems are registered. If one is using cloud storage 
such as S3 to stage artifacts, then the {{ArtifactRetrievalService}} will not 
be able to retrieve the artifacts and will throw an exception:
  {{java.lang.IllegalArgumentException: No filesystem found for scheme s3}}
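
To make the failure mode concrete, here is a minimal, self-contained illustration (not the runner code itself) of what happens when an {{s3://}} path is resolved before any file systems have been registered. The class name and bucket path below are made up for the example:

{code:java}
import org.apache.beam.sdk.io.FileSystems;

// Illustration only: resolving an s3:// resource before
// FileSystems.setDefaultPipelineOptions has registered the S3 file system
// fails the same way the ArtifactRetrievalService does when it tries to
// read staged artifacts.
public class MissingFileSystemRepro {
  public static void main(String[] args) {
    // Only the local "file" scheme is registered by default, so this throws
    // java.lang.IllegalArgumentException: No filesystem found for scheme s3
    FileSystems.matchNewResource("s3://my-bucket/staging/artifact.jar", false /* isDirectory */);
  }
}
{code}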

This doesn't affect other runners such as the Flink runner, because the Flink 
runner calls {{FileSystems.setDefaultPipelineOptions}} [in its executable stage 
function|https://github.com/apache/beam/blob/v2.32.0/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/functions/FlinkExecutableStageFunction.java#L151].
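
A possible fix would be to register the file systems on the Spark side before the artifact retrieval service is used, the same way the Flink function does. A rough sketch follows, assuming the deserialized pipeline options that carry the S3 configuration are available at that point; the class name and the way the options are obtained are assumptions for illustration, not the actual Spark runner code:

{code:java}
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

// Sketch of the call that appears to be missing on the Spark side. In the
// real runner this would run once per worker, before the job bundle factory /
// ArtifactRetrievalService is created.
public class RegisterFileSystemsSketch {
  public static void main(String[] args) {
    // Stand-in for the pipeline options the runner already has.
    PipelineOptions pipelineOptions = PipelineOptionsFactory.fromArgs(args).create();

    // Registers every FileSystem whose registrar is on the classpath
    // (s3, gs, hdfs, ...) so that s3:// artifact URLs can be resolved.
    FileSystems.setDefaultPipelineOptions(pipelineOptions);
  }
}
{code}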


> File systems are not registered when ArtifactRetrievalService is created by 
> Spark runner
> ----------------------------------------------------------------------------------------
>
>                 Key: BEAM-12875
>                 URL: https://issues.apache.org/jira/browse/BEAM-12875
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-spark
>    Affects Versions: 2.32.0
>            Reporter: Rogan Morrow
>            Priority: P2
>
> I am new to this codebase, so apologies if I have any misunderstandings, but 
> from what I can tell, when {{SparkExecutableStageFunction}} is called, an 
> {{ArtifactRetrievalService}} is created (if the job bundle factory's 
> environment cache is cold) to be called by the worker harness.
> The issue is that {{FileSystems.setDefaultPipelineOptions}} is not called 
> before this, so no file systems are registered. If one is using cloud storage 
> such as S3 to stage artifacts, then the {{ArtifactRetrievalService}} will not 
> be able to retrieve the artifacts and will throw an exception:
>   {{java.lang.IllegalArgumentException: No filesystem found for scheme s3}}
> This doesn't affect other runners such as the Flink runner, because the Flink 
> runner calls {{FileSystems.setDefaultPipelineOptions}} [in its executable stage 
> function|https://github.com/apache/beam/blob/v2.32.0/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/functions/FlinkExecutableStageFunction.java#L151].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
