Github user yew1eb commented on the issue: https://github.com/apache/flink/pull/6118

Yes, this is a Hadoop file system discovery issue (similar case: https://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file). But if the Flink job bundles this `hadoop-common` as a dependency, and the Flink cluster uses HDFS to store checkpoints, the job will throw an error when initializing the file system for the checkpoint. I think we should improve the "load file system factories" part. See this code snippet from `org.apache.flink.core.fs.FileSystem`:

```java
/** All available file system factories. */
private static final List<FileSystemFactory> RAW_FACTORIES = loadFileSystems();

/** Mapping of file system schemes to the corresponding factories,
 * populated in {@link FileSystem#initialize(Configuration)}. */
private static final HashMap<String, FileSystemFactory> FS_FACTORIES = new HashMap<>();

/** The default factory that is used when no scheme matches. */
private static final FileSystemFactory FALLBACK_FACTORY = loadHadoopFsFactory();
```

@StephanEwen, what do you think about this?
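For reference, the usual workaround on the Hadoop side (from the Stack Overflow thread above) is to pin the file system implementations in the Hadoop `Configuration` explicitly, so scheme resolution no longer depends on the `META-INF/services/org.apache.hadoop.fs.FileSystem` entries that can get clobbered when several Hadoop jars are shaded into one artifact. A minimal sketch, assuming the standard `hadoop-common` and `hadoop-hdfs` client classes are on the classpath; `namenode:8020` is a placeholder address:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ExplicitFsConfig {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Register the implementations explicitly instead of relying on the
        // ServiceLoader entries in META-INF/services, which may be missing
        // or overwritten in a shaded/uber jar.
        conf.set("fs.hdfs.impl",
                org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
        conf.set("fs.file.impl",
                org.apache.hadoop.fs.LocalFileSystem.class.getName());

        // With the implementations pinned, scheme lookup succeeds even if
        // the service files were lost during shading.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
        System.out.println(fs.getClass().getName());
    }
}
```

A related build-side fix for shaded jobs is to merge (rather than overwrite) the service files, e.g. with the `ServicesResourceTransformer` of the `maven-shade-plugin`, so that the `file`, `hdfs`, and other scheme entries from different Hadoop artifacts all survive in the final jar.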
---