> There's a bit of confusion setting in here; the FileSystem implementations
> spark uses are subclasses of org.apache.hadoop.fs.FileSystem; the nio
> class with the same name is different.
> grab the google cloud storage connector and put it on your classpath
I was using the gs:// filesystem as an example. I should have mentioned
that I'm aware of the workaround for that one.
I'm not asking how to read from Google Cloud Storage from Spark.
What I'm interested in is Java's built-in extension mechanism for its
"Path" objects, a.k.a. custom filesystem providers
(java.nio.file.spi.FileSystemProvider).
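For anyone unfamiliar with the mechanism: the JDK discovers these providers via the service-loader file META-INF/services/java.nio.file.spi.FileSystemProvider, and Paths.get(URI) dispatches on the URI scheme. A minimal sketch of poking at what's installed (the "file" provider is always present; class and path names here are just for illustration):

```java
import java.net.URI;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.spi.FileSystemProvider;

public class ProviderDemo {
    public static void main(String[] args) {
        // Every provider on the classpath that registers itself via
        // META-INF/services/java.nio.file.spi.FileSystemProvider shows up here.
        for (FileSystemProvider p : FileSystemProvider.installedProviders()) {
            System.out.println(p.getScheme());
        }
        // Paths.get(URI) picks the provider whose getScheme() matches the
        // URI scheme; the default "file" provider handles this one.
        Path local = Paths.get(URI.create("file:///tmp"));
        System.out.println(local);
    }
}
```

Anything printed by that loop is a scheme you can hand to Paths.get(URI) and then use with the regular java.nio.file.Files API.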
What if I want to use my own different custom filesystem provider?
Something that allows me to take a funky-looking string like "foo://bar/baz"
and open it like a regular file, even though doing so opens a TCP
connection to the bar server and asks it for the "baz" file out of
its holographic quantum entangled storage (or other unspecified future
technology that can provide file-like objects).
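Concretely, that would mean subclassing java.nio.file.spi.FileSystemProvider for the hypothetical "foo" scheme. A sketch of the skeleton (everything except the scheme is stubbed; a real provider would implement the channel and filesystem hooks, and register the class name in META-INF/services/java.nio.file.spi.FileSystemProvider):

```java
import java.io.IOException;
import java.net.URI;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.FileAttributeView;
import java.nio.file.spi.FileSystemProvider;
import java.util.Map;
import java.util.Set;

// Hypothetical provider for the made-up "foo" scheme; all I/O methods
// throw until implemented.
public class FooFileSystemProvider extends FileSystemProvider {

    @Override public String getScheme() { return "foo"; }

    @Override public FileSystem newFileSystem(URI uri, Map<String, ?> env) throws IOException {
        // This is where you would open the TCP connection to the "bar" host
        // and return a FileSystem backed by it.
        throw new UnsupportedOperationException("not implemented yet");
    }

    @Override public FileSystem getFileSystem(URI uri) {
        throw new FileSystemNotFoundException(uri.toString());
    }

    @Override public Path getPath(URI uri) {
        throw new UnsupportedOperationException();
    }

    @Override public SeekableByteChannel newByteChannel(Path path,
            Set<? extends OpenOption> options, FileAttribute<?>... attrs) throws IOException {
        // Files.newInputStream(path) ultimately lands here; a real
        // implementation would return a channel streaming "baz" off the wire.
        throw new UnsupportedOperationException();
    }

    @Override public DirectoryStream<Path> newDirectoryStream(Path dir,
            DirectoryStream.Filter<? super Path> filter) throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override public void createDirectory(Path dir, FileAttribute<?>... attrs) throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override public void delete(Path path) throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override public void copy(Path source, Path target, CopyOption... options) throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override public void move(Path source, Path target, CopyOption... options) throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override public boolean isSameFile(Path path, Path path2) throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override public boolean isHidden(Path path) throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override public FileStore getFileStore(Path path) throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override public void checkAccess(Path path, AccessMode... modes) throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override public <V extends FileAttributeView> V getFileAttributeView(Path path,
            Class<V> type, LinkOption... options) {
        return null;
    }

    @Override public <A extends BasicFileAttributes> A readAttributes(Path path,
            Class<A> type, LinkOption... options) throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override public Map<String, Object> readAttributes(Path path, String attributes,
            LinkOption... options) throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override public void setAttribute(Path path, String attribute, Object value,
            LinkOption... options) throws IOException {
        throw new UnsupportedOperationException();
    }

    public static void main(String[] args) {
        System.out.println(new FooFileSystemProvider().getScheme());
    }
}
```

Once registered via the service file and put on the classpath, Paths.get(URI.create("foo://bar/baz")) would route to this provider. Whether Spark's Hadoop-based I/O path would ever consult it is exactly the open question, since Spark resolves paths through org.apache.hadoop.fs.FileSystem, not java.nio.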