Steve Loughran commented on HADOOP-14598:

HADOOP-14383 was designed to allow anyone to use an http or https URL as a 
source of data in anything which takes a filesystem for reading things. This is 
good, and supporting schemes other than http/https doesn't cause any problems.

All that's problematic is that the bit of code which exports every Hadoop FS 
client as a URL via the JVM mustn't register handlers for the core JVM 
HTTP/HTTPS schemes, as those already work very well, and other bits of code 
(here: the Azure SDK) have assumptions/requirements about the class returned 
when you try to open such a URL.
This patch stops these schemes from being registered, and sets things up for 
future schemes to be excluded too.
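To illustrate the mechanism (a hedged sketch, not the actual patch: the class and set names below are made up, and no Hadoop dependency is used): when a factory installed via URL.setURLStreamHandlerFactory() returns null for a protocol, the JVM falls back to its built-in handler search, so http/https URLs keep returning java.net.HttpURLConnection, which is what the Azure SDK casts to.

```java
import java.net.URLStreamHandler;
import java.net.URLStreamHandlerFactory;
import java.util.Set;

// Hypothetical sketch of the fix: a factory that declines to handle
// http/https, so the JVM's default handlers (whose openConnection()
// returns an HttpURLConnection) stay in place for those schemes.
public class FsUrlStreamHandlerFactorySketch implements URLStreamHandlerFactory {

  // Schemes the JVM already serves well; never shadow these.
  private static final Set<String> JVM_DEFAULT_SCHEMES = Set.of("http", "https");

  @Override
  public URLStreamHandler createURLStreamHandler(String protocol) {
    if (JVM_DEFAULT_SCHEMES.contains(protocol.toLowerCase())) {
      // Returning null makes java.net.URL continue its default
      // handler lookup instead of using ours.
      return null;
    }
    // A real implementation would return a handler backed by the
    // Hadoop FileSystem client for hdfs://, wasb://, etc.
    // Omitted here to keep the sketch free of Hadoop dependencies.
    return null;
  }
}
```

The key design point is that the exclusion happens inside createURLStreamHandler() rather than at registration time, since URL.setURLStreamHandlerFactory() can only ever be called once per JVM.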

> Wasb connection failing: FsUrlConnection cannot be cast to HttpURLConnection
> ----------------------------------------------------------------------------
>                 Key: HADOOP-14598
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14598
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure, test
>    Affects Versions: 2.9.0, 3.0.0-alpha4
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Blocker
>         Attachments: HADOOP-14598-002.patch, HADOOP-14598-003.patch, 
> HADOOP-14598-004.patch, HADOOP-14598-005.patch
> my downstream-of-spark cloud integration tests (where I haven't been running 
> the azure ones for a while) now have a few of the tests failing
> {code}
>  org.apache.hadoop.fs.azure.AzureException: 
> com.microsoft.azure.storage.StorageException: 
> org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
> java.net.HttpURLConnection
> {code}
> No obvious cause, and it's only apparently happening in some of the 
> (scalatest) tests.

This message was sent by Atlassian JIRA
