ctubbsii commented on PR #48: URL: https://github.com/apache/accumulo-classloaders/pull/48#issuecomment-3762011843
> I think #49 is cleaner because it avoids registering JVM wide resources (which could cause code outside the classloader plugin to change behavior).

It introduces the possibility of far more problems than it solves. Take a look at the changes I just pushed; this fixes it much more narrowly. All it does is say "use Hadoop to open `hdfs:` URLs". Any other application's implementation would ultimately do the same thing, but if somebody really wanted something else to handle `hdfs:` URLs in their code, they can easily do so by manipulating the Java ServiceLoader files on their classpath (`META-INF/services`). In my change, we no longer forcibly register it in code at all. We just put a provider on the classpath, which the user can choose to omit or exclude without changing code (that may require rebuilding the jar, or using a plugin to transform the jar contents to remove the service file, but it's not very difficult). A minimal sketch of what such a provider could look like follows below.

> I really like the locality. In the current code FileResolver is where the download happens; if it's happy with the URL/URI then things are fine. Looking at the Java code that converts URIs to URLs, the main thing it checks is whether the URI has a scheme, which #49 adds a check for in FileResolver.

My concern isn't with `hdfs:` URLs being represented as URIs. My concern is with changing our entire codebase to use URIs, and then failing to account for all the edge cases where a `file:` URI (which can be relative) doesn't behave the same as a `file:` URL (which must be absolute). There's just so much weirdness that can get pushed through (see the second sketch below). Using URI is hardly better than using String.
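To illustrate the first point, here's a minimal sketch of what a ServiceLoader-discovered provider could look like. The package, class name, and the delegation to Hadoop's `FsUrlStreamHandlerFactory` are illustrative assumptions for this sketch, not necessarily the exact code in the PR:

```java
// Illustrative sketch only; com.example.HdfsUrlStreamHandlerProvider is a
// hypothetical name, and the real provider may be structured differently.
package com.example;

import java.net.URLStreamHandler;
import java.net.spi.URLStreamHandlerProvider;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;

public class HdfsUrlStreamHandlerProvider extends URLStreamHandlerProvider {

  // Hadoop's factory knows how to create stream handlers for Hadoop
  // FileSystem schemes such as hdfs:
  private final FsUrlStreamHandlerFactory hadoopFactory = new FsUrlStreamHandlerFactory();

  @Override
  public URLStreamHandler createURLStreamHandler(String protocol) {
    // Claim only the hdfs scheme; returning null lets the JVM fall back to
    // its built-in handlers (and any other providers) for everything else
    return "hdfs".equals(protocol) ? hadoopFactory.createURLStreamHandler(protocol) : null;
  }
}
```

The JVM discovers it through a one-line provider-configuration file on the classpath, and deleting or replacing that file is the no-code opt-out described above:

```
# META-INF/services/java.net.spi.URLStreamHandlerProvider
com.example.HdfsUrlStreamHandlerProvider
```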
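And to make the URI-vs-URL mismatch concrete, a small self-contained sketch of plain JDK behavior (nothing specific to this PR):

```java
import java.net.URI;

public class UriVsUrlDemo {
  public static void main(String[] args) throws Exception {
    // A relative reference is a perfectly legal URI...
    URI noScheme = new URI("lib/plugin.jar");
    // ...but it can't become a URL: URI.toURL() requires an absolute URI
    try {
      noScheme.toURL();
    } catch (IllegalArgumentException expected) {
      System.out.println(expected.getMessage()); // "URI is not absolute"
    }

    // An opaque file: URI (no "//" authority, non-rooted path) has a scheme,
    // so it counts as absolute and toURL() happily succeeds, even though
    // what the resulting URL actually points at is ambiguous
    URI opaque = new URI("file:lib/plugin.jar");
    System.out.println(opaque.isOpaque()); // true
    System.out.println(opaque.toURL());    // file:lib/plugin.jar
  }
}
```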
