Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5085#issuecomment-84063585
> I'm a savvy customer. I name my jar file "spark-assembly.jar" because I
> see that as the outermost jar file in CDH (I get rid of unnecessary symlinks).
Well, in my view the customer is on his own as soon as he starts to modify
the distribution. If he makes that change in 1.3, he'll have to figure out that he
needs to change other scripts too. Yes, it's easier to change a shell script
than it is to change a compiled library, but I still don't see why Spark needs
to support a random user who wants to deploy his own view of what the Spark
distribution should look like. If that user is really savvy, he's savvy enough
to fix what needs to be fixed on his own.
The problem with loosening the regex is theoretical, as Sean says. But
consider this output, after running `sbt package assembly`:
$ ls -l assembly/target/scala-2.10/
total 156928
-rw-rw-r-- 1 vanzin vanzin 160683349 Mar 18 06:50 spark-assembly-1.3.0-SNAPSHOT-hadoop2.5.0.jar
-rw-rw-r-- 1 vanzin vanzin       290 Mar 20 09:35 spark-assembly_2.10-1.4.0-SNAPSHOT.jar
Now you have a non-theoretical breakage because of your change.
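To make the failure mode concrete, here is a minimal sketch of the kind of assembly-jar lookup a launch script has to perform, assuming a glob over assembly/target/scala-2.10; the directory variable, pattern, and error message are illustrative, not the actual Spark launcher code. With a loosened pattern such as spark-assembly*.jar, both jars listed above match and the lookup has to bail out:

    # Hypothetical sketch: find the single assembly jar a launch script needs.
    ASSEMBLY_DIR="assembly/target/scala-2.10"

    # A loosened pattern like spark-assembly*.jar matches both jars shown above.
    num_jars=$(ls -1 "$ASSEMBLY_DIR"/spark-assembly*.jar 2>/dev/null | wc -l)

    if [ "$num_jars" -gt 1 ]; then
      echo "Found multiple Spark assembly jars in $ASSEMBLY_DIR; remove all but one." 1>&2
      exit 1
    fi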