Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4928#issuecomment-77534782
So, I'll probably keep making this comment until someone explains this or
tells me to stuff it, but I do not see why this belongs in the Spark build.
You do not need this profile to build the artifact described here; this can
live anywhere as a big command line.
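To be concrete, anyone can already produce a distro-specific build with a
one-liner and no vendor profile; something like the following sketch (the
profile and the hadoop.version string here are just illustrative and would
vary by distro and release):

    # build against a vendor's Hadoop artifacts; no vendor profile needed
    mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.5.0-cdh5.3.2 -DskipTests clean package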
Why is the vendor not producing this packaging? It's trivial. I presume the
vendor doesn't support or guarantee this works.
It's a minor complication to an already complex build.
I find it a tiny bit wrong to have this declaration in the upstream
project; actually producing vendor-specific artifacts (which I take to be the
point of this) seems even more inappropriate to me.
How do other vendors like Hortonworks feel about not having a profile of
their own, for example?
People who want to roll their own build probably already want to set these
values to a slightly different set of artifacts specific to their
environment; one profile may not help much anyway.
It's a little bit of trouble for the vendor too. We get occasional
questions about why Spark's CDH4 build doesn't work with all CDH4 releases;
it doesn't, because Spark doesn't work with all vintages of Hadoop/YARN
2.0.x. And this build isn't actually supported by anyone here.
I can see leaving the repository declaration, and maybe some notes in the
build instructions about how to specialize the build for known distros, for
those who want to roll their own. I personally would prefer to remove all
vendor-specific build profiles and artifacts.
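For reference, the repository declaration I'd keep is just an ordinary
Maven repository entry, roughly like this (the Cloudera repo is one example
of the pattern; other vendors host equivalent repositories):

    <repository>
      <id>cloudera-repos</id>
      <name>Cloudera Repository</name>
      <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>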