Cos,
I wasn't aware that all of the dependencies are being pulled from the
Cloudera repos and you're right that it seems strange. What's causing
that to happen? If it's something on the Kite side, let us know how we
can fix it.
Mark,
Yes, we do publish to maven central. It sounds like the problem is that
somehow the Cloudera repo is being used instead of maven central.
rb
On 09/11/2015 09:31 AM, Mark Grover wrote:
Adding Ryan back
Thanks Ryan!
And, thanks Cos. Yeah, that makes total sense. I think that's a
reasonable goal; it's likely not something we can achieve overnight, but
we can march towards it steadily. In particular, that involves two things:
1) Selectively building stuff that Bigtop cares about.
2) Encouraging projects like Kite to publish their jars to Maven
Central, etc. (Ryan, do you have any plans to do that? Or is that the
case already?)
I'll create a JIRA and add some details there, feel free to add if I
missed something.
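For what it's worth, publishing to Maven Central from a non-ASF project
typically just means deploying through Sonatype OSSRH. A rough sketch of
the pom.xml piece, purely illustrative (the repository ids here, and
whether Kite is actually set up this way, are my assumptions):

  <distributionManagement>
    <!-- illustrative only: the standard OSSRH endpoints that sync to Maven Central -->
    <snapshotRepository>
      <id>ossrh</id>
      <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    </snapshotRepository>
    <repository>
      <id>ossrh</id>
      <url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url>
    </repository>
  </distributionManagement>

Releases staged there get synced to Maven Central once the staging
repository is released.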
Mark
On Thu, Sep 10, 2015 at 4:02 PM, Konstantin Boudnik <[email protected]> wrote:
Thanks for the explanation, Ryan! Certainly excluding the
non-Apache-specific modules makes sense and needs to be done.
The other issue here is that _all_ dependencies, including those
that Hadoop and other components depend on, are pulled from the
Cloudera repo. That's the biggest one in my opinion. While I don't
suspect Cloudera of putting anything malicious into httpcomponents,
I, as an RM and a PMC member of this project, don't feel right
gpg-signing packages without knowing what some of the jars contain.
So my main concern is that if we supply binary packages to our users,
we should be sure that we are using either
- official public repos like Maven Central, which contain the jars
deployed by the official development teams of those components; or
- ASF Infra repos, where all the artifacts are controlled by, and are
the responsibility of, a particular project's PMC.
Does it make sense?
Cos
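(On the repo point: one way to make sure every artifact is resolved
from Maven Central rather than a vendor repo is a mirror entry in the
build's Maven settings.xml. A minimal sketch, assuming we want to route
everything through Central; the mirror id is made up, and the mirrorOf
pattern would need tuning if ASF repos are also allowed:

  <settings>
    <mirrors>
      <mirror>
        <!-- route every repository request through Maven Central -->
        <id>central-only</id>
        <mirrorOf>*</mirrorOf>
        <url>https://repo1.maven.org/maven2</url>
      </mirror>
    </mirrors>
  </settings>

A pattern like <mirrorOf>*,!apache.snapshots</mirrorOf> would leave the
ASF snapshot repo alone while still forcing everything else through
Central.)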
--
Ryan Blue