No idea if I get a vote ;) Nevertheless, +1 to have binaries for both versions in Maven and explicitly "scala versioned".

Some background on this for those not as familiar with Scala versioning:

It's considered best practice to label the version of Scala a library is built against in the artifact ID.

The reason is that compiled Scala code is only binary compatible with the major version of Scala it was compiled against. For example, a library compiled for 2.10 is not compatible with 2.11, and the same will be true of 2.12 once it is released. Mixing versions results in undefined behavior, which typically manifests as runtime exceptions.

The convention that fixes this problem is for all published libraries to specify the version of Scala they are compatible with. Leaving the Scala version out of a library's name is akin to saying "We don't depend on Scala for this library, so feel free to use whatever you want." Users of sbt typically specify their Scala version once, and the tooling enforces consistency through the "%%" operator.

E.g.

scalaVersion := "2.11.4"

// this resolves to artifactID "scalacheck_2.11"
libraryDependencies += "org.scalacheck" %% "scalacheck" % "1.12.0" % "test"

The most important part of this is that the Scala version is explicit, which eliminates the problem for downstream users.
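
To make the mechanics concrete, here is a minimal sbt sketch (version numbers are just illustrative; "%%", crossScalaVersions, and "+publish" are standard sbt):

// "%%" is shorthand for spelling out the Scala suffix yourself:
libraryDependencies += "org.scalacheck" % "scalacheck_2.11" % "1.12.0" % "test"

// On the publishing side, cross-building produces one suffixed artifact per
// Scala version (e.g. scalacheck_2.10 and scalacheck_2.11) via `sbt +publish`:
crossScalaVersions := Seq("2.10.4", "2.11.4")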

Cheers,
Frederick

On 10/28/2015 10:55 AM, Fabian Hueske wrote:
+1 to have binaries for both versions in Maven and as builds to download.

2015-10-26 17:11 GMT+01:00 Theodore Vasiloudis <theodoros.vasilou...@gmail.com>:

+1 for having binaries. I'm currently working on a Spark application with Scala 2.11, and having to rebuild everything when deploying (e.g. to EC2) is a pain.

On Mon, Oct 26, 2015 at 4:22 PM, Ufuk Celebi <u...@apache.org> wrote:

I agree with Till, but is this something you want to address in this
release already?

I would postpone it to 1.0.0.

– Ufuk

On 26 Oct 2015, at 16:17, Till Rohrmann <trohrm...@apache.org> wrote:

I would be in favor of also deploying Scala 2.11 artifacts to Maven, since more and more people will try out Flink with Scala 2.11. Having the dependencies in the Maven repository makes it considerably easier for people to get their Flink jobs running.

Furthermore, I have observed that people are not aware that our deployed artifacts, e.g. flink-runtime, are built with Scala 2.10. As a consequence, they mix Flink dependencies with other dependencies pulling in Scala 2.11, and then they wonder why the program crashes. It would, imho, be clearer if all of our artifacts that depend on a specific Scala version had the corresponding Scala suffix appended.
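
To illustrate the failure mode with a hypothetical build (a sketch only; the artifact versions are illustrative, not a real reproduction):

scalaVersion := "2.11.4"

// flink-runtime carries no Scala suffix but is in fact built against 2.10 ...
libraryDependencies += "org.apache.flink" % "flink-runtime" % "0.9.1"

// ... while this resolves to akka-actor_2.11, so two Scala versions end up
// on one classpath and the job typically dies with runtime linkage errors
libraryDependencies += "com.typesafe.akka" %% "akka-actor" % "2.3.7"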

Adding the 2.10 suffix now would also spare us hassle when upgrading to a newer Scala version in the future, because the artifacts for different Scala versions wouldn't share the same name.

Cheers,
Till

On Mon, Oct 26, 2015 at 4:04 PM, Maximilian Michels <m...@apache.org> wrote:
Hi Flinksters,

We have recently committed an easy way to change Flink's Scala version. The question now arises whether we should ship Scala 2.11 as binaries and via Maven. For rc0, I created all binaries twice, for Scala 2.10 and 2.11. However, I didn't create Maven artifacts. This follows our current shipping strategy, where we only ship Hadoop 1 and Hadoop 2.3.0 Maven dependencies but additionally offer Hadoop 2.4, 2.6, and 2.7 binaries.

Should we also upload Maven dependencies for Scala 2.11?

If so, the next question arises: What version pattern should we have for the Flink Scala 2.11 dependencies? For Hadoop, we append -hadoop1 to the VERSION, e.g. artifactID=flink-core, version=0.9.1-hadoop1.

However, it is common practice to append the suffix to the artifactID of the Maven dependency, e.g. artifactID=flink-core_2.11, version=0.9.1. This is mostly for historic reasons but is widely used.
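
Written as sbt-style coordinates, the two patterns side by side (version numbers are just examples):

// Hadoop-style: variant encoded in the version
"org.apache.flink" % "flink-core" % "0.9.1-hadoop1"

// Scala-style: variant encoded in the artifactID, which is what "%%" expects
"org.apache.flink" % "flink-core_2.11" % "0.9.1"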

Whatever naming pattern we choose, it should be consistent. I would be in favor of changing our artifact names to contain both the Hadoop and the Scala version. This would also imply that all Scala-dependent Maven modules receive a Scala suffix (including the default Scala 2.10 modules).

Cheers,
Max
