+1. The default build should not be the bleeding edge but rather the most 
popular combination, or even lag behind it, but we also need methods to build 
the bleeding edge and a deprecation process so we minimize the work to support 
all versions of all deps.
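
For example (a sketch only; the PIO_* variables below are the ones quoted later 
in this thread, and whether make-distribution.sh picks them up from the 
environment is an assumption to verify), a bleeding-edge build could be 
selected explicitly while the default stays on the popular combination:

  # hypothetical: build against newer deps without changing the defaults
  PIO_SCALA_VERSION=2.11.8 \
  PIO_SPARK_VERSION=2.1.1 \
  PIO_ELASTICSEARCH_VERSION=5.4.1 \
  PIO_HADOOP_VERSION=2.7.3 \
  ./make-distribution.sh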

I like the poll idea but wonder how many will actually notice the poll. Still, 
it can’t hurt.

BTW, the implications of multiple binaries for these combinations mean 
re-checking all transitive deps with each binary combo. The part of the ASF 
rules that makes them particularly onerous is that licenses can change at the 
whim of the copyright holder, so my reading is that they need to be re-checked 
every time a dep *version* is changed. We avoid this now, and IMO this makes 
any binary release of dubious value given the cost, and given that *all* 
Templates are built from source anyway. A tool to do this automatically is 
about the only way to make this work.
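
As a sketch only (none of this exists in PIO today, and the plugin coordinates 
and paths are assumptions to verify), something like the sbt-license-report 
plugin could dump the declared license of every resolved dep for a given build 
combination, and a diff against the last reviewed report would flag what needs 
another look:

  # in project/plugins.sbt (assumed coordinates/version):
  #   addSbtPlugin("com.typesafe.sbt" % "sbt-license-report" % "1.2.0")
  sbt dumpLicenseReport   # should write per-module license reports under target/
  # licenses-reviewed/ is a hypothetical baseline kept from the last review;
  # both paths below are illustrative only
  diff -ru licenses-reviewed/ target/license-reports/ \
    || echo "license set changed; needs ASF re-review"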


On Jun 8, 2017, at 9:19 AM, Donald Szeto <[email protected]> wrote:

Unfortunately with OSS it is not easy to survey accurately the user base unless 
we provide some sort of opt-in feedback (automatically or by putting up a poll 
on the web site). We have many users who don't subscribe to mailing lists.

I personally prefer updating source build defaults to a reasonably popular 
combination. Note that this is independent from the process of deprecating and 
eventually dropping support of old versions of dependencies.

I feel that the major common discomfort here is the fear that some old/stable 
versions of dependencies may get dropped without notice, instead of reluctance 
to support new technologies. If so, I would propose we agree on a deprecation 
process and document it. To enforce it, I would suggest including all 
reasonable combinations of dependencies in our automatic builds. This also has 
the benefit that, once we are cleared to release binaries, we can roll them all 
out to make it convenient for our users.
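
To make that concrete, a rough sketch of such a matrix (using the build 
variables from Shinsuke's original mail; the exact set of combinations is only 
illustrative) could be one automated build per row:

  PIO_SCALA_VERSION=2.10.6 PIO_SPARK_VERSION=1.6.3 PIO_ELASTICSEARCH_VERSION=1.7.6 PIO_HADOOP_VERSION=2.6.5
  PIO_SCALA_VERSION=2.11.8 PIO_SPARK_VERSION=2.1.1 PIO_ELASTICSEARCH_VERSION=5.4.1 PIO_HADOOP_VERSION=2.6.5
  PIO_SCALA_VERSION=2.11.8 PIO_SPARK_VERSION=2.1.1 PIO_ELASTICSEARCH_VERSION=5.4.1 PIO_HADOOP_VERSION=2.7.3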

On Wed, Jun 7, 2017 at 10:31 PM takako shimamoto <[email protected]> wrote:
I also agree with shinsuke.

> Note that this change does not discard old version supports.
> If you want to use old versions, you can build PIO with them.

This means that the versions supported by PIO don't change.
Users will still be able to build PIO from the source distribution,
just as they can now.

As for supported versions (when and which versions should be deprecated), I
think we should discuss that in another thread.
In that case, it might be a good idea to decide based on project policy,
since PIO is an open source project at the ASF.


2017-06-08 11:18 GMT+09:00 Naoki Takezoe <[email protected]>:
>> For Hadoop 2.6 and Spark 2.1, our updated dependencies will work.
>
> +1
>
> We should always catch up with the latest versions of Hadoop, Spark and so
> on, but the default build targets should cover existing popular environments
> as much as possible.
>
> In addition, the HBase version (0.98.5) looks very old. It's already EOM.
> We should upgrade it to at least 1.2.
>
> 2017-06-08 5:51 GMT+09:00 Pat Ferrel <[email protected]>:
>> Supporting the latest versions and requiring them are 2 different things.
>> Requiring them (except for ES) means PIO won’t run unless every user's
>> cluster is upgraded to match the client, because only backward compatibility
>> is supported. Last time I checked, if you require HDFS 2.7, PIO won’t run on
>> 2.6. If you require 2.6, PIO will run on 2.6 or 2.7, so immediate upgrades
>> have no benefit. The last time I checked there was no forward compatibility
>> guarantee. Has this changed?
>>
>> If ES guarantees forward compatibility that is great.
>>
>>
>> On Jun 7, 2017, at 11:08 AM, Mars Hall <[email protected]> wrote:
>>
>> These upgrades are very similar to the dependencies we support/provide for 
>> PredictionIO 0.11.0-incubating in the Heroku buildpack.
>>
>> If the framework is going to upgrade default dependencies, I wholeheartedly 
>> agree that moving to the most recent versions of everything is the way to go.
>>
>> Once PIO reaches 1.0 releases, I'd imagine that every major dependency 
>> upgrade would be taken together and increment the PredictionIO major version.
>>
>> *Mars
>>
>> ( <> .. <> )
>>
>>> On Jun 4, 2017, at 22:25, Shinsuke Sugaya <[email protected]> wrote:
>>>
>>> Hi all,
>>>
>>> We have a plan to change default build targets in PIO-83 and PIO-84.
>>> The current versions look too old, so it would be better to support
>>> newer versions by default.
>>>
>>> Current:
>>> - PIO_SCALA_VERSION=2.10.6
>>> - PIO_SPARK_VERSION=1.6.3
>>> - PIO_ELASTICSEARCH_VERSION=1.7.6
>>> - PIO_HADOOP_VERSION=2.6.5
>>>
>>> They will be changed to:
>>>
>>> 0.12.0:
>>> - PIO_SCALA_VERSION=2.11.8
>>> - PIO_SPARK_VERSION=2.1.1
>>> - PIO_ELASTICSEARCH_VERSION=5.4.1
>>> - PIO_HADOOP_VERSION=2.7.3
>>>
>>> Note that this change does not discard old version supports.
>>> If you want to use old versions, you can build PIO with them.
>>>
>>> Please let us know if you have any concerns.
>>>
>>> https://issues.apache.org/jira/browse/PIO-83
>>> https://issues.apache.org/jira/browse/PIO-84
>>>
>>> Regards,
>>> shinsuke
>>
>>
>
>
>
> --
> Naoki Takezoe
