Hello Serge,

I'm sorry, I was on vacation and couldn't attend the meeting.

I see that you mentioned moving everything into one single branch,
waiting for things to stabilize, and then splitting the change into
multiple PRs.

This is indeed a good idea; it is always better to introduce changes
in small, reviewable increments. If I could make a recommendation,
though, it would be not to treat it as an "Unomi v3" branch, but
rather as an experimental branch, since some of the changes you will
introduce will not necessarily be breaking (for example, I believe
OpenSearch support is not breaking).

Once you have this experimental branch in a state you are happy with,
you will have a good idea of what is breaking and what is not. At
that point, you could begin by submitting the non-breaking changes as
small individual PRs, making it possible for these to be part of
Unomi v2.
From that point, the decision could be made to either:
 - release a v2 with these recent changes, bump master to v3, and
create a maintenance branch for Unomi v2 (preferable IMHO)
 - or to create a Unomi v3 branch and handle it separately
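For what it's worth, carving a non-breaking change out of the
experimental branch could be as simple as cherry-picking the relevant
commits onto a fresh branch off the mainline. A rough sketch (the
repository, branch, and commit names below are made up for
illustration, not the actual Unomi layout):

```shell
# Toy repository standing in for Unomi; all names are illustrative only.
set -e
git init -q -b main demo && cd demo
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "base"

# Experimental branch mixing breaking and non-breaking work.
git checkout -q -b experimental
git commit -q --allow-empty -m "opensearch: add support (non-breaking)"
git commit -q --allow-empty -m "core: breaking API change"

# Pick only the non-breaking commit onto a branch off the mainline,
# ready to be opened as a small, reviewable PR.
NON_BREAKING=$(git log --format=%H --grep="opensearch" experimental)
git checkout -q -b pr/opensearch-support main
git cherry-pick --allow-empty "$NON_BREAKING"
git log --oneline pr/opensearch-support
```

The breaking commits stay on the experimental branch until a v3
branch (or a bumped master) exists to receive them.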

This approach helps in creating a v3.0 release containing mostly
breaking changes; by keeping all the non-breaking changes in the last
v2 release, it reduces the migration overhead for the community.

In all cases, I believe it is best to avoid having one v3 branch
containing many unreviewed changes, hoping to have all of these
reviewed at once in a very large PR towards the end of the process.

Thanks,
François

On Thu, Feb 13, 2025 at 11:25 AM Serge Huber <shu...@apache.org> wrote:
>
> Hi all,
>
> Here are the meeting notes from the latest monthly meeting. Please don’t
> hesitate to reply to this email if I forgot or misrepresented something.
>
> Best regards,
>
>   Serge Huber.
>
>
> Unomi meeting notes :
>
> ElasticSearch support:
>
>    - ElasticSearch 9 support is important to Jahia developers because ES 7
>    support will be dropped 9 months after the ES 9 release (i.e., until the
>    end of 2025).
>    - We are seeing concurrency issues under heavy write loads: some events
>    and merges are lost, and profile properties are overwritten. We have a
>    reproduction scenario.
>
>
> Developer experience
>
>    - Talked about the developer experience around event processing: schema
>    issues, and which actions are triggered.
>    - We have Karaf shell commands, but they require an SSH connection and
>    are not accessible via HTTP requests like the API calls are.
>    - Created a JIRA ticket with a proposal for an explain query parameter
>    on the eventcollector. Serge prototyped the idea and it seems like a very
>    good option. A ThreadLocal object could be used to do this.
>    - Another idea would be to change the log level temporarily, but that
>    might not be viable and could generate lots of noise anyway.
>
>
> Unomi V3 discussions
>
>    - Serge proposed a Unomi V3 branch that will initially be unstable
>    - He is targeting a first version by the end of February
>    - Serge talked about removing the clustering support and endpoint. A
>    question was raised about the cluster removal and load distribution: we
>    used to want to know, from a JavaScript script, which IPs to distribute
>    to. We could replace this with DNS-level handling, which is very common
>    on lots of installations and a standard for production environments.
>    - For the new scheduler, could we have configurable purge jobs? The idea
>    would be to have an API for this.
>    - Talked about new SchedulerService
>    - Talked about new Multi-tenancy work
>    - Serge will present a lot of the changes scheduled for V3.
>
>
> Code reviews:
>
>    - Romain mentioned that the changes should be individually reviewable
>    - Romain is concerned that for V3 there will be lots of changes to
>    review at the same time
>    - Serge will look into splitting things into PRs after the fact, once
>    the changes stabilize; this should be possible to some degree and will
>    help reviewers.
>
>
> Migration:
>
>    - Discussion about migration: there are issues with processing large
>    sets of data
>    - Looking for solutions, maybe duplicating queries to keep both the old
>    and the new system up to date before switching to the new one.
>    - At the API level, the idea would be to have a proxy that duplicates
>    requests, possibly using journals as well. Does something like this
>    already exist? (Serge is pretty sure there must be existing solutions
>    for this)
