[jira] [Commented] (SLING-7792) Resource Resolver should return more than one resolved path if available

2018-08-16 Thread Carsten Ziegeler (JIRA)


[ 
https://issues.apache.org/jira/browse/SLING-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583391#comment-16583391
 ] 

Carsten Ziegeler commented on SLING-7792:
-----------------------------------------

I think the test is wrong - the API of the ResourceProvider states that if 
there is more than one provider with the same root path, the one with the 
highest service ranking is used.
As both providers are registered with the same bundle id, the one registered 
first has the lower service id and therefore it should win.
I think we should change the test to explicitly use service ranking and 
register the second one with a higher ranking.
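For illustration, a minimal sketch of that change (the provider instances and 
the osgi-mock {{context}} are placeholders, not the actual test code):

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.sling.spi.resource.provider.ResourceProvider;
import org.osgi.framework.Constants;

// Register two providers on the same root path; the second one wins
// deterministically because of its explicitly higher service ranking.
Map<String, Object> firstProps = new HashMap<>();
firstProps.put(ResourceProvider.PROPERTY_ROOT, "/");
// no service.ranking set -> defaults to 0
context.registerService(ResourceProvider.class, firstProvider, firstProps);

Map<String, Object> secondProps = new HashMap<>();
secondProps.put(ResourceProvider.PROPERTY_ROOT, "/");
secondProps.put(Constants.SERVICE_RANKING, 100); // explicitly higher
context.registerService(ResourceProvider.class, secondProvider, secondProps);
{code}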

> Resource Resolver should return more than one resolved path if available
> -------------------------------------------------------------------------
>
> Key: SLING-7792
> URL: https://issues.apache.org/jira/browse/SLING-7792
> Project: Sling
>  Issue Type: Bug
>  Components: API, ResourceResolver
>Affects Versions: Resource Resolver 1.6.0
>Reporter: Alex COLLIGNON
>Assignee: Robert Munteanu
>Priority: Major
> Fix For: API 2.18.4, Resource Resolver 1.6.6
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The current {{ResourceResolver#map}} methods return a single "path mapped 
> from the (resource) path". However, it is possible that a given path can be 
> mapped to multiple others while using features such as {{sling:alias}} and 
> {{sling:vanityUrl}}.
> In order to support that scenario, it is required to implement new maps 
> methods for {{ResourceResolver}} which return a collection of "resolved 
> paths". This collection must contain the resources mapped through 
> {{/etc/map}}, {{sling:alias}} and {{sling:vanityUrl}}.
> The current API suggests implementing a second method to be 
> consistent/symmetric with the existing map operations:
> {quote}
> @Nonnull java.util.Collection<java.lang.String> maps(@Nonnull 
> java.lang.String resourcePath)
> @Nonnull java.util.Collection<java.lang.String> maps(@Nonnull 
> javax.servlet.http.HttpServletRequest request, @Nonnull java.lang.String 
> resourcePath)
> {quote}
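Reconstructed as plain Java, the proposal above would read roughly as follows; 
the {{java.lang.String}} element type and the interface name are assumptions, 
since the generics were lost in the quoted signatures and the method set was 
still under discussion:

{code:java}
import java.util.Collection;
import javax.annotation.Nonnull;
import javax.servlet.http.HttpServletRequest;

// Sketch of the proposed additions, not a released API: return every path a
// resource can be mapped to, including /etc/map, sling:alias and
// sling:vanityUrl based mappings.
public interface ResourceResolverMapsProposal {

    @Nonnull
    Collection<String> maps(@Nonnull String resourcePath);

    @Nonnull
    Collection<String> maps(@Nonnull HttpServletRequest request,
                            @Nonnull String resourcePath);
}
{code}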



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: ResourceProviderTrackerTest#testReactivation fails after updating to osgi-mock 2

2018-08-16 Thread Carsten Ziegeler
I think the test is wrong - the API of the ResourceProvider states that
if there is more than one provider with the same root path, the one with
the highest service ranking is used.

As both providers are registered with the same bundle id, the one
registered first has the lower service id and therefore it should win.

I think we should change the test to explicitly use service ranking and
register the second one with a higher ranking.

Regards

Carsten


Robert Munteanu wrote
> ... or maybe the test is correct and the implementation wrong? In which
> case we should adjust the code to make sure that newer service
> references are preferred.
> 
> Robert
> 
> 
> On Wed, 2018-08-15 at 16:22 +0200, Robert Munteanu wrote:
>> Thanks for the analysis Stefan, I was able to follow the code and
>> that explains the behaviour change.
>>
>> My opinion on this is that the test is incorrect and that we should
>> remove that part.
>>
>> Carsten, you wrote that test - maybe you have another opinion?
>>
>> Thanks,
>>
>> Robert
>>
>> On Wed, 2018-08-15 at 08:48 +0000, Stefan Seifert wrote:
>>> i've created a branch with some cleanup of the unit tests and
>>> updating the osgi-mock dependency - the problem itself is not
>>> solved, but i can explain it:
>>>
>>
>>
>>> https://github.com/apache/sling-org-apache-sling-resourceresolver/tree/feature/update-testing-deps
>>>
>>> - the testReactivation test registers two resource providers with the
>>> same path "/", and expects that the 2nd one overlays the 1st one. but
>>> this no longer happens with the latest osgi-mock version.
>>>
>>> - the reason is that in the old version that was used, the comparable
>>> implementation of ServiceRanking was wrong - this was fixed in
>>> SLING-5462
>>>
>>> - the comparable implementation of ResourceProviderInfo, to which the
>>> comparable implementation of ResourceProviderHandler delegates,
>>> relies on comparing the service references if the path is identical
>>>
>>> - thus with the new (and correct) logic of osgi-mock an overlaying of
>>> resource providers is not possible - and it was never possible
>>> outside the old osgi-mock context with the broken service reference
>>> comparable implementation.
>>>
>>> the question is: if the test assumption is correct, the code is wrong
>>> and has to be changed to make overlay possible.
>>> on the other hand, this code has been in production for a long time -
>>> maybe the test assumption is false?
>>>
>>> stefan
>>>
>>>
 -----Original Message-----
 From: Robert Munteanu [mailto:romb...@apache.org]
 Sent: Tuesday, August 14, 2018 6:38 PM
 To: dev@sling.apache.org
 Subject: ResourceProviderTrackerTest#testReactivation fails after
 updating
 to osgi-mock 2

 Hi,

 I am trying to update the osgi-mock dependency of the resourceresolver
 module from 1.4.0 to 2.3.10, to fix some failures in configuring
 components that are needed by the ResourceResolver.

 However, this makes the
 ResourceProviderTrackerTest#testReactivation fail at line 210 [1].

 Since I'm not familiar with either the test or the OSGi mocks code, I
 would welcome another pair of eyes to either clarify what has changed
 in the OSGi mocks or to pinpoint what expectation of the test is not
 met.

 Thanks,

 Robert

 [1]: https://github.com/apache/sling-org-apache-sling-resourceresolver/blob/85c19139cfe5f174b65b2daf3791bc4af650ce1b/src/test/java/org/apache/sling/resourceresolver/impl/providers/ResourceProviderTrackerTest.java#L210


>>>
>>>
>>
>>
> 
> 
-- 
Carsten Ziegeler
Adobe Research Switzerland
cziege...@apache.org


[jira] [Created] (SLING-7831) support injecting custom/alternate PostResponse implementations for the servlets in the usermanager and accessmanager bundles

2018-08-16 Thread Eric Norman (JIRA)
Eric Norman created SLING-7831:
-------------------------------

 Summary: support injecting custom/alternate PostResponse 
implementations for the servlets in the usermanager and accessmanager bundles
 Key: SLING-7831
 URL: https://issues.apache.org/jira/browse/SLING-7831
 Project: Sling
  Issue Type: Bug
Affects Versions: JCR Jackrabbit Access Manager 3.0.0, JCR Jackrabbit User 
Manager 2.2.6
Reporter: Eric Norman
Assignee: Eric Norman


The changes from SLING-2223 added support for custom PostResponse 
implementations in the main Sling POST servlet, but the same changes were not 
made for the similar implementations in these other locations:

1. AbstractPostServlet#createHtmlResponse in the 
org.apache.sling.jcr.jackrabbit.usermanager bundle

2. AbstractAccessPostServlet#createHtmlResponse in the 
org.apache.sling.jcr.jackrabbit.accessmanager bundle.

Adding the same support in those additional locations should make it possible 
for developers to provide a more user-friendly UI when something goes wrong in 
usermanager or accessmanager POST calls.
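For context, SLING-2223 introduced the {{PostResponseCreator}} extension point 
in the Sling POST servlet. A minimal sketch of a custom creator that the 
usermanager/accessmanager servlets would then also pick up - the Accept-header 
check is purely illustrative, not prescribed by this issue:

{code:java}
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.servlets.post.JSONResponse;
import org.apache.sling.servlets.post.PostResponse;
import org.apache.sling.servlets.post.PostResponseCreator;
import org.osgi.service.component.annotations.Component;

// Sketch: return a custom PostResponse for JSON clients, or null to let
// other creators (or the default HTML response) handle the request.
@Component(service = PostResponseCreator.class)
public class JsonPostResponseCreator implements PostResponseCreator {

    @Override
    public PostResponse createPostResponse(SlingHttpServletRequest request) {
        if ("application/json".equals(request.getHeader("Accept"))) {
            return new JSONResponse();
        }
        return null;
    }
}
{code}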

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SLING-7792) Resource Resolver should return more than one resolved path if available

2018-08-16 Thread Robert Munteanu (JIRA)


[ 
https://issues.apache.org/jira/browse/SLING-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582736#comment-16582736
 ] 

Robert Munteanu commented on SLING-7792:


[~cziegeler] - we could use your opinion regarding the ResourceProvider overlay 
test issue, see 
https://lists.apache.org/thread.html/45f7055091070c378bf7e40904d5f697bcb57809fe061a6b5865802f@%3Cdev.sling.apache.org%3E
 .

Other than that, I think this is working as expected now, see 
https://github.com/apache/sling-org-apache-sling-api/pull/6 for the final API 
and also 
https://github.com/apache/sling-org-apache-sling-resourceresolver/pull/9 , but 
I want to refactor the impl a bit more, as right now the ResourceResolver 
exposes a bit too much of its internals because the ResourceMapper needs them.

> Resource Resolver should return more than one resolved path if available
> -------------------------------------------------------------------------
>
> Key: SLING-7792
> URL: https://issues.apache.org/jira/browse/SLING-7792
> Project: Sling
>  Issue Type: Bug
>  Components: API, ResourceResolver
>Affects Versions: Resource Resolver 1.6.0
>Reporter: Alex COLLIGNON
>Assignee: Robert Munteanu
>Priority: Major
> Fix For: API 2.18.4, Resource Resolver 1.6.6
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The current {{ResourceResolver#map}} methods return a single "path mapped 
> from the (resource) path". However, it is possible that a given path can be 
> mapped to multiple others while using features such as {{sling:alias}} and 
> {{sling:vanityUrl}}.
> In order to support that scenario, it is required to implement new maps 
> methods for {{ResourceResolver}} which return a collection of "resolved 
> paths". This collection must contain the resources mapped through 
> {{/etc/map}}, {{sling:alias}} and {{sling:vanityUrl}}.
> The current API suggests implementing a second method to be 
> consistent/symmetric with the existing map operations:
> {quote}
> @Nonnull java.util.Collection<java.lang.String> maps(@Nonnull 
> java.lang.String resourcePath)
> @Nonnull java.util.Collection<java.lang.String> maps(@Nonnull 
> javax.servlet.http.HttpServletRequest request, @Nonnull java.lang.String 
> resourcePath)
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] rombert opened a new pull request #9: [WIP] SLING-7792 - Resource Resolver should return more than one resolved path if available

2018-08-16 Thread GitBox
rombert opened a new pull request #9: [WIP] SLING-7792 - Resource Resolver 
should return more than one resolved path if available
URL: https://github.com/apache/sling-org-apache-sling-resourceresolver/pull/9
 
 
   Base implementation which passes all Sling ITs (except the ones already 
failing ...)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (SLING-7768) Add String Interpolation support to /etc/map

2018-08-16 Thread Bertrand Delacretaz (JIRA)


[ 
https://issues.apache.org/jira/browse/SLING-7768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582502#comment-16582502
 ] 

Bertrand Delacretaz commented on SLING-7768:


Looking at string interpolation for another project I stumbled on 
https://commons.apache.org/proper/commons-lang/javadocs/api-3.1/org/apache/commons/lang3/text/StrSubstitutor.html
 - it looks like that might work for your purposes, and even if we embed that 
class in our bundle I think it's still better than creating our own 
interpolator.
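For illustration, StrSubstitutor applied to the example from the description 
below; the ${...} placeholder syntax is StrSubstitutor's default, and whether 
/etc/map entries would use that exact syntax is a design decision for this 
issue:

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.lang3.text.StrSubstitutor;

// Quick illustration only; the variable name and values come from the
// example in the issue description below.
public class EtcMapInterpolationDemo {
    public static void main(String[] args) {
        Map<String, String> env = new HashMap<>();
        env.put("phv.fq.host.name", "qa.author.acme.com");

        String resolved = new StrSubstitutor(env)
                .replace("http/${phv.fq.host.name}.8080");
        System.out.println(resolved); // prints: http/qa.author.acme.com.8080
    }
}
{code}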

> Add String Interpolation support to /etc/map
> --------------------------------------------
>
> Key: SLING-7768
> URL: https://issues.apache.org/jira/browse/SLING-7768
> Project: Sling
>  Issue Type: Improvement
>  Components: ResourceResolver
> Environment: Sling 11-SNAPSHOT, JDK 1.8
>Reporter: Andreas Schaefer
>Priority: Major
> Attachments: Screenshot 2018-07-06 11.41.58.png, Screenshot 
> 2018-07-06 11.42.41.png, Screenshot 2018-07-06 11.43.34.png
>
>
> Having worked on migrations of a Sling derivative, Ruben & I ran into issues 
> where the /etc/map would map to production instead of the testing environment.
>  Many big customers have extensive /etc/maps and also many different 
> environments like dev, qa, staging, prod etc.
>  It would be great to have a tool where, for example, items like the host name 
> or external links in /etc/map could be configured outside so that just one 
> entry has to be adjusted rather than creating a full copy of the /etc/map tree.
>   
>  Example:
>   
>  /etc/map/http/phv.fq.host.name.8080
>   
>  Placeholder provides:
>  DEV: phv.fq.host.name=localhost
>  QA: phv.fq.host.name=qa.author.acme.com
>  STAGING: 
> phv.fq.host.name=[staging.author.acme.com|http://staging.author.acme.com/]
>  PROD: phv.fq.host.name=[acme.com|http://acme.com/]
>   
>  At runtime these are the resolved values:
>  DEV: http/localhost.8080
>  QA: http/qa.author.acme.com.8080
>  STAGING: http/[staging.author.acme.com|http://staging.author.acme.com/].8080
>  PROD: http/[acme.com|http://acme.com/].8080
>   
>  Not only does that make it easier and faster to create new test environments 
> but it also cuts down on the chance of copy-n-paste errors.
>   
>  I have a working POC with a PlaceholderProvider OSGi service and an 
> enhanced MapEntries that resolves any placeholders if found.
>   
>  Attached are 3 screenshots:
>  1. OSGi Placeholder Provider Configuration
>  2. /etc/map (Composum)
>  3. Result of [http://andreass.local:8080/] call



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release Apache Sling Maven Sling Plugin 2.3.8

2018-08-16 Thread Radu Cotescu
+1

> On 14 Aug 2018, at 18:18, Robert Munteanu  wrote:
> 
> Please vote to approve this release:
> 
>  [ ] +1 Approve the release
>  [ ]  0 Don't care
>  [ ] -1 Don't release, because ...
> 
> This majority vote is open for at least 72 hours.
> 
> Thanks,
> 
> Robert



[jira] [Comment Edited] (SLING-7830) Defined leader switch

2018-08-16 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/SLING-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582255#comment-16582255
 ] 

Stefan Egli edited comment on SLING-7830 at 8/16/18 9:38 AM:
-------------------------------------------------------------

The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used, for example, in the following procedure:
 * before bringing up new (e.g. blue) instances, move the old (e.g. green) 
instances' leaderElectionIds to the back of the leader comparison order by 
incrementing the prefix, e.g. replace the leaderElectionIds 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} with 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster). If some 
leaderElectionIds already have a prefix of e.g. 2, increment that to 3, etc.
 * then bring up the new (e.g. blue) instances. (One of) the new instance(s) will 
automatically become leader, since its prefix is {{1}} by default and thus 
lower than the old instances'.

We could be looking at automating something like this and providing it via some 
API/JMX..

*PS*: this could be done entirely outside of discovery.oak

PPS: to be extra safe we can increment the prefix by 2, as some other code 
could rely on the fact that it is 1 by default and set the prefix to 0. But 
I'm not aware of any such code in the discovery.oak context.
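To illustrate the prefix bump as a hypothetical helper - this only covers the 
string manipulation; persisting the new ids to the repository is a separate 
step:

{code:java}
// Hypothetical illustration: make an old instance's leaderElectionId sort
// after the default "1"-prefixed ids in the String comparison. bump=1 gives
// the 1 -> 2 example above, bump=2 the extra-safe variant from the PPS.
static String demoteLeaderElectionId(String leaderElectionId, int bump) {
    int sep = leaderElectionId.indexOf('_');
    int prefix = Integer.parseInt(leaderElectionId.substring(0, sep));
    return (prefix + bump) + leaderElectionId.substring(sep);
}

// demoteLeaderElectionId("1_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc", 1)
//   returns "2_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc"
{code}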


was (Author: egli):
The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used for example in the following procedure:
 * before bringing up new (eg blue) instances, put the old (eg green) 
instances's leaderElectionIds to the back of the leader comparison order by 
incrementing for example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster). If some 
leaderElectionIds already have a prefix of eg 2, increment that to 3, etc
 * then bring up the new (eg blue) instances. (One of) the new instance(s) will 
automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..

*PS*: this could be done entirely outside of discovery.oak

> Defined leader switch
> ---------------------
>
> Key: SLING-7830
> URL: https://issues.apache.org/jira/browse/SLING-7830
> Project: Sling
>  Issue Type: Improvement
>  Components: Discovery
>Reporter: Carsten Ziegeler
>Priority: Major
>
> The current leader selection is based on startup time and sling id (mainly) 
> and is stable across changes in the topology for as long as the leader is up 
> and running.
> However there are use cases like blue-green deployment where new instances 
> with a new version are started and take over the functionality. However, 
> with the current discovery setup, the leader would still be one of the 
> instances with the old version.
> With a newly deployed version, tasks currently bound to the leader should run 
> on the new version.
> Therefore the leader needs to switch and stay the leader (until it dies).
> We probably need an additional criterion for the leader selection.
> /cc [~egli]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (SLING-7830) Defined leader switch

2018-08-16 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/SLING-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582255#comment-16582255
 ] 

Stefan Egli edited comment on SLING-7830 at 8/16/18 9:33 AM:
-------------------------------------------------------------

The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used for example in the following procedure:
 * before bringing up new (eg blue) instances, put the old (eg green) 
instances' leaderElectionIds to the back of the leader comparison order by 
incrementing for example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster). If some 
leaderElectionIds already have a prefix of eg 2, increment that to 3, etc
 * then bring up the new (eg blue) instances. (One of) the new instance(s) will 
automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..

*PS*: this could be done entirely outside of discovery.oak


was (Author: egli):
The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used for example in the following procedure:
 * before bringing up new (eg blue) instances, put the old (eg green) 
instances's leaderElectionIds to the back of the leader comparison order by 
incrementing for example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster). If some 
leaderElectionIds already have a prefix of eg 2, increment that to 3, etc
 * then bring up the new (eg green) instances. (One of) the new instance(s) 
will automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..

*PS*: this could be done entirely outside of discovery.oak

> Defined leader switch
> ---------------------
>
> Key: SLING-7830
> URL: https://issues.apache.org/jira/browse/SLING-7830
> Project: Sling
>  Issue Type: Improvement
>  Components: Discovery
>Reporter: Carsten Ziegeler
>Priority: Major
>
> The current leader selection is based on startup time and sling id (mainly) 
> and is stable across changes in the topology for as long as the leader is up 
> and running.
> However there are use cases like blue-green deployment where new instances 
> with a new version are started and take over the functionality. However, 
> with the current discovery setup, the leader would still be one of the 
> instances with the old version.
> With a newly deployed version, tasks currently bound to the leader should run 
> on the new version.
> Therefore the leader needs to switch and stay the leader (until it dies).
> We probably need an additional criterion for the leader selection.
> /cc [~egli]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (SLING-7830) Defined leader switch

2018-08-16 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/SLING-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582255#comment-16582255
 ] 

Stefan Egli edited comment on SLING-7830 at 8/16/18 9:31 AM:
-------------------------------------------------------------

The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used for example in the following procedure:
 * before bringing up new (eg blue) instances, put the old (eg green) 
instances' leaderElectionIds to the back of the leader comparison order by 
incrementing for example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster). If some 
leaderElectionIds already have a prefix of eg 2, increment that to 3, etc
 * then bring up the new (eg green) instances. (One of) the new instance(s) 
will automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..

*PS*: this could be done entirely outside of discovery.oak


was (Author: egli):
The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used for example in the following procedure:
 * before bringing up new (eg blue) instances, put the old (eg green) 
instances's leaderElectionIds to the back of the leader comparison order by 
incrementing for example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster). If some 
leaderElectionIds already have a prefix of eg 2, increment that to 3, etc
 * then bring up the new (eg green) instances. (One of) the new instance(s) 
will automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..

> Defined leader switch
> ---------------------
>
> Key: SLING-7830
> URL: https://issues.apache.org/jira/browse/SLING-7830
> Project: Sling
>  Issue Type: Improvement
>  Components: Discovery
>Reporter: Carsten Ziegeler
>Priority: Major
>
> The current leader selection is based on startup time and sling id (mainly) 
> and is stable across changes in the topology for as long as the leader is up 
> and running.
> However there are use cases like blue-green deployment where new instances 
> with a new version are started and take over the functionality. However, 
> with the current discovery setup, the leader would still be one of the 
> instances with the old version.
> With a newly deployed version, tasks currently bound to the leader should run 
> on the new version.
> Therefore the leader needs to switch and stay the leader (until it dies).
> We probably need an additional criterion for the leader selection.
> /cc [~egli]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (SLING-7830) Defined leader switch

2018-08-16 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/SLING-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582255#comment-16582255
 ] 

Stefan Egli edited comment on SLING-7830 at 8/16/18 9:30 AM:
-------------------------------------------------------------

The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used for example in the following procedure:
 * before bringing up new (eg blue) instances, put the old (eg green) 
instances' leaderElectionIds to the back of the leader comparison order by 
incrementing for example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster). If some 
leaderElectionIds already have a prefix of eg 2, increment that to 3, etc
 * then bring up the new (eg green) instances. (One of) the new instance(s) 
will automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..


was (Author: egli):
The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used for example in the following procedure:
 * before bringing up new (eg blue) instances, put the old (eg green) 
instances's leaderElectionIds to the back of the leader comparison order by 
incrementing for example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster)
 * then bring up the new (eg green) instances. (One of) the new instance(s) 
will automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..

> Defined leader switch
> ---------------------
>
> Key: SLING-7830
> URL: https://issues.apache.org/jira/browse/SLING-7830
> Project: Sling
>  Issue Type: Improvement
>  Components: Discovery
>Reporter: Carsten Ziegeler
>Priority: Major
>
> The current leader selection is based on startup time and sling id (mainly) 
> and is stable across changes in the topology for as long as the leader is up 
> and running.
> However there are use cases like blue-green deployment where new instances 
> with a new version are started and take over the functionality. However, 
> with the current discovery setup, the leader would still be one of the 
> instances with the old version.
> With a newly deployed version, tasks currently bound to the leader should run 
> on the new version.
> Therefore the leader needs to switch and stay the leader (until it dies).
> We probably need an additional criterion for the leader selection.
> /cc [~egli]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (SLING-7830) Defined leader switch

2018-08-16 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/SLING-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582255#comment-16582255
 ] 

Stefan Egli edited comment on SLING-7830 at 8/16/18 9:29 AM:
-------------------------------------------------------------

The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used for example in the following procedure:
 * before bringing up new (eg blue) instances, put the old (eg green) 
instances' leaderElectionIds to the back of the leader comparison 'queue' by 
incrementing for example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster)
 * then bring up the new (eg green) instances. (One of) the new instance(s) 
will automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..


was (Author: egli):
The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used for example in the following procedure:
 * before bringing up new (eg blue) instances, put the old (eg green) 
instances's leaderElectionIds to the back of the leader comparison 'queue' by 
incrementing for example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster)
 * then bring up the new (eg green) instances. (One of) the new instance(s) 
will automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..

> Defined leader switch
> ---------------------
>
> Key: SLING-7830
> URL: https://issues.apache.org/jira/browse/SLING-7830
> Project: Sling
>  Issue Type: Improvement
>  Components: Discovery
>Reporter: Carsten Ziegeler
>Priority: Major
>
> The current leader selection is based on startup time and sling id (mainly) 
> and is stable across changes in the topology for as long as the leader is up 
> and running.
> However there are use cases like blue-green deployment where new instances 
> with a new version are started and take over the functionality. However, 
> with the current discovery setup, the leader would still be one of the 
> instances with the old version.
> With a newly deployed version, tasks currently bound to the leader should run 
> on the new version.
> Therefore the leader needs to switch and stay the leader (until it dies).
> We probably need an additional criterion for the leader selection.
> /cc [~egli]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (SLING-7830) Defined leader switch

2018-08-16 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/SLING-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582255#comment-16582255
 ] 

Stefan Egli edited comment on SLING-7830 at 8/16/18 9:28 AM:
-------------------------------------------------------------

The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used for example in the following procedure:
 * before bringing up new (eg blue) instances, put the old (eg green) 
instances' leaderElectionIds to the back of the leader comparison 'queue' by 
incrementing for example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster)
 * then bring up the new (eg green) instances. (One of) the new instance(s) 
will automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..


was (Author: egli):
The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used to follow the following procedure:
 * before bringing up new (eg blue) instances, put the old (eg green) 
instances's leaderElectionIds to the back of the leader comparison 'queue' by 
incrementing for example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster)
 * then bring up the new (eg green) instances. (One of) the new instance(s) 
will automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..

> Defined leader switch
> ---------------------
>
> Key: SLING-7830
> URL: https://issues.apache.org/jira/browse/SLING-7830
> Project: Sling
>  Issue Type: Improvement
>  Components: Discovery
>Reporter: Carsten Ziegeler
>Priority: Major
>
> The current leader selection is based on startup time and sling id (mainly) 
> and is stable across changes in the topology for as long as the leader is up 
> and running.
> However there are use cases like blue-green deployment where new instances 
> with a new version are started and take over the functionality. However, 
> with the current discovery setup, the leader would still be one of the 
> instances with the old version.
> With a newly deployed version, tasks currently bound to the leader should run 
> on the new version.
> Therefore the leader needs to switch and stay the leader (until it dies).
> We probably need an additional criterion for the leader selection.
> /cc [~egli]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (SLING-7830) Defined leader switch

2018-08-16 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/SLING-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582255#comment-16582255
 ] 

Stefan Egli edited comment on SLING-7830 at 8/16/18 9:27 AM:
-------------------------------------------------------------

The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the *lowest* 
leaderElectionId (String comparison).

That fact can be used to follow the following procedure:
 * before bringing up new (eg blue) instances, put the old (eg green) 
instances' leaderElectionIds to the back of the leader comparison 'queue' by 
incrementing for example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction* (otherwise 
there will be an *unwanted* leader change in the old cluster)
 * then bring up the new (eg green) instances. (One of) the new instance(s) 
will automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..


was (Author: egli):
The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the lowest 
leaderElectionId.

That fact can be used to follow the following procedure:
* before bringing up new (eg blue) instances, put the old (green) instances's 
leaderElectionIds in the back of the leader comparison by incrementing for 
example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction*
* then bring up the new (eg green) instances. (One of) the new instance(s) will 
automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances.

We could be looking at automating something like this and providing it via some 
API/JMX..

> Defined leader switch
> ---------------------
>
> Key: SLING-7830
> URL: https://issues.apache.org/jira/browse/SLING-7830
> Project: Sling
>  Issue Type: Improvement
>  Components: Discovery
>Reporter: Carsten Ziegeler
>Priority: Major
>
> The current leader selection is based on startup time and sling id (mainly) 
> and is stable across changes in the topology for as long as the leader is up 
> and running.
> However there are use cases like blue-green deployment where new instances 
> with a new version are started and take over the functionality. However, 
> with the current discovery setup, the leader would still be one of the 
> instances with the old version.
> With a newly deployed version, tasks currently bound to the leader should run 
> on the new version.
> Therefore the leader needs to switch and stay the leader (until it dies).
> We probably need an additional criterion for the leader selection.
> /cc [~egli]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SLING-7830) Defined leader switch

2018-08-16 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/SLING-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582255#comment-16582255
 ] 

Stefan Egli commented on SLING-7830:


The leader election is based on the leaderElectionId stored in the repository 
under {{/var/discovery/oak/clusterInstances}}. When a leader starts up, it 
stores its own leaderElectionId there. As Carsten mentioned, that's made up of 
a prefix, then the start time and the slingId (to avoid clashes). At the time 
the cluster view is analysed, the leader is the one with the lowest 
leaderElectionId.

That fact can be used in the following procedure:
* before bringing up new (eg blue) instances, put the old (green) instances' 
leaderElectionIds at the back of the leader comparison by incrementing for 
example the prefix, eg. replace the leaderElectionIds from 
*{{1}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} to 
*{{2}}*{{_001534409616936_374019fc-68bd-4c8d-a4cf-8ee8b07c63bc}} (and do 
the same for *all* old instances). Do this in *1 jcr transaction*
* then bring up the new (eg green) instances. (One of) the new instance(s) will 
automatically become leader, since the prefix is {{1}} by default and thus 
lower than the old instances'.

We could be looking at automating something like this and providing it via some 
API/JMX..

> Defined leader switch
> ---------------------
>
> Key: SLING-7830
> URL: https://issues.apache.org/jira/browse/SLING-7830
> Project: Sling
>  Issue Type: Improvement
>  Components: Discovery
>Reporter: Carsten Ziegeler
>Priority: Major
>
> The current leader selection is based on startup time and sling id (mainly) 
> and is stable across changes in the topology for as long as the leader is up 
> and running.
> However there are use cases like blue-green deployment where new instances 
> with a new version are started and take over the functionality. However, 
> with the current discovery setup, the leader would still be one of the 
> instances with the old version.
> With a newly deployed version, tasks currently bound to the leader should run 
> on the new version.
> Therefore the leader needs to switch and stay the leader (until it dies).
> We probably need an additional criterion for the leader selection.
> /cc [~egli]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SLING-7830) Defined leader switch

2018-08-16 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/SLING-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582187#comment-16582187
 ] 

Stefan Egli commented on SLING-7830:


Assuming we're talking about discovery.oak here?

Re scenario 1: I'm wondering if it would already be possible today by manually 
changing the property under 
{{/var/discovery/oak/clusterInstances/823d89a4-b625-4939-8457-39600a1a09c8/@leaderElectionId}}
 - but as of now that's not a recommended way yet, so it would first have to be 
tested/verified. In any case, the discovery API doesn't define a leader switch, so 
we'd have to discuss a new API for this.
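A rough, untested sketch of what that manual change could look like through 
the plain JCR API - node structure and property name as described above; the 
single {{session.save()}} keeps all the changes in one transaction:

{code:java}
import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

// Untested sketch: bump the numeric prefix of each old instance's
// leaderElectionId; in a real blue-green setup, filter to the old
// instances only instead of iterating over all of them.
void demoteOldInstances(Session session) throws RepositoryException {
    Node instances = session.getNode("/var/discovery/oak/clusterInstances");
    for (NodeIterator it = instances.getNodes(); it.hasNext();) {
        Node instance = it.nextNode();
        String id = instance.getProperty("leaderElectionId").getString();
        int sep = id.indexOf('_');
        int prefix = Integer.parseInt(id.substring(0, sep));
        instance.setProperty("leaderElectionId",
                (prefix + 1) + id.substring(sep));
    }
    session.save(); // one save == one commit, avoiding an interim leader change
}
{code}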

> Defined leader switch
> ---------------------
>
> Key: SLING-7830
> URL: https://issues.apache.org/jira/browse/SLING-7830
> Project: Sling
>  Issue Type: Improvement
>  Components: Discovery
>Reporter: Carsten Ziegeler
>Priority: Major
>
> The current leader selection is based on startup time and sling id (mainly) 
> and is stable across changes in the topology for as long as the leader is up 
> and running.
> However there are use cases like blue-green deployment where new instances 
> with a new version are started and take over the functionality. However, 
> with the current discovery setup, the leader would still be one of the 
> instances with the old version.
> With a newly deployed version, tasks currently bound to the leader should run 
> on the new version.
> Therefore the leader needs to switch and stay the leader (until it dies).
> We probably need an additional criterion for the leader selection.
> /cc [~egli]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)