Re: Not-sticky sessions with Sling?

2017-01-16 Thread Bertrand Delacretaz
Hi,

On Mon, Jan 16, 2017 at 9:16 PM, lancedolan  wrote:
> ...this probably shoots down our entire Sling
> proof of concept project...

That would be a pity, as I suppose you're starting to like Sling now ;-)

> ...Is there any way
> to force all reads to read the most recent revision, perhaps through some
> configuration?...

As Chetan says, that's a question for the Oak dev list, but from a Sling
point of view having that option would be useful IMO.

If the clustered Sling instances can get consensus on what the most
recent revision is (*), having the option for Oak to block until it
sees that revision sounds useful in some cases. That should probably
happen either on opening a JCR Session or when Session.refresh() is
called.
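A rough sketch, from the application side, of what "block until the revision is visible" could look like today. This is not an Oak or Sling API: the helper name, timeouts, and the idea that the caller supplies the visibility check (for example a Session.refresh(false) followed by a node-existence test) are all assumptions for illustration:

```java
import java.util.function.BooleanSupplier;

public class RevisionWait {

    /**
     * Polls the supplied visibility check until it returns true or the
     * timeout elapses. The check is expected to refresh the JCR session
     * (e.g. Session.refresh(false)) before testing for the expected state.
     */
    public static boolean awaitVisible(BooleanSupplier check,
                                       long timeoutMillis,
                                       long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (check.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return check.getAsBoolean(); // one final check at the deadline
    }
}
```

An Oak-level option would of course be cheaper than this client-side polling, since Oak knows when its background read has caught up.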

-Bertrand

(*) which might require an additional consensus mechanism, maybe via
Mongo if that's what you're using?


Re: Not-sticky sessions with Sling?

2017-01-16 Thread Chetan Mehrotra
On Tue, Jan 17, 2017 at 1:46 AM, lancedolan  wrote:
> It's ironic that the cluster which involves multiple datastores (tar), and
> thus should have a harder time being consistent, is the one that can
> accomplish consistency..

That's not how it is. A cluster which involves multiple datastores (tar)
is also eventually consistent. Changes are either "pushed" to each tar
instance via some replication, or changes made on one cluster node
surface on the others via reverse replication. In either case a change
is not immediately visible on the other cluster nodes.

> More importantly, is it a function of Repo size, or repo activity?
> If the repo grows in size (number of nodes) and grows in use (number of
> writes/sec) does this impact how frequently Sling Cluster instances grab the
> most recent revision?

It's somewhat related to the number of writes and is not dependent on repo size.

> Less importantly... My colleagues and I are really curious as to why
> Jackrabbit is implemented this way. Is there a performance benefit to being
> eventually consistent, when the shared datastore is actually consistent?
> What's the reasoning for not always hitting the latest data? Also... Is
> there any way to force all reads to read the most recent revision, perhaps
> through some configuration?

That's a question best suited for discussion on the oak-dev mailing list
(oak-...@jackrabbit.apache.org).

Chetan Mehrotra


RE: Bad asset resource resolving

2017-01-16 Thread Stefan Seifert
no, the basics of resource resolution have not changed recently - a resource
with multiple dots in its resource name (like /content/sling.logo.png) should
always be resolvable by this name. and it works as expected when i reproduce
the steps you describe (copy to /content/sling.logo.png).

i tested it with the current sling launchpad from trunk.

stefan


>-Original Message-
>From: Bart Wulteputte [mailto:bart.wultepu...@gmail.com]
>Sent: Monday, January 16, 2017 1:44 PM
>To: users@sling.apache.org
>Subject: Bad asset resource resolving
>
>Hi all,
>
>It seems that the way assets are resolved has changed a little. When an
>asset contains an additional dot it can't be resolved anymore to the
>actual asset resource, e.g. /content/my.asset.pdf resolves to
>/content/my.pdf rather than the expected path.
>
>As a simple test you can copy the sling-logo.png to
>/content/sling.logo.png. When now using the Sling Resolver Test, we see
>that resolving /content/sling.logo.png results in a non-existing resource
>with path /content/sling.png instead of the expected asset URL. The section
>after the first dot is now interpreted as a selector, which didn't use to
>be the case for assets in the past.
>
>Was this intentionally changed or is this an actual bug?
>
>Best regards
>
>Bart


Re: Not-sticky sessions with Sling?

2017-01-16 Thread lancedolan
This is really disappointing for us. Through this revisioning, Oak has turned
a datastore that is consistent by default into a datastore that is not :p
It's ironic that the cluster which involves multiple datastores (tar), and
thus should have a harder time being consistent, is the one that can
accomplish consistency... and the cluster that involves a single shared
source of truth (mongo/rdbms), and should have the easiest time being
consistent, is not. Hehe. Ahh this probably shoots down our entire Sling
proof of concept project. 

Our next step is to measure the consequences of moving forward with
Sling+Oak+Mongo and not-sticky sessions. I'm going to try to test this, and
get an empirical answer, by deploying to some AWS instances. I'll develop a
custom AuthenticationHandler so that authentication is stateless and then
we'll try to see how bad the "delay" might be. However, I would love a
theoretical answer as well, if you've got one :) 
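A stateless handler boils down to verifying a self-contained token on every request instead of consulting server-side session state. A minimal sketch of the credential check such a handler might perform; the HMAC scheme, secret handling, and token layout here are assumptions for illustration, not part of any Sling API:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;

public class StatelessTokenCheck {

    // Token layout assumed here: "<userId>.<base64url(hmacSha256(userId))>"
    public static String issue(String userId, byte[] secret) {
        return userId + "." + sign(userId, secret);
    }

    public static boolean verify(String token, byte[] secret) {
        int dot = token.lastIndexOf('.');
        if (dot < 0) {
            return false;
        }
        String userId = token.substring(0, dot);
        String expected = sign(userId, secret);
        // constant-time comparison to avoid timing side channels
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                token.substring(dot + 1).getBytes(StandardCharsets.UTF_8));
    }

    private static String sign(String userId, byte[] secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            return Base64.getUrlEncoder().withoutPadding()
                    .encodeToString(mac.doFinal(userId.getBytes(StandardCharsets.UTF_8)));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

In a Sling deployment a check like this would presumably live inside the handler's extractCredentials(...) method, returning an AuthenticationInfo on success, so that no instance needs session affinity to authenticate a request.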


chetan mehrotra wrote
> ... sticky sessions would be required due to the eventually consistent nature of
> the repository.

Okay, but if we disable sticky sessions ANYHOW (because in our environment we
must), how much time delay are we talking, do you think, in realistic
practice? We might be able to solve this by giving user-feedback that covers
up for the sync delay. When a user clicks save, they might just go to a
different screen, providing enough time for things to sync up. It might be a
race condition, but that might be acceptable if we can choose that
architecture on good information. I think that, in theory, the answer to
"worst case scenario" for eventual consistency is always "forever," but
really... How long could a Sling instance take to get to the latest
revision? More importantly, is it a function of Repo size, or repo activity?
If the repo grows in size (number of nodes) and grows in use (number of
writes/sec) does this impact how frequently Sling Cluster instances grab the
most recent revision?

Less importantly... My colleagues and I are really curious as to why
Jackrabbit is implemented this way. Is there a performance benefit to being
eventually consistent, when the shared datastore is actually consistent?
What's the reasoning for not always hitting the latest data? Also... Is
there any way to force all reads to read the most recent revision, perhaps
through some configuration? A performance cost for this might be tolerable.





Bad asset resource resolving

2017-01-16 Thread Bart Wulteputte
Hi all,

It seems that the way assets are resolved has changed a little. When an
asset contains an additional dot it can't be resolved anymore to the
actual asset resource, e.g. /content/my.asset.pdf resolves to
/content/my.pdf rather than the expected path.

As a simple test you can copy the sling-logo.png to
/content/sling.logo.png. When now using the Sling Resolver Test, we see
that resolving /content/sling.logo.png results in a non-existing resource
with path /content/sling.png instead of the expected asset URL. The section
after the first dot is now interpreted as a selector, which didn't use to
be the case for assets in the past.
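The dot-cutting described above can be sketched as pure string logic: when no resource matches the full path, the resolver cuts the last path segment back one dot at a time, treating the removed parts as selectors and extension. A simplified illustration of the candidate resource paths, longest first (the real ResourceResolver also consults mappings and vanity paths, so this is a sketch, not its actual implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class SelectorCut {

    /**
     * Returns the candidate resource paths for a request path, longest
     * first: the full path, then the path cut back one dot at a time
     * within the last segment. The removed parts would become selectors
     * and the extension.
     */
    public static List<String> candidates(String requestPath) {
        List<String> result = new ArrayList<>();
        result.add(requestPath);
        int lastSlash = requestPath.lastIndexOf('/');
        int cut = requestPath.lastIndexOf('.');
        while (cut > lastSlash) {
            requestPath = requestPath.substring(0, cut);
            result.add(requestPath);
            cut = requestPath.lastIndexOf('.');
        }
        return result;
    }
}
```

For /content/sling.logo.png this yields /content/sling.logo.png, then /content/sling.logo, then /content/sling; the reported behaviour suggests the first (full-path) candidate is being skipped for assets.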

Was this intentionally changed or is this an actual bug?

Best regards

Bart