On 19 Oct 2010, at 11:24, Thomas Müller wrote:
> changeSetId = nanosecondsSince1970 * totalClusterNodes + clusterNodeId
I have spent some time doing experiments on this; here are some observations
based on those experiments. (Could be totally irrelevant to this discussion,
sorry for the noise.)
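The quoted id scheme can be sketched as follows. This is a hypothetical
illustration, not Jackrabbit code; the class and method names are invented.
The idea is that multiplying the timestamp by the cluster size and adding
the node id gives ids that are globally unique and roughly time-ordered:

```java
// Hypothetical sketch of the proposed changeSetId scheme.
// Not part of Jackrabbit's API; names are illustrative only.
public class ChangeSetIdGenerator {
    private final long totalClusterNodes;
    private final long clusterNodeId;

    public ChangeSetIdGenerator(long totalClusterNodes, long clusterNodeId) {
        this.totalClusterNodes = totalClusterNodes;
        this.clusterNodeId = clusterNodeId;
    }

    // changeSetId = nanosecondsSince1970 * totalClusterNodes + clusterNodeId
    // Two nodes can never collide, because id % totalClusterNodes
    // recovers the node id and id / totalClusterNodes the timestamp.
    public long nextId(long nanosecondsSince1970) {
        return nanosecondsSince1970 * totalClusterNodes + clusterNodeId;
    }
}
```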
Hi,
> You wrote: "I think the persistence API should be synchronous as it is
> now." Which I understand as the persistence layer returning a save()
> call only when the entire cluster has been "written to".
It seems I was unclear. This is exactly what I *don't* want for
Jackrabbit 3. In the current…
On Thu, Oct 21, 2010 at 21:03, Thomas Müller wrote:
>> importance of leveraging in-memory storage
>
> In-memory storage *alone* is fast. But if used in combination with the
> current clustering architecture, then writes will not scale. They will
> just be a bit faster (until you reach the network…)
Hi,
> See section 7 "Vector Time". Also see [1] from slide 14 onwards for a more
> approachable reference.
> [1] http://www.cambridge.org/resources/0521876346/6334_Chapter3.pdf
Thanks! From what I read so far it sounds like my idea is called "Time
Warp" / "Virtual Time".
On pages 10 and 11 there…
Hi,
> Network delay .. is faster than the delay of a disk
I wrote: "the network is the new disk" (in terms of bottleneck, in
terms of performance problem). Network delay may be a bit faster now
than disk access, but it's *still* a huge bottleneck (compared to
in-memory operations), especially if…
> we could leverage a virtual time algorithm
> I read the paper, but I don't actually understand how to implement it.
See section 7 "Vector Time". Also see [1] from slide 14 onwards for a
more approachable reference.
Michael
[1] http://www.cambridge.org/resources/0521876346/6334_Chapter3.pdf
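The "Vector Time" idea referenced here can be sketched as a minimal vector
clock. This is an illustrative Java sketch, not Jackrabbit code: each cluster
node keeps one counter per node, ticks its own component on local events
(e.g. a local changeset), and takes the component-wise maximum when merging
a remote changeset. Two snapshots are then comparable iff one happened
before the other:

```java
import java.util.Arrays;

// Minimal vector-clock sketch (the "Vector Time" of the referenced
// chapter); illustrative only, not Jackrabbit code.
public class VectorClock {
    private final long[] clock;
    private final int nodeId;

    public VectorClock(int totalNodes, int nodeId) {
        this.clock = new long[totalNodes];
        this.nodeId = nodeId;
    }

    // Called on every local event (e.g. a local changeset).
    public void tick() {
        clock[nodeId]++;
    }

    // Called when receiving another node's changeset: take the
    // component-wise maximum, then tick the local component.
    public void merge(long[] remote) {
        for (int i = 0; i < clock.length; i++) {
            clock[i] = Math.max(clock[i], remote[i]);
        }
        clock[nodeId]++;
    }

    // a happened before b iff a <= b component-wise and a != b.
    // If neither happened before the other, the events are concurrent
    // and the changesets need conflict resolution.
    public static boolean happenedBefore(long[] a, long[] b) {
        boolean strictlyLess = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictlyLess = true;
        }
        return strictlyLess;
    }

    public long[] snapshot() {
        return Arrays.copyOf(clock, clock.length);
    }
}
```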
On Tue, Oct 19, 2010 at 23:17, Justin Edelson wrote:
> You'd also need to ensure that a single session.save() which touched
> both "immediately" and "eventually" consistent nodes forces a queue
> flush (or at least a queue flush of changesets which touched the same
> set of eventually consistent nodes).
On Wed, Oct 20, 2010 at 11:22, Thomas Müller wrote:
> Let's discuss partitioning / sharding in another thread. Asynchronous
> change merging is not about how to manage huge repositories (for that
> you need partitioning / sharding), it's about how to manage cluster
> nodes that are relatively far apart.
Having different consistency expectations for certain parts in o…
Hi,
Let's discuss partitioning / sharding in another thread. Asynchronous
change merging is not about how to manage huge repositories (for that
you need partitioning / sharding), it's about how to manage cluster
nodes that are relatively far apart. I'm not sure if this is the
default use case for…
Hi,
On Tue, Oct 19, 2010 at 12:24 PM, Thomas Müller wrote:
> The current Jackrabbit clustering doesn't scale well for writes
> because all cluster nodes use the same persistent storage. Even if
> persistence storage is clustered, the cluster journal relies on
> changes being immediately visible in all nodes.
On Tue, Oct 19, 2010 at 4:14 PM, Alexander Klimetschek wrote:
> On Tue, Oct 19, 2010 at 12:24, Thomas Müller wrote:
>> Instead, the cluster nodes should merge each other's changes
>> asynchronously (except operations like JCR locking, plus potentially
>> other operations that are not that common; maybe even node move).
On Tue, Oct 19, 2010 at 12:24, Thomas Müller wrote:
> Instead, the cluster nodes should merge each other's changes
> asynchronously (except operations like JCR locking, plus potentially
> other operations that are not that common; maybe even node move). With
> "asynchronously" I mean usually within…
Thomas, I am not sure if you are proposing to add a more "asynchronous"
PersistenceManager or completely change the behaviour of the current one.
While I would love a system that can scale well for reads and writes, and
while I understand that there is a class of applications that are well
served…
The current Jackrabbit clustering doesn't scale well for writes
because all cluster nodes use the same persistent storage. Even if
persistence storage is clustered, the cluster journal relies on
changes being immediately visible in all nodes. That means Jackrabbit
clustering can scale well for reads, but not for writes.