Hello Davide,
+1 :)
Regards,
Tomek
--
Tomek Rękawek | Adobe Research | www.adobe.com
reka...@adobe.com
> On 9 Jan 2018, at 12:53, Davide Giannella wrote:
>
> A candidate for the Jackrabbit Oak 1.8.0 release is available at:
>
>
Hello,
during the development of Oak 1.7 we’ve found many potential issues in the
oak-upgrade and also introduced useful features:
* improved S3 migration support and reliability,
* the checkpoints are now migrated too,
* no more need to reindex the repository after migration.
I’d like
Hi Matt,
> On 24 Oct 2017, at 21:54, Matt Ryan wrote:
> It is still unclear to me how this works in terms of configuration files,
> and how this would work for the CompositeDataStore. This is how I believe
> it would work for two FileDataStores in the composite:
>
> FDS config
Hi Matt,
> On 20 Oct 2017, at 23:02, Matt Ryan wrote:
>
> I think I basically understand all of this, except I don’t know how you go
> about configuring two file data stores. What would that look like in
> practice? Normally if I were going to configure a FileDataStore I
Hi,
I plan to backport these two issues. They improve the S3 resilience in
oak-upgrade by using the newer version of S3DataStore and waiting until all the
uploads are finished.
Regards,
Tomek
Hello Matt,
I don’t think we should rely on the bundle activation / deactivation, but
rather on service registration / re-registration. OSGi allows the use of
MANDATORY_MULTIPLE cardinality for a @Reference - in this case, the service
consumer will be informed every time there is a new service
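The dynamic-reference pattern described above can be sketched in plain Java (hypothetical names, not the actual OSGi/SCR API - in a real component the framework would invoke the bind/unbind callbacks for you):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical service interface; in OSGi this would be a registered service.
interface MountedStore {
    String name();
}

// Sketch of a consumer that is informed on every (un)registration, the way an
// OSGi component with a MANDATORY_MULTIPLE @Reference receives bind/unbind
// callbacks from the service component runtime.
class CompositeStoreConsumer {
    private final List<MountedStore> stores = new CopyOnWriteArrayList<>();

    // SCR would call this for each newly registered service.
    void bindStore(MountedStore store) {
        stores.add(store);
        // re-configure the composite here
    }

    // SCR would call this when a service goes away.
    void unbindStore(MountedStore store) {
        stores.remove(store);
    }

    int boundCount() {
        return stores.size();
    }
}
```

With MANDATORY_MULTIPLE cardinality the component would additionally stay inactive until at least one service is bound.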
Hello Robert & Michael,
> On 22 Sep 2017, at 08:31, Robert Munteanu wrote:
>>
>> this seems like an opposite of the composite node store - rather than
>> combining multiple repositories together, we’re trying to split one
>> repository into many jails. Maybe I’m too
Hello Bertrand,
this seems like an opposite of the composite node store - rather than combining
multiple repositories together, we’re trying to split one repository into many
jails. Maybe I’m too optimistic, but I think the implementation should be quite
easy if done on the node store level.
Hello,
the migration code requires access to the checkpoint metadata: the creation and
expiry timestamps. They can be read by accessing the checkpoints root node
(using the method mentioned in the subject). However, the method is
package-scoped. Can we make it public so that other modules can
for removing the lease
info.
Regards,
Tomek
> On 11 Aug 2017, at 08:07, Tomek Rekawek <reka...@adobe.com.INVALID> wrote:
>
> Hello,
>
> I wanted to draw your attention to OAK-6547. When running Oak
Hello,
I wanted to draw your attention to OAK-6547. When running Oak inside Docker,
all the properties used to create the machine id and the process id are exactly
the same for different containers running on the same host. I was wondering if
it’s a good idea to use the container id
Regards,
Tomek
>
>
>
> On 29.05.17 10:50, Tomek Rekawek wrote:
>> Hello,
>> in the OAK-6220 I’m exploring a topic of having a switchable copy-on-write
>> node store implementation. The idea is that
Hi,
regression fixed, sorry for that.
Regards,
Tomek
> On 29 May 2017, at 20:11, Vikas Saurabh wrote:
>
> Hi Angela,
>
>> do others experience the same issue? and if yes, is anybody working on
>>
Hello,
in OAK-6220 I’m exploring the topic of having a switchable copy-on-write node
store implementation. The idea is that the “main” node store (eg. DocumentMK)
is wrapped with an extra layer (the copy-on-write node store), which can be
turned on/off at runtime. When the copy-on-write is
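The wrapping idea can be sketched with a toy store interface (plain Java, hypothetical names - not Oak's actual NodeStore API): while copy-on-write is enabled, writes land in an overlay and the wrapped "main" store stays untouched.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for a node store API.
interface SimpleStore {
    String get(String path);
    void put(String path, String value);
}

class MemoryStore implements SimpleStore {
    private final Map<String, String> data = new HashMap<>();
    public String get(String path) { return data.get(path); }
    public void put(String path, String value) { data.put(path, value); }
}

// Copy-on-write wrapper: while enabled, writes go to an overlay and the
// wrapped store is not modified; disabling drops the overlay again.
class CopyOnWriteStore implements SimpleStore {
    private final SimpleStore main;
    private Map<String, String> overlay; // non-null while COW is enabled

    CopyOnWriteStore(SimpleStore main) { this.main = main; }

    void enableCopyOnWrite() { overlay = new HashMap<>(); }
    void disableCopyOnWrite() { overlay = null; }

    public String get(String path) {
        if (overlay != null && overlay.containsKey(path)) {
            return overlay.get(path);
        }
        return main.get(path);
    }

    public void put(String path, String value) {
        if (overlay != null) {
            overlay.put(path, value); // main store stays untouched
        } else {
            main.put(path, value);
        }
    }
}
```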
Hi Marco,
the main purpose of the oak-upgrade is to migrate a Jackrabbit 2 / CRX2
repository into Oak or to migrate one Oak node store (eg. segment) to another
(like Mongo). On the other hand, it’s not a good choice for application
upgrades within the same repository type. You
Hello,
so, it seems we have consensus. I’ll rename the implementation to
CompositeNodeStore and the module to oak-store-composite tomorrow afternoon.
Regards,
Tomek
> On 11 May 2017, at 14:48, Julian Sedding
Hello,
> On 5 May 2017, at 20:40, Robert Munteanu wrote:
>> I was wondering about this also WRT federated data store.
> I think the high-level intent is the same for both - compose a single
> {Data,Node}Store out of multiple sub-stores.
I also think that both implementations
Hello oak-dev,
the multiplexing node store has been recently extracted from the oak-core into
a separate module and I’ve used it as an opportunity to rename the thing. The
name I suggested is Federated Node Store. Robert doesn’t agree that it’s the
right name, mostly because the “partial” node
sent this option to the user.
>
> Michael
>
>
> On 12.03.17 18:30, Tomek Rekawek wrote:
>> Hello,
>>
>> I’d like to backport the OAK-5920 to branch 1.6. Apparently, under some
>> circumstances the checkpoint migration in the oak-upgrade doesn’t work. I
Hello,
I’d like to backport the OAK-5920 to branch 1.6. Apparently, under some
circumstances the checkpoint migration in the oak-upgrade doesn’t work. It’s a
best-effort procedure anyway, so if an exception occurs, the new patch will
catch it, log a warning, clean up the incomplete state and
Hi,
I’ve found a bug which, for some configurations, may cause the MongoDocumentStore
to always send requests to the Mongo primary instance (even if the secondary is
“nearest”). The fix is simple, but it requires a lazy initialisation of one of
the objects, so a couple of extra eyes won’t hurt:
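The lazy-initialisation part can be sketched generically (plain Java; this is not the actual MongoDocumentStore code - just the pattern, so the object is created only on first use, e.g. once the full configuration is available):

```java
import java.util.function.Supplier;

// Thread-safe lazy initialization via double-checked locking: the factory
// runs at most once, on the first get() call rather than at construction.
class Lazy<T> implements Supplier<T> {
    private final Supplier<T> factory;
    private volatile T value;

    Lazy(Supplier<T> factory) { this.factory = factory; }

    public T get() {
        T result = value;
        if (result == null) {
            synchronized (this) {
                result = value;
                if (result == null) {
                    value = result = factory.get();
                }
            }
        }
        return result;
    }
}
```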
Hi,
Some of the Oak users are interested in rolling back the Oak upgrade within a
branch (like 1.4.10 -> 1.4.1). As far as I understand, it should work, unless
some of the commits in (1.4.1, 1.4.10] introduce a repository format change
that is not compatible with the previous version (eg.
Hi,
[x] +1 Release this package as Apache Jackrabbit 2.15.0
Regards,
Tomek
[X] +1 Release this package as Apache Jackrabbit Oak 1.4.10
> On 10 Nov 2016, at 15:43, Alex Parvulescu wrote:
>
> [X] +1 Release this package as Apache Jackrabbit Oak 1.4.10
>
> On Thu, Nov 10,
Hi Robert,
> On 27 Oct 2016, at 14:11, Robert Munteanu wrote:
>
> If we have referenceable nodes or versionable nodes in the 'private'
> repository then it will be inconsistent, as they point to data
> maintained under the 'global' store. So we need to prevent such
>
Hello,
let me describe the use cases we have in mind for the multiplexing node store
in a more detailed way. I hope it’ll allow us to plan a few (well, two) milestones
for the implementation.
Let’s assume a Sling installation, in which the /apps and /libs nodes contain the
immutable application
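The kind of mounting being described - serving subtrees like /apps and /libs from a separate store - could look roughly like this path-to-mount resolver (a plain-Java sketch with hypothetical names; the actual multiplexing node store in Oak is more involved):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Resolves a path to the name of a mounted store; the longest matching
// mount prefix wins, everything else goes to the default (global) store.
class MountResolver {
    private final Map<String, String> mounts = new LinkedHashMap<>();
    private final String defaultStore;

    MountResolver(String defaultStore) { this.defaultStore = defaultStore; }

    void mount(String pathPrefix, String storeName) {
        mounts.put(pathPrefix, storeName);
    }

    String resolve(String path) {
        String best = defaultStore;
        int bestLen = -1;
        for (Map.Entry<String, String> e : mounts.entrySet()) {
            String prefix = e.getKey();
            if ((path.equals(prefix) || path.startsWith(prefix + "/"))
                    && prefix.length() > bestLen) {
                best = e.getValue();
                bestLen = prefix.length();
            }
        }
        return best;
    }
}
```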
Hi,
> On 23 Sep 2016, at 00:32, Michael Dürig wrote:
>
> To this respect I think your fix is basically correct, it should just be
> applied deeper down. Instead of wrapping the base states before passing them
> to the MemoryNodeStore constructor, I think that constructor
Hi Robert,
I think the quoted exception may be caused by some long string stored in the
Jackrabbit 2 repository. In MongoMK all the strings are inlined in the Mongo
documents, while the binaries are extracted to the blob store. Therefore,
string properties longer than ~15MB are not supported.
Hi Robert,
Thanks for the feedback.
> On 22 Sep 2016, at 15:13, Robert Munteanu wrote:
>
> Only thing I'm wondering is whether there is a scenario where
> performance would be greatly impacted since the NodeState contains lots
> of entries _and_ it's not a MemoryNodeState,
Hi,
I’ve looked into this issue. I think it’s caused by the fact that the squeeze()
method sometimes doesn’t wrap the passed node state with MemoryNodeStates, but
returns it as-is. I tried to wrap the state unconditionally in the initializers
and it fixed the issue.
Michael, Robert - do you
> at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:53)
> at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.main(OakUpgrade.java:42)
>
> Looking at RepositoryUpgrade.java, I'm guessing the upgrade tool only works
> if you use the default SearchIndex? For certain re
Hi Robert & Marcel,
thanks for the report. I created OAK-4832[1] to track it.
Robert, could you check if the problem exists on the recent SNAPSHOT[2]? If
it’s fine, I’ll backport the fix to the 1.4 branch.
Marcel, do you think using ConfigurationParameters.EMPTY for userConfig is
enough if
Hi,
FWIW, I merged OAK-4831, so now the upgrade tests won’t break even if they
can’t clean up the repository directory at the end. The directories are now
created under ./target (so it’s easy to remove them manually) and the logs will
contain the exact reason for the failure (in this case the
Hi Tulika,
> On 16 Sep 2016, at 15:16, Tulika Goel wrote:
> I just wanted to know if there are any plans to support a major version
> increment in future? Or even in current implementation, is there a scenario
> under which a new Version object is created with a name which
Hi Davide,
> On 13 Sep 2016, at 14:57, Davide Giannella wrote:
>
> Sorry if it's redundant but I didn't have the time yet to check the commits.
>
> Here's, from another project I worked on, how to enforce a specific java
> version at compile time.
>
> The build will fail if
Hi,
the interesting thing here is that we actually compile the code with -source
and -target set to 1.6 in these branches [1][2]. However, javac still uses the
rt.jar coming from the current JDK, and it does contain the java.nio package. It
seems that the only way to check the API usage
Hi Ian,
> On 09 Sep 2016, at 18:04, Ian Boston wrote:
>
> Hi,
> Is it possible write a CommitHook as an OSGI Component/Service and for Oak
> to pick it up ?
> The Component starts and gets registered as a service, but Oak doesn't
> appear to pick it up.
The standard
Hi Chetan,
yes, it seems that this has been overlooked in the OAK-3239 (porting the
--include-paths support from RepositoryUpgrade). Feel free to create an issue /
commit a patch or let me know if you want me to do it.
Best regards,
Tomek
Hello,
I was wondering whether it’d make sense to normalise the RDB Document Store
schema - get rid of the JSON/JSOP concatenated strings and store each key/value
in a separate database row. Something like this:
id       STRING
key      STRING
revision STRING (nullable)
value
Hello,
during an Adobe internal Oak-coordination call I presented two improvements for
the clustered Oak setup I’m working on: OAK-3865 and OAK-4412. The presentation
is available at [1], please find the summary of the discussion below.
OAK-3865 optimises the secondary-read strategy. It tracks
> longer.
>
> 2016-06-06 15:01 GMT+02:00 Tomek Rekawek <reka...@adobe.com>:
>
>> Hello,
>>
>> I’ve noticed that the maven-deploy-plugin is disabled for the
>> oak-segment-tar. As a result, the 1.5.2 release is not quite consistent
>> (oak-jcr and oa
Hello,
I’ve noticed that the maven-deploy-plugin is disabled for the oak-segment-tar.
As a result, the 1.5.2 release is not quite consistent (oak-jcr and oak-upgrade
depend on the oak-segment-tar, which isn’t available in the remote repository).
Should we re-enable the plugin before the 1.5.3
Hi Francesco,
it will be useful in the oak-upgrade too. Do you think it’d be possible to provide
a similar schema for the blob store configuration?
Best regards,
Tomek
> On 05 May 2016, at 10:42, Francesco Mari
+1
Best regards,
Tomek
> On 06 Apr 2016, at 14:27, Francesco Mari wrote:
>
> Hi all,
>
> some months ago we decided to drop support for Java 1.6 [1]. What about
> increasing the language level of
Hello Angela,
> On 16 Mar 2016, at 12:39, Angela Schreiber wrote:
>
> stepping into the AbstractOak2OakTest and finally
> RepositorySidegrade.copyState i actually see that the target NodeBuilder
> that results from copyWorkspace doesn't have the jcr:mixinTypes
> property
Hello Ancona,
the mailing list didn’t allow adding an attachment. Could you post it somewhere
online or (even better) put the code into github?
Best regards,
Tomek
> On 17 Mar 2016, at 14:46, Ancona Francesco
Hello,
thanks for the warm feedback, I’ll prepare a list of proposed tests /
“spoilers” (eg. exhausting file descriptor limit) — it may be a good input for
the Oakathon.
> On 08 Mar 2016, at 14:06, Bertrand Delacretaz wrote:
> Great stuff, and it looks like there's not
Hello,
For some time I've been working on a little project called oak-resilience. It
aims to be a resilience testing framework for Oak. It uses virtualisation to run
Java code in a controlled environment that can be spoilt in different ways, by:
* resetting the machine,
* filling the JVM
Hi Julian,
> On 29 Feb 2016, at 15:40, Julian Sedding wrote:
>
> Should we automatically wrap the DS in the LengthCachingDatastore in
> oak-upgrade? Or provide an option for the cache-file path, which turns
> it on if set?
Good idea. I think we should enable it by default
tra <chetan.mehro...@gmail.com> wrote:
>
> On Mon, Feb 29, 2016 at 6:42 PM, Tomek Rekawek <reka...@adobe.com> wrote:
>> I wonder if we can switch the order of length and identity comparison in
>> AbstractBlob#equal() method. Is there any case in which the
>> getConte
Hello,
one of our customers is trying to perform a repeated upgrade [1], using a JCR2+S3
repository as a source. The repository is large and the migration process
spends most of its time communicating with S3. It seems that besides
copying the new content, it gets the S3 metadata of each
Hello,
The already mentioned JCR-2633 puts jcr:mixinTypes property into
NodePropBundle#getPropertyEntries(). As a result, the oak-upgrade code
responsible for replacing mix:simpleVersionable with mix:versionable doesn’t
work correctly (the results are replaced by the original properties). I
Hello,
Recently I checked what percentage of MongoDB reads are done from the
secondary instance in a global clustering setup of AEM with the recent Oak
1.3.x. It was about 3%.
In the current trunk we'll only read a document from the secondary instance if:
(1) we have the parent of the
irectly and b) trying 3 (failing) bulk updates first? My
>point being: I wonder how much value is in tweaking the exact parameters.
>
>Cheers
>Michael
>
>
>
>On 15/12/15 14:04, "Tomek Rekawek" <reka...@adobe.com> wrote:
>
>>Hi Michael,
>>
Hello,
OAK-2066 contains a number of patches which will finally lead to using the
batch insert/update operations available in RDB and Mongo. This will increase
the performance of applying a commit, especially when we have many small
updates of different documents.
There are some documents that
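The batching idea can be illustrated with a minimal sketch (plain Java, hypothetical names - not the OAK-2066 patches themselves): updates are buffered and applied in a single round trip instead of one call per document.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of batched document updates: update() only buffers, flush() applies
// everything buffered so far in one "round trip" to the backend.
class BatchingUpdater {
    private final Map<String, String> backend = new HashMap<>();
    private final List<String[]> pending = new ArrayList<>();
    private int roundTrips = 0;

    void update(String id, String value) {
        pending.add(new String[] { id, value });
    }

    void flush() {
        if (pending.isEmpty()) {
            return; // nothing buffered, no round trip needed
        }
        roundTrips++;
        for (String[] op : pending) {
            backend.put(op[0], op[1]);
        }
        pending.clear();
    }

    String get(String id) { return backend.get(id); }
    int roundTrips() { return roundTrips; }
}
```

A real implementation additionally has to fall back to per-document updates when a batched operation conflicts, which is what the quoted discussion is about.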
licting documents would
>move in and out of bulk updates periodically?
>Or do you envision that removal from bulk updates would be forever, once a
>document is removed?
>
>Michael
>
>
>
>
>On 15/12/15 11:35, "Tomek Rekawek" <reka...@adobe.com> wrote:
>
Hi,
On 10/12/15 15:58, "Julian Reschke" wrote:
>Anyway, that still doesn't explain the confusion in the surefire logs, no?
I think I found a cause of this one. The custom Parallelized runner we use has
a bug - if a test class runs for longer than 10 minutes, it will
Hi,
I spent some time analysing the logs and found out a strange thing. On a
“slow” machine, in the surefire logs for the AtomicCounterTest (which takes 63
sec while it should take 3 sec), the following test case appears [1]:
It’s a test case from a completely different class. I downloaded all
" <ju...@apache.org> wrote:
>Build Update for apache/jackrabbit-oak
>-
>
>Build: #6892
>Status: Broken
>
>Duration: 464 seconds
>Commit: d7f8d606426e67e02098b32f0918cfa144d15900 (trunk)
>Author: Tomek Rekawek
>Message: Create
>Build: #6775
>Status: Broken
>
>Duration: 425 seconds
>Commit: 7155b7fae5e1c4727b4acef01ad823de05804feb (trunk)
>Author: Tomek Rekawek
>Message: Merge branch 'trunk' into OAK-3586
>
>View the changeset: https://github.com/apache/jackrabbit-oak/pull/44
>
>View the
Hello,
Can I ask a committer to merge two issues before the release?
OAK-3148
Allows migrating one blob store to another during normal Oak operation.
Already reviewed by Thomas Mueller.
OAK-2171
The command line interface for all the upgrade/sidegrade features in Oak with a
bunch of
even after the first
commit hook is run.
Switching the MemoryNodeStore to a memory-based SegmentNodeStore in the
CopyVersionHistoryTest#performCopy() method fixes the issue.
Marcel - could you commit the change?
Best regards,
Tomek
On 08/09/15 10:13, "Tomek Rekawek" <reka...@ado
Hi Marcel,
The setEarlyShutdown invoked in the test code is intentional. There is also the
RepositoryUpgrade#overrideEarlyShutdown() method, which should override the
manual setting, if we need to have access to the source repository in the
commit hooks. The override method should prevent such
Hi,
On 04/09/15 10:01, "Michael Dürig" wrote:
>+1 and maybe put those into a oak-tools folder and change the naming a bit:
>
>oak-tools/
>oak-tools/oak-development
>oak-tools/oak-upgrade
>oak-tools/oak-operations
+1. As a first step I extracted the upgrade command from
Hi Julian,
Thanks for making this more general.
On 03/09/15 09:31, "Julian Sedding" wrote:
>(…) split oak-run into three modules:
>
>- oak-dev-tools
>- oak-upgrade
>- oak-ops-tool
I think it makes perfect sense.
Regarding the oak-upgrade, we already have such a
tions? :)
Best regards,
Tomek
On 02/09/15 14:58, "Tomek Rekawek" <reka...@adobe.com> wrote:
>Hello,
>
>I created a pull request [1] for the OAK-2171 [2]. It exposes all features
>added recently to the oak-upgrade module (version history copy, filtering
>paths
Hello,
I created a pull request [1] for the OAK-2171 [2]. It exposes all features
added recently to the oak-upgrade module (version history copy, filtering
paths) as well as all migration paths (eg. mongo -> rdb) in the oak-run upgrade
command. There are also tests. Looking forward to feedback
Hello,
I’m trying to avoid re-assigning local variables whenever possible. For me
it makes the code easier to reason about - I know that the variable “state”
created at the beginning of the method is the same one I can access at the end.
If I need the child state at some point, I can create
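A tiny illustration of the style difference (not code from Oak; the names are made up):

```java
// Contrast: reassigning one local vs. introducing a new name per step.
class Locals {
    // Harder to follow: "state" means something different at each line.
    static String reassigned() {
        String state = "root";
        state = state + "/child";
        state = state.toUpperCase();
        return state;
    }

    // Easier to reason about: each name keeps a single value throughout.
    static String singleAssignment() {
        final String root = "root";
        final String child = root + "/child";
        final String normalized = child.toUpperCase();
        return normalized;
    }
}
```

Both methods compute the same result; the second just never rebinds a name.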
Hello,
I prepared a pull request [1] with the OAK-3148 [2] implementation. It allows
running a JMX-triggered process that migrates binaries from one external blob
store to another, operating on a running instance. More information can be found in the
JIRA description and the comment. It would be great to
Hello,
I started working on OAK-3148, a new feature that allows gradually migrating
blobs from one store to another without turning off the instance. In order to
create the SplitBlobStore I need a way to remember (and save) already
transferred blob ids.
So, basically I need a
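A minimal sketch of that idea (hypothetical names, in-memory only - the real implementation would persist the set of migrated ids): reads are routed to the new store once a blob id is recorded as migrated, otherwise to the old store.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of a split blob store during gradual migration. Not the actual
// OAK-3148 classes; stores are simple maps here.
class SplitBlobStoreSketch {
    private final Map<String, byte[]> oldStore = new HashMap<>();
    private final Map<String, byte[]> newStore = new HashMap<>();
    private final Set<String> migratedIds = new HashSet<>();

    void writeOld(String id, byte[] data) { oldStore.put(id, data); }

    // Migrate a single blob and remember its id.
    void migrate(String id) {
        byte[] data = oldStore.get(id);
        if (data != null) {
            newStore.put(id, data);
            migratedIds.add(id);
        }
    }

    // Route the read based on whether the blob has been migrated yet.
    byte[] read(String id) {
        return migratedIds.contains(id) ? newStore.get(id) : oldStore.get(id);
    }
}
```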