[
https://issues.apache.org/jira/browse/JCRVLT-630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17544060#comment-17544060
]
Ben Helleman commented on JCRVLT-630:
-------------------------------------
Hi Konrad, our team is leveraging binaryless content packages to optimize
moving content: rather than moving the binaries themselves through FileVault,
we transfer only references, for speed and smaller transfer size. We are
moving content from one document store to another, and this is where installing
the binaryless content packages fails, as described in the ticket.
We are using the
[VaultDistributionPackageBuilderFactory|https://github.com/apache/sling-org-apache-sling-distribution-core/blob/master/src/main/java/org/apache/sling/distribution/serialization/impl/vlt/VaultDistributionPackageBuilderFactory.java]
to generate in-memory packages and stream them out to a shared Azure
container, where the importer reads the content package and tries to install
it.
Upon import, the install process parses the package and, when it comes across
a binary, the DocViewProperty method
[apply|https://github.com/apache/jackrabbit-filevault/blob/jackrabbit-filevault-3.4.0/vault-core/src/main/java/org/apache/jackrabbit/vault/util/DocViewProperty.java#L413]
calls the value factory to resolve the value for the given ref:
{code:java}
node.getSession().getValueFactory().createValue(ref) {code}
At this point, the ref contains the blobId and the HMAC. When createValue(ref)
is called, the code makes its way into Jackrabbit's AbstractDataStore
[getRecordFromReference|https://github.com/apache/jackrabbit/blob/trunk/jackrabbit-data/src/main/java/org/apache/jackrabbit/core/data/AbstractDataStore.java#L64]
(which Oak's data stores build on), where it
[checks|https://github.com/apache/jackrabbit/blob/trunk/jackrabbit-data/src/main/java/org/apache/jackrabbit/core/data/AbstractDataStore.java#L71]
whether the reference matches the expected blob id plus the [target datastore
HMAC|https://github.com/apache/jackrabbit/blob/trunk/jackrabbit-data/src/main/java/org/apache/jackrabbit/core/data/AbstractDataStore.java#L96].
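As a rough illustration of where the mismatch arises, the check can be
sketched as follows. This is a simplified, hypothetical sketch (the class and
method names are mine, not the actual Jackrabbit source): the reference has
the form blobId:hmac, and the HMAC is recomputed with the *local* datastore's
reference key before the record is resolved.
{code:java}
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Simplified sketch of the reference check performed around
// AbstractDataStore.getRecordFromReference: the reference is
// "<blobId>:<hmac>", and the HMAC is recomputed with the local
// datastore's reference key before the record is resolved.
public class ReferenceCheckSketch {

    // Build a reference for a blob id using a given reference key
    // (HMAC-SHA1 over the id, hex-encoded, appended after a colon).
    public static String referenceFor(String blobId, byte[] referenceKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(referenceKey, "HmacSHA1"));
        byte[] hash = mac.doFinal(blobId.getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        return blobId + ':' + hex;
    }

    // The import-time check: recompute the reference with the local key
    // and compare. A package created against a different datastore key
    // fails here even though the blob id itself may be perfectly valid.
    public static boolean matches(String reference, byte[] localReferenceKey) throws Exception {
        int colon = reference.indexOf(':');
        if (colon == -1) {
            return false;
        }
        String blobId = reference.substring(0, colon);
        return reference.equals(referenceFor(blobId, localReferenceKey));
    }
}
{code}
So even a correctly transferred blob id fails the check whenever the importing
system's reference key differs from the exporting system's.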
Konrad, do you think there is any point in this code flow where we could
configure FileVault to tell it that a difference in HMAC values is expected,
and that this is the configured behavior?
I'm not very familiar with the code base, but I'm not seeing a clear way to
extend things, as the validation checks all seem to live in
AbstractDataStore's getRecordFromReference, outside of FileVault's control.
> Unable to import binaryless packages from one datastore to another
> ------------------------------------------------------------------
>
> Key: JCRVLT-630
> URL: https://issues.apache.org/jira/browse/JCRVLT-630
> Project: Jackrabbit FileVault
> Issue Type: Bug
> Components: vlt
> Reporter: Johnson Ho
> Priority: Major
>
> Unable to import a binaryless package from one datastore to another. When
> creating a binaryless package using FileVault, the content.xml data
> includes a binaryRef:
> {noformat}
> <jcr:content
>
> jcr:data="{BinaryRef}cc3959dc5fd54665c7d38471830663f3013baf4e83f8f251cd8997ffcf1be2f6:4503fbae2bd5df805a1a6245708f372c4e666d72"
> jcr:lastModified="{Date}2022-03-16T16:06:53.797-04:00"
> jcr:lastModifiedBy="workflow-process-service"
> jcr:mimeType="image/png"
> jcr:primaryType="oak:Resource"/>
> {noformat}
> The binaryRef *4503fbae2bd5df805a1a6245708f372c4e666d72* is computed from
> the binary id plus the datastore reference key. This binds the binaryless
> package to the datastore. As a result, binaryless packages can only be
> installed into systems that have a shared datastore configured. When
> installing to a system that has a different datastore, the result is an
> OakConstraint 21 error.
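To make the binding described in the issue concrete, here is a short,
hypothetical sketch (the reference keys are made up; only the blob id is taken
from the example above): hashing the same blob id with two different datastore
reference keys yields two different references, which is why a reference
minted against the source datastore can never validate against the target.
{code:java}
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrates why a binaryRef is datastore-specific: the trailing
// segment is an HMAC-SHA1 of the blob id keyed with the datastore's
// reference key, so different keys produce different references.
public class BinaryRefDemo {

    // Hex-encoded HMAC-SHA1 of the blob id under the given key.
    public static String hmacHex(String blobId, byte[] key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        StringBuilder hex = new StringBuilder();
        for (byte b : mac.doFinal(blobId.getBytes("UTF-8"))) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        String blobId = "cc3959dc5fd54665c7d38471830663f3013baf4e83f8f251cd8997ffcf1be2f6";
        // Hypothetical keys for the source and target datastores.
        String sourceRef = blobId + ':' + hmacHex(blobId, "source-key".getBytes("UTF-8"));
        String targetRef = blobId + ':' + hmacHex(blobId, "target-key".getBytes("UTF-8"));
        System.out.println(sourceRef.equals(targetRef)); // prints "false"
    }
}
{code}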
--
This message was sent by Atlassian Jira
(v8.20.7#820007)