[jira] [Commented] (OAK-9713) SAS URI Support in Oak-upgrade

2022-03-09 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503630#comment-17503630
 ] 

Matt Ryan commented on OAK-9713:


Who should this issue be assigned to?  Please ensure this ticket is assigned to 
someone before a PR is merged.

> SAS URI Support in Oak-upgrade
> --
>
> Key: OAK-9713
> URL: https://issues.apache.org/jira/browse/OAK-9713
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segment-azure
>Reporter: Arun Kumar Ram
>Priority: Major
>
> Add support to Oak-upgrade for authentication and authorisation using Azure 
> Shared Access Signature (SAS) URIs.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (OAK-9710) Allow direct download client to specify a shorter signed URI TTL

2022-03-02 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17500358#comment-17500358
 ] 

Matt Ryan commented on OAK-9710:


A draft PR covering the proposed feature is at 
[https://github.com/apache/jackrabbit-oak/pull/509].  Please review the 
proposal in this PR.

> Allow direct download client to specify a shorter signed URI TTL
> 
>
> Key: OAK-9710
> URL: https://issues.apache.org/jira/browse/OAK-9710
> Project: Jackrabbit Oak
>  Issue Type: Story
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.42.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> When you request a direct download URI from cloud blob storage, the TTL that 
> is specified for the URI is set to a default value that is specified in 
> configuration.
> The proposal here is to consider extending the capabilities of requesting a 
> direct download URI such that a client can specify their own TTL, *so long 
> as* that TTL does not exceed a maximum value specified in the configuration.
> In other words - suppose the default configured TTL is 1800 (30 minutes) and 
> the configured max TTL is 3600 (60 minutes).  In this case:
>  * A client that does not specify any other TTL value would get URIs that 
> expire in 1800 seconds.
>  * A client that chooses to specify a TTL value could provide any value 
> greater than 0 but less than or equal to 3600 seconds.
>  * Specifying a TTL value of over 3600 would be an error condition.
>  * Specifying a TTL value of 0 would be an error condition.
>  * Specifying a TTL value of less than 0 would be the same as the default 
> value (-1), meaning to use the configured value (i.e. 1800 seconds).
> The maximum TTL value would be considered an optional configuration value.  
> If not specified, the default configured TTL would be used (which is required 
> to enable the direct download feature).  If specified but lower than the 
> default, the actual max would be the default value (i.e. {{max(default TTL, 
> max TTL)}}).
> What should happen if an error condition occurs?  Typically in the 
> direct download code an error results in the return of a {{null}} value for 
> the URI, accompanied by a log message - so that'd be my proposal for this 
> situation also.
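The TTL rules described above could be sketched roughly as follows.  This is a 
hypothetical helper for illustration only, not the actual Oak API; returning -1 
stands in for the error condition, where the caller would return a {{null}} URI 
and log a message:

```java
// Hypothetical sketch of the proposed TTL resolution rules; the class and
// method names are illustrative, not part of the Oak API.
public class TtlResolver {
    /**
     * Resolves the effective TTL (in seconds) for a signed download URI.
     *
     * @param requestedTtl TTL requested by the client; any negative value
     *                     (the default is -1) means "use the configured value".
     * @param defaultTtl   configured default TTL (e.g. 1800).
     * @param maxTtl       configured max TTL, or -1 if not configured.
     * @return the effective TTL, or -1 to signal an error condition
     *         (null URI plus a log message in the direct download code).
     */
    public static long resolveTtl(long requestedTtl, long defaultTtl, long maxTtl) {
        // The max TTL is optional; if absent or lower than the default, the
        // effective max is the default, i.e. max(default TTL, max TTL).
        long effectiveMax = Math.max(defaultTtl, maxTtl);
        if (requestedTtl < 0) {
            return defaultTtl;  // negative request means "use configured value"
        }
        if (requestedTtl == 0 || requestedTtl > effectiveMax) {
            return -1;          // error condition: 0 or above the effective max
        }
        return requestedTtl;
    }
}
```

With a default of 1800 and a max of 3600, a request of 900 yields 900, a request 
of -1 yields 1800, and requests of 0 or 3601 signal the error condition.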



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (OAK-9710) Allow direct download client to specify a shorter signed URI TTL

2022-03-02 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-9710:
---
Description: 
When you request a direct download URI from cloud blob storage, the TTL that is 
specified for the URI is set to a default value that is specified in 
configuration.

The proposal here is to consider extending the capabilities of requesting a 
direct download URI such that a client can specify their own TTL, *so long as* 
that TTL does not exceed a maximum value specified in the configuration.

In other words - suppose the default configured TTL is 1800 (30 minutes) and 
the configured max TTL is 3600 (60 minutes).  In this case:
 * A client that does not specify any other TTL value would get URIs that 
expire in 1800 seconds.
 * A client that chooses to specify a TTL value could provide any value greater 
than 0 but less than or equal to 3600 seconds.
 * Specifying a TTL value of over 3600 would be an error condition.
 * Specifying a TTL value of 0 would be an error condition.
 * Specifying a TTL value of less than 0 would be the same as the default value 
(-1), meaning to use the configured value (i.e. 1800 seconds).

The maximum TTL value would be considered an optional configuration value.  If 
not specified, the default configured TTL would be used (which is required to 
enable the direct download feature).  If specified but lower than the default, 
the actual max would be the default value (i.e. {{max(default TTL, max 
TTL)}}).

What should happen if an error condition occurs?  Typically in the direct 
download code an error results in the return of a {{null}} value for the URI, 
accompanied by a log message - so that'd be my proposal for this situation also.

  was:
When you request a direct download URI from cloud blob storage, the TTL that is 
specified for the URI is set to a default value that is specified in 
configuration.

We could consider extending the capabilities of requesting a direct download 
URI such that a client can specify their own TTL, *so long as* that TTL does 
not exceed the value specified in the configuration.  This would allow a client 
to request a more restrictive-use URI, but not the opposite.

In other words - supposing the default configured TTL is 1800 (30 minutes).  In 
this case:
 * A client that does not specify any other TTL value would get URIs that 
expire in 1800 seconds.
 * A client that chooses to specify a TTL value could provide any value greater 
than 0 but less than or equal to 1800 seconds.
 * Specifying a TTL value of over 1800 would be an error condition.
 * Specifying a TTL value of 0 would be an error condition.
 * Specifying a TTL value of less than 0 would be the same as the default value 
(-1), meaning to use the configured value.

What error condition should occur?  Typically in the direct download code an 
error results in the return of a {{null}} value for the URI, accompanied by a 
log message - so that'd be my proposal for this situation also.


> Allow direct download client to specify a shorter signed URI TTL
> 
>
> Key: OAK-9710
> URL: https://issues.apache.org/jira/browse/OAK-9710
> Project: Jackrabbit Oak
>  Issue Type: Story
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.42.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> When you request a direct download URI from cloud blob storage, the TTL that 
> is specified for the URI is set to a default value that is specified in 
> configuration.
> The proposal here is to consider extending the capabilities of requesting a 
> direct download URI such that a client can specify their own TTL, *so long 
> as* that TTL does not exceed a maximum value specified in the configuration.
> In other words - suppose the default configured TTL is 1800 (30 minutes) and 
> the configured max TTL is 3600 (60 minutes).  In this case:
>  * A client that does not specify any other TTL value would get URIs that 
> expire in 1800 seconds.
>  * A client that chooses to specify a TTL value could provide any value 
> greater than 0 but less than or equal to 3600 seconds.
>  * Specifying a TTL value of over 3600 would be an error condition.
>  * Specifying a TTL value of 0 would be an error condition.
>  * Specifying a TTL value of less than 0 would be the same as the default 
> value (-1), meaning to use the configured value (i.e. 1800 seconds).
> The maximum TTL value would be considered an optional configuration value.  
> If not specified, the default configured TTL would be used (which is required 
> to enable the direct download feature).  If specified but lower than the 
> default, the actual max would be the default value (i.e. {{max(default TTL, 
> max TTL)}}).
> What should happen if an error condition occurs?  Typically in the direct 
> download code an error results in the return of a {{null}} value for the 
> URI, accompanied by a log message - so that'd be my proposal for this 
> situation also.

[jira] [Commented] (OAK-9712) Add support for Azure SAS URIs in oak-run

2022-03-02 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17500260#comment-17500260
 ] 

Matt Ryan commented on OAK-9712:


Should the components for this issue be {{blob-cloud-azure}} instead of 
{{segment-azure}}?  I don't understand the use case for getting SAS URIs 
via {{oak-run}} for {{segment-azure}}.  Maybe a description would help?

> Add support for Azure SAS URIs in oak-run
> -
>
> Key: OAK-9712
> URL: https://issues.apache.org/jira/browse/OAK-9712
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: oak-run, segment-azure
>Reporter: Kunal Shubham
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (OAK-9710) Allow direct download client to specify a shorter signed URI TTL

2022-02-25 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-9710:
---
Description: 
When you request a direct download URI from cloud blob storage, the TTL that is 
specified for the URI is set to a default value that is specified in 
configuration.

We could consider extending the capabilities of requesting a direct download 
URI such that a client can specify their own TTL, *so long as* that TTL does 
not exceed the value specified in the configuration.  This would allow a client 
to request a more restrictive-use URI, but not the opposite.

In other words - supposing the default configured TTL is 1800 (30 minutes).  In 
this case:
 * A client that does not specify any other TTL value would get URIs that 
expire in 1800 seconds.
 * A client that chooses to specify a TTL value could provide any value greater 
than 0 but less than or equal to 1800 seconds.
 * Specifying a TTL value of over 1800 would be an error condition.
 * Specifying a TTL value of 0 would be an error condition.
 * Specifying a TTL value of less than 0 would be the same as the default value 
(-1), meaning to use the configured value.

What error condition should occur?  Typically in the direct download code an 
error results in the return of a {{null}} value for the URI, accompanied by a 
log message - so that'd be my proposal for this situation also.

  was:
When you request a direct download URI from cloud blob storage, the TTL that is 
specified for the URI is set to a default value that is specified in 
configuration.

We could consider extending the capabilities of requesting a direct download 
URI such that a client can specify their own TTL, *so long as* that TTL does 
not exceed the value specified in the configuration.  This would allow a client 
to request a more restrictive-use URI, but not the opposite.

In other words - supposing the default configured TTL is 1800 (30 minutes).  In 
this case:
 * A client that does not specify any other TTL value would get URIs that 
expire in 1800 seconds.
 * A client that chooses to specify a TTL value could provide any value greater 
than 0 but less than or equal to 1800 seconds.
 * Specifying a TTL value of over 1800 would be an error condition.
 * Specifying a TTL value of 0 would be an error condition.
 * Specifying a TTL value of less than 0 would be the same as the default value 
(-1), meaning to use the configured value.

What error condition should occur?  Typically in the direct download code an 
error results in the return of a {{null}} value for the URI, accompanied by a 
log message.


> Allow direct download client to specify a shorter signed URI TTL
> 
>
> Key: OAK-9710
> URL: https://issues.apache.org/jira/browse/OAK-9710
> Project: Jackrabbit Oak
>  Issue Type: Story
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.42.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> When you request a direct download URI from cloud blob storage, the TTL that 
> is specified for the URI is set to a default value that is specified in 
> configuration.
> We could consider extending the capabilities of requesting a direct download 
> URI such that a client can specify their own TTL, *so long as* that TTL does 
> not exceed the value specified in the configuration.  This would allow a 
> client to request a more restrictive-use URI, but not the opposite.
> In other words - supposing the default configured TTL is 1800 (30 minutes).  
> In this case:
>  * A client that does not specify any other TTL value would get URIs that 
> expire in 1800 seconds.
>  * A client that chooses to specify a TTL value could provide any value 
> greater than 0 but less than or equal to 1800 seconds.
>  * Specifying a TTL value of over 1800 would be an error condition.
>  * Specifying a TTL value of 0 would be an error condition.
>  * Specifying a TTL value of less than 0 would be the same as the default 
> value (-1), meaning to use the configured value.
> What error condition should occur?  Typically in the direct download code an 
> error results in the return of a {{null}} value for the URI, accompanied by a 
> log message - so that'd be my proposal for this situation also.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (OAK-9710) Allow direct download client to specify a shorter signed URI TTL

2022-02-25 Thread Matt Ryan (Jira)
Matt Ryan created OAK-9710:
--

 Summary: Allow direct download client to specify a shorter signed 
URI TTL
 Key: OAK-9710
 URL: https://issues.apache.org/jira/browse/OAK-9710
 Project: Jackrabbit Oak
  Issue Type: Story
  Components: blob-cloud, blob-cloud-azure, blob-plugins
Affects Versions: 1.42.0
Reporter: Matt Ryan
Assignee: Matt Ryan


When you request a direct download URI from cloud blob storage, the TTL that is 
specified for the URI is set to a default value that is specified in 
configuration.

We could consider extending the capabilities of requesting a direct download 
URI such that a client can specify their own TTL, *so long as* that TTL does 
not exceed the value specified in the configuration.  This would allow a client 
to request a more restrictive-use URI, but not the opposite.

In other words - supposing the default configured TTL is 1800 (30 minutes).  In 
this case:
 * A client that does not specify any other TTL value would get URIs that 
expire in 1800 seconds.
 * A client that chooses to specify a TTL value could provide any value greater 
than 0 but less than or equal to 1800 seconds.
 * Specifying a TTL value of over 1800 would be an error condition.
 * Specifying a TTL value of 0 would be an error condition.
 * Specifying a TTL value of less than 0 would be the same as the default value 
(-1), meaning to use the configured value.

What error condition should occur?  Typically in the direct download code an 
error results in the return of a {{null}} value for the URI, accompanied by a 
log message.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (OAK-9488) Extra logging in org.apache.jackrabbit.oak.run.DataStoreCommand

2021-10-04 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan resolved OAK-9488.

Resolution: Fixed

> Extra logging in org.apache.jackrabbit.oak.run.DataStoreCommand 
> 
>
> Key: OAK-9488
> URL: https://issues.apache.org/jira/browse/OAK-9488
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-azure
>Reporter: Paul Chibulcuteanu
>Assignee: Matt Ryan
>Priority: Minor
>
> Running the datastore command to retrieve the index binaries' blob IDs from 
> Azure with the following options:
> {code}
> 16.06.2021 20:49:50.452 *INFO* [main] o.a.j.oak.run.DataStoreCommand - 
> Command line arguments used for datastore command [--dump-ref --out-dir 
> /opt/out_dir --azureds /DataStore.config --verboseRootPath /oak:index/ 
> --verbosePathInclusionRegex /*/:data,/*/:suggest-data --verbose] 
> 16.06.2021 20:49:50.454 *INFO* [main] o.a.j.oak.run.DataStoreCommand - System 
> properties and vm options passed [-Dsegment.timeout.interval=60, 
> -Dsegment.timeout.execution=325] 
> {code}
> can sometimes take minutes. This operation is usually fast (a few seconds).
> The log of the command only displays little info about what is happening:
> {code}
> 16.06.2021 20:49:50.839 *INFO* [pool-3-thread-1] 
> o.a.j.o.plugins.blob.FileCache - Cache built with [0] files from file system 
> in [0] seconds 
> 16.06.2021 20:54:45.901 *INFO* [main] o.a.j.oak.run.DataStoreCommand - 
> Starting dump of blob references by traversing 
> {code}
> There is no information on where the 5 minutes is spent. It could be a 
> network hiccup, maybe Azure connectivity issues, or too many sessions opened, 
> but there is no way to find out. We should add extra logging in 
> _org.apache.jackrabbit.oak.run.DataStoreCommand_ in order to be able to 
> investigate such slowness cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9488) Extra logging in org.apache.jackrabbit.oak.run.DataStoreCommand

2021-10-04 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17424029#comment-17424029
 ] 

Matt Ryan commented on OAK-9488:


PR [https://github.com/apache/jackrabbit-oak/pull/380] has been merged to trunk

> Extra logging in org.apache.jackrabbit.oak.run.DataStoreCommand 
> 
>
> Key: OAK-9488
> URL: https://issues.apache.org/jira/browse/OAK-9488
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-azure
>Reporter: Paul Chibulcuteanu
>Assignee: Matt Ryan
>Priority: Minor
>
> Running the datastore command to retrieve the index binaries' blob IDs from 
> Azure with the following options:
> {code}
> 16.06.2021 20:49:50.452 *INFO* [main] o.a.j.oak.run.DataStoreCommand - 
> Command line arguments used for datastore command [--dump-ref --out-dir 
> /opt/out_dir --azureds /DataStore.config --verboseRootPath /oak:index/ 
> --verbosePathInclusionRegex /*/:data,/*/:suggest-data --verbose] 
> 16.06.2021 20:49:50.454 *INFO* [main] o.a.j.oak.run.DataStoreCommand - System 
> properties and vm options passed [-Dsegment.timeout.interval=60, 
> -Dsegment.timeout.execution=325] 
> {code}
> can sometimes take minutes. This operation is usually fast (a few seconds).
> The log of the command only displays little info about what is happening:
> {code}
> 16.06.2021 20:49:50.839 *INFO* [pool-3-thread-1] 
> o.a.j.o.plugins.blob.FileCache - Cache built with [0] files from file system 
> in [0] seconds 
> 16.06.2021 20:54:45.901 *INFO* [main] o.a.j.oak.run.DataStoreCommand - 
> Starting dump of blob references by traversing 
> {code}
> There is no information on where the 5 minutes is spent. It could be a 
> network hiccup, maybe Azure connectivity issues, or too many sessions opened, 
> but there is no way to find out. We should add extra logging in 
> _org.apache.jackrabbit.oak.run.DataStoreCommand_ in order to be able to 
> investigate such slowness cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9488) Extra logging in org.apache.jackrabbit.oak.run.DataStoreCommand

2021-09-30 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17423051#comment-17423051
 ] 

Matt Ryan commented on OAK-9488:


[~chibulcu] / [~amjain] / [~nitigupt] can you please review the PR?

> Extra logging in org.apache.jackrabbit.oak.run.DataStoreCommand 
> 
>
> Key: OAK-9488
> URL: https://issues.apache.org/jira/browse/OAK-9488
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-azure
>Reporter: Paul Chibulcuteanu
>Assignee: Matt Ryan
>Priority: Minor
>
> Running the datastore command to retrieve the index binaries' blob IDs from 
> Azure with the following options:
> {code}
> 16.06.2021 20:49:50.452 *INFO* [main] o.a.j.oak.run.DataStoreCommand - 
> Command line arguments used for datastore command [--dump-ref --out-dir 
> /opt/out_dir --azureds /DataStore.config --verboseRootPath /oak:index/ 
> --verbosePathInclusionRegex /*/:data,/*/:suggest-data --verbose] 
> 16.06.2021 20:49:50.454 *INFO* [main] o.a.j.oak.run.DataStoreCommand - System 
> properties and vm options passed [-Dsegment.timeout.interval=60, 
> -Dsegment.timeout.execution=325] 
> {code}
> can sometimes take minutes. This operation is usually fast (a few seconds).
> The log of the command only displays little info about what is happening:
> {code}
> 16.06.2021 20:49:50.839 *INFO* [pool-3-thread-1] 
> o.a.j.o.plugins.blob.FileCache - Cache built with [0] files from file system 
> in [0] seconds 
> 16.06.2021 20:54:45.901 *INFO* [main] o.a.j.oak.run.DataStoreCommand - 
> Starting dump of blob references by traversing 
> {code}
> There is no information on where the 5 minutes is spent. It could be a 
> network hiccup, maybe Azure connectivity issues, or too many sessions opened, 
> but there is no way to find out. We should add extra logging in 
> _org.apache.jackrabbit.oak.run.DataStoreCommand_ in order to be able to 
> investigate such slowness cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9488) Extra logging in org.apache.jackrabbit.oak.run.DataStoreCommand

2021-09-30 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17423049#comment-17423049
 ] 

Matt Ryan commented on OAK-9488:


PR:  [https://github.com/apache/jackrabbit-oak/pull/380]

> Extra logging in org.apache.jackrabbit.oak.run.DataStoreCommand 
> 
>
> Key: OAK-9488
> URL: https://issues.apache.org/jira/browse/OAK-9488
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-azure
>Reporter: Paul Chibulcuteanu
>Assignee: Matt Ryan
>Priority: Minor
>
> Running the datastore command to retrieve the index binaries' blob IDs from 
> Azure with the following options:
> {code}
> 16.06.2021 20:49:50.452 *INFO* [main] o.a.j.oak.run.DataStoreCommand - 
> Command line arguments used for datastore command [--dump-ref --out-dir 
> /opt/out_dir --azureds /DataStore.config --verboseRootPath /oak:index/ 
> --verbosePathInclusionRegex /*/:data,/*/:suggest-data --verbose] 
> 16.06.2021 20:49:50.454 *INFO* [main] o.a.j.oak.run.DataStoreCommand - System 
> properties and vm options passed [-Dsegment.timeout.interval=60, 
> -Dsegment.timeout.execution=325] 
> {code}
> can sometimes take minutes. This operation is usually fast (a few seconds).
> The log of the command only displays little info about what is happening:
> {code}
> 16.06.2021 20:49:50.839 *INFO* [pool-3-thread-1] 
> o.a.j.o.plugins.blob.FileCache - Cache built with [0] files from file system 
> in [0] seconds 
> 16.06.2021 20:54:45.901 *INFO* [main] o.a.j.oak.run.DataStoreCommand - 
> Starting dump of blob references by traversing 
> {code}
> There is no information on where the 5 minutes is spent. It could be a 
> network hiccup, maybe Azure connectivity issues, or too many sessions opened, 
> but there is no way to find out. We should add extra logging in 
> _org.apache.jackrabbit.oak.run.DataStoreCommand_ in order to be able to 
> investigate such slowness cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9488) Extra logging in org.apache.jackrabbit.oak.run.DataStoreCommand

2021-09-30 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17423034#comment-17423034
 ] 

Matt Ryan commented on OAK-9488:


I'm assuming the slowness is around the creation of the NodeStoreFixture [0]:
{noformat}
NodeStoreFixture fixture = NodeStoreFixtureProvider.create(opts);
closer.register(fixture); {noformat}
I'll add some logging around that and a few other places to make it easier to 
narrow down where the slowness is occurring.  If it is inside the 
NodeStoreFixture, however, getting more information will require changes down 
in that layer.

 

[0] - 
https://github.com/apache/jackrabbit-oak/blob/aca2aaeee7334281500f2b065b8226013f9a97df/oak-run/src/main/java/org/apache/jackrabbit/oak/run/DataStoreCommand.java#L147
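A minimal sketch of the timing/logging idea, as a generic wrapper.  The 
{{Timed}} class and its method name are illustrative only; the real change 
would wrap the {{NodeStoreFixtureProvider.create(opts)}} call and log through 
slf4j rather than stdout:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Hypothetical timing wrapper: runs a unit of work, reports the elapsed
// time, and returns the result unchanged.
public class Timed {
    public static <T> T logElapsed(String label, Supplier<T> work) {
        long start = System.nanoTime();
        T result = work.get();
        long ms = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        // Stand-in for: log.info("{} completed in [{}] ms", label, ms);
        System.out.println(label + " completed in [" + ms + "] ms");
        return result;
    }
}
```

Usage would then look something like 
{{Timed.logElapsed("NodeStoreFixture creation", () -> 
NodeStoreFixtureProvider.create(opts))}}, so a five-minute fixture creation 
shows up explicitly in the log.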

> Extra logging in org.apache.jackrabbit.oak.run.DataStoreCommand 
> 
>
> Key: OAK-9488
> URL: https://issues.apache.org/jira/browse/OAK-9488
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-azure
>Reporter: Paul Chibulcuteanu
>Assignee: Matt Ryan
>Priority: Minor
>
> Running the datastore command to retrieve the index binaries' blob IDs from 
> Azure with the following options:
> {code}
> 16.06.2021 20:49:50.452 *INFO* [main] o.a.j.oak.run.DataStoreCommand - 
> Command line arguments used for datastore command [--dump-ref --out-dir 
> /opt/out_dir --azureds /DataStore.config --verboseRootPath /oak:index/ 
> --verbosePathInclusionRegex /*/:data,/*/:suggest-data --verbose] 
> 16.06.2021 20:49:50.454 *INFO* [main] o.a.j.oak.run.DataStoreCommand - System 
> properties and vm options passed [-Dsegment.timeout.interval=60, 
> -Dsegment.timeout.execution=325] 
> {code}
> can sometimes take minutes. This operation is usually fast (a few seconds).
> The log of the command only displays little info about what is happening:
> {code}
> 16.06.2021 20:49:50.839 *INFO* [pool-3-thread-1] 
> o.a.j.o.plugins.blob.FileCache - Cache built with [0] files from file system 
> in [0] seconds 
> 16.06.2021 20:54:45.901 *INFO* [main] o.a.j.oak.run.DataStoreCommand - 
> Starting dump of blob references by traversing 
> {code}
> There is no information on where the 5 minutes is spent. It could be a 
> network hiccup, maybe Azure connectivity issues, or too many sessions opened, 
> but there is no way to find out. We should add extra logging in 
> _org.apache.jackrabbit.oak.run.DataStoreCommand_ in order to be able to 
> investigate such slowness cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (OAK-9488) Extra logging in org.apache.jackrabbit.oak.run.DataStoreCommand

2021-09-30 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan reassigned OAK-9488:
--

Assignee: Matt Ryan

> Extra logging in org.apache.jackrabbit.oak.run.DataStoreCommand 
> 
>
> Key: OAK-9488
> URL: https://issues.apache.org/jira/browse/OAK-9488
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-azure
>Reporter: Paul Chibulcuteanu
>Assignee: Matt Ryan
>Priority: Minor
>
> Running the datastore command to retrieve the index binaries' blob IDs from 
> Azure with the following options:
> {code}
> 16.06.2021 20:49:50.452 *INFO* [main] o.a.j.oak.run.DataStoreCommand - 
> Command line arguments used for datastore command [--dump-ref --out-dir 
> /opt/out_dir --azureds /DataStore.config --verboseRootPath /oak:index/ 
> --verbosePathInclusionRegex /*/:data,/*/:suggest-data --verbose] 
> 16.06.2021 20:49:50.454 *INFO* [main] o.a.j.oak.run.DataStoreCommand - System 
> properties and vm options passed [-Dsegment.timeout.interval=60, 
> -Dsegment.timeout.execution=325] 
> {code}
> can sometimes take minutes. This operation is usually fast (a few seconds).
> The log of the command only displays little info about what is happening:
> {code}
> 16.06.2021 20:49:50.839 *INFO* [pool-3-thread-1] 
> o.a.j.o.plugins.blob.FileCache - Cache built with [0] files from file system 
> in [0] seconds 
> 16.06.2021 20:54:45.901 *INFO* [main] o.a.j.oak.run.DataStoreCommand - 
> Starting dump of blob references by traversing 
> {code}
> There is no information on where the 5 minutes is spent. It could be a 
> network hiccup, maybe Azure connectivity issues, or too many sessions opened, 
> but there is no way to find out. We should add extra logging in 
> _org.apache.jackrabbit.oak.run.DataStoreCommand_ in order to be able to 
> investigate such slowness cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (OAK-9386) S3Backend may return incorrect download URI

2021-03-12 Thread Matt Ryan (Jira)
Matt Ryan created OAK-9386:
--

 Summary: S3Backend may return incorrect download URI
 Key: OAK-9386
 URL: https://issues.apache.org/jira/browse/OAK-9386
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: blob-cloud
Reporter: Matt Ryan
Assignee: Matt Ryan


S3Backend suffers from the same issue that AzureBlobStoreBackend did in 
OAK-9384.  A fix like the one for OAK-9384 needs to be applied also to S3.

For context:  The implementation creates a cache key with the identifier and 
the domain but does not take the download options into account. That is, two 
calls to get a download URI for the same identifier and different download 
options will return the same download URI.
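The shape of the fix could be sketched as a cache key that includes the 
download options alongside the identifier and domain, so different options map 
to different cached URIs.  The class and field names below are hypothetical, 
not the actual Oak implementation:

```java
import java.util.Objects;

// Illustrative cache key: equality and hashing cover the download options
// (here represented by content disposition and content type), not just the
// identifier and domain as in the buggy behavior described above.
public class DownloadUriCacheKey {
    private final String identifier;
    private final String domain;
    private final String contentDisposition; // part of the download options
    private final String contentType;        // part of the download options

    public DownloadUriCacheKey(String identifier, String domain,
                               String contentDisposition, String contentType) {
        this.identifier = identifier;
        this.domain = domain;
        this.contentDisposition = contentDisposition;
        this.contentType = contentType;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof DownloadUriCacheKey)) return false;
        DownloadUriCacheKey k = (DownloadUriCacheKey) o;
        return identifier.equals(k.identifier)
                && domain.equals(k.domain)
                && Objects.equals(contentDisposition, k.contentDisposition)
                && Objects.equals(contentType, k.contentType);
    }

    @Override
    public int hashCode() {
        return Objects.hash(identifier, domain, contentDisposition, contentType);
    }
}
```

With this key, two requests for the same identifier but different download 
options miss each other in the cache instead of returning the same URI.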



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9384) AzureBlobStoreBackend may return incorrect download URI

2021-03-12 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17300441#comment-17300441
 ] 

Matt Ryan commented on OAK-9384:


LGTM [~mreutegg].  That was exactly what I was thinking of doing to fix this, 
but you were quicker :)

I commented on the PR indicating the same.

> AzureBlobStoreBackend may return incorrect download URI
> ---
>
> Key: OAK-9384
> URL: https://issues.apache.org/jira/browse/OAK-9384
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud-azure
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>
> AzureBlobStoreBackend may return an incorrect URI when download URI caching 
> is enabled. The implementation creates a cache key with the identifier and 
> the domain but does not take the download options into account. That is, two 
> calls to get a download URI for the same identifier and different download 
> options will return the same download URI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-9304) Filename with special characters in direct download URI Content-Disposition are causing HTTP 400 errors from Azure

2021-01-11 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-9304:
---
Fix Version/s: 1.38.0
               1.22.6

> Filename with special characters in direct download URI Content-Disposition 
> are causing HTTP 400 errors from Azure
> --
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.6, 1.38.0
>
>
> When generating a direct download URI for a filename with certain 
> non-standard characters in the name, it can cause the resulting signed URI to 
> be considered invalid by some blob storage services (Azure in particular).  
> This can lead to blob storage services being unable to service the URI 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> Azure blob storage service fails trying to parse a URI with that 
> Content-Disposition header specification in the query string.  It instead 
> should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
> The "filename" portion of the Content-Disposition needs to consist of 
> ISO-8859-1 characters, per [https://tools.ietf.org/html/rfc6266#section-4.3] 
> in this paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> Note that the purpose of this ticket is to address compatibility issues with 
> blob storage services, not to ensure ISO-8859-1 compatibility.  However, by 
> encoding the "filename" portion using standard Java character set encoding 
> conversion (e.g. {{Charsets.ISO_8859_1.encode(fileName)}}), we can generate a 
> URI that works with Azure, delivers the proper Content-Disposition header in 
> responses, and generates the proper client result (meaning, the correct name 
> for the downloaded file).
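The encoding step mentioned in the last paragraph can be sketched as a simple round-trip through ISO-8859-1; this is illustrative, not necessarily the exact Oak code. {{String.getBytes(Charset)}} substitutes '?' for characters the charset cannot represent, which is where the "Ausla?ndische.jpg" form comes from:

```java
import java.nio.charset.StandardCharsets;

public class FileNameEncodingSketch {
    // Round-trip a file name through ISO-8859-1.  Characters outside the
    // charset (e.g. U+0308, the combining diaeresis in the decomposed form
    // of the umlaut) are replaced with '?' by the encoder.
    static String toIso88591(String fileName) {
        byte[] latin1 = fileName.getBytes(StandardCharsets.ISO_8859_1);
        return new String(latin1, StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) {
        // "Ausla" + combining diaeresis + "ndische.jpg" (decomposed form)
        String decomposed = "Ausla\u0308ndische.jpg";
        System.out.println(toIso88591(decomposed)); // Ausla?ndische.jpg
    }
}
```

Note that the precomposed form (U+00E4, "ä") is representable in ISO-8859-1 and would pass through unchanged; it is the decomposed form in the example above that triggers the substitution.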



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (OAK-9304) Filename with special characters in direct download URI Content-Disposition are causing HTTP 400 errors from Azure

2021-01-11 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan resolved OAK-9304.

Resolution: Fixed

> Filename with special characters in direct download URI Content-Disposition 
> are causing HTTP 400 errors from Azure
> --
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.6, 1.38.0
>
>
> When generating a direct download URI for a filename with certain 
> non-standard characters in the name, it can cause the resulting signed URI to 
> be considered invalid by some blob storage services (Azure in particular).  
> This can lead to blob storage services being unable to service the URI 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> Azure blob storage service fails trying to parse a URI with that 
> Content-Disposition header specification in the query string.  It instead 
> should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
> The "filename" portion of the Content-Disposition needs to consist of 
> ISO-8859-1 characters, per [https://tools.ietf.org/html/rfc6266#section-4.3] 
> in this paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> Note that the purpose of this ticket is to address compatibility issues with 
> blob storage services, not to ensure ISO-8859-1 compatibility.  However, by 
> encoding the "filename" portion using standard Java character set encoding 
> conversion (e.g. {{Charsets.ISO_8859_1.encode(fileName)}}), we can generate a 
> URI that works with Azure, delivers the proper Content-Disposition header in 
> responses, and generates the proper client result (meaning, the correct name 
> for the downloaded file).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9304) Filename with special characters in direct download URI Content-Disposition are causing HTTP 400 errors from Azure

2021-01-08 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17261630#comment-17261630
 ] 

Matt Ryan commented on OAK-9304:


Fixed in 1.22 in 
[r1885279|https://svn.apache.org/viewvc?view=revision=1885279].

> Filename with special characters in direct download URI Content-Disposition 
> are causing HTTP 400 errors from Azure
> --
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> When generating a direct download URI for a filename with certain 
> non-standard characters in the name, it can cause the resulting signed URI to 
> be considered invalid by some blob storage services (Azure in particular).  
> This can lead to blob storage services being unable to service the URI 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> Azure blob storage service fails trying to parse a URI with that 
> Content-Disposition header specification in the query string.  It instead 
> should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
> The "filename" portion of the Content-Disposition needs to consist of 
> ISO-8859-1 characters, per [https://tools.ietf.org/html/rfc6266#section-4.3] 
> in this paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> Note that the purpose of this ticket is to address compatibility issues with 
> blob storage services, not to ensure ISO-8859-1 compatibility.  However, by 
> encoding the "filename" portion using standard Java character set encoding 
> conversion (e.g. {{Charsets.ISO_8859_1.encode(fileName)}}), we can generate a 
> URI that works with Azure, delivers the proper Content-Disposition header in 
> responses, and generates the proper client result (meaning, the correct name 
> for the downloaded file).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9318) oak-segment-aws integration tests fail when region is us-east-1

2021-01-08 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17261586#comment-17261586
 ] 

Matt Ryan commented on OAK-9318:


A change in Amazon's SDK means that when the region is {{us-east-1}} you 
simply don't specify a region at all.  Unfortunately our code always specifies 
one, which causes it to fail (dumb, I know).  The workaround is to detect 
whether the configured region is {{us-east-1}} and, if so, send {{null}} as 
the region instead.

See [0] for an example of how we're handling this in {{oak-blob-cloud}}.

[0] - 
[https://github.com/apache/jackrabbit-oak/blob/efc9485b688be249eac2507e1658d4157303b640/oak-blob-cloud/src/main/java/org/apache/jackrabbit/oak/blob/cloud/s3/S3Backend.java#L266-L274]
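Reduced to its essence, the workaround looks like this (the method name is an illustrative assumption; see the linked {{S3Backend}} code for the real handling):

```java
public class RegionSketch {
    // The SDK expects no explicit region when targeting us-east-1, so map
    // that region to null before configuring the client.
    static String normalizeRegion(String configuredRegion) {
        return "us-east-1".equals(configuredRegion) ? null : configuredRegion;
    }
}
```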

> oak-segment-aws integration tests fail when region is us-east-1
> ---
>
> Key: OAK-9318
> URL: https://issues.apache.org/jira/browse/OAK-9318
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-aws
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Priority: Major
>
> If you run the integration tests for {{oak-segment-aws}}, the tests will fail 
> if your configuration specifies the AWS region {{us-east-1}}.
> To reproduce, in the {{oak-it}} module run {{mvn test 
> -Ds3.config=/path/to/aws.config -PintegrationTesting}}.  If 
> {{s3Region=us-east-1}} is set in your config the tests will fail when trying 
> to create a bucket for the test content.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (OAK-9318) oak-segment-aws integration tests fail when region is us-east-1

2021-01-08 Thread Matt Ryan (Jira)
Matt Ryan created OAK-9318:
--

 Summary: oak-segment-aws integration tests fail when region is 
us-east-1
 Key: OAK-9318
 URL: https://issues.apache.org/jira/browse/OAK-9318
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segment-aws
Affects Versions: 1.36.0
Reporter: Matt Ryan


If you run the integration tests for {{oak-segment-aws}}, the tests will fail 
if your configuration specifies the AWS region {{us-east-1}}.

To reproduce, in the {{oak-it}} module run {{mvn test 
-Ds3.config=/path/to/aws.config -PintegrationTesting}}.  If 
{{s3Region=us-east-1}} is set in your config the tests will fail when trying to 
create a bucket for the test content.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9304) Filename with special characters in direct download URI Content-Disposition are causing HTTP 400 errors from Azure

2021-01-08 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17261578#comment-17261578
 ] 

Matt Ryan commented on OAK-9304:


Fixed in 
[r1885277|https://svn.apache.org/viewvc?view=revision=1885277].

> Filename with special characters in direct download URI Content-Disposition 
> are causing HTTP 400 errors from Azure
> --
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> When generating a direct download URI for a filename with certain 
> non-standard characters in the name, it can cause the resulting signed URI to 
> be considered invalid by some blob storage services (Azure in particular).  
> This can lead to blob storage services being unable to service the URI 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> Azure blob storage service fails trying to parse a URI with that 
> Content-Disposition header specification in the query string.  It instead 
> should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
> The "filename" portion of the Content-Disposition needs to consist of 
> ISO-8859-1 characters, per [https://tools.ietf.org/html/rfc6266#section-4.3] 
> in this paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> Note that the purpose of this ticket is to address compatibility issues with 
> blob storage services, not to ensure ISO-8859-1 compatibility.  However, by 
> encoding the "filename" portion using standard Java character set encoding 
> conversion (e.g. {{Charsets.ISO_8859_1.encode(fileName)}}), we can generate a 
> URI that works with Azure, delivers the proper Content-Disposition header in 
> responses, and generates the proper client result (meaning, the correct name 
> for the downloaded file).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9304) Filename with special characters in direct download URI Content-Disposition are causing HTTP 400 errors from Azure

2021-01-06 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17260171#comment-17260171
 ] 

Matt Ryan commented on OAK-9304:


[~reschke] thank you for the feedback.  I updated my branch taking your 
feedback into account, and this still seems to work with Azure's blob storage 
service.  Will you please take a look at the diff to see if I've changed it 
correctly?  If so I'll update trunk with my changes.

[https://github.com/apache/jackrabbit-oak/compare/trunk...mattvryan:OAK-9304-2?expand=1]

 

> Filename with special characters in direct download URI Content-Disposition 
> are causing HTTP 400 errors from Azure
> --
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> When generating a direct download URI for a filename with certain 
> non-standard characters in the name, it can cause the resulting signed URI to 
> be considered invalid by some blob storage services (Azure in particular).  
> This can lead to blob storage services being unable to service the URI 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> Azure blob storage service fails trying to parse a URI with that 
> Content-Disposition header specification in the query string.  It instead 
> should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
> The "filename" portion of the Content-Disposition needs to consist of 
> ISO-8859-1 characters, per [https://tools.ietf.org/html/rfc6266#section-4.3] 
> in this paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> Note that the purpose of this ticket is to address compatibility issues with 
> blob storage services, not to ensure ISO-8859-1 compatibility.  However, by 
> encoding the "filename" portion using standard Java character set encoding 
> conversion (e.g. {{Charsets.ISO_8859_1.encode(fileName)}}), we can generate a 
> URI that works with Azure, delivers the proper Content-Disposition header in 
> responses, and generates the proper client result (meaning, the correct name 
> for the downloaded file).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (OAK-9304) Filename with special characters in direct download URI Content-Disposition are causing HTTP 400 errors from Azure

2021-01-05 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17258433#comment-17258433
 ] 

Matt Ryan edited comment on OAK-9304 at 1/6/21, 12:43 AM:
--

Sure thing [~reschke].  Sorry, I've been on holidays :)

Previously, in regard to the example in the description above, you said:  "The 
first of the two entries looks perfectly ok to me."  The issue here is that the 
first one does not work with Azure blob storage service - it rejects the 
request as having an invalid character in the URI.  So this is less an issue of 
whether the URI is correct per RFCs, and more an issue that the URI does not 
properly work with Azure.

More details follow.

PRIOR TO THIS FIX:  When Oak would attempt to generate a direct binary access 
URI for a filename with characters outside the ISO-8859-1 character set, this 
would result in a URI that Azure would reject with a 400-level error.  The 
reason was due to Oak failing to properly encode this filename in the 
"filename" portion of the Content-Disposition header specification.

(As background, remember that Oak declares to the cloud storage the value that 
should be used in the Content-Disposition header for requests to the generated 
direct binary access URI.  In Oak we specify both the content disposition type 
and filenames for this.  See [0] and [1] for more info.)

Example:  Suppose the filename is "umläut.jpg".  Oak would specify a 
Content-Disposition header value of:
{noformat}
inline; filename="umläut.jpg"; filename*=UTF-8''umla%CC%88ut.jpg{noformat}
This is then specified in a query parameter in the direct access URI, so this 
information gets encoded.  It is probably this encoding change that Azure does 
not expect.  Since this portion of the URI is signed, the signature doesn't 
match and the request fails.

WITH THIS FIX:  A basic ISO-8859-1 encoding is done on the "filename" value of 
the header.  This change was based on RFC 6266 Section 4.3, which seems to suggest 
that only ISO-8859-1 characters are allowed for that value.

Thus the header now looks like this:
{noformat}
inline; filename="umla?ut.jpg"; filename*=UTF-8''umla%CC%88ut.jpg{noformat}
This header encodes and validates properly with Azure.  In testing, modern 
clients prefer the "filename*" portion, which results in the proper filename 
being used.

Please let me know if this is still unclear, or if it's clear now, let me know 
if you'd like me to update the bug description accordingly or just let it go :).

 

[0] - 
[https://jackrabbit.apache.org/oak/docs/features/direct-binary-access.html]

[1] - 
[https://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/api/binary/BinaryDownloadOptions.html]
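Putting the two parameters together, the fixed header value can be sketched as follows; the RFC 5987 percent-encoded name is passed in precomputed here purely for illustration:

```java
import java.nio.charset.StandardCharsets;

public class DispositionHeaderSketch {
    // Build a Content-Disposition value with an ISO-8859-1 "filename"
    // parameter (unmappable characters become '?') and an RFC 5987
    // UTF-8 "filename*" parameter.
    static String dispositionValue(String fileName, String rfc5987EncodedName) {
        String latin1 = new String(fileName.getBytes(StandardCharsets.ISO_8859_1),
                                   StandardCharsets.ISO_8859_1);
        return "inline; filename=\"" + latin1 + "\"; filename*=UTF-8''" + rfc5987EncodedName;
    }
}
```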


was (Author: mattvryan):
Sure thing [~reschke].  Sorry, I've been on holidays :)

Previously, in regard to the example in the description above, you said:  "The 
first of the two entries looks perfectly ok to me."  The issue here is that the 
first one does not work with Azure blob storage service - it rejects the 
request as having an invalid character in the URI.  So this is less an issue of 
whether the URI is correct per RFCs, and more an issue that the URI does not 
properly work with Azure.

More details follow.

PRIOR TO THIS FIX:  When Oak would attempt to generate a direct binary access 
URI for a filename with characters outside the ISO-8859-1 character set, this 
would result in a URI that Azure would reject with a 400-level error.  The 
reason was due to Oak failing to properly encode this filename in the 
"filename" portion of the Content-Disposition header specification.

(As background, remember that Oak declares to the cloud storage the value that 
should be used in the Content-Disposition header for requests to the generated 
direct binary access URI.  In Oak we specify both the content disposition type 
and filenames for this.  See [0] and [1] for more info.)

Example:  Suppose the filename is "umläut.jpg".  Oak would specify a 
Content-Disposition header value of:
{noformat}
inline; filename="umläut.jpg"; filename*=''umla%CC%88ut.jpg{noformat}
This is then specified in a query parameter in the direct access URI, so this 
information gets encoded.  It is probably this encoding change that Azure does 
not expect.  Since this portion of the URI is signed, the signature doesn't 
match and the request fails.

WITH THIS FIX:  A basic ISO-8859-1 encoding is done on the "filename" value of 
the header.  This was made based on RFC6266 Section 4.3 which seems to suggest 
that only ISO-8859-1 characters are allowed for that value.

Thus the header now looks like this:
{noformat}
inline; filename="umla?ut.jpg"; filename*=''umla%CC%88ut.jpg{noformat}
This header encodes and validates properly with Azure.  In testing, modern 
clients prefer the "filename*" portion, which results in the proper filename 
being used.

Please let me know if this is still unclear, or if 

[jira] [Commented] (OAK-9304) Filename with special characters in direct download URI Content-Disposition are causing HTTP 400 errors from Azure

2021-01-05 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17259232#comment-17259232
 ] 

Matt Ryan commented on OAK-9304:


{quote}my suspicion is that the problem you want to solve is somewhere else: 
where the desired field value of Content-Disposition is sent to Azure.
{quote}
That's exactly where the problem lies.  We don't control the encoding; their 
SDK does that for us.

The documentation for this SDK specifies that you are to provide the exact 
string you want in the response's Content-Disposition header.  But there are 
edge cases where it doesn't always behave the way it is documented.  It's worth 
pointing out that the AWS SDK for S3 doesn't have these problems; it works just 
fine as-is.  So it is definitely an issue of trying to accommodate Azure's SDK.

Compounding the problem is that the SDK we currently use in Oak is too old.  
It's considered maintenance only by Microsoft now, and needs to be upgraded to 
the latest (see OAK-8105).  We've run into edge case issues like this before 
with this version of the SDK, and it is possible this is fixed in newer 
versions.

> Filename with special characters in direct download URI Content-Disposition 
> are causing HTTP 400 errors from Azure
> --
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> When generating a direct download URI for a filename with certain 
> non-standard characters in the name, it can cause the resulting signed URI to 
> be considered invalid by some blob storage services (Azure in particular).  
> This can lead to blob storage services being unable to service the URI 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> Azure blob storage service fails trying to parse a URI with that 
> Content-Disposition header specification in the query string.  It instead 
> should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
> The "filename" portion of the Content-Disposition needs to consist of 
> ISO-8859-1 characters, per [https://tools.ietf.org/html/rfc6266#section-4.3] 
> in this paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> Note that the purpose of this ticket is to address compatibility issues with 
> blob storage services, not to ensure ISO-8859-1 compatibility.  However, by 
> encoding the "filename" portion using standard Java character set encoding 
> conversion (e.g. {{Charsets.ISO_8859_1.encode(fileName)}}), we can generate a 
> URI that works with Azure, delivers the proper Content-Disposition header in 
> responses, and generates the proper client result (meaning, the correct name 
> for the downloaded file).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9304) Filename with special characters in direct download URI Content-Disposition are causing HTTP 400 errors from Azure

2021-01-04 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17258616#comment-17258616
 ] 

Matt Ryan commented on OAK-9304:


I've updated the description to try to focus better on the issue we're trying 
to solve.

> Filename with special characters in direct download URI Content-Disposition 
> are causing HTTP 400 errors from Azure
> --
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> When generating a direct download URI for a filename with certain 
> non-standard characters in the name, it can cause the resulting signed URI to 
> be considered invalid by some blob storage services (Azure in particular).  
> This can lead to blob storage services being unable to service the URI 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> Azure blob storage service fails trying to parse a URI with that 
> Content-Disposition header specification in the query string.  It instead 
> should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
> The "filename" portion of the Content-Disposition needs to consist of 
> ISO-8859-1 characters, per [https://tools.ietf.org/html/rfc6266#section-4.3] 
> in this paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> Note that the purpose of this ticket is to address compatibility issues with 
> blob storage services, not to ensure ISO-8859-1 compatibility.  However, by 
> encoding the "filename" portion using standard Java character set encoding 
> conversion (e.g. {{Charsets.ISO_8859_1.encode(fileName)}}), we can generate a 
> URI that works with Azure, delivers the proper Content-Disposition header in 
> responses, and generates the proper client result (meaning, the correct name 
> for the downloaded file).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-9304) Filename with special characters in direct download URI Content-Disposition are causing HTTP 400 errors from Azure

2021-01-04 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-9304:
---
Description: 
When generating a direct download URI for a filename with certain non-standard 
characters in the name, it can cause the resulting signed URI to be considered 
invalid by some blob storage services (Azure in particular).  This can lead to 
blob storage services being unable to service the URI request.

For example, a filename of "Ausländische.jpg" currently requests a 
Content-Disposition header that looks like:
{noformat}
inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
{noformat}
Azure blob storage service fails trying to parse a URI with that 
Content-Disposition header specification in the query string.  It instead 
should look like:
{noformat}
inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
{noformat}
 

The "filename" portion of the Content-Disposition needs to consist of 
ISO-8859-1 characters, per [https://tools.ietf.org/html/rfc6266#section-4.3] in 
this paragraph:
{quote}The parameters "filename" and "filename*" differ only in that 
"filename*" uses the encoding defined in RFC5987, allowing the use of 
characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
{quote}
Note that the purpose of this ticket is to address compatibility issues with 
blob storage services, not to ensure ISO-8859-1 compatibility.  However, by 
encoding the "filename" portion using standard Java character set encoding 
conversion (e.g. {{Charsets.ISO_8859_1.encode(fileName)}}), we can generate a 
URI that works with Azure, delivers the proper Content-Disposition header in 
responses, and generates the proper client result (meaning, the correct name 
for the downloaded file).

  was:
When generating a direct download URI for a filename with certain non-standard 
characters in the name, it can cause the resulting signed URI to be considered 
invalid by some blob storage services (Azure in particular).  This can lead to 
blob storage services being unable to service the URI request.

The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
paragraph:
{quote}The parameters "filename" and "filename*" differ only in that 
"filename*" uses the encoding defined in RFC5987, allowing the use of 
characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
{quote}
This is not usually a problem, but if the filename provided contains 
non-standard characters, it can cause the resulting signed URI to be invalid.  
This can lead to blob storage services being unable to service the URI request.

For example, a filename of "Ausländische.jpg" currently requests a 
Content-Disposition header that looks like:
{noformat}
inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
{noformat}
It instead should look like:
{noformat}
inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
{noformat}
 

The "filename" portion of the Content-Disposition needs to consist of 
ISO-8859-1 characters, per [https://tools.ietf.org/html/rfc6266#section-4.3] in 
this paragraph:
{quote}The parameters "filename" and "filename*" differ only in that 
"filename*" uses the encoding defined in RFC5987, allowing the use of 
characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
{quote}
By encoding the "filename" portion using standard Java character set encoding 
conversion (e.g. {{ 


> Filename with special characters in direct download URI Content-Disposition 
> are causing HTTP 400 errors from Azure
> --
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> When generating a direct download URI for a filename with certain 
> non-standard characters in the name, it can cause the resulting signed URI to 
> be considered invalid by some blob storage services (Azure in particular).  
> This can lead to blob storage services being unable to service the URl 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> Azure blob storage service fails trying to parse a URI with that 
> Content-Disposition header specification in the query string.  It instead 
> should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; 

[jira] [Updated] (OAK-9304) Filename with special characters in direct download URI Content-Disposition are causing HTTP 400 errors from Azure

2021-01-04 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-9304:
---
Description: 
When generating a direct download URI for a filename with certain non-standard 
characters in the name, it can cause the resulting signed URI to be considered 
invalid by some blob storage services (Azure in particular).  This can lead to 
blob storage services being unable to service the URI request.

The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
paragraph:
{quote}The parameters "filename" and "filename*" differ only in that 
"filename*" uses the encoding defined in RFC5987, allowing the use of 
characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
{quote}
This is not usually a problem, but if the filename provided contains 
non-standard characters, it can cause the resulting signed URI to be invalid.  
This can lead to blob storage services being unable to service the URI request.

For example, a filename of "Ausländische.jpg" currently requests a 
Content-Disposition header that looks like:
{noformat}
inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
{noformat}
It instead should look like:
{noformat}
inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
{noformat}
 

The "filename" portion of the Content-Disposition needs to consist of 
ISO-8859-1 characters, per [https://tools.ietf.org/html/rfc6266#section-4.3] in 
this paragraph:
{quote}The parameters "filename" and "filename*" differ only in that 
"filename*" uses the encoding defined in RFC5987, allowing the use of 
characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
{quote}
By encoding the "filename" portion using standard Java character set encoding 
conversion (e.g. {{ 

  was:
The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
paragraph:
{quote}The parameters "filename" and "filename*" differ only in that 
"filename*" uses the encoding defined in RFC5987, allowing the use of 
characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
{quote}
This is not usually a problem, but if the filename provided contains 
non-standard characters, it can cause the resulting signed URI to be invalid.  
This can lead to blob storage services being unable to service the URL request.

For example, a filename of "Ausländische.jpg" currently requests a 
Content-Disposition header that looks like:
{noformat}
inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
{noformat}
It instead should look like:
{noformat}
inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
{noformat}
 

 


> Filename with special characters in direct download URI Content-Disposition 
> are causing HTTP 400 errors from Azure
> --
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> When generating a direct download URI for a filename with certain 
> non-standard characters in the name, it can cause the resulting signed URI to 
> be considered invalid by some blob storage services (Azure in particular).  
> This can lead to blob storage services being unable to service the URL 
> request.
> The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
> encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
> paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> This is not usually a problem, but if the filename provided contains 
> non-standard characters, it can cause the resulting signed URI to be invalid. 
>  This can lead to blob storage services being unable to service the URL 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> It instead should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
> The "filename" portion of the Content-Disposition needs to consist of 
> ISO-8859-1 characters, per [https://tools.ietf.org/html/rfc6266#section-4.3] 
> in this paragraph:
> {quote}The 

[jira] [Updated] (OAK-9304) Filename with special characters in direct download URI Content-Disposition are causing HTTP 400 errors from Azure

2021-01-04 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-9304:
---
Summary: Filename with special characters in direct download URI 
Content-Disposition are causing HTTP 400 errors from Azure  (was: Filename 
portion of direct download URI Content-Disposition should be ISO-8859-1 encoded)

> Filename with special characters in direct download URI Content-Disposition 
> are causing HTTP 400 errors from Azure
> --
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
> encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
> paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> This is not usually a problem, but if the filename provided contains 
> non-standard characters, it can cause the resulting signed URI to be invalid. 
>  This can lead to blob storage services being unable to service the URL 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> It instead should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9304) Filename portion of direct download URI Content-Disposition should be ISO-8859-1 encoded

2021-01-04 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17258613#comment-17258613
 ] 

Matt Ryan commented on OAK-9304:


{quote}"a umlaut" *is* inside ISO-8859-1.
{quote}
I probably need to remove the mention of ISO-8859-1 as I think this is just 
confusing the issue.

The issue is, Azure's blob storage service can't process the URIs if the 
filename has characters like this in the name.  The fix implemented for this 
issue resolves that.

Since the purpose of {{oak-blob-cloud-azure}} is to work with Azure's blob 
storage service, I felt it was appropriate to make the change I made so that it 
actually does work with Azure in this case.

Here's the relevant diff showing the primary code change for this bug:
{noformat}
  private String formatContentDispositionHeader(@NotNull final String dispositionType,
                                                @NotNull final String fileName,
                                                @Nullable final String rfc8187EncodedFileName) {
+   String iso_8859_1_fileName =
+       new String(Charsets.ISO_8859_1.encode(fileName).array()).replace("\"", "\\\"");
    return null != rfc8187EncodedFileName ?
-       String.format("%s; filename=\"%s\"; filename*=UTF-8''%s",
-           dispositionType, fileName, rfc8187EncodedFileName) :
-       String.format("%s; filename=\"%s\"", dispositionType, fileName);
+       String.format("%s; filename=\"%s\"; filename*=UTF-8''%s",
+           dispositionType, iso_8859_1_fileName, rfc8187EncodedFileName) :
+       String.format("%s; filename=\"%s\"",
+           dispositionType, iso_8859_1_fileName);
  }
{noformat}
Note that the substance of the change is the 
{{Charsets.ISO_8859_1.encode(fileName)}} call, which converts the filename, 
e.g. from "a umlaut" to "a ?".
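The effect of that conversion can be reproduced with the JDK alone. The following is a standalone sketch (the class and method names are hypothetical, not Oak's API); {{Charset.encode}} replaces characters that cannot be mapped to ISO-8859-1 with '?':

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ContentDispositionDemo {
    // Mimics the fix: encode the filename as ISO-8859-1 (unmappable
    // characters become '?') and escape embedded double quotes.
    static String toIso88591FileName(String fileName) {
        ByteBuffer bytes = StandardCharsets.ISO_8859_1.encode(fileName);
        return new String(bytes.array(), 0, bytes.limit(), StandardCharsets.ISO_8859_1)
                .replace("\"", "\\\"");
    }

    public static void main(String[] args) {
        // "a" + combining diaeresis (U+0308) is NOT in ISO-8859-1 -> '?'
        System.out.println(toIso88591FileName("Ausla\u0308ndische.jpg"));
        // precomposed "ä" (U+00E4) IS in ISO-8859-1 and survives unchanged
        System.out.println(toIso88591FileName("uml\u00e4ut.jpg"));
    }
}
```

Note that a precomposed "ä" (U+00E4) is mappable and survives; it is the decomposed form (base letter plus combining diaeresis) that triggers the '?' replacement, which is why "Ausländische.jpg" in decomposed form becomes "Ausla?ndische.jpg".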

> Filename portion of direct download URI Content-Disposition should be 
> ISO-8859-1 encoded
> 
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
> encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
> paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> This is not usually a problem, but if the filename provided contains 
> non-standard characters, it can cause the resulting signed URI to be invalid. 
>  This can lead to blob storage services being unable to service the URL 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> It instead should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (OAK-9304) Filename portion of direct download URI Content-Disposition should be ISO-8859-1 encoded

2021-01-04 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17258433#comment-17258433
 ] 

Matt Ryan edited comment on OAK-9304 at 1/4/21, 7:22 PM:
-

Sure thing [~reschke].  Sorry, I've been on holidays :)

Previously, in regard to the example in the description above, you said:  "The 
first of the two entries looks perfectly ok to me."  The issue here is that the 
first one does not work with Azure blob storage service - it rejects the 
request as having an invalid character in the URI.  So this is less an issue of 
whether the URI is correct per RFCs, and more an issue that the URI does not 
properly work with Azure.

More details follow.

PRIOR TO THIS FIX:  When Oak would attempt to generate a direct binary access 
URI for a filename with characters outside the ISO-8859-1 character set, this 
would result in a URI that Azure would reject with a 400-level error.  The 
reason was due to Oak failing to properly encode this filename in the 
"filename" portion of the Content-Disposition header specification.

(As background, remember that Oak declares to the cloud storage the value that 
should be used in the Content-Disposition header for requests to the generated 
direct binary access URI.  In Oak we specify both the content disposition type 
and filenames for this.  See [0] and [1] for more info.)

Example:  Suppose the filename is "umläut.jpg".  Oak would specify a 
Content-Disposition header value of:
{noformat}
inline; filename="umläut.jpg"; filename*=''umla%CC%88ut.jpg{noformat}
This is then specified in a query parameter in the direct access URI, so this 
information gets encoded.  It is probably this encoding change that Azure does 
not expect.  Since this portion of the URI is signed, the signature doesn't 
match and the request fails.

WITH THIS FIX:  A basic ISO-8859-1 encoding is done on the "filename" value of 
the header.  This was made based on RFC6266 Section 4.3 which seems to suggest 
that only ISO-8859-1 characters are allowed for that value.

Thus the header now looks like this:
{noformat}
inline; filename="umla?ut.jpg"; filename*=''umla%CC%88ut.jpg{noformat}
This header encodes and validates properly with Azure.  In testing, modern 
clients prefer the "filename*" portion, which results in the proper filename 
being used.

Please let me know if this is still unclear, or if it's clear now, let me know 
if you'd like me to update the bug description accordingly or just let it go :).

 

[0] - 
[https://jackrabbit.apache.org/oak/docs/features/direct-binary-access.html]

[1] - 
[https://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/api/binary/BinaryDownloadOptions.html]


was (Author: mattvryan):
Sure thing [~reschke].  Sorry, I've been on holidays :)

Previously, in regard to the example in the description above, you said:  "The 
first of the two entries looks perfectly ok to me."  The issue here is that the 
first one does not work with Azure blob storage service - it rejects the 
request as having an invalid character in the URI.  So this is less an issue of 
whether the URI is correct per RFCs, and more an issue that the URI does not 
properly work with Azure.

More details follow.

PRIOR TO THIS FIX:  When Oak would attempt to generate a direct binary access 
URI for a filename with characters outside the ISO-8859-1 character set, this 
would result in a URI that Azure would reject with a 400-level error.  The 
reason was due to Oak failing to properly encode this filename in the 
"filename" portion of the Content-Disposition header specification.

(As background, remember that Oak declares to the cloud storage the value that 
should be used in the Content-Disposition header for requests to the generated 
direct binary access URI.  In Oak we specify both the content disposition type 
and filenames for this.  See [0] and [1] for more info.)

Example:  Suppose the filename is "umläut.jpg".  Oak would specify a 
Content-Disposition header value of:
{noformat}
inline; filename="umläut.jpg"; filename*=''umla%CC%88ut.jpg{noformat}
This is then specified in a query parameter in the direct access URI, so this 
information gets encoded.  It is probably this encoding change that Azure does 
not expect.  Since this portion of the URI is signed, the signature doesn't 
match and the request fails.

WITH THIS FIX:  A basic ISO-8859-1 encoding is done on the "filename" value of 
the header.  This was made based on RFC6266 Section 4.3 which seems to suggest 
that only ISO-8859-1 characters are allowed for that value.

Thus the header now looks like this:
{noformat}
inline; filename="umla?ut.jpg"; filename*=''umla%CC%88ut.jpg{noformat}
This header encodes and validates properly with Azure.  In testing, modern 
clients prefer the "filename*" portion, which results in the proper filename 
being used.

Please let me know if this is still unclear.

 

[0] - 

[jira] [Comment Edited] (OAK-9304) Filename portion of direct download URI Content-Disposition should be ISO-8859-1 encoded

2021-01-04 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17258433#comment-17258433
 ] 

Matt Ryan edited comment on OAK-9304 at 1/4/21, 7:14 PM:
-

Sure thing [~reschke].  Sorry, I've been on holidays :)

Previously, in regard to the example in the description above, you said:  "The 
first of the two entries looks perfectly ok to me."  The issue here is that the 
first one does not work with Azure blob storage service - it rejects the 
request as having an invalid character in the URI.  So this is less an issue of 
whether the URI is correct per RFCs, and more an issue that the URI does not 
properly work with Azure.

More details follow.

PRIOR TO THIS FIX:  When Oak would attempt to generate a direct binary access 
URI for a filename with characters outside the ISO-8859-1 character set, this 
would result in a URI that Azure would reject with a 400-level error.  The 
reason was due to Oak failing to properly encode this filename in the 
"filename" portion of the Content-Disposition header specification.

(As background, remember that Oak declares to the cloud storage the value that 
should be used in the Content-Disposition header for requests to the generated 
direct binary access URI.  In Oak we specify both the content disposition type 
and filenames for this.  See [0] and [1] for more info.)

Example:  Suppose the filename is "umläut.jpg".  Oak would specify a 
Content-Disposition header value of:
{noformat}
inline; filename="umläut.jpg"; filename*=''umla%CC%88ut.jpg{noformat}
This is then specified in a query parameter in the direct access URI, so this 
information gets encoded.  It is probably this encoding change that Azure does 
not expect.  Since this portion of the URI is signed, the signature doesn't 
match and the request fails.

WITH THIS FIX:  A basic ISO-8859-1 encoding is done on the "filename" value of 
the header.  This was made based on RFC6266 Section 4.3 which seems to suggest 
that only ISO-8859-1 characters are allowed for that value.

Thus the header now looks like this:
{noformat}
inline; filename="umla?ut.jpg"; filename*=''umla%CC%88ut.jpg{noformat}
This header encodes and validates properly with Azure.  In testing, modern 
clients prefer the "filename*" portion, which results in the proper filename 
being used.

Please let me know if this is still unclear.

 

[0] - 
[https://jackrabbit.apache.org/oak/docs/features/direct-binary-access.html]

[1] - 
[https://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/api/binary/BinaryDownloadOptions.html]


was (Author: mattvryan):
Sure thing [~reschke].  Sorry, I've been on holidays :)

PRIOR TO THIS FIX:  When Oak would attempt to generate a direct binary access 
URI for a filename with characters outside the ISO-8859-1 character set, this 
would result in a URI that Azure would reject with a 400-level error.  The 
reason was due to Oak failing to properly encode this filename in the 
"filename" portion of the Content-Disposition header specification.

(As background, remember that Oak declares to the cloud storage the value that 
should be used in the Content-Disposition header for requests to the generated 
direct binary access URI.  In Oak we specify both the content disposition type 
and filenames for this.  See [0] and [1] for more info.)

Example:  Suppose the filename is "umläut.jpg".  Oak would specify a 
Content-Disposition header value of:
{noformat}
inline; filename="umläut.jpg"; filename*=''umla%CC%88ut.jpg{noformat}
This is then specified in a query parameter in the direct access URI, so this 
information gets encoded.  It is probably this encoding change that Azure does 
not expect.  Since this portion of the URI is signed, the signature doesn't 
match and the request fails.

WITH THIS FIX:  A basic ISO-8859-1 encoding is done on the "filename" value of 
the header.  This was made based on RFC6266 Section 4.3 which seems to suggest 
that only ISO-8859-1 characters are allowed for that value.

Thus the header now looks like this:
{noformat}
inline; filename="umla?ut.jpg"; filename*=''umla%CC%88ut.jpg{noformat}
This header encodes and validates properly with Azure.  In testing, modern 
clients prefer the "filename*" portion, which results in the proper filename 
being used.

Please let me know if this is still unclear.

 

[0] - 
[https://jackrabbit.apache.org/oak/docs/features/direct-binary-access.html]

[1] - 
[https://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/api/binary/BinaryDownloadOptions.html]

> Filename portion of direct download URI Content-Disposition should be 
> ISO-8859-1 encoded
> 
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  

[jira] [Commented] (OAK-9304) Filename portion of direct download URI Content-Disposition should be ISO-8859-1 encoded

2021-01-04 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17258433#comment-17258433
 ] 

Matt Ryan commented on OAK-9304:


Sure thing [~reschke].  Sorry, I've been on holidays :)

PRIOR TO THIS FIX:  When Oak would attempt to generate a direct binary access 
URI for a filename with characters outside the ISO-8859-1 character set, this 
would result in a URI that Azure would reject with a 400-level error.  The 
reason was due to Oak failing to properly encode this filename in the 
"filename" portion of the Content-Disposition header specification.

(As background, remember that Oak declares to the cloud storage the value that 
should be used in the Content-Disposition header for requests to the generated 
direct binary access URI.  In Oak we specify both the content disposition type 
and filenames for this.  See [0] and [1] for more info.)

Example:  Suppose the filename is "umläut.jpg".  Oak would specify a 
Content-Disposition header value of:
{noformat}
inline; filename="umläut.jpg"; filename*=''umla%CC%88ut.jpg{noformat}
This is then specified in a query parameter in the direct access URI, so this 
information gets encoded.  It is probably this encoding change that Azure does 
not expect.  Since this portion of the URI is signed, the signature doesn't 
match and the request fails.

WITH THIS FIX:  A basic ISO-8859-1 encoding is done on the "filename" value of 
the header.  This was made based on RFC6266 Section 4.3 which seems to suggest 
that only ISO-8859-1 characters are allowed for that value.

Thus the header now looks like this:
{noformat}
inline; filename="umla?ut.jpg"; filename*=''umla%CC%88ut.jpg{noformat}
This header encodes and validates properly with Azure.  In testing, modern 
clients prefer the "filename*" portion, which results in the proper filename 
being used.

Please let me know if this is still unclear.

 

[0] - 
[https://jackrabbit.apache.org/oak/docs/features/direct-binary-access.html]

[1] - 
[https://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/api/binary/BinaryDownloadOptions.html]

> Filename portion of direct download URI Content-Disposition should be 
> ISO-8859-1 encoded
> 
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
> encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
> paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> This is not usually a problem, but if the filename provided contains 
> non-standard characters, it can cause the resulting signed URI to be invalid. 
>  This can lead to blob storage services being unable to service the URL 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> It instead should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9304) Filename portion of direct download URI Content-Disposition should be ISO-8859-1 encoded

2020-12-18 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17252033#comment-17252033
 ] 

Matt Ryan commented on OAK-9304:


This bug has existed in Oak since Oak 1.10.  Do we need to backport to older 
supported versions, and if so, which?

If we do not backport, the risk is that users on older versions of Oak may run 
into this bug.  The bug is, if requesting a direct download URI from Oak for a 
filename containing characters outside the ISO-8859-1 character set, users may 
find that the resulting direct download URI does not work with certain blob 
storage services.

> Filename portion of direct download URI Content-Disposition should be 
> ISO-8859-1 encoded
> 
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
> encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
> paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> This is not usually a problem, but if the filename provided contains 
> non-standard characters, it can cause the resulting signed URI to be invalid. 
>  This can lead to blob storage services being unable to service the URL 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> It instead should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9304) Filename portion of direct download URI Content-Disposition should be ISO-8859-1 encoded

2020-12-18 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17252032#comment-17252032
 ] 

Matt Ryan commented on OAK-9304:


Fixed in Oak trunk in 
[r1884613|https://svn.apache.org/viewvc?view=revision&revision=1884613].

> Filename portion of direct download URI Content-Disposition should be 
> ISO-8859-1 encoded
> 
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
> encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
> paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> This is not usually a problem, but if the filename provided contains 
> non-standard characters, it can cause the resulting signed URI to be invalid. 
>  This can lead to blob storage services being unable to service the URL 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> It instead should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9304) Filename portion of direct download URI Content-Disposition should be ISO-8859-1 encoded

2020-12-17 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251456#comment-17251456
 ] 

Matt Ryan commented on OAK-9304:


I have an implementation in place for this.  You can see the diff here: 
[https://github.com/apache/jackrabbit-oak/compare/trunk...mattvryan:OAK-9304]

However, one use case is not passing.  That use case is a filename with a 
single double-quote in the middle of the filename, like {{my"file.txt}}.  
Azure's blob storage service seems to be okay with this but S3 doesn't like it 
and returns a 400 response when you try to issue a request with a URI that has 
this filename in the query parameters.

Java's ISO-8859-1 encoder doesn't transpose the " character.  But IIUC this 
filename is a legal filename in Oak.

My question is, should I move forward with the fix I have so far?  It is 
probably better than what's in trunk.  Or do we first need to address the issue 
of double-quotes in the filename - and if so, how to address it?

One option would be to search and replace " with %22, which is what is used in 
the RFC-8187 encoding for the other filename value in the content disposition.  
While not technically the correct value, it would probably work.

Note that both Azure and S3 do support a file with two double-quotes.  For 
example, if you name the file {{"myfile.txt"}} (with the double-quotes as a 
part of the filename), this appears to work, although it might be working by 
accident.

> Filename portion of direct download URI Content-Disposition should be 
> ISO-8859-1 encoded
> 
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
> encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
> paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> This is not usually a problem, but if the filename provided contains 
> non-standard characters, it can cause the resulting signed URI to be invalid. 
>  This can lead to blob storage services being unable to service the URL 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> It instead should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9304) Filename portion of direct download URI Content-Disposition should be ISO-8859-1 encoded

2020-12-17 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251351#comment-17251351
 ] 

Matt Ryan commented on OAK-9304:


[~reschke] I would very much appreciate a review of the bug description as I've 
written it, to see if I've interpreted the RFC correctly in this case.

> Filename portion of direct download URI Content-Disposition should be 
> ISO-8859-1 encoded
> 
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
> encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
> paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
> {quote}
> This is not usually a problem, but if the filename provided contains 
> non-standard characters, it can cause the resulting signed URI to be invalid. 
>  This can lead to blob storage services being unable to service the URL 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> attachment; filename="Ausländische.jpg"; 
> filename*=UTF-8''Ausla%CC%88ndische.jpg {noformat}
> It instead should look like:
> {noformat}
> attachment; filename="Ausla?ndische.jpg"; 
> filename*=UTF-8''Ausla%CC%88ndische.jpg {noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-9304) Filename portion of direct download URI Content-Disposition should be ISO-8859-1 encoded

2020-12-17 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-9304:
---
Description: 
The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
paragraph:
{quote}The parameters "filename" and "filename*" differ only in that 
"filename*" uses the encoding defined in RFC5987, allowing the use of 
characters not present in the ISO-8859-1 character set ([ISO-8859-1]).
{quote}
This is not usually a problem, but if the filename provided contains 
non-standard characters, it can cause the resulting signed URI to be invalid.  
This can lead to blob storage services being unable to service the URI request.

For example, a filename of "Ausländische.jpg" currently requests a 
Content-Disposition header that looks like:
{noformat}
inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
{noformat}
It instead should look like:
{noformat}
inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
{noformat}
 

 

  was:
The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
paragraph:
{quote}The parameters "filename" and "filename*" differ only in that 
"filename*" uses the encoding defined in RFC5987, allowing the use of 
characters not present in the ISO-8859-1 character set (ISO-8859-1).
{quote}
This is not usually a problem, but if the filename provided contains 
non-standard characters, it can cause the resulting signed URI to be invalid.  
This can lead to blob storage services being unable to service the URI request.

For example, a filename of "Ausländische.jpg" currently requests a 
Content-Disposition header that looks like:
{noformat}
attachment; filename="Ausländische.jpg"; 
filename*=UTF-8''Ausla%CC%88ndische.jpg {noformat}
It instead should look like:
{noformat}
attachment; filename="Ausla?ndische.jpg"; 
filename*=UTF-8''Ausla%CC%88ndische.jpg {noformat}
 

 


> Filename portion of direct download URI Content-Disposition should be 
> ISO-8859-1 encoded
> 
>
> Key: OAK-9304
> URL: https://issues.apache.org/jira/browse/OAK-9304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.36.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
> encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
> paragraph:
> {quote}The parameters "filename" and "filename*" differ only in that 
> "filename*" uses the encoding defined in RFC5987, allowing the use of 
> characters not present in the ISO-8859-1 character set (ISO-8859-1).
> {quote}
> This is not usually a problem, but if the filename provided contains 
> non-standard characters, it can cause the resulting signed URI to be invalid. 
>  This can lead to blob storage services being unable to service the URI 
> request.
> For example, a filename of "Ausländische.jpg" currently requests a 
> Content-Disposition header that looks like:
> {noformat}
> inline; filename="Ausländische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
> It instead should look like:
> {noformat}
> inline; filename="Ausla?ndische.jpg"; filename*=UTF-8''Ausla%CC%88ndische.jpg 
> {noformat}
>  
>  





[jira] [Created] (OAK-9304) Filename portion of direct download URI Content-Disposition should be ISO-8859-1 encoded

2020-12-17 Thread Matt Ryan (Jira)
Matt Ryan created OAK-9304:
--

 Summary: Filename portion of direct download URI 
Content-Disposition should be ISO-8859-1 encoded
 Key: OAK-9304
 URL: https://issues.apache.org/jira/browse/OAK-9304
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: blob-cloud, blob-cloud-azure, blob-plugins
Affects Versions: 1.36.0
Reporter: Matt Ryan
Assignee: Matt Ryan


The "filename" portion of the Content-Disposition needs to be ISO-8859-1 
encoded, per [https://tools.ietf.org/html/rfc6266#section-4.3] in this 
paragraph:
{quote}The parameters "filename" and "filename*" differ only in that 
"filename*" uses the encoding defined in RFC5987, allowing the use of 
characters not present in the ISO-8859-1 character set (ISO-8859-1).
{quote}
This is not usually a problem, but if the filename provided contains 
non-standard characters, it can cause the resulting signed URI to be invalid.  
This can lead to blob storage services being unable to service the URI request.

For example, a filename of "Ausländische.jpg" currently requests a 
Content-Disposition header that looks like:
{noformat}
attachment; filename="Ausländische.jpg"; 
filename*=UTF-8''Ausla%CC%88ndische.jpg {noformat}
It instead should look like:
{noformat}
attachment; filename="Ausla?ndische.jpg"; 
filename*=UTF-8''Ausla%CC%88ndische.jpg {noformat}
 

 





[jira] [Commented] (OAK-9254) Logging for JCR API calls

2020-11-03 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225549#comment-17225549
 ] 

Matt Ryan commented on OAK-9254:


[~thomasm] I decided to give the call stack logging a try, and now I'm 
rethinking my proposal.

I pulled your branch and rebuilt my own custom Oak with your changes, along 
with a change to log the call stack.  Running my application only for maybe ten 
minutes with just a couple of page loads generated a 29GB log file.  It's so 
much information that it makes the log unwieldy and hard to use.

However, the trace logging you've added seems really useful.  I'd like to see 
this added to Oak.
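To illustrate why capturing the stack makes the log explode, here is a hypothetical sketch (not the code in the branch; the function name is invented) of formatting one JCR call with and without its call stack:

```python
import traceback

def format_jcr_call(method: str, *args, include_stack: bool = False) -> str:
    # One line per call: method name plus its parameters (paths, property names).
    msg = "%s(%s)" % (method, ", ".join(repr(a) for a in args))
    if include_stack:
        # Appending the full stack multiplies each one-line entry by the
        # stack depth, which is how minutes of traffic can produce
        # gigabytes of log output.
        msg += "\n" + "".join(traceback.format_stack())
    return msg
```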

> Logging for JCR API calls
> -
>
> Key: OAK-9254
> URL: https://issues.apache.org/jira/browse/OAK-9254
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: jcr
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> I would like to see which JCR API methods are called with which parameters 
> (paths, property names). Currently, debug level logging will only show the 
> operation, but not the parameters / the paths the method operates on.





[jira] [Commented] (OAK-9254) Logging for JCR API calls

2020-10-26 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17221009#comment-17221009
 ] 

Matt Ryan commented on OAK-9254:


[~thomasm] what would you think about also logging the call stack - perhaps 
optionally, only at TRACE logging level?

I could see this being really useful from the application side to temporarily 
enable logging and then run application code to see the resulting JCR calls and 
where they are coming from, to facilitate performance optimization efforts.

> Logging for JCR API calls
> -
>
> Key: OAK-9254
> URL: https://issues.apache.org/jira/browse/OAK-9254
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: jcr
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> I would like to see which JCR API methods are called with which parameters 
> (paths, property names). Currently, debug level logging will only show the 
> operation, but not the parameters / the paths the method operates on.





[jira] [Commented] (OAK-8347) S3DataStore should use virtual-host-style access

2020-08-18 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17180104#comment-17180104
 ] 

Matt Ryan commented on OAK-8347:


The main consideration here is probably just ensuring that we are using a 
reasonably up-to-date version of their SDK.

> S3DataStore should use virtual-host-style access
> 
>
> Key: OAK-8347
> URL: https://issues.apache.org/jira/browse/OAK-8347
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: blob-cloud
>Affects Versions: 1.12.0
>Reporter: Matt Ryan
>Priority: Major
>
> AWS has announced that they are moving from a path-based model for 
> addressable storage to a virtual-host model (see [0]).  S3DataStore is 
> implemented using the path-based model, and should update to use the 
> virtual-host model before the old format is deprecated.
> [0] - 
> https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/
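For reference, the two addressing styles differ only in where the bucket name appears. A minimal sketch (function name and region default are assumptions, not part of the AWS SDK):

```python
def s3_object_url(bucket: str, key: str, region: str = "us-east-1",
                  virtual_host: bool = True) -> str:
    # Virtual-hosted-style puts the bucket in the hostname; the legacy
    # path-style puts it in the URL path and is being deprecated by AWS.
    if virtual_host:
        return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
    return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"
```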





[jira] [Resolved] (OAK-9142) AzureDataStore should use concurrent request count for all API calls

2020-07-21 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-9142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan resolved OAK-9142.

Fix Version/s: 1.34.0
   Resolution: Fixed

Fixed in 
[r1880114|https://svn.apache.org/viewvc?view=revision&revision=1880114].

> AzureDataStore should use concurrent request count for all API calls
> 
>
> Key: OAK-9142
> URL: https://issues.apache.org/jira/browse/OAK-9142
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: blob-cloud-azure
>Affects Versions: 1.32.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.34.0
>
>
> The Azure concurrent request count option, settable in the request options, 
> is not used for most web API requests but could easily be used for all of 
> them by default.  We should change the code in {{getAzureContainer()}} to 
> automatically use the configured concurrent request count, if provided, or 
> the default value otherwise.
> Additionally, we should validate the configuration value and enforce it 
> within a range.  The suggested range is a minimum value of 2 and a max of 50.





[jira] [Created] (OAK-9142) AzureDataStore should use concurrent request count for all API calls

2020-07-16 Thread Matt Ryan (Jira)
Matt Ryan created OAK-9142:
--

 Summary: AzureDataStore should use concurrent request count for 
all API calls
 Key: OAK-9142
 URL: https://issues.apache.org/jira/browse/OAK-9142
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob-cloud-azure
Affects Versions: 1.32.0
Reporter: Matt Ryan
Assignee: Matt Ryan


The Azure concurrent request count option, settable in the request options, is 
not used for most web API requests but could easily be used for all of them by 
default.  We should change the code in {{getAzureContainer()}} to automatically 
use the configured concurrent request count, if provided, or the default value 
otherwise.

Additionally, we should validate the configuration value and enforce it within 
a range.  The suggested range is a minimum value of 2 and a max of 50.
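The proposed validation amounts to clamping the configured value into the allowed range. A sketch with assumed names; only the 2..50 range comes from this issue, the fallback default of 2 is an assumption:

```python
from typing import Optional

MIN_CONCURRENT_REQUESTS = 2      # suggested minimum from this issue
MAX_CONCURRENT_REQUESTS = 50     # suggested maximum from this issue
DEFAULT_CONCURRENT_REQUESTS = 2  # assumed default when nothing is configured

def effective_concurrent_request_count(configured: Optional[int]) -> int:
    # Use the configured value if provided, clamped into [2, 50];
    # otherwise fall back to the default.
    if configured is None:
        return DEFAULT_CONCURRENT_REQUESTS
    return max(MIN_CONCURRENT_REQUESTS,
               min(MAX_CONCURRENT_REQUESTS, configured))
```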





[jira] [Updated] (OAK-8969) Ignore domain overwrite doesn't work well when presignedHttpDownloadURICacheMaxSize is set

2020-03-26 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8969:
---
Fix Version/s: 1.22.3

> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set
> --
>
> Key: OAK-8969
> URL: https://issues.apache.org/jira/browse/OAK-8969
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud-azure
>Affects Versions: 1.26.0
>Reporter: Jun Zhang
>Assignee: Matt Ryan
>Priority: Major
>  Labels: candidate_oak_1_22
> Fix For: 1.22.3, 1.28.0
>
>
> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set





[jira] [Commented] (OAK-8969) Ignore domain overwrite doesn't work well when presignedHttpDownloadURICacheMaxSize is set

2020-03-26 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17067924#comment-17067924
 ] 

Matt Ryan commented on OAK-8969:


Fixed in 1.22 in 
[r1875730|https://svn.apache.org/viewvc?view=revision&revision=1875730].

> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set
> --
>
> Key: OAK-8969
> URL: https://issues.apache.org/jira/browse/OAK-8969
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud-azure
>Affects Versions: 1.26.0
>Reporter: Jun Zhang
>Assignee: Matt Ryan
>Priority: Major
>  Labels: candidate_oak_1_22
> Fix For: 1.28.0
>
>
> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set





[jira] [Comment Edited] (OAK-8969) Ignore domain overwrite doesn't work well when presignedHttpDownloadURICacheMaxSize is set

2020-03-25 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17066396#comment-17066396
 ] 

Matt Ryan edited comment on OAK-8969 at 3/25/20, 8:51 AM:
--

Fixed in 
[r1875608|https://svn.apache.org/viewvc?view=revision&revision=1875608].

Needs to be backported also to 1.22.


was (Author: mattvryan):
Fixed in 
[r1875608|https://svn.apache.org/viewvc?view=revision&revision=1875608].

Needs to be backported also to 1.22.  Does it also need to backport to 1.26?

> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set
> --
>
> Key: OAK-8969
> URL: https://issues.apache.org/jira/browse/OAK-8969
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud-azure
>Affects Versions: 1.26.0
>Reporter: Jun Zhang
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.3, 1.28.0
>
>
> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set





[jira] [Updated] (OAK-8969) Ignore domain overwrite doesn't work well when presignedHttpDownloadURICacheMaxSize is set

2020-03-25 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8969:
---
Component/s: (was: blob-cloud)
 blob-cloud-azure

> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set
> --
>
> Key: OAK-8969
> URL: https://issues.apache.org/jira/browse/OAK-8969
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud-azure
>Affects Versions: 1.26.0
>Reporter: Jun Zhang
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.3, 1.28.0
>
>
> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set





[jira] [Comment Edited] (OAK-8969) Ignore domain overwrite doesn't work well when presignedHttpDownloadURICacheMaxSize is set

2020-03-24 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17066396#comment-17066396
 ] 

Matt Ryan edited comment on OAK-8969 at 3/25/20, 5:22 AM:
--

Fixed in 
[r1875608|https://svn.apache.org/viewvc?view=revision&revision=1875608].

Needs to be backported also to 1.22.  Does it also need to backport to 1.26?


was (Author: mattvryan):
Fixed in 
[r1875608|https://svn.apache.org/viewvc?view=revision&revision=1875608].

Needs to be backported also to 1.26 and 1.22.

> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set
> --
>
> Key: OAK-8969
> URL: https://issues.apache.org/jira/browse/OAK-8969
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud
>Affects Versions: 1.26.0
>Reporter: Jun Zhang
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.3, 1.28.0
>
>
> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set





[jira] [Updated] (OAK-8969) Ignore domain overwrite doesn't work well when presignedHttpDownloadURICacheMaxSize is set

2020-03-24 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8969:
---
Fix Version/s: 1.28.0
   1.22.3

> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set
> --
>
> Key: OAK-8969
> URL: https://issues.apache.org/jira/browse/OAK-8969
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud
>Affects Versions: 1.26.0
>Reporter: Jun Zhang
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.3, 1.28.0
>
>
> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set





[jira] [Commented] (OAK-8969) Ignore domain overwrite doesn't work well when presignedHttpDownloadURICacheMaxSize is set

2020-03-24 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17066396#comment-17066396
 ] 

Matt Ryan commented on OAK-8969:


Fixed in 
[r1875608|https://svn.apache.org/viewvc?view=revision&revision=1875608].

Needs to be backported also to 1.26 and 1.22.

> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set
> --
>
> Key: OAK-8969
> URL: https://issues.apache.org/jira/browse/OAK-8969
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud
>Affects Versions: 1.26.0
>Reporter: Jun Zhang
>Assignee: Matt Ryan
>Priority: Major
>
> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set





[jira] [Updated] (OAK-8969) Ignore domain overwrite doesn't work well when presignedHttpDownloadURICacheMaxSize is set

2020-03-24 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8969:
---
Affects Version/s: 1.26.0

> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set
> --
>
> Key: OAK-8969
> URL: https://issues.apache.org/jira/browse/OAK-8969
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud
>Affects Versions: 1.26.0
>Reporter: Jun Zhang
>Assignee: Matt Ryan
>Priority: Major
>
> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set





[jira] [Commented] (OAK-8969) Ignore domain overwrite doesn't work well when presignedHttpDownloadURICacheMaxSize is set

2020-03-24 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17066218#comment-17066218
 ] 

Matt Ryan commented on OAK-8969:


If caching is enabled then there is a chance that a URI previously generated 
without ignoring a domain override will be reused out of cache when another URI 
for the same blob ID is requested again.  If the subsequent request wants to 
ignore the domain override then the wrong URI will then be pulled from the 
cache, if the URI hasn't expired yet, because the key for the cache is just the 
blob ID.

A simple solution to this would be to compute the value of the download URI 
domain beforehand, and then use that along with the blob ID for the cache key.  
The domain value will be different if the domain override ignore flag is set 
than it would be if it is not set, so this would result in a cache miss and the 
correct URI being returned.
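That proposed fix can be sketched as follows; the class and parameter names are invented for illustration and do not match Oak's Java code:

```python
from typing import Callable, Dict, Optional

class DownloadUriCache:
    def __init__(self, sign: Callable[[str, str], str]):
        self._sign = sign                # (blob_id, domain) -> signed URI
        self._cache: Dict[str, str] = {}

    def get_uri(self, blob_id: str, default_domain: str,
                override_domain: Optional[str] = None,
                ignore_override: bool = False) -> str:
        # Resolve the domain first, honoring the ignore flag, and make it
        # part of the cache key: requests that differ only in the flag
        # then miss the cache instead of sharing a wrong URI.
        domain = (default_domain if ignore_override or not override_domain
                  else override_domain)
        key = f"{domain}:{blob_id}"
        if key not in self._cache:
            self._cache[key] = self._sign(blob_id, domain)
        return self._cache[key]
```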

> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set
> --
>
> Key: OAK-8969
> URL: https://issues.apache.org/jira/browse/OAK-8969
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud
>Reporter: Jun Zhang
>Assignee: Matt Ryan
>Priority: Major
>
> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set





[jira] [Assigned] (OAK-8969) Ignore domain overwrite doesn't work well when presignedHttpDownloadURICacheMaxSize is set

2020-03-24 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan reassigned OAK-8969:
--

Assignee: Matt Ryan

> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set
> --
>
> Key: OAK-8969
> URL: https://issues.apache.org/jira/browse/OAK-8969
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud
>Reporter: Jun Zhang
>Assignee: Matt Ryan
>Priority: Major
>
> Ignore domain overwrite doesn't work well when 
> presignedHttpDownloadURICacheMaxSize is set





[jira] [Commented] (OAK-8936) ValueImpl does not properly set domain override flag of BlobDownloadOptions

2020-03-09 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17055427#comment-17055427
 ] 

Matt Ryan commented on OAK-8936:


Backport merged into 1.22 and committed in revision 1875019.

> ValueImpl does not properly set domain override flag of BlobDownloadOptions
> ---
>
> Key: OAK-8936
> URL: https://issues.apache.org/jira/browse/OAK-8936
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud
>Affects Versions: 1.22.1
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Critical
>  Labels: candidate_oak_1_22
> Fix For: 1.26.0
>
> Attachments: OAK-8936.patch
>
>
> In {{org.apache.jackrabbit.oak.plugins.value.jcr.ValueImpl.getDownloadURI()}} 
> when converting a {{BinaryDownloadOptions}} to a {{BlobDownloadOptions}}, the 
> conversion does not take the domain override flag into account.  This flag 
> must be preserved in the conversion.





[jira] [Updated] (OAK-8936) ValueImpl does not properly set domain override flag of BlobDownloadOptions

2020-03-09 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8936:
---
Fix Version/s: 1.22.2

> ValueImpl does not properly set domain override flag of BlobDownloadOptions
> ---
>
> Key: OAK-8936
> URL: https://issues.apache.org/jira/browse/OAK-8936
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud
>Affects Versions: 1.22.1
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Critical
>  Labels: candidate_oak_1_22
> Fix For: 1.26.0, 1.22.2
>
> Attachments: OAK-8936.patch
>
>
> In {{org.apache.jackrabbit.oak.plugins.value.jcr.ValueImpl.getDownloadURI()}} 
> when converting a {{BinaryDownloadOptions}} to a {{BlobDownloadOptions}}, the 
> conversion does not take the domain override flag into account.  This flag 
> must be preserved in the conversion.





[jira] [Updated] (OAK-8936) ValueImpl does not properly set domain override flag of BlobDownloadOptions

2020-03-05 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8936:
---
Attachment: OAK-8936.patch

> ValueImpl does not properly set domain override flag of BlobDownloadOptions
> ---
>
> Key: OAK-8936
> URL: https://issues.apache.org/jira/browse/OAK-8936
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud
>Affects Versions: 1.22.1
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Critical
>  Labels: candidate_oak_1_22
> Fix For: 1.26.0
>
> Attachments: OAK-8936.patch
>
>
> In {{org.apache.jackrabbit.oak.plugins.value.jcr.ValueImpl.getDownloadURI()}} 
> when converting a {{BinaryDownloadOptions}} to a {{BlobDownloadOptions}}, the 
> conversion does not take the domain override flag into account.  This flag 
> must be preserved in the conversion.





[jira] [Reopened] (OAK-8936) ValueImpl does not properly set domain override flag of BlobDownloadOptions

2020-03-04 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan reopened OAK-8936:


Reopening to do the necessary backports.

> ValueImpl does not properly set domain override flag of BlobDownloadOptions
> ---
>
> Key: OAK-8936
> URL: https://issues.apache.org/jira/browse/OAK-8936
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud
>Affects Versions: 1.22.0, 1.24.0, 1.22.1
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Critical
> Fix For: 1.26.0
>
>
> In {{org.apache.jackrabbit.oak.plugins.value.jcr.ValueImpl.getDownloadURI()}} 
> when converting a {{BinaryDownloadOptions}} to a {{BlobDownloadOptions}}, the 
> conversion does not take the domain override flag into account.  This flag 
> must be preserved in the conversion.





[jira] [Commented] (OAK-8936) ValueImpl does not properly set domain override flag of BlobDownloadOptions

2020-03-04 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051415#comment-17051415
 ] 

Matt Ryan commented on OAK-8936:


I was thinking about that. It probably would be good to backport.

The associated feature was introduced in 1.22.0.  So I assume it needs to be 
backported to 1.24 and 1.22.1.  Does it also need to be backported to 1.22.0?

> ValueImpl does not properly set domain override flag of BlobDownloadOptions
> ---
>
> Key: OAK-8936
> URL: https://issues.apache.org/jira/browse/OAK-8936
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud
>Affects Versions: 1.22.0, 1.24.0, 1.22.1
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Critical
> Fix For: 1.26.0
>
>
> In {{org.apache.jackrabbit.oak.plugins.value.jcr.ValueImpl.getDownloadURI()}} 
> when converting a {{BinaryDownloadOptions}} to a {{BlobDownloadOptions}}, the 
> conversion does not take the domain override flag into account.  This flag 
> must be preserved in the conversion.





[jira] [Updated] (OAK-8936) ValueImpl does not properly set domain override flag of BlobDownloadOptions

2020-03-04 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8936:
---
Affects Version/s: 1.22.0
   1.22.1

> ValueImpl does not properly set domain override flag of BlobDownloadOptions
> ---
>
> Key: OAK-8936
> URL: https://issues.apache.org/jira/browse/OAK-8936
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud
>Affects Versions: 1.22.0, 1.24.0, 1.22.1
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Critical
> Fix For: 1.26.0
>
>
> In {{org.apache.jackrabbit.oak.plugins.value.jcr.ValueImpl.getDownloadURI()}} 
> when converting a {{BinaryDownloadOptions}} to a {{BlobDownloadOptions}}, the 
> conversion does not take the domain override flag into account.  This flag 
> must be preserved in the conversion.





[jira] [Resolved] (OAK-8936) ValueImpl does not properly set domain override flag of BlobDownloadOptions

2020-03-03 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan resolved OAK-8936.

Resolution: Fixed

Fixed in 
[r1874770|https://svn.apache.org/viewvc?view=revision&revision=1874770].

> ValueImpl does not properly set domain override flag of BlobDownloadOptions
> ---
>
> Key: OAK-8936
> URL: https://issues.apache.org/jira/browse/OAK-8936
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Affects Versions: 1.24.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Critical
> Fix For: 1.26.0
>
>
> In {{org.apache.jackrabbit.oak.plugins.value.jcr.ValueImpl.getDownloadURI()}} 
> when converting a {{BinaryDownloadOptions}} to a {{BlobDownloadOptions}}, the 
> conversion does not take the domain override flag into account.  This flag 
> must be preserved in the conversion.





[jira] [Created] (OAK-8936) ValueImpl does not properly set domain override flag of BlobDownloadOptions

2020-03-03 Thread Matt Ryan (Jira)
Matt Ryan created OAK-8936:
--

 Summary: ValueImpl does not properly set domain override flag of 
BlobDownloadOptions
 Key: OAK-8936
 URL: https://issues.apache.org/jira/browse/OAK-8936
 Project: Jackrabbit Oak
  Issue Type: Bug
Affects Versions: 1.24.0
Reporter: Matt Ryan
Assignee: Matt Ryan
 Fix For: 1.26.0


In {{org.apache.jackrabbit.oak.plugins.value.jcr.ValueImpl.getDownloadURI()}} 
when converting a {{BinaryDownloadOptions}} to a {{BlobDownloadOptions}}, the 
conversion does not take the domain override flag into account.  This flag must 
be preserved in the conversion.
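The bug lives in the options conversion, which can be sketched like this; field names are simplified and the real Java classes use builders rather than plain records:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class BinaryDownloadOptions:        # JCR-level options (simplified)
    media_type: Optional[str] = None
    ignore_domain_override: bool = False

@dataclass(frozen=True)
class BlobDownloadOptions:          # blob-store-level options (simplified)
    media_type: Optional[str] = None
    ignore_domain_override: bool = False

def to_blob_options(opts: BinaryDownloadOptions) -> BlobDownloadOptions:
    # The bug: the conversion dropped ignore_domain_override, silently
    # resetting it to False. The fix is simply to carry it across.
    return BlobDownloadOptions(
        media_type=opts.media_type,
        ignore_domain_override=opts.ignore_domain_override,
    )
```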





[jira] [Resolved] (OAK-8863) Oak-doc should cover BinaryUploadOptions usage

2020-01-16 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan resolved OAK-8863.

Resolution: Fixed

Fixed in 
[r1872896|https://svn.apache.org/viewvc?view=revision&revision=1872896].

> Oak-doc should cover BinaryUploadOptions usage
> --
>
> Key: OAK-8863
> URL: https://issues.apache.org/jira/browse/OAK-8863
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: doc
>Affects Versions: 1.22.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Minor
> Fix For: 1.24.0
>
>
> Need to update the online documentation to cover usage of 
> {{BinaryUploadOptions}} for ignoring the domain override when creating signed 
> upload URIs.





[jira] [Created] (OAK-8863) Oak-doc should cover BinaryUploadOptions usage

2020-01-16 Thread Matt Ryan (Jira)
Matt Ryan created OAK-8863:
--

 Summary: Oak-doc should cover BinaryUploadOptions usage
 Key: OAK-8863
 URL: https://issues.apache.org/jira/browse/OAK-8863
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: doc
Affects Versions: 1.22.0
Reporter: Matt Ryan
Assignee: Matt Ryan
 Fix For: 1.24.0


Need to update the online documentation to cover usage of 
{{BinaryUploadOptions}} for ignoring the domain override when creating signed 
upload URIs.





[jira] [Commented] (OAK-8280) [Direct Binary Access] Allow client to veto use of CDN URI

2020-01-09 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012449#comment-17012449
 ] 

Matt Ryan commented on OAK-8280:


Yeah the baseline plugin complained.

> [Direct Binary Access] Allow client to veto use of CDN URI
> --
>
> Key: OAK-8280
> URL: https://issues.apache.org/jira/browse/OAK-8280
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: blob-cloud, blob-cloud-azure
>Affects Versions: 1.20.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.0
>
>
> As we learned in OAK-7702, using signed CDN URIs usually offers improved 
> throughput, but not always.  Implementing this issue would mean that we 
> extend the API to allow a client to indicate that they do not want the CDN 
> URI, even if a CDN is configured, because the client somehow knows that the 
> non-CDN URI will be better.
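The API extension amounts to one extra client-supplied flag consulted when resolving the download domain. A hypothetical sketch, with all names invented:

```python
from typing import Optional

def resolve_download_domain(direct_domain: str,
                            cdn_domain: Optional[str] = None,
                            veto_cdn: bool = False) -> str:
    # Prefer the configured CDN, unless the client vetoes it because it
    # knows the direct endpoint will perform better for its workload.
    if cdn_domain and not veto_cdn:
        return cdn_domain
    return direct_domain
```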





[jira] [Resolved] (OAK-8280) [Direct Binary Access] Allow client to veto use of CDN URI

2020-01-09 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan resolved OAK-8280.

Resolution: Fixed

Fixed in 
[r1872572|https://svn.apache.org/viewvc?view=revision=1872572].

> [Direct Binary Access] Allow client to veto use of CDN URI
> --
>
> Key: OAK-8280
> URL: https://issues.apache.org/jira/browse/OAK-8280
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: blob-cloud, blob-cloud-azure
>Affects Versions: 1.20.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.0
>
>
> As we learned in OAK-7702, using signed CDN URIs usually offers improved 
> throughput, but not always.  Implementing this issue would mean that we 
> extend the API to allow a client to indicate that they do not want the CDN 
> URI, even if a CDN is configured, because the client somehow knows that the 
> non-CDN URI will be better.





[jira] [Updated] (OAK-8280) [Direct Binary Access] Allow client to veto use of CDN URI

2020-01-09 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8280:
---
Fix Version/s: 1.22.0

> [Direct Binary Access] Allow client to veto use of CDN URI
> --
>
> Key: OAK-8280
> URL: https://issues.apache.org/jira/browse/OAK-8280
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: blob-cloud, blob-cloud-azure
>Affects Versions: 1.20.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.0
>
>
> As we learned in OAK-7702, using signed CDN URIs usually offers improved 
> throughput, but not always.  Implementing this issue would mean that we 
> extend the API to allow a client to indicate that they do not want the CDN 
> URI, even if a CDN is configured, because the client somehow knows that the 
> non-CDN URI will be better.





[jira] [Updated] (OAK-8280) [Direct Binary Access] Allow client to veto use of CDN URI

2020-01-09 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8280:
---
Affects Version/s: 1.20.0

> [Direct Binary Access] Allow client to veto use of CDN URI
> --
>
> Key: OAK-8280
> URL: https://issues.apache.org/jira/browse/OAK-8280
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: blob-cloud, blob-cloud-azure
>Affects Versions: 1.20.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> As we learned in OAK-7702, using signed CDN URIs usually offers improved 
> throughput, but not always.  Implementing this issue would mean that we 
> extend the API to allow a client to indicate that they do not want the CDN 
> URI, even if a CDN is configured, because the client somehow knows that the 
> non-CDN URI will be better.





[jira] [Updated] (OAK-8825) S3 tests fail due to non-existent bucket

2020-01-08 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8825:
---
Affects Version/s: 1.20.0

> S3 tests fail due to non-existent bucket
> 
>
> Key: OAK-8825
> URL: https://issues.apache.org/jira/browse/OAK-8825
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, it
>Affects Versions: 1.20.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Minor
> Fix For: 1.22.0
>
>
> Some Oak tests fail from time to time due to a missing S3 bucket.





[jira] [Resolved] (OAK-8825) S3 tests fail due to non-existent bucket

2020-01-08 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan resolved OAK-8825.

Resolution: Fixed

Fixed in 
[r1872524|https://svn.apache.org/viewvc?view=revision&revision=1872524].

> S3 tests fail due to non-existent bucket
> 
>
> Key: OAK-8825
> URL: https://issues.apache.org/jira/browse/OAK-8825
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, it
>Affects Versions: 1.20.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Minor
> Fix For: 1.22.0
>
>
> Some Oak tests fail from time to time due to a missing S3 bucket.





[jira] [Updated] (OAK-8825) S3 tests fail due to non-existent bucket

2020-01-08 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8825:
---
Fix Version/s: 1.22.0

> S3 tests fail due to non-existent bucket
> 
>
> Key: OAK-8825
> URL: https://issues.apache.org/jira/browse/OAK-8825
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, it
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Minor
> Fix For: 1.22.0
>
>
> Some Oak tests fail from time to time due to a missing S3 bucket.





[jira] [Commented] (OAK-8825) S3 tests fail due to non-existent bucket

2020-01-08 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010883#comment-17010883
 ] 

Matt Ryan commented on OAK-8825:


For now we're just going to:
* Add a bit of retry logic in the tests using newly created buckets to give S3 
time to actually create the bucket
* Catch the fact that the bucket creation failed, when it does, so we can exit 
the test a bit more gracefully
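
The retry approach described above can be sketched generically. This is an illustrative helper, not Oak's actual test code: the bucket-existence check is represented by any {{Callable<Boolean>}} (for example, a wrapper around the S3 client's bucket lookup), and the names and backoff values are assumptions.

```java
import java.util.concurrent.Callable;

// Hedged sketch: retry a check with exponential backoff, e.g. to give S3
// time to actually create a newly requested bucket before a test uses it.
public class RetryUntilTrue {

    public static boolean retry(Callable<Boolean> check, int maxAttempts, long initialDelayMs)
            throws Exception {
        long delay = initialDelayMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (check.call()) {
                return true;             // e.g. the bucket now exists
            }
            if (attempt < maxAttempts) {
                Thread.sleep(delay);     // wait before checking again
                delay *= 2;              // exponential backoff
            }
        }
        return false;                    // caller can skip or exit the test gracefully
    }

    public static void main(String[] args) throws Exception {
        // Simulated flaky check that only succeeds on the third attempt
        int[] calls = {0};
        boolean ok = retry(() -> ++calls[0] >= 3, 5, 1);
        System.out.println(ok + " after " + calls[0] + " attempts");
    }
}
```

A test would call {{retry}} with a check against the newly created bucket and skip or fail gracefully when it returns {{false}}, matching the second bullet above.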

> S3 tests fail due to non-existent bucket
> 
>
> Key: OAK-8825
> URL: https://issues.apache.org/jira/browse/OAK-8825
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, it
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Minor
>
> Some Oak tests fail from time to time due to a missing S3 bucket.





[jira] [Assigned] (OAK-8280) [Direct Binary Access] Allow client to veto use of CDN URI

2020-01-08 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan reassigned OAK-8280:
--

Assignee: Matt Ryan

> [Direct Binary Access] Allow client to veto use of CDN URI
> --
>
> Key: OAK-8280
> URL: https://issues.apache.org/jira/browse/OAK-8280
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: blob-cloud, blob-cloud-azure
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> As we learned in OAK-7702, using signed CDN URIs usually offers improved 
> throughput, but not always.  Implementing this issue would mean that we 
> extend the API to allow a client to indicate that they do not want the CDN 
> URI, even if a CDN is configured, because the client somehow knows that the 
> non-CDN URI will be better.





[jira] [Updated] (OAK-8825) S3 tests fail due to non-existent bucket

2020-01-08 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8825:
---
Description: Some Oak tests fail from time to time due to a missing S3 
bucket.  (was: Some Oak tests fail from time to time due to a missing S3 
bucket.  Since these tests do not fail consistently, I suspect the issue is 
just a race condition.)

> S3 tests fail due to non-existent bucket
> 
>
> Key: OAK-8825
> URL: https://issues.apache.org/jira/browse/OAK-8825
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, it
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Minor
>
> Some Oak tests fail from time to time due to a missing S3 bucket.





[jira] [Updated] (OAK-8607) Undo workarounds for improper Content-Disposition support

2019-12-11 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8607:
---
Fix Version/s: 1.22.0

> Undo workarounds for improper Content-Disposition support
> -
>
> Key: OAK-8607
> URL: https://issues.apache.org/jira/browse/OAK-8607
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob-cloud, blob-cloud-azure, blob-plugins, jcr
>Affects Versions: 1.10.0, 1.12.0, 1.14.0, 1.16.0, 1.18.0, 1.20.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.0
>
>
> In order to move forward with a functioning signed download implementation 
> (particularly on {{AzureDataStore}}) workarounds were implemented in OAK-8013 
> and OAK-8601.  Once Content-Disposition is properly supported again for 
> signed downloads, these workarounds need to be removed and retested.





[jira] [Updated] (OAK-8104) [Direct Binary Access] DataRecordDownloadOptions should properly support filename* for Content-Disposition

2019-12-11 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8104:
---
Affects Version/s: 1.10.0
   1.12.0
   1.14.0
   1.16.0
   1.18.0
   1.20.0

> [Direct Binary Access] DataRecordDownloadOptions should properly support 
> filename* for Content-Disposition
> --
>
> Key: OAK-8104
> URL: https://issues.apache.org/jira/browse/OAK-8104
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.10.0, 1.12.0, 1.14.0, 1.16.0, 1.18.0, 1.20.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> OAK-8013 implemented a workaround for an issue in direct binary access where 
> the "filename*" portion of the specified Content-Disposition header was being 
> improperly set, and could not be fixed due to an issue with the Azure SDK.
> This issue is created to properly fix the "filename*" portion for both AWS 
> and Azure so it works as it should.





[jira] [Updated] (OAK-8104) [Direct Binary Access] DataRecordDownloadOptions should properly support filename* for Content-Disposition

2019-12-11 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8104:
---
Fix Version/s: 1.22.0

> [Direct Binary Access] DataRecordDownloadOptions should properly support 
> filename* for Content-Disposition
> --
>
> Key: OAK-8104
> URL: https://issues.apache.org/jira/browse/OAK-8104
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.10.0, 1.12.0, 1.14.0, 1.16.0, 1.18.0, 1.20.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.0
>
>
> OAK-8013 implemented a workaround for an issue in direct binary access where 
> the "filename*" portion of the specified Content-Disposition header was being 
> improperly set, and could not be fixed due to an issue with the Azure SDK.
> This issue is created to properly fix the "filename*" portion for both AWS 
> and Azure so it works as it should.





[jira] [Updated] (OAK-8607) Undo workarounds for improper Content-Disposition support

2019-12-11 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8607:
---
Affects Version/s: 1.10.0
   1.12.0
   1.14.0
   1.16.0
   1.18.0
   1.20.0

> Undo workarounds for improper Content-Disposition support
> -
>
> Key: OAK-8607
> URL: https://issues.apache.org/jira/browse/OAK-8607
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob-cloud, blob-cloud-azure, blob-plugins, jcr
>Affects Versions: 1.10.0, 1.12.0, 1.14.0, 1.16.0, 1.18.0, 1.20.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> In order to move forward with a functioning signed download implementation 
> (particularly on {{AzureDataStore}}) workarounds were implemented in OAK-8013 
> and OAK-8601.  Once Content-Disposition is properly supported again for 
> signed downloads, these workarounds need to be removed and retested.





[jira] [Resolved] (OAK-8104) [Direct Binary Access] DataRecordDownloadOptions should properly support filename* for Content-Disposition

2019-12-11 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan resolved OAK-8104.

Resolution: Fixed

Fixed in 
[r1871195|https://svn.apache.org/viewvc?view=revision&revision=1871195].

> [Direct Binary Access] DataRecordDownloadOptions should properly support 
> filename* for Content-Disposition
> --
>
> Key: OAK-8104
> URL: https://issues.apache.org/jira/browse/OAK-8104
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> OAK-8013 implemented a workaround for an issue in direct binary access where 
> the "filename*" portion of the specified Content-Disposition header was being 
> improperly set, and could not be fixed due to an issue with the Azure SDK.
> This issue is created to properly fix the "filename*" portion for both AWS 
> and Azure so it works as it should.





[jira] [Updated] (OAK-8105) [Direct Binary Access] Update to latest Azure Blob Storage SDK

2019-12-11 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8105:
---
Parent: (was: OAK-8104)
Issue Type: Task  (was: Technical task)

> [Direct Binary Access] Update to latest Azure Blob Storage SDK
> --
>
> Key: OAK-8105
> URL: https://issues.apache.org/jira/browse/OAK-8105
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: blob-cloud-azure
>Affects Versions: 1.10.0, 1.12.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> Since the implementation of the original AzureDataStore, the [original Azure 
> SDK for Java|https://github.com/Azure/azure-sdk-for-java] has been 
> [deprecated|https://github.com/Azure/azure-sdk-for-java/blob/master/README.md]
>  and replaced with a [newer SDK|https://github.com/Azure/azure-storage-java] 
> that uses a different API.
> The issue raised in OAK-8013 could not be properly fixed due to a bug in the 
> original SDK.  OAK-8104 was created for the purpose of ensuring this gets 
> fixed properly instead of settling for the workaround implemented with 
> OAK-8013.  The latest Azure SDK does not exhibit the bug that was in the 
> original SDK, so the proper way to fix OAK-8104 is to update AzureDataStore 
> to use the latest Azure SDK.





[jira] [Resolved] (OAK-8607) Undo workarounds for improper Content-Disposition support

2019-12-11 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan resolved OAK-8607.

Resolution: Fixed

Fixed in 
[r1871195|https://svn.apache.org/viewvc?view=revision&revision=1871195].

> Undo workarounds for improper Content-Disposition support
> -
>
> Key: OAK-8607
> URL: https://issues.apache.org/jira/browse/OAK-8607
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob-cloud, blob-cloud-azure, blob-plugins, jcr
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> In order to move forward with a functioning signed download implementation 
> (particularly on {{AzureDataStore}}) workarounds were implemented in OAK-8013 
> and OAK-8601.  Once Content-Disposition is properly supported again for 
> signed downloads, these workarounds need to be removed and retested.





[jira] [Created] (OAK-8825) S3 tests fail due to non-existent bucket

2019-12-09 Thread Matt Ryan (Jira)
Matt Ryan created OAK-8825:
--

 Summary: S3 tests fail due to non-existent bucket
 Key: OAK-8825
 URL: https://issues.apache.org/jira/browse/OAK-8825
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: blob-cloud, it
Reporter: Matt Ryan
Assignee: Matt Ryan


Some Oak tests fail from time to time due to a missing S3 bucket.  Since these 
tests do not fail consistently, I suspect the issue is just a race condition.





[jira] [Commented] (OAK-8105) [Direct Binary Access] Update to latest Azure Blob Storage SDK

2019-12-09 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991808#comment-16991808
 ] 

Matt Ryan commented on OAK-8105:


Microsoft has released a new version of the v8 SDK which fixes the 
Content-Disposition header issue, so there is no longer any urgency regarding 
this update.
We should still do this update of course, when it makes sense to do so.

> [Direct Binary Access] Update to latest Azure Blob Storage SDK
> --
>
> Key: OAK-8105
> URL: https://issues.apache.org/jira/browse/OAK-8105
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob-cloud-azure
>Affects Versions: 1.10.0, 1.12.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> Since the implementation of the original AzureDataStore, the [original Azure 
> SDK for Java|https://github.com/Azure/azure-sdk-for-java] has been 
> [deprecated|https://github.com/Azure/azure-sdk-for-java/blob/master/README.md]
>  and replaced with a [newer SDK|https://github.com/Azure/azure-storage-java] 
> that uses a different API.
> The issue raised in OAK-8013 could not be properly fixed due to a bug in the 
> original SDK.  OAK-8104 was created for the purpose of ensuring this gets 
> fixed properly instead of settling for the workaround implemented with 
> OAK-8013.  The latest Azure SDK does not exhibit the bug that was in the 
> original SDK, so the proper way to fix OAK-8104 is to update AzureDataStore 
> to use the latest Azure SDK.





[jira] [Commented] (OAK-8104) [Direct Binary Access] DataRecordDownloadOptions should properly support filename* for Content-Disposition

2019-12-09 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991804#comment-16991804
 ] 

Matt Ryan commented on OAK-8104:


Microsoft updated their version 8 SDK with a fix for this issue so I'm working 
on updating Oak to this latest version which should fix this.  OAK-8607 will 
also be completed as a part of that change.

> [Direct Binary Access] DataRecordDownloadOptions should properly support 
> filename* for Content-Disposition
> --
>
> Key: OAK-8104
> URL: https://issues.apache.org/jira/browse/OAK-8104
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> OAK-8013 implemented a workaround for an issue in direct binary access where 
> the "filename*" portion of the specified Content-Disposition header was being 
> improperly set, and could not be fixed due to an issue with the Azure SDK.
> This issue is created to properly fix the "filename*" portion for both AWS 
> and Azure so it works as it should.





[jira] [Commented] (OAK-8607) Undo workarounds for improper Content-Disposition support

2019-12-09 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991803#comment-16991803
 ] 

Matt Ryan commented on OAK-8607:


Microsoft updated their version 8 SDK with a fix for OAK-8104 so I'm working on 
updating Oak to this latest version which should fix OAK-8104 and this issue.

> Undo workarounds for improper Content-Disposition support
> -
>
> Key: OAK-8607
> URL: https://issues.apache.org/jira/browse/OAK-8607
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob-cloud, blob-cloud-azure, blob-plugins, jcr
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> In order to move forward with a functioning signed download implementation 
> (particularly on {{AzureDataStore}}) workarounds were implemented in OAK-8013 
> and OAK-8601.  Once Content-Disposition is properly supported again for 
> signed downloads, these workarounds need to be removed and retested.





[jira] [Resolved] (OAK-8818) oak-run test failures when running with -Dazure.config defined

2019-12-05 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan resolved OAK-8818.

Fix Version/s: 1.22.0
   Resolution: Fixed

Fixed in 
[r1870903|https://svn.apache.org/viewvc?view=revision&revision=1870903].

> oak-run test failures when running with -Dazure.config defined
> --
>
> Key: OAK-8818
> URL: https://issues.apache.org/jira/browse/OAK-8818
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: oak-run
>Affects Versions: 1.20.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.0
>
>
> When running tests in {{oak-run}}, there are test failures in 
> {{DataStoreCommandTest}} when specifying {{-Dazure.config}}.  The failures 
> indicate an invalid connection string.





[jira] [Created] (OAK-8818) oak-run test failures when running with -Dazure.config defined

2019-12-05 Thread Matt Ryan (Jira)
Matt Ryan created OAK-8818:
--

 Summary: oak-run test failures when running with -Dazure.config 
defined
 Key: OAK-8818
 URL: https://issues.apache.org/jira/browse/OAK-8818
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: oak-run
Affects Versions: 1.20.0
Reporter: Matt Ryan
Assignee: Matt Ryan


When running tests in {{oak-run}}, there are test failures in 
{{DataStoreCommandTest}} when specifying {{-Dazure.config}}.  The failures 
indicate an invalid connection string.





[jira] [Commented] (OAK-8280) [Direct Binary Access] Allow client to veto use of CDN URI

2019-12-03 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16987512#comment-16987512
 ] 

Matt Ryan commented on OAK-8280:


The proposed changes can be reviewed here:  
https://github.com/apache/jackrabbit-oak/pull/162

This adds the ability for a client to request that the domain override be 
ignored when generating direct-access upload or download URIs.  The use case 
(mentioned as a need before) is that a client may know that certain requested 
URIs will yield faster throughput if the domain override is not used.  An 
example is a third-party VM running as a service in the same cloud region as 
the blob storage; in my testing (at least with Azure) such a client gets many 
times faster access going direct to storage via the default domain than via a 
CDN domain (which would be faster for clients outside the Azure region).  With 
this change the client can request signed URIs that do not apply the domain 
override, if one is configured.
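
The selection logic this comment describes might be sketched as follows. This is an illustrative sketch only; the method and parameter names ({{resolveDomain}}, {{ignoreOverride}}) are assumptions for clarity, not the actual API added in the PR.

```java
// Hedged sketch: pick the domain used to build a signed URI. The configured
// override (e.g. a CDN domain) is applied unless the client asked to ignore it.
public class DomainSelection {

    static String resolveDomain(String defaultDomain, String overrideDomain, boolean ignoreOverride) {
        if (overrideDomain != null && !ignoreOverride) {
            return overrideDomain;   // CDN or other configured override domain
        }
        return defaultDomain;        // direct-to-storage domain
    }

    public static void main(String[] args) {
        // A client in the same region as storage opts out of the CDN override
        System.out.println(resolveDomain("myaccount.blob.core.windows.net",
                "cdn.example.com", true));
    }
}
```

A client co-located with the storage account would pass the ignore flag to get the default storage domain; all other clients would leave it unset and receive the CDN domain when one is configured.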

> [Direct Binary Access] Allow client to veto use of CDN URI
> --
>
> Key: OAK-8280
> URL: https://issues.apache.org/jira/browse/OAK-8280
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: blob-cloud, blob-cloud-azure
>Reporter: Matt Ryan
>Priority: Major
>
> As we learned in OAK-7702, using signed CDN URIs usually offers improved 
> throughput, but not always.  Implementing this issue would mean that we 
> extend the API to allow a client to indicate that they do not want the CDN 
> URI, even if a CDN is configured, because the client somehow knows that the 
> non-CDN URI will be better.





[jira] [Resolved] (OAK-8696) [Direct Binary Access] upload algorithm documentation should also be on the website

2019-12-03 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan resolved OAK-8696.

Fix Version/s: 1.22.0
   Resolution: Fixed

Fixed in 
[r1870776|https://svn.apache.org/viewvc?view=revision&revision=1870776].

> [Direct Binary Access] upload algorithm documentation should also be on the 
> website
> ---
>
> Key: OAK-8696
> URL: https://issues.apache.org/jira/browse/OAK-8696
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: doc
>Reporter: Alexander Klimetschek
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.0
>
>
> The upload algorithm description that is currently a bit hidden in the 
> javadocs of 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-jackrabbit-api/src/main/java/org/apache/jackrabbit/api/binary/BinaryUpload.java]
> should also be included on the documentation site: 
> [https://jackrabbit.apache.org/oak/docs/features/direct-binary-access.html]





[jira] [Commented] (OAK-8815) Javadoc build fails if using Java 11

2019-12-03 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16987371#comment-16987371
 ] 

Matt Ryan commented on OAK-8815:


Also has the same effect when using {{mvn site-deploy}}.

> Javadoc build fails if using Java 11
> 
>
> Key: OAK-8815
> URL: https://issues.apache.org/jira/browse/OAK-8815
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: doc
>Affects Versions: 1.20.0
>Reporter: Matt Ryan
>Priority: Major
> Fix For: 1.22.0
>
>
> Trying to build the Javadocs when using Java 11 fails. If you specify Java 8 
> when building the Javadocs, the build succeeds.
> Command I'm using to build the Javadocs:  {{mvn site -Pjavadoc}} (as 
> described in the {{oak-doc}} readme).
> I will include more information on the errors in comments.





[jira] [Resolved] (OAK-8695) [Direct Binary Access] upload algorithm documentation should make it clear that not all URIs have to be used

2019-12-03 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan resolved OAK-8695.

Fix Version/s: 1.22.0
   Resolution: Fixed

Fixed in 
[r1870774|https://svn.apache.org/viewvc?view=revision&revision=1870774].

> [Direct Binary Access] upload algorithm documentation should make it clear 
> that not all URIs have to be used
> 
>
> Key: OAK-8695
> URL: https://issues.apache.org/jira/browse/OAK-8695
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: api, doc
>Reporter: Alexander Klimetschek
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.22.0
>
>
> Regarding [BinaryUpload 
> javadoc|http://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/api/binary/BinaryUpload.html]
>  ([code 
> here|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-jackrabbit-api/src/main/java/org/apache/jackrabbit/api/binary/BinaryUpload.java]):
> In the "Steps" section for #3, it should explicitly state that not all the 
> URIs need to be used. It could be inferred based on what is there, but it's 
> not obvious.





[jira] [Commented] (OAK-8815) Javadoc build fails if using Java 11

2019-12-03 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16987136#comment-16987136
 ] 

Matt Ryan commented on OAK-8815:


On an updated SVN (r1870758), I ran {{mvn clean install -DskipTests}} to do a 
full build of all of Oak, and then ran {{mvn clean install -Pdoc -pl 
:oak-doc-railroad-macro}}, {{mvn clean -Pdoc}}, and {{mvn site -Pdoc}} in 
sequence as described in the {{oak-doc}} README.md.  All of these worked 
successfully.

I then ran {{mvn site -Pjavadoc}}.  I got error output like the following:
{noformat}
[INFO] Jackrabbit Oak . FAILURE [01:48 min]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time:  02:01 min
[INFO] Finished at: 2019-12-03T09:44:30-07:00
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.1.1:aggregate (aggregate) on 
project jackrabbit-oak: An error has occurred in Javadoc report generation:
[ERROR] Exit code: 1 - 
/Users/maryan/svn/oak/trunk/oak-security-spi/src/main/java/org/apache/jackrabbit/oak/spi/security/principal/GroupPrincipals.java:20:
 warning: [removal] Group in java.security.acl has been deprecated and marked 
for removal
[ERROR] import java.security.acl.Group;
[ERROR] ^
[ERROR] 
/Users/maryan/svn/oak/trunk/oak-security-spi/src/main/java/org/apache/jackrabbit/oak/spi/security/principal/PrincipalProvider.java:20:
 warning: [removal] Group in java.security.acl has been deprecated and marked 
for removal
[ERROR] import java.security.acl.Group;
[ERROR] ^
[ERROR] 
/Users/maryan/svn/oak/trunk/oak-security-spi/src/main/java/org/apache/jackrabbit/oak/spi/security/principal/GroupPrincipalWrapper.java:20:
 warning: [removal] Group in java.security.acl has been deprecated and marked 
for removal
[ERROR] import java.security.acl.Group;
[ERROR] ^
[ERROR] 
/Users/maryan/svn/oak/trunk/oak-security-spi/src/main/java/org/apache/jackrabbit/oak/spi/security/principal/EmptyPrincipalProvider.java:20:
 warning: [removal] Group in java.security.acl has been deprecated and marked 
for removal
[ERROR] import java.security.acl.Group;
[ERROR] ^
[ERROR] 
/Users/maryan/svn/oak/trunk/oak-security-spi/src/main/java/org/apache/jackrabbit/oak/spi/security/principal/EveryonePrincipal.java:20:
 warning: [removal] Group in java.security.acl has been deprecated and marked 
for removal
[ERROR] import java.security.acl.Group;
[ERROR] ^
[ERROR] 
/Users/maryan/svn/oak/trunk/oak-security-spi/src/main/java/org/apache/jackrabbit/oak/spi/security/principal/CompositePrincipalProvider.java:20:
 warning: [removal] Group in java.security.acl has been deprecated and marked 
for removal
[ERROR] import java.security.acl.Group;
[ERROR] ^
[ERROR] 
/Users/maryan/svn/oak/trunk/oak-upgrade/src/main/java/org/apache/jackrabbit/oak/upgrade/RepositoryUpgrade.java:144:
 error: cannot find symbol
[ERROR] import org.apache.lucene.index.TermDocs;
[ERROR]   ^
[ERROR]   symbol:   class TermDocs
[ERROR]   location: package org.apache.lucene.index
[ERROR] 
/Users/maryan/svn/oak/trunk/oak-upgrade/src/main/java/org/apache/jackrabbit/oak/upgrade/RepositoryUpgrade.java:145:
 error: cannot find symbol
[ERROR] import org.apache.lucene.index.TermEnum;
[ERROR]   ^
[ERROR]   symbol:   class TermEnum
[ERROR]   location: package org.apache.lucene.index
[ERROR] 
/Users/maryan/svn/oak/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/FieldFactory.java:36:
 error: cannot find symbol
[ERROR] import static 
org.apache.lucene.index.FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS;
[ERROR]^
[ERROR]   symbol:   class IndexOptions
[ERROR]   location: class FieldInfo
[ERROR] 
/Users/maryan/svn/oak/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/FieldFactory.java:36:
 error: static import only from classes and interfaces
[ERROR] import static 
org.apache.lucene.index.FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS;
[ERROR] ^
[ERROR] 
/Users/maryan/svn/oak/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/FieldFactory.java:37:
 error: cannot find symbol
[ERROR] import static 
org.apache.lucene.index.FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS;
[ERROR]^
[ERROR]   symbol:   class IndexOptions
[ERROR]   location: class FieldInfo
[ERROR] 
/Users/maryan/svn/oak/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/FieldFactory.java:37:
 error: 

[jira] [Created] (OAK-8815) Javadoc build fails if using Java 11

2019-12-03 Thread Matt Ryan (Jira)
Matt Ryan created OAK-8815:
--

 Summary: Javadoc build fails if using Java 11
 Key: OAK-8815
 URL: https://issues.apache.org/jira/browse/OAK-8815
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: doc
Affects Versions: 1.20.0
Reporter: Matt Ryan
 Fix For: 1.22.0


Building the Javadocs fails when using Java 11; the same build succeeds when Java 8 
is specified instead.

Command I'm using to build the Javadocs:  {{mvn site -Pjavadoc}} (as described 
in the {{oak-doc}} readme).

I will include more information on the errors in comments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (OAK-6632) [upgrade] oak-upgrade should support azure blobstorage

2019-12-02 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan reassigned OAK-6632:
--

Assignee: (was: Matt Ryan)

> [upgrade] oak-upgrade should support azure blobstorage
> --
>
> Key: OAK-6632
> URL: https://issues.apache.org/jira/browse/OAK-6632
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Raul Hudea
>Priority: Major
>  Labels: azureblob
>
> oak-upgrade should support the AzureDataStore in addition to the S3 data store





[jira] [Assigned] (OAK-6632) [upgrade] oak-upgrade should support azure blobstorage

2019-12-02 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan reassigned OAK-6632:
--

Assignee: Matt Ryan

> [upgrade] oak-upgrade should support azure blobstorage
> --
>
> Key: OAK-6632
> URL: https://issues.apache.org/jira/browse/OAK-6632
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Raul Hudea
>Assignee: Matt Ryan
>Priority: Major
>  Labels: azureblob
>
> oak-upgrade should support the AzureDataStore in addition to the S3 data store





[jira] [Commented] (OAK-8695) [Direct Binary Access] upload algorithm documentation should make it clear that not all URIs have to be used

2019-11-26 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983067#comment-16983067
 ] 

Matt Ryan commented on OAK-8695:


A pull request for review is here:  
[https://github.com/apache/jackrabbit-oak/pull/164]

/cc [~alexander.klimetschek]

> [Direct Binary Access] upload algorithm documentation should make it clear 
> that not all URIs have to be used
> 
>
> Key: OAK-8695
> URL: https://issues.apache.org/jira/browse/OAK-8695
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: api, doc
>Reporter: Alexander Klimetschek
>Assignee: Matt Ryan
>Priority: Major
>
> Regarding [BinaryUpload 
> javadoc|http://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/api/binary/BinaryUpload.html]
>  ([code 
> here|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-jackrabbit-api/src/main/java/org/apache/jackrabbit/api/binary/BinaryUpload.java]):
> In the "Steps" section for #3, it should explicitly state that not all the 
> URIs need to be used. It could be inferred based on what is there, but it's 
> not obvious.





[jira] [Comment Edited] (OAK-8696) [Direct Binary Access] upload algorithm documentation should also be on the website

2019-11-26 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983063#comment-16983063
 ] 

Matt Ryan edited comment on OAK-8696 at 11/27/19 1:19 AM:
--

[~alexander.klimetschek] I would prefer to keep the algorithm in one place so 
we don't have multiple copies to update if it ever changes. Is it acceptable to 
expose the link a bit more in the section where we talk about the second step 
of direct upload?

For example, the documentation for that part currently says:
{noformat}
2. **Upload:** The remote client performs the actual binary upload directly to
the binary storage provider. The BinaryUpload returned from the previous call
to `initiateBinaryUpload(long, int)` contains detailed instructions on how to
complete the upload successfully. For more information, see the `BinaryUpload`
documentation. {noformat}
We could change it to something like:
{noformat}
2. **Upload:** The remote client performs the actual binary upload directly to
the binary storage provider. The BinaryUpload returned from the previous call
to `initiateBinaryUpload(long, int)` contains detailed instructions on how to
complete the upload successfully. For more information, see the
[`BinaryUpload` 
documentation](http://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/api/binary/BinaryUpload.html).
 {noformat}
WDYT?


was (Author: mattvryan):
[~alexander.klimetschek] I would prefer to keep the algorithm in one place so 
we don't have multiple copies to update if it ever changes. Is it acceptable to 
expose the link a bit more in the section where we talk about the second step 
of direct upload?

For example, the documentation for that part currently says:
{noformat}
2. **Upload:** The remote client performs the actual binary upload directly to 
the binary storage provider. The BinaryUpload returned from the previous call 
to `initiateBinaryUpload(long, int)` contains detailed instructions on how to 
complete the upload successfully. For more information, see the `BinaryUpload` 
documentation. {noformat}
We could change it to something like:
{noformat}
2. **Upload:** The remote client performs the actual binary upload directly to 
the binary storage provider. The BinaryUpload returned from the previous call 
to `initiateBinaryUpload(long, int)` contains detailed instructions on how to 
complete the upload successfully. For more information, see the [`BinaryUpload` 
documentation](http://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/api/binary/BinaryUpload.html).
 {noformat}
WDYT?

> [Direct Binary Access] upload algorithm documentation should also be on the 
> website
> ---
>
> Key: OAK-8696
> URL: https://issues.apache.org/jira/browse/OAK-8696
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: doc
>Reporter: Alexander Klimetschek
>Assignee: Matt Ryan
>Priority: Major
>
> The upload algorithm description that is currently a bit hidden in the 
> javadocs of 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-jackrabbit-api/src/main/java/org/apache/jackrabbit/api/binary/BinaryUpload.java]
> should also be included on the documentation site: 
> [https://jackrabbit.apache.org/oak/docs/features/direct-binary-access.html]





[jira] [Commented] (OAK-8696) [Direct Binary Access] upload algorithm documentation should also be on the website

2019-11-26 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983063#comment-16983063
 ] 

Matt Ryan commented on OAK-8696:


[~alexander.klimetschek] I would prefer to keep the algorithm in one place so 
we don't have multiple copies to update if it ever changes. Is it acceptable to 
expose the link a bit more in the section where we talk about the second step 
of direct upload?

For example, the documentation for that part currently says:
{noformat}
2. **Upload:** The remote client performs the actual binary upload directly to 
the binary storage provider. The BinaryUpload returned from the previous call 
to `initiateBinaryUpload(long, int)` contains detailed instructions on how to 
complete the upload successfully. For more information, see the `BinaryUpload` 
documentation. {noformat}
We could change it to something like:
{noformat}
2. **Upload:** The remote client performs the actual binary upload directly to 
the binary storage provider. The BinaryUpload returned from the previous call 
to `initiateBinaryUpload(long, int)` contains detailed instructions on how to 
complete the upload successfully. For more information, see the [`BinaryUpload` 
documentation](http://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/api/binary/BinaryUpload.html).
 {noformat}
WDYT?

> [Direct Binary Access] upload algorithm documentation should also be on the 
> website
> ---
>
> Key: OAK-8696
> URL: https://issues.apache.org/jira/browse/OAK-8696
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: doc
>Reporter: Alexander Klimetschek
>Assignee: Matt Ryan
>Priority: Major
>
> The upload algorithm description that is currently a bit hidden in the 
> javadocs of 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-jackrabbit-api/src/main/java/org/apache/jackrabbit/api/binary/BinaryUpload.java]
> should also be included on the documentation site: 
> [https://jackrabbit.apache.org/oak/docs/features/direct-binary-access.html]





[jira] [Assigned] (OAK-8794) oak-solr-osgi does not build for Java 8 if Jackson libraries upgraded to 2.10.0

2019-11-25 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan reassigned OAK-8794:
--

Assignee: Matt Ryan

> oak-solr-osgi does not build for Java 8 if Jackson libraries upgraded to 
> 2.10.0
> ---
>
> Key: OAK-8794
> URL: https://issues.apache.org/jira/browse/OAK-8794
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: solr
>Affects Versions: 1.20.0
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> If the Jackson version in {{oak-parent/pom.xml}} is updated from 2.9.10 to 
> 2.10.0, we get a build failure in {{oak-solr-osgi}} if we try to build with 
> Java 8.
> This is blocking OAK-8105 which in turn is blocking OAK-8607 and OAK-8104.  
> OAK-8105 is about updating {{AzureDataStore}} to the Azure version 12 SDK 
> which requires Jackson 2.10.0.
> Would it be possible to update {{oak-parent/pom.xml}} to Jackson version 
> 2.10.0 and then specify 2.9.10 in {{oak-solr-osgi}}?





[jira] [Assigned] (OAK-8794) oak-solr-osgi does not build for Java 8 if Jackson libraries upgraded to 2.10.0

2019-11-25 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan reassigned OAK-8794:
--

Assignee: Thomas Mueller  (was: Matt Ryan)

> oak-solr-osgi does not build for Java 8 if Jackson libraries upgraded to 
> 2.10.0
> ---
>
> Key: OAK-8794
> URL: https://issues.apache.org/jira/browse/OAK-8794
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: solr
>Affects Versions: 1.20.0
>Reporter: Matt Ryan
>Assignee: Thomas Mueller
>Priority: Major
>
> If the Jackson version in {{oak-parent/pom.xml}} is updated from 2.9.10 to 
> 2.10.0, we get a build failure in {{oak-solr-osgi}} if we try to build with 
> Java 8.
> This is blocking OAK-8105 which in turn is blocking OAK-8607 and OAK-8104.  
> OAK-8105 is about updating {{AzureDataStore}} to the Azure version 12 SDK 
> which requires Jackson 2.10.0.
> Would it be possible to update {{oak-parent/pom.xml}} to Jackson version 
> 2.10.0 and then specify 2.9.10 in {{oak-solr-osgi}}?





[jira] [Commented] (OAK-8794) oak-solr-osgi does not build for Java 8 if Jackson libraries upgraded to 2.10.0

2019-11-25 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16981826#comment-16981826
 ] 

Matt Ryan commented on OAK-8794:


The build failure:
{noformat}
[WARNING] Manifest org.apache.jackrabbit:oak-solr-osgi:bundle:1.21-SNAPSHOT : 
Unused Import-Package instructions: [com.googlecode.*, com.sun.*, 
org.apache.regexp.*, org.apache.calcite.ling4j.*]
[ERROR] Manifest org.apache.jackrabbit:oak-solr-osgi:bundle:1.21-SNAPSHOT : Got 
unexpected exception while 
analyzing:org.apache.felix.scrplugin.SCRDescriptorException: Unable to load 
compiled class: module-info
at 
org.apache.felix.scrplugin.helper.ClassScanner.scanSources(ClassScanner.java:156)
at 
org.apache.felix.scrplugin.SCRDescriptorGenerator.execute(SCRDescriptorGenerator.java:146)
at 
org.apache.felix.scrplugin.bnd.SCRDescriptorBndPlugin.analyzeJar(SCRDescriptorBndPlugin.java:178)
at aQute.bnd.osgi.Analyzer.doPlugins(Analyzer.java:664)
at aQute.bnd.osgi.Analyzer.analyze(Analyzer.java:216)
at aQute.bnd.osgi.Builder.analyze(Builder.java:387)
at aQute.bnd.osgi.Analyzer.calcManifest(Analyzer.java:694)
at aQute.bnd.osgi.Builder.build(Builder.java:108)
at 
org.apache.felix.bundleplugin.ManifestPlugin.getAnalyzer(ManifestPlugin.java:291)
at 
org.apache.felix.bundleplugin.ManifestPlugin.execute(ManifestPlugin.java:98)
at 
org.apache.felix.bundleplugin.BundlePlugin.execute(BundlePlugin.java:384)
at 
org.apache.felix.bundleplugin.BundlePlugin.execute(BundlePlugin.java:375)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:955)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:290)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: java.lang.UnsupportedClassVersionError: module-info has been 
compiled by a more recent version of the Java Runtime (class file version 
53.0), this version of the Java Runtime only recognizes class file versions up 
to 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at 
org.apache.felix.scrplugin.helper.ClassScanner.scanSources(ClassScanner.java:144)
... 33 more
[ERROR] Error(s) found in manifest configuration
{noformat}

/cc [~reschke], who provided the build error output
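The root cause in the trace above is the JVM's class-file version check: a 
{{module-info}} compiled by a Java 9+ compiler carries class-file major version 53, 
while a Java 8 runtime accepts versions only up to 52. That check can be sketched in 
plain Java; the header bytes below are illustrative, not taken from the actual 
dependency jar:

```java
public class ClassVersionCheck {

    // Return the class-file major version from the header bytes.
    // Layout (big-endian): magic (4 bytes), minor_version (2), major_version (2).
    static int majorVersion(byte[] classFile) {
        return ((classFile[6] & 0xFF) << 8) | (classFile[7] & 0xFF);
    }

    public static void main(String[] args) {
        // Hypothetical header of a module-info compiled by javac 9: major version 53.
        byte[] header = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE,
                         0, 0,    // minor_version
                         0, 53};  // major_version
        int major = majorVersion(header);
        // Major version 52 corresponds to Java 8, 53 to Java 9, and so on (major - 44).
        System.out.println("major version " + major
                + (major > 52 ? " (needs Java " + (major - 44) + "+)" : " (ok on Java 8)"));
    }
}
```

Running this prints "major version 53 (needs Java 9+)", which matches the 
"class file version 53.0 ... only recognizes class file versions up to 52.0" 
message in the stack trace.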

> oak-solr-osgi does not build for Java 8 if Jackson libraries upgraded to 
> 2.10.0
> ---
>
> 

[jira] [Created] (OAK-8794) oak-solr-osgi does not build for Java 8 if Jackson libraries upgraded to 2.10.0

2019-11-25 Thread Matt Ryan (Jira)
Matt Ryan created OAK-8794:
--

 Summary: oak-solr-osgi does not build for Java 8 if Jackson 
libraries upgraded to 2.10.0
 Key: OAK-8794
 URL: https://issues.apache.org/jira/browse/OAK-8794
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: solr
Affects Versions: 1.20.0
Reporter: Matt Ryan


If the Jackson version in {{oak-parent/pom.xml}} is updated from 2.9.10 to 
2.10.0, we get a build failure in {{oak-solr-osgi}} if we try to build with 
Java 8.
This is blocking OAK-8105 which in turn is blocking OAK-8607 and OAK-8104.  
OAK-8105 is about updating {{AzureDataStore}} to the Azure version 12 SDK which 
requires Jackson 2.10.0.
Would it be possible to update {{oak-parent/pom.xml}} to Jackson version 2.10.0 
and then specify 2.9.10 in {{oak-solr-osgi}}?
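The split asked about above can be sketched as a Maven property override; this is a 
hypothetical fragment (the property name {{jackson.version}} and its use are 
illustrative, not taken from the actual Oak poms):

```xml
<!-- Sketch only, not the actual Oak pom. In oak-parent/pom.xml, raise the
     shared Jackson version: -->
<properties>
  <jackson.version>2.10.0</jackson.version>
</properties>

<!-- ...and in oak-solr-osgi/pom.xml, override the inherited property back
     to the version that still builds on Java 8: -->
<properties>
  <jackson.version>2.9.10</jackson.version>
</properties>
```

Maven resolves properties per module, with child-pom values taking precedence over 
inherited ones, so a module-level override like this is a common way to pin one 
module to an older dependency version.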




