gerlowskija commented on code in PR #1545:
URL: https://github.com/apache/solr/pull/1545#discussion_r1160157029
########## solr/modules/s3-repository/src/test/com/adobe/testing/s3mock/util/AwsChunkedDecodingInputStream.java: ##########
@@ -0,0 +1,144 @@
+/*
+ * Copyright 2017-2022 Adobe.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.adobe.testing.s3mock.util;
+
+import java.io.BufferedInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+
+/**
+ * Skips V4 style signing metadata from input streams.
+ * <p>The original stream looks like this (newlines are CRLF):</p>
+ *
+ * <pre>
+ * 5;chunk-signature=7ece820edcf094ce1ef6d643c8db60b67913e28831d9b0430efd2b56a9deec5e
+ * 12345
+ * 0;chunk-signature=ee2c094d7162170fcac17d2c76073cd834b0488bfe52e89e48599b8115c7ffa2
+ * </pre>
+ *
+ * <p>The format of each chunk of data is:</p>
+ *
+ * <pre>
+ * [hex-encoded-number-of-bytes-in-chunk];chunk-signature=[sha256-signature][crlf]
+ * [payload-bytes-of-this-chunk][crlf]
+ * </pre>
+ *
+ * @see
+ *     <a href="http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/AwsChunkedEncodingInputStream.html">
+ *     AwsChunkedEncodingInputStream</a>
+ */
+public class AwsChunkedDecodingInputStream extends InputStream {
+
+  /**
+   * That's the max chunk buffer size used in the AWS implementation.
+   */
+  private static final int MAX_CHUNK_SIZE = 256 * 1024;
+
+  private static final byte[] CRLF = "\r\n".getBytes(StandardCharsets.UTF_8);
+
+  private static final byte[] DELIMITER = ";".getBytes(StandardCharsets.UTF_8);
+
+  private final InputStream source;
+
+  private int remainingInChunk = 0;
+
+  private final ByteBuffer byteBuffer = ByteBuffer.allocate(MAX_CHUNK_SIZE);
+
+  /**
+   * Constructs a new {@link AwsChunkedDecodingInputStream}.
+   *
+   * @param source The {@link InputStream} to wrap.
+   */
+  public AwsChunkedDecodingInputStream(final InputStream source) {
+    // Remove this class after TODO open issue with s3mock
+    // Buffer the source InputStream since this class only implements read() so
+    // pass off the actual buffering to the BufferedInputStream to read bigger
+    // chunks at once. This avoids a lot of single byte reads.
+    this.source = new BufferedInputStream(source);

Review Comment:
   I could go either way. I get that copying the file just for tests is a pain, and we'd have to update our NOTICE.txt for it and all that. But OTOH, I just had to increase a timeout on the relatively new S3InstallShardTest that I'm 99% sure is failing due to general S3Mock slowness.

   IMO it wouldn't hurt to merge this and then nuke the copied file when the next S3Mock release comes out. With the way the solrbot stuff works now we'd even get alerted about the S3Mock release by way of a new PR.
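   As an aside, a minimal sketch of how the aws-chunked framing described in the javadoc above can be decoded: read the "<hex-size>;chunk-signature=<sig>" header line, copy that many payload bytes, skip the trailing CRLF, and stop at the zero-length chunk. The ChunkedDecodeSketch class and its helpers are hypothetical illustrations, not part of this PR or of S3Mock.

   import java.io.ByteArrayOutputStream;
   import java.io.IOException;
   import java.io.InputStream;
   import java.nio.charset.StandardCharsets;

   class ChunkedDecodeSketch {

     // Decodes an aws-chunked stream and returns only the payload bytes,
     // with chunk headers, signatures, and CRLF framing stripped.
     static byte[] decode(InputStream in) throws IOException {
       ByteArrayOutputStream payload = new ByteArrayOutputStream();
       while (true) {
         String header = readLine(in);            // e.g. "5;chunk-signature=7ece..."
         int semi = header.indexOf(';');
         int size = Integer.parseInt(semi >= 0 ? header.substring(0, semi) : header, 16);
         if (size == 0) {
           break;                                 // final zero-length chunk: payload complete
         }
         for (int i = 0; i < size; i++) {
           int b = in.read();
           if (b < 0) {
             throw new IOException("Truncated chunk payload");
           }
           payload.write(b);
         }
         readLine(in);                            // consume the CRLF that follows the payload
       }
       return payload.toByteArray();
     }

     // Reads bytes up to and including a line feed, returning the line without CR/LF.
     private static String readLine(InputStream in) throws IOException {
       ByteArrayOutputStream line = new ByteArrayOutputStream();
       int b;
       while ((b = in.read()) >= 0 && b != '\n') {
         if (b != '\r') {
           line.write(b);
         }
       }
       return line.toString(StandardCharsets.UTF_8.name());
     }
   }

   With the example stream from the javadoc, decode(...) would return the five bytes "12345"; the real class under review does the same work incrementally inside read() rather than buffering the whole payload.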
