Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 20/05/15 15:17, Alan Bateman wrote: On 14/05/2015 09:26, Chris Hegarty wrote: I think we’ve agreed that we are not going to attempt to re-introduce the problematic interruptible I/O mechanism. These new methods are targeted at specific use-cases and common patterns found in code. I’d like to do a final review of the spec before finalising it. http://cr.openjdk.java.net/~chegar/readBytes/webrev.00 I read through the javadoc and it looks good. Thanks Alan. A passing comment is that the areas of the byte array that aren't touched by readNBytes seems a bit much and a distraction to have it covered in two paragraphs. Roger made the very same comment. I justified the duplication as the paragraphs are covering potentially two different cases. One where some data is read, and the other when no data is read. Given this has come up twice now, I'll just remove the second paragraph. In @param b then it says "buffer" when I assume it should be "byte array" to be consistent with the method description. Fixed. Thanks, -Chris. -Alan
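[Editorial note: the semantics under review here — block until {@code len} bytes are read, return a short count at end of stream, and return zero on subsequent calls — can be sketched as a standalone loop over the existing read(byte[], int, int). Class and method names below are illustrative only, not the final JDK API.]

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Objects;

// Illustrative sketch of the readNBytes semantics being specified;
// the actual method under review lives in the webrev, not here.
class ReadNBytesSketch {
    static int readNBytes(InputStream in, byte[] b, int off, int len) throws IOException {
        Objects.requireNonNull(b);
        if (off < 0 || len < 0 || len > b.length - off)
            throw new IndexOutOfBoundsException();
        int n = 0;
        while (n < len) {
            int count = in.read(b, off + n, len - n);
            if (count < 0)
                break;          // end of stream: return the short count
            n += count;
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3});
        byte[] buf = new byte[8];
        System.out.println(readNBytes(in, buf, 2, 6)); // 3: short read at end of stream
        System.out.println(readNBytes(in, buf, 0, 4)); // 0: already at end of stream
    }
}
```

The second call demonstrates the "further invocations of this method will return zero" clause discussed in the thread.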
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 14/05/2015 09:26, Chris Hegarty wrote: I think we’ve agreed that we are not going to attempt to re-introduce the problematic interruptible I/O mechanism. These new methods are targeted at specific use-cases and common patterns found in code. I’d like to do a final review of the spec before finalising it. http://cr.openjdk.java.net/~chegar/readBytes/webrev.00 I read through the javadoc and it looks good. A passing comment is that the areas of the byte array that aren't touched by readNBytes seems a bit much and a distraction to have it covered in two paragraphs. In @param b then it says "buffer" when I assume it should be "byte array" to be consistent with the method description. -Alan
Re: RFR [9] Add blocking bulk read to java.io.InputStream
I think we’ve agreed that we are not going to attempt to re-introduce the problematic interruptible I/O mechanism. These new methods are targeted at specific use-cases and common patterns found in code. I’d like to do a final review of the spec before finalising it. http://cr.openjdk.java.net/~chegar/readBytes/webrev.00 -Chris. On 7 May 2015, at 15:10, Chris Hegarty wrote: > Thanks for the comments. All have been considered and incorporated ( where > applicable ). > > I sketched out a readAllBytes, added some basic tests, and moved this into a > webrev. I have not created a specdiff, as the changes simply add two new > methods, that are easily readable. > > I think this version, less review comments, covers the most common use-cases. > > http://cr.openjdk.java.net/~chegar/readBytes/webrev.00/ > > -Chris. > > On 5 May 2015, at 10:54, Alan Bateman wrote: > >> On 02/05/2015 09:27, Chris Hegarty wrote: >>> : >>> Thanks, this was an editing issue. Removed. >> I think the javadoc looks quite good now, except maybe the first statement >> "Reads some bytes ...". It might be clearer to start with "Reads a given >> number of bytes ...". The subsequent text makes the short read case and the >> return value clear. >> >>> >>> As Alan has commented, another readAllBytes() returning a byte[] may be >>> useful too ( but a different use case ). Let’s park this momentarily, while >>> I sketch up the readAllBytes variant, so we can ensure that the typical use >>> cases have been addressed. Doing so may feed back into the spec of this >>> method. I’ll push this latest draft into the sandbox so it is not lost. >> Yes, a separate use-case but one that I would expect to be common. >> >> -Alan. >> >
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 07/05/2015 15:36, Roger Riggs wrote: Hi Alan, I recognize that this utility function can't compensate for failings of the underlying stream. But I thought async close was the one common action that did work on streams to avoid indefinite blocking. No, not generally for InputStreams so it's very possible that the close might block until the read completes. -Alan
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Hi Alan, I recognize that this utility function can't compensate for failings of the underlying stream. But I thought async close was the one common action that did work on streams to avoid indefinite blocking. It would be more appropriate to say that the behavior depends on the underlying stream and not propagate the under-specification. Roger On 5/7/2015 10:29 AM, Alan Bateman wrote: On 07/05/2015 15:20, Roger Riggs wrote: Hi Chris, Is it really impossible to specify at least that closing the stream will terminate the read? The wording that Chris has is wording that we use elsewhere too. In general then the InputStream could be to anything and there is no guarantee that all InputStream implementations could preempt the read. -Alan.
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 07/05/2015 15:20, Roger Riggs wrote: Hi Chris, Is it really impossible to specify at least that closing the stream will terminate the read? The wording that Chris has is wording that we use elsewhere too. In general then the InputStream could be to anything and there is no guarantee that all InputStream implementations could preempt the read. -Alan.
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Hi Chris, Is it really impossible to specify at least that closing the stream will terminate the read? A thread that is blocking on some I/O needs to have some way to interrupt it. Terminating the VM because a read is stuck or to leave the thread around indefinitely is too severe. + * The behavior for the case where the input stream is asynchronously + * closed, or the thread interrupted during the read, is highly input + * stream specific, and therefore not specified. + * Roger On 5/7/2015 10:10 AM, Chris Hegarty wrote: Thanks for the comments. All have been considered and incorporated ( where applicable ). I sketched out a readAllBytes, added some basic tests, and moved this into a webrev. I have not created a specdiff, as the changes simply add two new methods, that are easily readable. I think this version, less review comments, covers the most common use-cases. http://cr.openjdk.java.net/~chegar/readBytes/webrev.00/ -Chris. On 5 May 2015, at 10:54, Alan Bateman wrote: On 02/05/2015 09:27, Chris Hegarty wrote: : Thanks, this was an editing issue. Removed. I think the javadoc looks quite good now, except maybe the first statement "Reads some bytes ...". It might be clearer to start with "Reads a given number of bytes ...". The subsequent text makes the short read case and the return value clear. As Alan has commented, another readAllBytes() returning a byte[] may be useful too ( but a different use case ). Let’s park this momentarily, while I sketch up the readAllBytes variant, so we can ensure that the typical use cases have been addressed. Doing so may feed back into the spec of this method. I’ll push this latest draft into the sandbox so it is not lost. Yes, a separate use-case but one that I would expect to be common. -Alan.
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Thanks for the comments. All have been considered and incorporated ( where applicable ). I sketched out a readAllBytes, added some basic tests, and moved this into a webrev. I have not created a specdiff, as the changes simply add two new methods, that are easily readable. I think this version, less review comments, covers the most common use-cases. http://cr.openjdk.java.net/~chegar/readBytes/webrev.00/ -Chris. On 5 May 2015, at 10:54, Alan Bateman wrote: > On 02/05/2015 09:27, Chris Hegarty wrote: >> : >> Thanks, this was an editing issue. Removed. > I think the javadoc looks quite good now, except maybe the first statement > "Reads some bytes ...". It might be clearer to start with "Reads a given > number of bytes ...". The subsequent text makes the short read case and the > return value clear. > >> >> As Alan has commented, another readAllBytes() returning a byte[] may be >> useful too ( but a different use case ). Let’s park this momentarily, while >> I sketch up the readAllBytes variant, so we can ensure that the typical use >> cases have been addressed. Doing so may feed back into the spec of this >> method. I’ll push this latest draft into the sandbox so it is not lost. > Yes, a separate use-case but one that I would expect to be common. > > -Alan. >
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 02/05/2015 09:27, Chris Hegarty wrote: : Thanks, this was an editing issue. Removed. I think the javadoc looks quite good now, except maybe the first statement "Reads some bytes ...". It might be clearer to start with "Reads a given number of bytes ...". The subsequent text makes the short read case and the return value clear. As Alan has commented, another readAllBytes() returning a byte[] may be useful too ( but a different use case ). Let’s park this momentarily, while I sketch up the readAllBytes variant, so we can ensure that the typical use cases have been addressed. Doing so may feed back into the spec of this method. I’ll push this latest draft into the sandbox so it is not lost. Yes, a separate use-case but one that I would expect to be common. -Alan.
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Thanks for looking at this Roger. On 1 May 2015, at 15:20, Roger Riggs wrote: > Hi Chris, > > There is some duplication in the descriptions of the buffer contents; see > below. > > On 5/1/2015 5:54 AM, Chris Hegarty wrote: >> This latest version addresses all comments so far: >> >> /** >> * Reads some bytes from the input stream into the given byte array. This >> * method blocks until {@code len} bytes of input data have been read, end >> * of stream is detected, or an exception is thrown. The number of bytes >> * actually read, possibly zero, is returned. This method does not close >> * the input stream. >> * >> * In the case where end of stream is reached before {@code len} bytes >> * have been read, then the actual number of bytes read will be returned. >> * When this stream reaches end of stream, further invocations of this >> * method will return zero. >> * >> * If {@code len} is zero, then no bytes are read and {@code 0} is >> * returned; otherwise, there is an attempt to read up to {@code len} bytes. >> * >> * The first byte read is stored into element {@code b[off]}, the next >> * one in to {@code b[off+1]}, and so on. The number of bytes read is, at >> * most, equal to {@code len}. Let k be the number of bytes actually >> * read; these bytes will be stored in elements {@code b[off]} through >> * {@code b[off+}k{@code -1]}, leaving elements {@code >> b[off+}k >> * {@code ]} through {@code b[off+len-1]} unaffected. > This section duplicates the previous sentence and the following sentence. I don’t see these as being duplicates. The above states the case where some bytes have been read, while the paragraph below is always the case, regardless of whether any bytes are read. I see it as being explicit. >> * >> * In the case where {@code off > 0}, elements {@code b[0]} through >> * {@code b[off-1]} are unaffected. In every case, elements >> * {@code b[off+len]} through {@code b[b.length-1]} are unaffected. 
>> * >> * In every case, elements {@code b[0]} through {@code b[off-1]} and >> * elements {@code b[off+len]} through {@code b[b.length-1]} are unaffected. > Duplicates previous paragraph. Thanks, this was an editing issue. Removed. As Alan has commented, another readAllBytes() returning a byte[] maybe useful too ( but a different use case ). Let’s park this momentarily, while I sketch up the readAllBytes variant, so we can ensure that the typical use cases have been addressed. Doing so may feedback into the spec of this method. I’ll push this latest draft into the sandbox so it is not lost. -Chris. > Each section of the buffer should be described only once. > > Regards, Roger > >> * >> * The behavior for the case where the input stream is asynchronously >> * closed, or the thread interrupted during the read, is highly input >> * stream specific, and therefore not specified. >> * >> * If an I/O error occurs reading from the input stream, then it may >> occur do >> * so after some, but not all, bytes of {@code b} have been updated with >> * data from the input stream. Consequently the input stream and {@code b} >> * may be in an inconsistent state. It is strongly recommended that the >> * stream be promptly closed if an I/O error occurs. 
>> * >> * @param b the buffer into which the data is read >> * @param off the start offset in {@code b} at which the data is written >> * @param len the maximum number of bytes to read >> * @return the actual number of bytes read into the buffer >> * @throws IOException if an I/O error occurs >> * @throws NullPointerException if {@code b} is {@code null} >> * @throws IndexOutOfBoundsException If {@code off} is negative, {@code len} >> * is negative, or {@code len} is greater than {@code b.length - >> off} >> * >> * @since 1.9 >> */ >> public int readNBytes(byte[] b, int off, int len) throws IOException { >> Objects.requireNonNull(b); >> if (off < 0 || len < 0 || len > b.length - off) >> throw new IndexOutOfBoundsException(); >> int n = 0; >> while (n < len) { >> int count = read(b, off + n, len - n); >> if (count < 0) >> break; >> n += count; >> } >> return n; >> } >> >> -Chris. >> >> On 24/04/15 09:44, Chris Hegarty wrote: >>> On 23 Apr 2015, at 22:24, Roger Riggs wrote: >>> Hi Pavel, On 4/23/2015 5:12 PM, Pavel Rappo wrote: > Hey Roger, > > 1. Good catch! This thing also applies to > java.io.InputStream.read(byte[], int, int): >>> >>> Yes, good catch indeed. >>> > * In every case, elements b[0] through > * b[off] and elements b[off+len] through > * b[b.length-1] are unaffected. > > I suppose the javadoc for the method proposed by Chris has started its > life as a > copy of the javadoc read(byte[], int, int) which was assumed to be > perfectly
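[Editorial note: the precondition in the draft above, `Objects.requireNonNull(b)` plus `off < 0 || len < 0 || len > b.length - off`, is written so the length comparison cannot overflow, unlike the naive `off + len > b.length`. A minimal standalone illustration, with a hypothetical helper name:]

```java
import java.util.Objects;

// Hypothetical helper mirroring the draft's precondition check.
class BoundsCheckSketch {
    static void checkFromIndexSize(byte[] b, int off, int len) {
        Objects.requireNonNull(b);
        // Comparing len > b.length - off (rather than off + len > b.length)
        // avoids int overflow when off + len exceeds Integer.MAX_VALUE.
        if (off < 0 || len < 0 || len > b.length - off)
            throw new IndexOutOfBoundsException();
    }

    public static void main(String[] args) {
        checkFromIndexSize(new byte[4], 1, 3);      // ok: len == b.length - off
        try {
            // off + len overflows int; the naive check would wrongly accept it
            checkFromIndexSize(new byte[4], 1, Integer.MAX_VALUE);
            System.out.println("no exception");
        } catch (IndexOutOfBoundsException e) {
            System.out.println("rejected");
        }
    }
}
```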
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Hi Chris, There is some duplication in the descriptions of the buffer contents; see below. On 5/1/2015 5:54 AM, Chris Hegarty wrote: This latest version addresses all comments so far: /** * Reads some bytes from the input stream into the given byte array. This * method blocks until {@code len} bytes of input data have been read, end * of stream is detected, or an exception is thrown. The number of bytes * actually read, possibly zero, is returned. This method does not close * the input stream. * * In the case where end of stream is reached before {@code len} bytes * have been read, then the actual number of bytes read will be returned. * When this stream reaches end of stream, further invocations of this * method will return zero. * * If {@code len} is zero, then no bytes are read and {@code 0} is * returned; otherwise, there is an attempt to read up to {@code len} bytes. * * The first byte read is stored into element {@code b[off]}, the next * one in to {@code b[off+1]}, and so on. The number of bytes read is, at * most, equal to {@code len}. Let k be the number of bytes actually * read; these bytes will be stored in elements {@code b[off]} through * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k * {@code ]} through {@code b[off+len-1]} unaffected. This section duplicates the previous sentence and the following sentence. * * In the case where {@code off > 0}, elements {@code b[0]} through * {@code b[off-1]} are unaffected. In every case, elements * {@code b[off+len]} through {@code b[b.length-1]} are unaffected. * * In every case, elements {@code b[0]} through {@code b[off-1]} and * elements {@code b[off+len]} through {@code b[b.length-1]} are unaffected. Duplicates previous paragraph. Each section of the buffer should be described only once. 
Regards, Roger * * The behavior for the case where the input stream is asynchronously * closed, or the thread interrupted during the read, is highly input * stream specific, and therefore not specified. * * If an I/O error occurs reading from the input stream, then it may occur do * so after some, but not all, bytes of {@code b} have been updated with * data from the input stream. Consequently the input stream and {@code b} * may be in an inconsistent state. It is strongly recommended that the * stream be promptly closed if an I/O error occurs. * * @param b the buffer into which the data is read * @param off the start offset in {@code b} at which the data is written * @param len the maximum number of bytes to read * @return the actual number of bytes read into the buffer * @throws IOException if an I/O error occurs * @throws NullPointerException if {@code b} is {@code null} * @throws IndexOutOfBoundsException If {@code off} is negative, {@code len} * is negative, or {@code len} is greater than {@code b.length - off} * * @since 1.9 */ public int readNBytes(byte[] b, int off, int len) throws IOException { Objects.requireNonNull(b); if (off < 0 || len < 0 || len > b.length - off) throw new IndexOutOfBoundsException(); int n = 0; while (n < len) { int count = read(b, off + n, len - n); if (count < 0) break; n += count; } return n; } -Chris. On 24/04/15 09:44, Chris Hegarty wrote: On 23 Apr 2015, at 22:24, Roger Riggs wrote: Hi Pavel, On 4/23/2015 5:12 PM, Pavel Rappo wrote: Hey Roger, 1. Good catch! This thing also applies to java.io.InputStream.read(byte[], int, int): Yes, good catch indeed. * In every case, elements b[0] through * b[off] and elements b[off+len] through * b[b.length-1] are unaffected. I suppose the javadoc for the method proposed by Chris has started its life as a copy of the javadoc read(byte[], int, int) which was assumed to be perfectly polished. Unfortunately it was a false assumption. it happens... 
many many people have read those descriptions (or didn't because it was too obvious or thought to be redundant). I propose this small amendment. * In the case where {@code off > 0}, elements {@code b[0]} through * {@code b[off-1]} are unaffected. In every case, elements * {@code b[off+len]} through {@code b[b.length-1]} are unaffected. 2. About awkward sentences. This paragraph also has to be rephrased for the same reason: * The first byte read is stored into element {@code b[off]}, the next * one in to {@code b[off+1]}, and so on. The number of bytes read is, at * most, equal to {@code len}. Let k be the number of bytes actually * read; these bytes will be stored in elements {@code b[off]} through * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k * {@code ]} through {@code b[off+len-1]} unaffected. If k == 0 then spec claims to store values in b[off]... b[off - 1]. Reading the whole method description leads me to believe that 'k' cannot equal 0 at this point.
Re: RFR [9] Add blocking bulk read to java.io.InputStream
This latest version addresses all comments so far: /** * Reads some bytes from the input stream into the given byte array. This * method blocks until {@code len} bytes of input data have been read, end * of stream is detected, or an exception is thrown. The number of bytes * actually read, possibly zero, is returned. This method does not close * the input stream. * * In the case where end of stream is reached before {@code len} bytes * have been read, then the actual number of bytes read will be returned. * When this stream reaches end of stream, further invocations of this * method will return zero. * * If {@code len} is zero, then no bytes are read and {@code 0} is * returned; otherwise, there is an attempt to read up to {@code len} bytes. * * The first byte read is stored into element {@code b[off]}, the next * one in to {@code b[off+1]}, and so on. The number of bytes read is, at * most, equal to {@code len}. Let k be the number of bytes actually * read; these bytes will be stored in elements {@code b[off]} through * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k * {@code ]} through {@code b[off+len-1]} unaffected. * * In the case where {@code off > 0}, elements {@code b[0]} through * {@code b[off-1]} are unaffected. In every case, elements * {@code b[off+len]} through {@code b[b.length-1]} are unaffected. * * In every case, elements {@code b[0]} through {@code b[off-1]} and * elements {@code b[off+len]} through {@code b[b.length-1]} are unaffected. * * The behavior for the case where the input stream is asynchronously * closed, or the thread interrupted during the read, is highly input * stream specific, and therefore not specified. * * If an I/O error occurs reading from the input stream, then it may do * so after some, but not all, bytes of {@code b} have been updated with * data from the input stream. Consequently the input stream and {@code b} * may be in an inconsistent state. 
It is strongly recommended that the * stream be promptly closed if an I/O error occurs. * * @param b the buffer into which the data is read * @param off the start offset in {@code b} at which the data is written * @param len the maximum number of bytes to read * @return the actual number of bytes read into the buffer * @throws IOException if an I/O error occurs * @throws NullPointerException if {@code b} is {@code null} * @throws IndexOutOfBoundsException If {@code off} is negative, {@code len} * is negative, or {@code len} is greater than {@code b.length - off} * * @since 1.9 */ public int readNBytes(byte[] b, int off, int len) throws IOException { Objects.requireNonNull(b); if (off < 0 || len < 0 || len > b.length - off) throw new IndexOutOfBoundsException(); int n = 0; while (n < len) { int count = read(b, off + n, len - n); if (count < 0) break; n += count; } return n; } -Chris. On 24/04/15 09:44, Chris Hegarty wrote: On 23 Apr 2015, at 22:24, Roger Riggs wrote: Hi Pavel, On 4/23/2015 5:12 PM, Pavel Rappo wrote: Hey Roger, 1. Good catch! This thing also applies to java.io.InputStream.read(byte[], int, int): Yes, good catch indeed. * In every case, elements b[0] through * b[off] and elements b[off+len] through * b[b.length-1] are unaffected. I suppose the javadoc for the method proposed by Chris has started its life as a copy of the javadoc read(byte[], int, int) which was assumed to be perfectly polished. Unfortunately it was a false assumption. it happens... many many people have read those descriptions (or didn't because it was too obvious or thought to be redundant). I propose this small amendment. * In the case where {@code off > 0}, elements {@code b[0]} through * {@code b[off-1]} are unaffected. In every case, elements * {@code b[off+len]} through {@code b[b.length-1]} are unaffected. 2. About awkward sentences. 
This paragraph also has to be rephrased for the same reason: * The first byte read is stored into element {@code b[off]}, the next * one in to {@code b[off+1]}, and so on. The number of bytes read is, at * most, equal to {@code len}. Let k be the number of bytes actually * read; these bytes will be stored in elements {@code b[off]} through * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k * {@code ]} through {@code b[off+len-1]} unaffected. If k == 0 then spec claims to store values in b[off]... b[off - 1]. Reading the whole method description leads me to believe that 'k' cannot equal 0 at this point. The previous paragraph handles the case where len is 0. The paragraph before that handles the EOF case. This paragraph implies that k is greater than 0, “The first byte read”, and “the number of actual bytes read”, neither of which can be 0 at this point. I included below [*] the latest version of this method, including all comments so far.
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 23 Apr 2015, at 22:24, Roger Riggs wrote: > Hi Pavel, > > On 4/23/2015 5:12 PM, Pavel Rappo wrote: >> Hey Roger, >> >> 1. Good catch! This thing also applies to java.io.InputStream.read(byte[], >> int, int): Yes, good catch indeed. >> * In every case, elements b[0] through >> * b[off] and elements b[off+len] through >> * b[b.length-1] are unaffected. >> >> I suppose the javadoc for the method proposed by Chris has started its life >> as a >> copy of the javadoc read(byte[], int, int) which was assumed to be perfectly >> polished. Unfortunately it was a false assumption. > it happens... many many people have read those descriptions (or didn't > because > it was too obvious or thought to be redundant). I propose this small amendment. * In the case where {@code off > 0}, elements {@code b[0]} through * {@code b[off-1]} are unaffected. In every case, elements * {@code b[off+len]} through {@code b[b.length-1]} are unaffected. >> >> 2. About awkward sentences. This paragraph also has to be rephrased for the >> same reason: >> >> * The first byte read is stored into element {@code b[off]}, the >> next >> * one in to {@code b[off+1]}, and so on. The number of bytes read is, at >> * most, equal to {@code len}. Let k be the number of bytes >> actually >> * read; these bytes will be stored in elements {@code b[off]} through >> * {@code b[off+}k{@code -1]}, leaving elements {@code >> b[off+}k >> * {@code ]} through {@code b[off+len-1]} unaffected. >> >> If k == 0 then spec claims to store values in b[off]... b[off - 1]. Reading the whole method description leads me to believe that 'k' cannot equal 0 at this point. The previous paragraph handles the case where len is 0. The paragraph before that handles the EOF case. This paragraph implies that k is greater than 0, “The first byte read”, and “the number of actual bytes read”, neither of which can be 0 at this point. I included below [*] the latest version of this method, including all comments so far. 
> If one concludes that's an empty interval then it's ok; it just reads oddly > and can > make the reader think it's wrong. > In some cases it is easier if the upper bound is defined to be exclusive. > Then if lower == upper, it's empty. > > If better language were constructed for the new method then perhaps it could > be worked back into methods with similar behavior later. If the wording > changes > in any significant way, the conformance team will have to go back and > re-evaluate > it in detail to see if it really has changed. So I'd leave it alone. > > Roger -Chris. [*] /** * Reads some bytes from the input stream into the given byte array. This * method blocks until {@code len} bytes of input data have been read, end * of stream is detected, or an exception is thrown. The number of bytes * actually read, possibly zero, is returned. This method does not close * the input stream. * * In the case where end of stream is reached before {@code len} bytes * have been read, then the actual number of bytes read will be returned. * When this stream reaches end of stream, further invocations of this * method will return zero. * * If {@code len} is zero, then no bytes are read and {@code 0} is * returned; otherwise, there is an attempt to read up to {@code len} bytes. * * The first byte read is stored into element {@code b[off]}, the next * one in to {@code b[off+1]}, and so on. The number of bytes read is, at * most, equal to {@code len}. Let k be the number of bytes actually * read; these bytes will be stored in elements {@code b[off]} through * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k * {@code ]} through {@code b[off+len-1]} unaffected. * * In the case where {@code off > 0}, elements {@code b[0]} through * {@code b[off-1]} are unaffected. In every case, elements * {@code b[off+len]} through {@code b[b.length-1]} are unaffected. 
* * In every case, elements {@code b[0]} through {@code b[off-1]} and * elements {@code b[off+len]} through {@code b[b.length-1]} are unaffected. * * The behavior for the case where the input stream is asynchronously * closed, or the thread interrupted during the read, is highly input * stream specific, and therefore not specified. * * If an I/O error occurs reading from the input stream, then it may do * so after some, but not all, bytes of {@code b} have been updated with * data from the input stream. Consequently the input stream and {@code b} * may be in an inconsistent state. It is strongly recommended that the * stream be promptly closed if an I/O error occurs. * * @param b the buffer into which the data is read * @param off the start offset in {@code b} at which the data is written * @param len the maximum number of bytes to read * @return the actual number of bytes read into the buffer * @throws IOException if an I/O error occurs * @throws NullPointerException if {@co
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 23/04/2015 21:22, Remi Forax wrote: On 04/23/2015 04:41 PM, Alan Bateman wrote: On 23/04/2015 13:22, Remi Forax wrote: I think the name readBytes is not very informative and the name is too close to read + an array of bytes, we can not use readFully (from DataInput/DataInputStream) because instead of returning the number of bytes read, it throws an EOFException if the end of the stream is reached. so what about readAllBytes ? (There is also a readAllBytes in java.nio.file.Files that has equivalent semantics). For pure convenience then a readAllBytes() that returns a byte[] would be useful. It avoids all the issues around short reads too, and of course easy to specify that the input stream would be left in an inconsistent state if there are any exceptions thrown. you mean readAllBytes on Files or readAllBytes on InputStream ? I mean on InputStream, for the cases where you want to read to EOF. There are also cases where you want readNBytes and there are variations of that where you specify the byte[] to copy into or just have it returned. So I suspect we'll end up with more than one method but I hope not more than two. -Alan
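[Editorial note: a hedged sketch of the readAllBytes() convenience discussed above — read to EOF and return a byte[]. The chunk size and accumulation strategy here are assumptions for illustration; the eventual JDK implementation is not shown in this thread.]

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative only; buffer size and growth strategy are assumptions.
class ReadAllBytesSketch {
    static byte[] readAllBytes(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk, 0, chunk.length)) != -1)
            out.write(chunk, 0, n);     // accumulate until end of stream
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] copy = readAllBytes(new ByteArrayInputStream("hello".getBytes()));
        System.out.println(copy.length); // 5
    }
}
```

Because the method reads to EOF, the short-read bookkeeping of readNBytes does not arise; the two methods serve the two distinct use-cases Alan describes.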
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Hi Pavel, On 4/23/2015 5:12 PM, Pavel Rappo wrote: Hey Roger, 1. Good catch! This thing also applies to java.io.InputStream.read(byte[], int, int): * In every case, elements b[0] through * b[off] and elements b[off+len] through * b[b.length-1] are unaffected. I suppose the javadoc for the method proposed by Chris has started its life as a copy of the javadoc read(byte[], int, int) which was assumed to be perfectly polished. Unfortunately it was a false assumption. it happens... many many people have read those descriptions (or didn't because it was too obvious or thought to be redundant). 2. About awkward sentences. This paragraph also has to be rephrased for the same reason: * The first byte read is stored into element {@code b[off]}, the next * one in to {@code b[off+1]}, and so on. The number of bytes read is, at * most, equal to {@code len}. Let k be the number of bytes actually * read; these bytes will be stored in elements {@code b[off]} through * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k * {@code ]} through {@code b[off+len-1]} unaffected. If k == 0 then spec claims to store values in b[off]... b[off - 1]. If one concludes that's an empty interval then it's ok; it just reads oddly and can make the reader think it's wrong. In some cases it is easier if the upper bound is defined to be exclusive. Then if lower == upper, it's empty. If better language were constructed for the new method then perhaps it could be worked back into methods with similar behavior later. If the wording changes in any significant way, the conformance team will have to go back and re-evaluate it in detail to see if it really has changed. So I'd leave it alone. Roger The former thing (1) is a real bug, the latter is... I don't know, do we need this level of strictness or should we assume it's obvious? -Pavel On 23 Apr 2015, at 21:25, Roger Riggs wrote: A minor inconsistency about the unmodified contents of b[off]. 
* The first byte read is stored into element {@code b[off]} vs. * In every case, elements {@code b[0]} through {@code b[off - 1]} and * elements {@code b[off+len]} through {@code b[b.length-1]} are unaffected. I think there's a missing -1; and perhaps an awkward sentence when off = 0;
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Hey Roger, 1. Good catch! This thing also applies to java.io.InputStream.read(byte[], int, int): * In every case, elements b[0] through * b[off] and elements b[off+len] through * b[b.length-1] are unaffected. I suppose the javadoc for the method proposed by Chris started its life as a copy of the javadoc of read(byte[], int, int), which was assumed to be perfectly polished. Unfortunately that was a false assumption. 2. About awkward sentences. This paragraph also has to be rephrased for the same reason: * The first byte read is stored into element {@code b[off]}, the next * one in to {@code b[off+1]}, and so on. The number of bytes read is, at * most, equal to {@code len}. Let k be the number of bytes actually * read; these bytes will be stored in elements {@code b[off]} through * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k * {@code ]} through {@code b[off+len-1]} unaffected. If k == 0 then the spec claims to store values in b[off]... b[off - 1]. The former thing (1) is a real bug; the latter is... I don't know, do we need this level of strictness or should we assume it's obvious? -Pavel > On 23 Apr 2015, at 21:25, Roger Riggs wrote: > > A minor inconsistency about the unmodified contents of b[off]. > > * The first byte read is stored into element {@code b[off]} > > vs. > > * In every case, elements {@code b[0]} through {@code b[off *- 1*]} and > * elements {@code b[off+len]} through {@code b[b.length-1]} are unaffected. > > I think there's a missing -1; and perhaps an awkward sentence when off = 0.
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Hi Chris, A minor inconsistency about the unmodified contents of b[off]. * The first byte read is stored into element {@code b[off]} vs. * In every case, elements {@code b[0]} through {@code b[off *- 1*]} and * elements {@code b[off+len]} through {@code b[b.length-1]} are unaffected. I think there's a missing -1; and perhaps an awkward sentence when off = 0. Roger On 4/23/2015 5:01 AM, Chris Hegarty wrote: A while back when we added the long overdue java.io.InputStream.transferTo method, there was support for adding a blocking bulk read operation. This has been sitting in a branch in the sandbox since then. I would like to revive it with the intention of bringing it into 9. The motivation for this addition is to provide library support for a common pattern found when reading from input streams. /** * Reads some bytes from the input stream into the given byte array. This * method blocks until {@code len} bytes of input data have been read, or * end of stream is detected. The number of bytes actually read, possibly * zero, is returned. This method does not close the input stream. * * In the case where end of stream is reached before {@code len} bytes * have been read, then the actual number of bytes read will be returned. * When this stream reaches end of stream, further invocations of this * method will return zero. * * If {@code len} is zero, then no bytes are read and {@code 0} is * returned; otherwise, there is an attempt to read up to {@code len} bytes. * * The first byte read is stored into element {@code b[off]}, the next * one into {@code b[off+1]}, and so on. The number of bytes read is, at * most, equal to {@code len}. Let k be the number of bytes actually * read; these bytes will be stored in elements {@code b[off]} through * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k * {@code ]} through {@code b[off+len-1]} unaffected. 
* * In every case, elements {@code b[0]} through {@code b[off]} and * elements {@code b[off+len]} through {@code b[b.length-1]} are unaffected. * * The behavior for the case where the input stream is asynchronously * closed, or the thread interrupted during the read, is highly input * stream specific, and therefore not specified. * * If an I/O error occurs reading from the input stream, then it may do * so after some bytes have been read. Consequently the input stream may be * in an inconsistent state. It is strongly recommended that the stream be * promptly closed if an I/O error occurs. * * @param b the buffer into which the data is read * @param off the start offset in {@code b} at which the data is written * @param len the maximum number of bytes to read * @return the actual number of bytes read into the buffer * @throws IOException if an I/O error occurs * @throws NullPointerException if {@code b} is {@code null} * @throws IndexOutOfBoundsException If {@code off} is negative, {@code len} * is negative, or {@code len} is greater than {@code b.length - off} * * @since 1.9 */ public int readBytes(byte[] b, int off, int len) throws IOException { Objects.requireNonNull(b); if (off < 0 || len < 0 || len > b.length - off) throw new IndexOutOfBoundsException(); int n = 0; while (n < len) { int count = read(b, off + n, len - n); if (count < 0) break; n += count; } return n; } -Chris.
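As a sanity check of the semantics above, the proposed loop can be lifted into a standalone helper and exercised against a stream shorter than the requested length. The name readNBytesSketch is made up here to avoid implying it is the final API; the body mirrors Chris's implementation:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Objects;

public class ReadLoopDemo {
    // Same loop as the proposed InputStream method, as a static helper.
    static int readNBytesSketch(InputStream in, byte[] b, int off, int len)
            throws IOException {
        Objects.requireNonNull(b);
        if (off < 0 || len < 0 || len > b.length - off)
            throw new IndexOutOfBoundsException();
        int n = 0;
        while (n < len) {
            int count = in.read(b, off + n, len - n);
            if (count < 0)   // end of stream before len bytes: short read
                break;
            n += count;
        }
        return n;
    }

    public static void main(String[] args) {
        try {
            byte[] src = {1, 2, 3, 4, 5};
            byte[] dst = new byte[10];
            InputStream in = new ByteArrayInputStream(src);
            // Ask for 8 bytes; only 5 are available, so the short count
            // comes back rather than an exception.
            int n = readNBytesSketch(in, dst, 2, 8);
            if (n != 5) throw new AssertionError("expected 5, got " + n);
            // Elements outside b[off]..b[off+n-1] are untouched.
            if (dst[0] != 0 || dst[1] != 0) throw new AssertionError();
            if (dst[2] != 1 || dst[6] != 5) throw new AssertionError();
            // End of stream already reached: further calls return zero.
            if (readNBytesSketch(in, dst, 0, 4) != 0) throw new AssertionError();
            System.out.println("short read returned " + n);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The demo also exercises the two spec points debated later in the thread: the untouched regions of the array, and the zero return after end of stream.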
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 04/23/2015 04:41 PM, Alan Bateman wrote: On 23/04/2015 13:22, Remi Forax wrote: I think the name readBytes is not very informative and the name is too close to read + an array of bytes. We cannot use readFully (from DataInput/DataInputStream) because instead of returning the number of bytes read, it throws an EOFException if the end of the stream is reached. So what about readAllBytes? (There is also a readAllBytes in java.nio.file.Files that has equivalent semantics.) For pure convenience then a readAllBytes() that returns a byte[] would be useful. It avoids all the issues around short reads too, and of course easy to specify that the input stream would be left in an inconsistent state if there are any exceptions thrown. you mean readAllBytes on Files or readAllBytes on InputStream? -Alan Rémi
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 04/23/2015 06:21 PM, Chris Hegarty wrote: On 23 Apr 2015, at 16:58, Bernd Eckenfels wrote: Hello, I would use the already established name readFully(byte[]) and readFully(byte[],int,int) to be consistent with DataInputStream. I purposefully stayed away from the name 'readFully' as the specs for DataInputStream.readFully and the proposed InputStream.readBytes differ. The latter does not throw EOFException. So, currently, on the table we have: readBytes readAllBytes readNBytes (thanks Alan) -Chris. bikeshedding:on readNBytes is OK for me too. cheers, Rémi
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 04/23/2015 06:21 PM, Chris Hegarty wrote: On 23 Apr 2015, at 16:58, Bernd Eckenfels wrote: Hello, I would use the already established name readFully(byte[]) and readFully(byte[],int,int) to be consistent with DataInputStream. I purposefully stayed away from the name 'readFully' as the specs for DataInputStream.readFully and the proposed InputStream.readBytes differ. The latter does not throw EOFException. So, currently, on the table we have: readBytes readAllBytes readNBytes (thanks Alan) What about readLoop? ...since it's a loop of read()s Peter -Chris. Gruss Bernd
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 23 Apr 2015, at 16:58, Bernd Eckenfels wrote: > Hello, > > I would use the already established name readFully(byte[]) and > readFully(byte[],int,int) to be consistent with DataInputStream. I purposefully stayed away from the name 'readFully' as the specs for DataInputStream.readFully and the proposed InputStream.readBytes differ. The latter does not throw EOFException. So, currently, on the table we have: readBytes readAllBytes readNBytes (thanks Alan) -Chris. > Gruss > Bernd
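The difference Chris describes is easy to demonstrate: on a stream shorter than the requested length, DataInputStream.readFully throws EOFException, whereas the proposed method would simply return the short count. The example below only shows the readFully side, which is existing JDK behaviour:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class ReadFullyContrast {
    public static void main(String[] args) {
        byte[] data = {10, 20, 30};
        DataInputStream din =
            new DataInputStream(new ByteArrayInputStream(data));
        byte[] buf = new byte[8];
        boolean eof = false;
        try {
            din.readFully(buf, 0, 8);   // only 3 bytes are available
        } catch (EOFException e) {
            eof = true;                 // readFully signals short input this way
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        if (!eof) throw new AssertionError("readFully should have thrown");
        System.out.println("readFully threw EOFException on short input");
    }
}
```

This is why reusing the name readFully for a method that returns a short count instead of throwing would be misleading.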
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 23 Apr 2015, at 17:10, Peter Levart wrote: > On 04/23/2015 04:50 PM, Chris Hegarty wrote: >> Peter, >> >> Ah, good point. Do we really need a new Exception type for this, is this >> information really that useful? How can you recover? >> >> What if this error condition was better covered in the API? >> >> * If an I/O error occurs reading from the input stream, then it may do >> * so after some, but not all, bytes of {@code b} have been updated with >> * data from the input stream. Consequently the input stream and {@code b} >> * may be in an inconsistent state. It is strongly recommended that the >> * stream be promptly closed if an I/O error occurs. >> >> -Chris. > > Right. Either that or special exception. Perhaps this utility method can be > meant for those uses where the caller doesn't care and if she does, she must > roll her own loop… Right. I think simple is best here. The pattern in the implementation occurs many times in real world code. This method is attempting to provide that to developers so that they don't have to roll their own (and risk getting it wrong). But if you need something more sophisticated, then you can write it yourself. -Chris. > Peter > >> >> On 23 Apr 2015, at 15:20, Peter Levart wrote: >> >>> Hi Chris, >>> >>> Currently InputStream guarantees that either some bytes are read *xor* EOF >>> (-1) is returned *xor* IOException is thrown. Even with default >>> implementation of read(byte[], int, int) which is implemented in terms of >>> int read(). This new method can throw IOException after some bytes have >>> successfully been read from stream and the caller does not get to know how >>> many. Would something like the following make any more sense? 
>>> >>> public int readBytes(byte[] b, int off, int len) throws IOException { >>> Objects.requireNonNull(b); >>> if (off < 0 || len < 0 || len > b.length - off) >>> throw new IndexOutOfBoundsException(); >>> int n = 0; >>> while (n < len) { >>> int count; >>> try { >>> count = read(b, off + n, len - n); >>> } catch (IOException e) { >>> if (n == 0) { >>> throw e; >>> } else { >>> throw new IncompleteReadBytesException(e, n); >>> } >>> } >>> if (count < 0) >>> break; >>> n += count; >>> } >>> return n; >>> } >>> >>> /** >>> * Thrown from {@link #readBytes(byte[], int, int)} when at least one >>> byte >>> * has successfully been read from stream into the byte buffer when >>> IOException >>> * was thrown. >>> */ >>> public static class IncompleteReadBytesException extends IOException { >>> private final int bytesRead; >>> public IncompleteReadBytesException(IOException cause, int >>> bytesRead) { >>> super(cause); >>> this.bytesRead = bytesRead; >>> } >>> >>> /** >>> * @return number of bytes read successfully from stream into byte >>> array >>> * before exception was thrown. >>> */ >>> public int getBytesRead() { >>> return bytesRead; >>> } >>> } >>> >>> >>> Regards, Peter >>> >>> >>> On 04/23/2015 11:01 AM, Chris Hegarty wrote: A while back when we added the long overdue java.io.InputStream.transferTo method, there was support for adding a blocking bulk read operation. This has been sitting in a branch in the sandbox since then. I would like to revive it with the intention of bringing it into 9. The motivation for this addition is provide library support for a common pattern found when reading from input streams. /** * Reads some bytes from the input stream into the given byte array. This * method blocks until {@code len} bytes of input data have been read, or * end of stream is detected. The number of bytes actually read, possibly * zero, is returned. This method does not close the input stream. 
* * In the case where end of stream is reached before {@code len} bytes * have been read, then the actual number of bytes read will be returned. * When this stream reaches end of stream, further invocations of this * method will return zero. * * If {@code len} is zero, then no bytes are read and {@code 0} is * returned; otherwise, there is an attempt to read up to {@code len} bytes. * * The first byte read is stored into element {@code b[off]}, the next * one in to {@code b[off+1]}, and so on. The number of bytes read is, at * most, equal to {@code len}. Let k be the number of bytes actually * read; these bytes will be stored in elements {@code b[off]} through * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k >>>
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 04/23/2015 04:50 PM, Chris Hegarty wrote: Peter, Ah, good point. Do we really need a new Exception type for this, is this information really that useful? How can you recover? What if this error condition was better cover in the API. * If an I/O error occurs reading from the input stream, then it may do * so after some, but not all, bytes of {@code b} have been updated with * data from the input stream. Consequently the input stream and {@code b} * may be in an inconsistent state. It is strongly recommended that the * stream be promptly closed if an I/O error occurs. -Chris. Right. Either that or special exception. Perhaps this utility method can be meant for those uses where the caller doesn't care and if she does, she must roll her own loop... Peter On 23 Apr 2015, at 15:20, Peter Levart wrote: Hi Chris, Currently InputStream guarantees that either some bytes are read *xor* EOF (-1) is returned *xor* IOException is thrown. Even with default implementation of read(byte[], int, int) which is implemented in terms of int read(). This new method can throw IOException after some bytes have successfully been read from stream and the caller does not get to know how many. Would something like the following make any more sense? public int readBytes(byte[] b, int off, int len) throws IOException { Objects.requireNonNull(b); if (off < 0 || len < 0 || len > b.length - off) throw new IndexOutOfBoundsException(); int n = 0; while (n < len) { int count; try { count = read(b, off + n, len - n); } catch (IOException e) { if (n == 0) { throw e; } else { throw new IncompleteReadBytesException(e, n); } } if (count < 0) break; n += count; } return n; } /** * Thrown from {@link #readBytes(byte[], int, int)} when at least one byte * has successfully been read from stream into the byte buffer when IOException * was thrown. 
*/ public static class IncompleteReadBytesException extends IOException { private final int bytesRead; public IncompleteReadBytesException(IOException cause, int bytesRead) { super(cause); this.bytesRead = bytesRead; } /** * @return number of bytes read successfully from stream into byte array * before exception was thrown. */ public int getBytesRead() { return bytesRead; } } Regards, Peter On 04/23/2015 11:01 AM, Chris Hegarty wrote: A while back when we added the long overdue java.io.InputStream.transferTo method, there was support for adding a blocking bulk read operation. This has been sitting in a branch in the sandbox since then. I would like to revive it with the intention of bringing it into 9. The motivation for this addition is provide library support for a common pattern found when reading from input streams. /** * Reads some bytes from the input stream into the given byte array. This * method blocks until {@code len} bytes of input data have been read, or * end of stream is detected. The number of bytes actually read, possibly * zero, is returned. This method does not close the input stream. * * In the case where end of stream is reached before {@code len} bytes * have been read, then the actual number of bytes read will be returned. * When this stream reaches end of stream, further invocations of this * method will return zero. * * If {@code len} is zero, then no bytes are read and {@code 0} is * returned; otherwise, there is an attempt to read up to {@code len} bytes. * * The first byte read is stored into element {@code b[off]}, the next * one in to {@code b[off+1]}, and so on. The number of bytes read is, at * most, equal to {@code len}. Let k be the number of bytes actually * read; these bytes will be stored in elements {@code b[off]} through * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k * {@code ]} through {@code b[off+len-1]} unaffected. 
* * In every case, elements {@code b[0]} through {@code b[off]} and * elements{@code b[off+len]} through {@code b[b.length-1]} are unaffected. * * The behavior for the case where the input stream is asynchronously * closed, or the thread interrupted during the read, is highly input * stream specific, and therefore not specified. * * If an I/O error occurs reading from the input stream, then it may do * so after some bytes have been read. Consequently the input stream may be * in an inconsistent state. It is strongly recommended that the stream be * promptly closed if an I/O error occurs. * * @param b the buffer into which the data is read * @param off the start offset
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Right you are, David. So this exception could be reused albeit the cause of "interruption" can be arbitrary IOException thrown in the midst of two read() calls. But the caller need not care as the outcome is the same. Regards, Peter On 04/23/2015 04:30 PM, David M. Lloyd wrote: I believe this is similar to how InterruptedIOException works, FWIW. On 04/23/2015 09:20 AM, Peter Levart wrote: Hi Chris, Currently InputStream guarantees that either some bytes are read *xor* EOF (-1) is returned *xor* IOException is thrown. Even with default implementation of read(byte[], int, int) which is implemented in terms of int read(). This new method can throw IOException after some bytes have successfully been read from stream and the caller does not get to know how many. Would something like the following make any more sense? public int readBytes(byte[] b, int off, int len) throws IOException { Objects.requireNonNull(b); if (off < 0 || len < 0 || len > b.length - off) throw new IndexOutOfBoundsException(); int n = 0; while (n < len) { int count; try { count = read(b, off + n, len - n); } catch (IOException e) { if (n == 0) { throw e; } else { throw new IncompleteReadBytesException(e, n); } } if (count < 0) break; n += count; } return n; } /** * Thrown from {@link #readBytes(byte[], int, int)} when at least one byte * has successfully been read from stream into the byte buffer when IOException * was thrown. */ public static class IncompleteReadBytesException extends IOException { private final int bytesRead; public IncompleteReadBytesException(IOException cause, int bytesRead) { super(cause); this.bytesRead = bytesRead; } /** * @return number of bytes read successfully from stream into byte array * before exception was thrown. */ public int getBytesRead() { return bytesRead; } } Regards, Peter On 04/23/2015 11:01 AM, Chris Hegarty wrote: A while back when we added the long overdue java.io.InputStream.transferTo method, there was support for adding a blocking bulk read operation. 
This has been sitting in a branch in the sandbox since then. I would like to revive it with the intention of bringing it into 9. The motivation for this addition is provide library support for a common pattern found when reading from input streams. /** * Reads some bytes from the input stream into the given byte array. This * method blocks until {@code len} bytes of input data have been read, or * end of stream is detected. The number of bytes actually read, possibly * zero, is returned. This method does not close the input stream. * * In the case where end of stream is reached before {@code len} bytes * have been read, then the actual number of bytes read will be returned. * When this stream reaches end of stream, further invocations of this * method will return zero. * * If {@code len} is zero, then no bytes are read and {@code 0} is * returned; otherwise, there is an attempt to read up to {@code len} bytes. * * The first byte read is stored into element {@code b[off]}, the next * one in to {@code b[off+1]}, and so on. The number of bytes read is, at * most, equal to {@code len}. Let k be the number of bytes actually * read; these bytes will be stored in elements {@code b[off]} through * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k * {@code ]} through {@code b[off+len-1]} unaffected. * * In every case, elements {@code b[0]} through {@code b[off]} and * elements{@code b[off+len]} through {@code b[b.length-1]} are unaffected. * * The behavior for the case where the input stream is asynchronously * closed, or the thread interrupted during the read, is highly input * stream specific, and therefore not specified. * * If an I/O error occurs reading from the input stream, then it may do * so after some bytes have been read. Consequently the input stream may be * in an inconsistent state. It is strongly recommended that the stream be * promptly closed if an I/O error occurs. 
* * @param b the buffer into which the data is read * @param off the start offset in {@code b} at which the data is written * @param len the maximum number of bytes to read * @return the actual number of bytes read into the buffer * @throws IOException if an I/O error occurs * @throws NullPointerException if {@code b} is {@code null} * @throws IndexOutOfBoundsException If {@code off} is negative, {@code len} *is negative, or {@code len} is greater than {@code b.length - off} * * @since 1.9 *
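For reference, InterruptedIOException (which David points to above) does carry a partial count through its public bytesTransferred field, which is the shape Peter's IncompleteReadBytesException sketch mirrors:

```java
import java.io.InterruptedIOException;

public class PartialTransferDemo {
    public static void main(String[] args) {
        InterruptedIOException e = new InterruptedIOException("timed out");
        // In real code the I/O layer sets this before throwing; a caller
        // can then recover the count of bytes moved before the failure.
        e.bytesTransferred = 42;
        if (e.bytesTransferred != 42) throw new AssertionError();
        System.out.println("partial count = " + e.bytesTransferred);
    }
}
```

The precedent is a public field on the exception rather than an accessor, so an IOException thrown mid-read can convey the partial count without a new exception hierarchy.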
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Hello, I would use the already established name readFully(byte[]) and readFully(byte[],int,int) to be consistent with DataInputStream. Gruss Bernd
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Peter, Ah, good point. Do we really need a new Exception type for this, is this information really that useful? How can you recover? What if this error condition was better covered in the API? * If an I/O error occurs reading from the input stream, then it may do * so after some, but not all, bytes of {@code b} have been updated with * data from the input stream. Consequently the input stream and {@code b} * may be in an inconsistent state. It is strongly recommended that the * stream be promptly closed if an I/O error occurs. -Chris. On 23 Apr 2015, at 15:20, Peter Levart wrote: > Hi Chris, > > Currently InputStream guarantees that either some bytes are read *xor* EOF > (-1) is returned *xor* IOException is thrown. Even with default > implementation of read(byte[], int, int) which is implemented in terms of int > read(). This new method can throw IOException after some bytes have > successfully been read from stream and the caller does not get to know how > many. Would something like the following make any more sense? > > public int readBytes(byte[] b, int off, int len) throws IOException { > Objects.requireNonNull(b); > if (off < 0 || len < 0 || len > b.length - off) > throw new IndexOutOfBoundsException(); > int n = 0; > while (n < len) { > int count; > try { > count = read(b, off + n, len - n); > } catch (IOException e) { > if (n == 0) { > throw e; > } else { > throw new IncompleteReadBytesException(e, n); > } > } > if (count < 0) > break; > n += count; > } > return n; > } > > /** > * Thrown from {@link #readBytes(byte[], int, int)} when at least one byte > * has successfully been read from stream into the byte buffer when > IOException > * was thrown. 
> */ > public static class IncompleteReadBytesException extends IOException { > private final int bytesRead; > > public IncompleteReadBytesException(IOException cause, int bytesRead) > { > super(cause); > this.bytesRead = bytesRead; > } > > /** > * @return number of bytes read successfully from stream into byte > array > * before exception was thrown. > */ > public int getBytesRead() { > return bytesRead; > } > } > > > Regards, Peter > > > On 04/23/2015 11:01 AM, Chris Hegarty wrote: >> A while back when we added the long overdue java.io.InputStream.transferTo >> method, there was support for adding a blocking bulk read operation. This >> has been sitting in a branch in the sandbox since then. I would like to >> revive it with the intention of bringing it into 9. The motivation for this >> addition is provide library support for a common pattern found when reading >> from input streams. >> >> /** >> * Reads some bytes from the input stream into the given byte array. This >> * method blocks until {@code len} bytes of input data have been read, or >> * end of stream is detected. The number of bytes actually read, possibly >> * zero, is returned. This method does not close the input stream. >> * >> * In the case where end of stream is reached before {@code len} bytes >> * have been read, then the actual number of bytes read will be returned. >> * When this stream reaches end of stream, further invocations of this >> * method will return zero. >> * >> * If {@code len} is zero, then no bytes are read and {@code 0} is >> * returned; otherwise, there is an attempt to read up to {@code len} bytes. >> * >> * The first byte read is stored into element {@code b[off]}, the next >> * one in to {@code b[off+1]}, and so on. The number of bytes read is, at >> * most, equal to {@code len}. 
Let k be the number of bytes actually >> * read; these bytes will be stored in elements {@code b[off]} through >> * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k >> * {@code ]} through {@code b[off+len-1]} unaffected. >> * >> * In every case, elements {@code b[0]} through {@code b[off]} and >> * elements{@code b[off+len]} through {@code b[b.length-1]} are unaffected. >> * >> * The behavior for the case where the input stream is asynchronously >> * closed, or the thread interrupted during the read, is highly input >> * stream specific, and therefore not specified. >> * >> * If an I/O error occurs reading from the input stream, then it may do >> * so after some bytes have been read. Consequently the input stream may be >> * in an inconsistent state. It is strongly recommended that the stream be >> * promptly closed if an I/O error occurs. >> * >> * @param b the buffer into which the data is read >> * @param off the start offset in {@code b} at which the data is written >> * @param len the
Re: RFR [9] Add blocking bulk read to java.io.InputStream
On 23/04/2015 13:22, Remi Forax wrote: I think the name readBytes is not very informative and the name is too close to read + an array of bytes. We cannot use readFully (from DataInput/DataInputStream) because instead of returning the number of bytes read, it throws an EOFException if the end of the stream is reached. So what about readAllBytes? (There is also a readAllBytes in java.nio.file.Files that has equivalent semantics.) For pure convenience then a readAllBytes() that returns a byte[] would be useful. It avoids all the issues around short reads too, and of course easy to specify that the input stream would be left in an inconsistent state if there are any exceptions thrown. -Alan
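A minimal sketch of the readAllBytes() convenience Alan describes might look like the following. The name readAllBytesSketch, the buffer size, and the lack of any size cap are assumptions for illustration, not the eventual JDK implementation:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadAllBytesDemo {
    // Drain the stream into a freshly allocated byte[]; a short read is
    // impossible to observe since we loop until end of stream.
    static byte[] readAllBytesSketch(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf, 0, buf.length)) >= 0)
            out.write(buf, 0, n);   // accumulate until read() returns -1
        return out.toByteArray();
    }

    public static void main(String[] args) {
        try {
            byte[] src = {7, 8, 9};
            byte[] all = readAllBytesSketch(new ByteArrayInputStream(src));
            if (all.length != 3 || all[0] != 7 || all[2] != 9)
                throw new AssertionError();
            System.out.println("read " + all.length + " bytes");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Because the result array is allocated internally, there is no caller-visible buffer to leave half-filled: any IOException simply propagates, which is why this shape sidesteps the short-read questions discussed above.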
Re: RFR [9] Add blocking bulk read to java.io.InputStream
I believe this is similar to how InterruptedIOException works, FWIW. On 04/23/2015 09:20 AM, Peter Levart wrote: Hi Chris, Currently InputStream guarantees that either some bytes are read *xor* EOF (-1) is returned *xor* IOException is thrown. Even with default implementation of read(byte[], int, int) which is implemented in terms of int read(). This new method can throw IOException after some bytes have successfully been read from stream and the caller does not get to know how many. Would something like the following make any more sense? public int readBytes(byte[] b, int off, int len) throws IOException { Objects.requireNonNull(b); if (off < 0 || len < 0 || len > b.length - off) throw new IndexOutOfBoundsException(); int n = 0; while (n < len) { int count; try { count = read(b, off + n, len - n); } catch (IOException e) { if (n == 0) { throw e; } else { throw new IncompleteReadBytesException(e, n); } } if (count < 0) break; n += count; } return n; } /** * Thrown from {@link #readBytes(byte[], int, int)} when at least one byte * has successfully been read from stream into the byte buffer when IOException * was thrown. */ public static class IncompleteReadBytesException extends IOException { private final int bytesRead; public IncompleteReadBytesException(IOException cause, int bytesRead) { super(cause); this.bytesRead = bytesRead; } /** * @return number of bytes read successfully from stream into byte array * before exception was thrown. */ public int getBytesRead() { return bytesRead; } } Regards, Peter On 04/23/2015 11:01 AM, Chris Hegarty wrote: A while back when we added the long overdue java.io.InputStream.transferTo method, there was support for adding a blocking bulk read operation. This has been sitting in a branch in the sandbox since then. I would like to revive it with the intention of bringing it into 9. The motivation for this addition is provide library support for a common pattern found when reading from input streams. 
/** * Reads some bytes from the input stream into the given byte array. This * method blocks until {@code len} bytes of input data have been read, or * end of stream is detected. The number of bytes actually read, possibly * zero, is returned. This method does not close the input stream. * * In the case where end of stream is reached before {@code len} bytes * have been read, then the actual number of bytes read will be returned. * When this stream reaches end of stream, further invocations of this * method will return zero. * * If {@code len} is zero, then no bytes are read and {@code 0} is * returned; otherwise, there is an attempt to read up to {@code len} bytes. * * The first byte read is stored into element {@code b[off]}, the next * one in to {@code b[off+1]}, and so on. The number of bytes read is, at * most, equal to {@code len}. Let k be the number of bytes actually * read; these bytes will be stored in elements {@code b[off]} through * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k * {@code ]} through {@code b[off+len-1]} unaffected. * * In every case, elements {@code b[0]} through {@code b[off]} and * elements{@code b[off+len]} through {@code b[b.length-1]} are unaffected. * * The behavior for the case where the input stream is asynchronously * closed, or the thread interrupted during the read, is highly input * stream specific, and therefore not specified. * * If an I/O error occurs reading from the input stream, then it may do * so after some bytes have been read. Consequently the input stream may be * in an inconsistent state. It is strongly recommended that the stream be * promptly closed if an I/O error occurs. 
* * @param b the buffer into which the data is read * @param off the start offset in {@code b} at which the data is written * @param len the maximum number of bytes to read * @return the actual number of bytes read into the buffer * @throws IOException if an I/O error occurs * @throws NullPointerException if {@code b} is {@code null} * @throws IndexOutOfBoundsException If {@code off} is negative, {@code len} *is negative, or {@code len} is greater than {@code b.length - off} * * @since 1.9 */ public int readBytes(byte[] b, int off, int len) throws IOException { Objects.requireNonNull(b); if (off < 0 || len < 0 || len > b.length - off) throw new IndexOutOfBoundsException(); int n = 0; while (n < len) { int count = read(b, off + n, len -
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Hi Chris,

Currently InputStream guarantees that either some bytes are read *xor* EOF (-1) is returned *xor* an IOException is thrown, even with the default implementation of read(byte[], int, int), which is implemented in terms of int read(). This new method can throw an IOException after some bytes have successfully been read from the stream, and the caller does not get to know how many. Would something like the following make any more sense?

public int readBytes(byte[] b, int off, int len) throws IOException {
    Objects.requireNonNull(b);
    if (off < 0 || len < 0 || len > b.length - off)
        throw new IndexOutOfBoundsException();
    int n = 0;
    while (n < len) {
        int count;
        try {
            count = read(b, off + n, len - n);
        } catch (IOException e) {
            if (n == 0) {
                throw e;
            } else {
                throw new IncompleteReadBytesException(e, n);
            }
        }
        if (count < 0)
            break;
        n += count;
    }
    return n;
}

/**
 * Thrown from {@link #readBytes(byte[], int, int)} when at least one byte
 * has successfully been read from the stream into the byte buffer when the
 * IOException was thrown.
 */
public static class IncompleteReadBytesException extends IOException {
    private final int bytesRead;

    public IncompleteReadBytesException(IOException cause, int bytesRead) {
        super(cause);
        this.bytesRead = bytesRead;
    }

    /**
     * @return the number of bytes read successfully from the stream into
     *         the byte array before the exception was thrown.
     */
    public int getBytesRead() {
        return bytesRead;
    }
}

Regards,
Peter

On 04/23/2015 11:01 AM, Chris Hegarty wrote:

A while back when we added the long overdue java.io.InputStream.transferTo method, there was support for adding a blocking bulk read operation. This has been sitting in a branch in the sandbox since then. I would like to revive it with the intention of bringing it into 9. The motivation for this addition is to provide library support for a common pattern found when reading from input streams.

/**
 * Reads some bytes from the input stream into the given byte array. This
 * method blocks until {@code len} bytes of input data have been read, or
 * end of stream is detected. The number of bytes actually read, possibly
 * zero, is returned. This method does not close the input stream.
 *
 * In the case where end of stream is reached before {@code len} bytes
 * have been read, then the actual number of bytes read will be returned.
 * When this stream reaches end of stream, further invocations of this
 * method will return zero.
 *
 * If {@code len} is zero, then no bytes are read and {@code 0} is
 * returned; otherwise, there is an attempt to read up to {@code len} bytes.
 *
 * The first byte read is stored into element {@code b[off]}, the next
 * one into {@code b[off+1]}, and so on. The number of bytes read is, at
 * most, equal to {@code len}. Let k be the number of bytes actually
 * read; these bytes will be stored in elements {@code b[off]} through
 * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k
 * {@code ]} through {@code b[off+len-1]} unaffected.
 *
 * In every case, elements {@code b[0]} through {@code b[off]} and
 * elements {@code b[off+len]} through {@code b[b.length-1]} are unaffected.
 *
 * The behavior for the case where the input stream is asynchronously
 * closed, or the thread interrupted during the read, is highly input
 * stream specific, and therefore not specified.
 *
 * If an I/O error occurs reading from the input stream, then it may do
 * so after some bytes have been read. Consequently the input stream may be
 * in an inconsistent state. It is strongly recommended that the stream be
 * promptly closed if an I/O error occurs.
 *
 * @param b the buffer into which the data is read
 * @param off the start offset in {@code b} at which the data is written
 * @param len the maximum number of bytes to read
 * @return the actual number of bytes read into the buffer
 * @throws IOException if an I/O error occurs
 * @throws NullPointerException if {@code b} is {@code null}
 * @throws IndexOutOfBoundsException If {@code off} is negative, {@code len}
 *     is negative, or {@code len} is greater than {@code b.length - off}
 *
 * @since 1.9
 */
public int readBytes(byte[] b, int off, int len) throws IOException {
    Objects.requireNonNull(b);
    if (off < 0 || len < 0 || len > b.length - off)
        throw new IndexOutOfBoundsException();
    int n = 0;
    while (n < len) {
        int count = read(b, off + n, len - n);
        if (count < 0)
            break;
        n += count;
    }
    return n;
}

-Chris.
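The "common pattern" the proposal aims to replace can be seen in a self-contained sketch. The class and helper names below (ReadLoopDemo, readFullyOrShort) are illustrative only, with ByteArrayInputStream standing in for an arbitrary stream:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadLoopDemo {
    // The manual caller-side loop that readBytes(byte[], int, int) is meant
    // to replace: keep calling read until len bytes arrive or EOF is hit.
    static int readFullyOrShort(InputStream in, byte[] b, int off, int len)
            throws IOException {
        int n = 0;
        while (n < len) {
            int count = in.read(b, off + n, len - n);
            if (count < 0)
                break;          // end of stream: return a short count
            n += count;
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = {1, 2, 3, 4, 5};
        InputStream in = new ByteArrayInputStream(data);
        byte[] buf = new byte[8];

        int n = readFullyOrShort(in, buf, 0, 8);  // ask for more than available
        System.out.println(n);                    // prints 5 (short read at EOF)

        int m = readFullyOrShort(in, buf, 0, 8);  // stream already at EOF
        System.out.println(m);                    // prints 0
    }
}
```

Note that, as the proposed spec says, a second invocation after end of stream returns zero rather than -1, unlike the underlying read.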
Re: RFR [9] Add blocking bulk read to java.io.InputStream
I think the name readBytes is not very informative, and it is too close to read + an array of bytes. We cannot use readFully (from DataInput/DataInputStream) because, instead of returning the number of bytes read, it throws an EOFException if the end of the stream is reached. So what about readAllBytes? (There is also a readAllBytes in java.nio.file.Files that has equivalent semantics.)

regards,
Rémi

On 04/23/2015 11:01 AM, Chris Hegarty wrote:

A while back when we added the long overdue java.io.InputStream.transferTo method, there was support for adding a blocking bulk read operation. This has been sitting in a branch in the sandbox since then. I would like to revive it with the intention of bringing it into 9. The motivation for this addition is to provide library support for a common pattern found when reading from input streams.

/**
 * Reads some bytes from the input stream into the given byte array. This
 * method blocks until {@code len} bytes of input data have been read, or
 * end of stream is detected. The number of bytes actually read, possibly
 * zero, is returned. This method does not close the input stream.
 *
 * In the case where end of stream is reached before {@code len} bytes
 * have been read, then the actual number of bytes read will be returned.
 * When this stream reaches end of stream, further invocations of this
 * method will return zero.
 *
 * If {@code len} is zero, then no bytes are read and {@code 0} is
 * returned; otherwise, there is an attempt to read up to {@code len} bytes.
 *
 * The first byte read is stored into element {@code b[off]}, the next
 * one into {@code b[off+1]}, and so on. The number of bytes read is, at
 * most, equal to {@code len}. Let k be the number of bytes actually
 * read; these bytes will be stored in elements {@code b[off]} through
 * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k
 * {@code ]} through {@code b[off+len-1]} unaffected.
 *
 * In every case, elements {@code b[0]} through {@code b[off]} and
 * elements {@code b[off+len]} through {@code b[b.length-1]} are unaffected.
 *
 * The behavior for the case where the input stream is asynchronously
 * closed, or the thread interrupted during the read, is highly input
 * stream specific, and therefore not specified.
 *
 * If an I/O error occurs reading from the input stream, then it may do
 * so after some bytes have been read. Consequently the input stream may be
 * in an inconsistent state. It is strongly recommended that the stream be
 * promptly closed if an I/O error occurs.
 *
 * @param b the buffer into which the data is read
 * @param off the start offset in {@code b} at which the data is written
 * @param len the maximum number of bytes to read
 * @return the actual number of bytes read into the buffer
 * @throws IOException if an I/O error occurs
 * @throws NullPointerException if {@code b} is {@code null}
 * @throws IndexOutOfBoundsException If {@code off} is negative, {@code len}
 *     is negative, or {@code len} is greater than {@code b.length - off}
 *
 * @since 1.9
 */
public int readBytes(byte[] b, int off, int len) throws IOException {
    Objects.requireNonNull(b);
    if (off < 0 || len < 0 || len > b.length - off)
        throw new IndexOutOfBoundsException();
    int n = 0;
    while (n < len) {
        int count = read(b, off + n, len - n);
        if (count < 0)
            break;
        n += count;
    }
    return n;
}

-Chris.
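The readFully behavior Rémi contrasts against can be demonstrated with a small self-contained sketch (the class name DataInputReadFullyDemo is illustrative only):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class DataInputReadFullyDemo {
    public static void main(String[] args) throws IOException {
        byte[] data = {10, 20, 30};   // only three bytes available

        // DataInput.readFully returns no count; it throws EOFException when
        // the stream ends before the requested number of bytes was read,
        // which is why it cannot serve as the count-returning bulk read.
        DataInputStream din =
                new DataInputStream(new ByteArrayInputStream(data));
        byte[] buf = new byte[5];
        try {
            din.readFully(buf, 0, 5);        // requests 5, only 3 available
            System.out.println("read fully");
        } catch (EOFException e) {
            System.out.println("EOF after partial read");  // this branch runs
        }
    }
}
```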
Re: RFR [9] Add blocking bulk read to java.io.InputStream
Hi Chris,

The spec looks good. The only detail I've noticed is that "an exception is thrown" as a possible cause of unblocking the method is missing:

--- dev/jdk/src/java.base/share/classes/java/io/InputStream.java (revision )
+++ dev/jdk/src/java.base/share/classes/java/io/InputStream.java (revision )
@@ -408,8 +408,9 @@
 /**
  * Reads some bytes from the input stream into the given byte array. This
  * method blocks until {@code len} bytes of input data have been read, or
- * end of stream is detected. The number of bytes actually read, possibly
- * zero, is returned. This method does not close the input stream.
+ * end of stream is detected, or an exception is thrown. The number of bytes
+ * actually read, possibly zero, is returned. This method does not close the
+ * input stream.
  *
  * In the case where end of stream is reached before {@code len} bytes
  * have been read, then the actual number of bytes read will be returned.

This fix would be very consistent with the other `read` methods of the class. I'd be happy to write a comprehensive test for the whole thing.

Thanks.
-Pavel

> On 23 Apr 2015, at 10:01, Chris Hegarty wrote:
>
> A while back when we added the long overdue java.io.InputStream.transferTo
> method, there was support for adding a blocking bulk read operation. This
> has been sitting in a branch in the sandbox since then. I would like to
> revive it with the intention of bringing it into 9. The motivation for this
> addition is to provide library support for a common pattern found when
> reading from input streams.
>
> /**
>  * Reads some bytes from the input stream into the given byte array. This
>  * method blocks until {@code len} bytes of input data have been read, or
>  * end of stream is detected. The number of bytes actually read, possibly
>  * zero, is returned. This method does not close the input stream.
>  *
>  * In the case where end of stream is reached before {@code len} bytes
>  * have been read, then the actual number of bytes read will be returned.
>  * When this stream reaches end of stream, further invocations of this
>  * method will return zero.
>  *
>  * If {@code len} is zero, then no bytes are read and {@code 0} is
>  * returned; otherwise, there is an attempt to read up to {@code len} bytes.
>  *
>  * The first byte read is stored into element {@code b[off]}, the next
>  * one into {@code b[off+1]}, and so on. The number of bytes read is, at
>  * most, equal to {@code len}. Let k be the number of bytes actually
>  * read; these bytes will be stored in elements {@code b[off]} through
>  * {@code b[off+}k{@code -1]}, leaving elements {@code b[off+}k
>  * {@code ]} through {@code b[off+len-1]} unaffected.
>  *
>  * In every case, elements {@code b[0]} through {@code b[off]} and
>  * elements {@code b[off+len]} through {@code b[b.length-1]} are unaffected.
>  *
>  * The behavior for the case where the input stream is asynchronously
>  * closed, or the thread interrupted during the read, is highly input
>  * stream specific, and therefore not specified.
>  *
>  * If an I/O error occurs reading from the input stream, then it may do
>  * so after some bytes have been read. Consequently the input stream may be
>  * in an inconsistent state. It is strongly recommended that the stream be
>  * promptly closed if an I/O error occurs.
>  *
>  * @param b the buffer into which the data is read
>  * @param off the start offset in {@code b} at which the data is written
>  * @param len the maximum number of bytes to read
>  * @return the actual number of bytes read into the buffer
>  * @throws IOException if an I/O error occurs
>  * @throws NullPointerException if {@code b} is {@code null}
>  * @throws IndexOutOfBoundsException If {@code off} is negative, {@code len}
>  *     is negative, or {@code len} is greater than {@code b.length - off}
>  *
>  * @since 1.9
>  */
> public int readBytes(byte[] b, int off, int len) throws IOException {
>     Objects.requireNonNull(b);
>     if (off < 0 || len < 0 || len > b.length - off)
>         throw new IndexOutOfBoundsException();
>     int n = 0;
>     while (n < len) {
>         int count = read(b, off + n, len - n);
>         if (count < 0)
>             break;
>         n += count;
>     }
>     return n;
> }
>
> -Chris.
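In the spirit of Pavel's offer to write tests, a minimal self-contained check of the proposed semantics might look like the following. Since readBytes is not yet in InputStream, the proposed implementation is copied locally as a static helper; the class and helper names are illustrative only:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Objects;

public class ReadBytesTest {
    // Local copy of the proposed readBytes implementation, so the spec'd
    // behavior can be exercised before the method lands in InputStream.
    static int readBytes(InputStream in, byte[] b, int off, int len)
            throws IOException {
        Objects.requireNonNull(b);
        if (off < 0 || len < 0 || len > b.length - off)
            throw new IndexOutOfBoundsException();
        int n = 0;
        while (n < len) {
            int count = in.read(b, off + n, len - n);
            if (count < 0)
                break;
            n += count;
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[]{1, 2, 3, 4});
        byte[] b = new byte[10];

        // Full read: 3 bytes land in b[2..4], count returned is 3.
        if (readBytes(in, b, 2, 3) != 3) throw new AssertionError("full read");
        if (b[2] != 1 || b[4] != 3)      throw new AssertionError("placement");
        // Short read at end of stream: only 1 byte remains.
        if (readBytes(in, b, 0, 5) != 1) throw new AssertionError("short read");
        // Further invocations after end of stream return zero, not -1.
        if (readBytes(in, b, 0, 5) != 0) throw new AssertionError("EOF returns 0");
        // A zero-length request reads nothing and returns 0.
        if (readBytes(in, b, 0, 0) != 0) throw new AssertionError("zero-length");
        System.out.println("all checks passed");
    }
}
```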