[ http://issues.apache.org/jira/browse/AXIS-2084?page=comments#action_12316470 ]
Tom Ziemer commented on AXIS-2084:
----------------------------------

Hi again,

thank you very much, Brian & dims, for the patch you supplied. I was really astonished to see a fix for this problem in such a short time. We are now using the latest CVS version of Axis (including version 1.30 of org.apache.axis.attachments.DimeBodyPart) and have run our tests again:

- Java (Axis) client: no problems at all.
- .NET 2.0 Beta: we were able to transfer files up to a size of 256 MB. Any additional byte causes a "System.OutOfMemoryException" on the client side, which has 2 GB of RAM. We are not sure yet whether this is an Axis or a .NET issue; we are currently working on another client implementation to figure out whether the problem is caused by the server.

Anyway, since we will not be sending files larger than 100 MB, the current version is perfectly fine for us.

Thanks again!

Regards,
Tom

> DIME attachments: Type_Length of the final record chunk must be zero
> ---------------------------------------------------------------------
>
>          Key: AXIS-2084
>          URL: http://issues.apache.org/jira/browse/AXIS-2084
>      Project: Apache Axis
>         Type: Bug
>   Components: Serialization/Deserialization
>     Versions: 1.2, 1.2.1
>  Environment: Microsoft XP
>     Reporter: Coralia Silvana Popa
>     Assignee: Davanum Srinivas
>  Attachments: DimeBodyPart.java, DimeBodyPartDiff.txt, DimeBodyPartDiff_2.txt, DimeBodyPart_2.java, EchoAttachment.java
>
> Large files sent as DIME attachments are not correctly serialized.
> When reading a series of chunked records, the parser assumes that the first record without the CF flag is the final record of the chunk; in this case, it is the last record in my sample. The record type is specified only in the first record chunk, and all remaining chunks must have the TYPE_T field and all other header fields (except for the DATA_LENGTH field) set to zero.
> It seems that Type_Length (and maybe other header fields) is not set to 0 for the last chunk. The code works correctly when there is only one chunk.
> The problem is in class org.apache.axis.attachments.DimeBodyPart, in the method
> void send(java.io.OutputStream os, byte position, DynamicContentDataHandler dh, final long maxchunk).
> I suggest the following code to fix this problem:
>
>     void send(java.io.OutputStream os, byte position,
>               DynamicContentDataHandler dh, final long maxchunk)
>         throws java.io.IOException {
>
>         BufferedInputStream in = new BufferedInputStream(dh.getInputStream());
>
>         final int myChunkSize = dh.getChunkSize();
>
>         byte[] buffer1 = new byte[myChunkSize];
>         byte[] buffer2 = new byte[myChunkSize];
>
>         int bytesRead1 = 0, bytesRead2 = 0;
>         bytesRead1 = in.read(buffer1);
>
>         if (bytesRead1 < 0) {
>             sendHeader(os, position, 0, (byte) 0);
>             os.write(pad, 0, dimePadding(0));
>             return;
>         }
>
>         byte chunknext = 0;
>         do {
>             bytesRead2 = in.read(buffer2);
>
>             if (bytesRead2 < 0) {
>                 // Last record: do not set the chunk bit.
>                 // buffer1 contains the final chunked record.
>                 sendChunk(os, position, buffer1, 0, bytesRead1, chunknext);
>                 break;
>             }
>
>             sendChunk(os, position, buffer1, 0, bytesRead1, (byte) (CHUNK | chunknext));
>             chunknext = CHUNK_NEXT;
>
>             // Now that buffer1 has been written, copy buffer2 into buffer1.
>             System.arraycopy(buffer2, 0, buffer1, 0, myChunkSize);
>             bytesRead1 = bytesRead2;
>
>         } while (bytesRead2 > 0);
>     }
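For readers outside the Axis code base, here is a minimal, self-contained sketch of the read-ahead pattern the suggested fix relies on: each buffer is written only after the next read has revealed whether more data exists, so the final chunk can be emitted without the CF (chunked) flag and with the remaining header fields left zero. The ChunkedSendSketch class and its ChunkWriter callback are hypothetical stand-ins for DimeBodyPart.sendChunk() and sendHeader(); they are not part of the Axis API.

    import java.io.BufferedInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ChunkedSendSketch {

        /** Hypothetical sink for one DIME record chunk. */
        public interface ChunkWriter {
            // isLast == true means: clear the CF flag and, per the DIME spec,
            // leave TYPE_LENGTH and the other header fields (except DATA_LENGTH) zero.
            void write(byte[] data, int length, boolean isFirst, boolean isLast)
                    throws IOException;
        }

        public static void send(InputStream source, int chunkSize, ChunkWriter writer)
                throws IOException {
            BufferedInputStream in = new BufferedInputStream(source);
            byte[] current = new byte[chunkSize];   // chunkSize must be > 0
            byte[] next = new byte[chunkSize];

            int currentLen = in.read(current);
            if (currentLen < 0) {
                // Empty payload: emit a single empty, non-chunked record.
                writer.write(current, 0, true, true);
                return;
            }

            boolean first = true;
            while (true) {
                int nextLen = in.read(next);
                if (nextLen < 0) {
                    // End of stream: 'current' holds the final chunk, written without CF.
                    writer.write(current, currentLen, first, true);
                    return;
                }
                // More data follows, so 'current' is written with the CF flag set.
                writer.write(current, currentLen, first, false);
                first = false;

                // Swap the buffer references rather than copying; carry the length over.
                byte[] tmp = current;
                current = next;
                next = tmp;
                currentLen = nextLen;
            }
        }
    }

Swapping the buffer references instead of calling System.arraycopy() avoids copying up to one full chunk per iteration; apart from that, the flow is the same as in the patch quoted above.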
