Hi,

This is in reference to the org.apache.commons.lang.SerializationUtils class you wrote. We are currently using this class to serialize object graphs that are extremely large, and I am looking to optimize the serialization process. Below are a few tips I picked up from a website; I would like to know your opinion on them.
1. ByteArrayOutputStream, by default, begins with a 32-byte array for the output. As content is written to the stream, the required size of the content is computed and, if necessary, the array is expanded to the greater of the required size or twice the current size. Java object serialization produces output that is somewhat bloated (for example, fully qualified class names are included in uncompressed string form), so the 32-byte default starting size means that lots of small arrays are created, copied into, and thrown away as data is written. This has an easy fix: construct the stream with a larger initial size.

2. All of the methods of ByteArrayOutputStream that modify the contents of the byte array are synchronized. In general this is a good idea, but in this case we can be certain that only a single thread will ever be accessing the stream, so removing the synchronization would speed things up a little. ByteArrayInputStream's methods are also synchronized.

3. The toByteArray() method creates and returns a copy of the stream's byte array. Again, this is usually a good idea: if you retrieve the byte array and then continue writing to the stream, the retrieved array should not change. In this case, however, creating another byte array and copying into it merely wastes cycles and makes extra work for the garbage collector.

I have pasted a rough sketch of what I mean by points 1 and 3 below my signature.

Regards,
Deepak Saini
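Here is a minimal sketch of what I have in mind for points 1 and 3. The FastSerializer and DirectByteArrayOutputStream names are just my own for illustration and are not part of Commons Lang. Point 2 is harder to show this way, since dropping the synchronization would mean reimplementing the stream rather than subclassing it (the JDK declares the write methods synchronized).

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Illustrative helper only (not part of Commons Lang): serializes with a
// pre-sized buffer (point 1) and skips the defensive copy that
// toByteArray() makes (point 3) by exposing the internal buffer.
public final class FastSerializer {

    // ByteArrayOutputStream keeps its data in the protected fields
    // 'buf' and 'count', so a subclass can hand them out directly.
    private static final class DirectByteArrayOutputStream extends ByteArrayOutputStream {
        DirectByteArrayOutputStream(int initialSize) {
            super(initialSize); // point 1: start with a buffer sized for the expected output
        }

        byte[] buffer() {
            return buf;         // point 3: no copy; only the first length() bytes are valid
        }

        int length() {
            return count;
        }
    }

    /**
     * Serializes obj into a stream whose buffer can be read back without
     * copying. Callers must use only the first length() bytes of buffer().
     */
    public static DirectByteArrayOutputStream serialize(Serializable obj, int expectedSize)
            throws IOException {
        DirectByteArrayOutputStream baos = new DirectByteArrayOutputStream(expectedSize);
        ObjectOutputStream oos = new ObjectOutputStream(baos);
        try {
            oos.writeObject(obj);
        } finally {
            oos.close();
        }
        return baos;
    }
}

The caveat is that the returned buffer is usually larger than the serialized data, so whatever consumes it has to honour length() rather than buffer().length. On our side we would call something like serialize(graph, 64 * 1024) once we have measured a typical output size for our graphs.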
