Hi,
    All I want to do is:
    1. Read from some source
    2. Do some calculation to produce a byte array
    3. Write the byte array to HDFS
    In Hadoop, I can share a single ImmutableBytesWritable and use
System.arraycopy into it, which keeps the application from creating a lot of
small objects and so improves GC latency.
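For concreteness, here is a minimal sketch of the reuse pattern I mean. The record size, the "calculation", and the class name are made up for illustration; the point is that the two buffers are allocated once and refilled per record instead of allocating a fresh array each time:

```java
import java.util.Arrays;

public class BufferReuse {
    static final int RECORD_SIZE = 8;  // hypothetical fixed record size

    // Compute a record's bytes into a scratch array, then copy them into the
    // shared destination buffer, mimicking reuse of a single Writable.
    static void computeInto(int record, byte[] scratch, byte[] dest) {
        Arrays.fill(scratch, (byte) record);           // stand-in "calculation"
        System.arraycopy(scratch, 0, dest, 0, RECORD_SIZE);
    }

    static long run() {
        byte[] scratch = new byte[RECORD_SIZE];  // allocated once
        byte[] shared  = new byte[RECORD_SIZE];  // the reused output buffer
        long sum = 0;
        for (int record = 0; record < 3; record++) {
            computeInto(record, scratch, shared);
            sum += shared[0];                    // stand-in for "write to HDFS"
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```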
    However, I was wondering whether there is any similar technique in Spark
that avoids creating many small objects?
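What I have in mind would be something like reusing one buffer per partition inside mapPartitions. Below is only a sketch of that idea without the actual Spark dependency: the partition iterator is simulated with a plain Java Iterator, and the record size and data are invented. Whether this is actually safe in Spark (the bytes must be fully consumed, e.g. written out, before the buffer is overwritten for the next record) is exactly my question:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class PartitionBufferReuse {
    static final int RECORD_SIZE = 8;  // hypothetical fixed record size

    // Sketch of a body one might pass to RDD.mapPartitions: the byte[] is
    // allocated once per partition and refilled for each record, rather than
    // allocating a fresh array per element.
    static long processPartition(Iterator<Integer> records) {
        byte[] buffer = new byte[RECORD_SIZE];  // one allocation per partition
        long sum = 0;
        while (records.hasNext()) {
            int r = records.next();
            Arrays.fill(buffer, (byte) r);      // stand-in for the real calculation
            sum += buffer[0];                   // stand-in for writing to HDFS
        }
        return sum;
    }

    public static void main(String[] args) {
        // plain iterator simulating one Spark partition
        Iterator<Integer> partition = List.of(1, 2, 3).iterator();
        System.out.println(processPartition(partition));
    }
}
```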
