If we eventually want to add multipart encoders/decoders to codec, should we:

(1) reopen the codec sandbox,
(2) start a new sandbox project, or
(3) work directly in the codec project?

I have extracted the multipart post encoding from the HttpClient multipart post code and am working on an initial framework to support its addition to codec. I would like to get this into the sandbox soon, but I want to be flexible to others' needs. If there is interest in also working on Streamable Codecs, my code would benefit from Streamable encoder/decoder interfaces, so we should probably work on that code in the same place.
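For illustration only, here is a rough sketch of what such stream-oriented encoder/decoder interfaces might look like; the names StreamEncoder/StreamDecoder and their signatures are my own guess, not existing codec API:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/** Hypothetical stream-oriented encoder: reads plain data, writes encoded data. */
public interface StreamEncoder {
    void encode(InputStream in, OutputStream out) throws IOException;
}

/** Hypothetical stream-oriented decoder: reads encoded data, writes plain data. */
public interface StreamDecoder {
    void decode(InputStream in, OutputStream out) throws IOException;
}

A multipart encoder could then implement StreamEncoder and be composed with other codecs that use the same interfaces.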

-Mark

Mark R. Diggory wrote:

Hmmm, that's a hot potato; I suspect we won't be using it (at least initially). ;-)

Maybe, to focus on the initial deliverable, we would require:

2.) Multipart Mime Codecs

Maintaining Multipart Encoding in the various multipart mimetypes.


Multipart Types (mostly email):

  multipart/mixed             Messages with multiple parts
  multipart/alternative       Messages with multiple, alternative parts
  multipart/related           Message with multiple, related parts
  multipart/digest            Multiple parts are digests
  multipart/report            For reporting of email status (admin.)
  multipart/parallel          Order of parts does not matter
  multipart/appledouble       Macintosh file data
  multipart/header-set        Aggregate messages; descriptor as header
  multipart/voice-message     Container for voice-mail
  multipart/form-data         HTML FORM data (see Ch. 9 and App. B)
  multipart/x-mixed-replace   Infinite multiparts - See Chapter 9 (Netscape)



1.) Which do we want to support?


2.) How might we write an extensible (and nestable) API to encode/decode various multiparts elegantly?

Currently, HttpClient maintains this efficiently in the form of "Parts"

MultipartPostMethod.addPart(Part part);

Part
   ---StringPart
   ---FilePart

All parts have a Part.send(OutputStream out) method responsible for encoding and streaming the Part's contents to the MultipartPostMethod.
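For reference, this is roughly how the current HttpClient Part API is used (written from memory of the HttpClient 2.x/3.0 API, so package names and signatures may differ slightly; the URL and file names are placeholders):

import java.io.File;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.MultipartPostMethod;
import org.apache.commons.httpclient.methods.multipart.FilePart;
import org.apache.commons.httpclient.methods.multipart.StringPart;

public class MultipartPostExample {
    public static void main(String[] args) throws Exception {
        // Build a multipart POST out of Parts; each Part later writes itself
        // to the request stream via Part.send(OutputStream).
        MultipartPostMethod post = new MultipartPostMethod("http://example.org/upload");
        post.addPart(new StringPart("comment", "nightly build"));
        post.addPart(new FilePart("archive", new File("build.tar.gz")));
        new HttpClient().executeMethod(post);
    }
}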

1.) This means that Parts are kept in their native Format/State/Object until they are processed into the Stream.

2.) Currently in HttpClient, Parts also generate the "Multipart" portions of the encoding (i.e. boundaries etc.). Ideally, the "processor" of the Part should probably be more responsible for this, with the Part just generating a Stream of its content and the "codec" responsible for the boundaries/encoding etc. This way, no matter the codec (multipart/related, multipart/mixed, etc.), the user is still working with the same "Parts" and the only difference is the "codec"/"factory". This is similar in nature to the idea of service providers and JNDI: you're still working with Context/Attributes/SearchResults no matter which service provider you're using (DNS, LDAP, FS).
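As a rough, hypothetical sketch of that separation (all names below are invented for illustration, not existing codec or HttpClient API): the codec owns the boundary framing, while each Part only streams its own content.

import java.io.IOException;
import java.io.OutputStream;

/** Hypothetical content-only part: knows nothing about boundaries. */
interface ContentPart {
    String getContentType();
    void writeContent(OutputStream out) throws IOException;
}

/** Hypothetical codec for one multipart subtype: owns the boundary framing. */
class MultipartMixedCodec {
    private final String boundary;

    MultipartMixedCodec(String boundary) {
        this.boundary = boundary;
    }

    /** Encodes the given parts, writing boundaries and headers around each part's content. */
    void encode(ContentPart[] parts, OutputStream out) throws IOException {
        for (int i = 0; i < parts.length; i++) {
            ContentPart part = parts[i];
            out.write(("--" + boundary + "\r\n").getBytes("US-ASCII"));
            out.write(("Content-Type: " + part.getContentType() + "\r\n\r\n").getBytes("US-ASCII"));
            part.writeContent(out);
            out.write("\r\n".getBytes("US-ASCII"));
        }
        out.write(("--" + boundary + "--\r\n").getBytes("US-ASCII"));
    }
}

A multipart/related or multipart/digest codec could then reuse the same ContentPart objects and differ only in how it writes the framing.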

3.) This is why I initially thought JAF would be a reasonable choice: it delivers Parts that also have mimetypes associated with them, so the codec could simply use the mimetype to determine the appropriate encoding for that specific multipart codec.
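For context, this is roughly the JAF (javax.activation) usage that idea is based on; a sketch from memory, with the file name as a placeholder:

import java.io.ByteArrayOutputStream;
import javax.activation.DataHandler;
import javax.activation.FileDataSource;

public class JafMimetypeExample {
    public static void main(String[] args) throws Exception {
        // JAF pairs content with a mimetype; a multipart codec could branch on
        // handler.getContentType() when deciding how to encode each part.
        DataHandler handler = new DataHandler(new FileDataSource("report.pdf"));
        System.out.println(handler.getContentType()); // derived from the file name
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        handler.writeTo(buffer);                      // streams the raw content
    }
}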

-Mark

Serge Knystautas wrote:

Noel J. Bergman wrote:

Mark R. Diggory asked:

Shouldn't [mime] be dependent on JAF?



Why?



Looks like others already stated this, but here's my why/why not:


why: it's an established API that doesn't have anything to do with MIME handling. JAF handles creating Java objects from streams. (This was listed as one of the tasks for this initiative, and I don't think it really belongs).

why not: it's really, really painful to use. It has a decent idea of a data handler, but it's just way too complicated.

I would prefer that JAF, and object instantiation from streams generally, be unrelated to mime. The reason it currently works this way is so you can "nest" MimeMultipart objects, but I think there's a better way to wrap this.
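For reference, the nesting mentioned here looks roughly like this in JavaMail (javax.mail.internet); a sketch from memory, not part of the proposal:

import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMultipart;

public class NestedMultipartExample {
    public static void main(String[] args) throws Exception {
        // Inner multipart/alternative: plain-text and HTML versions of the same body.
        MimeMultipart alternative = new MimeMultipart("alternative");
        MimeBodyPart text = new MimeBodyPart();
        text.setText("plain text body");
        MimeBodyPart html = new MimeBodyPart();
        html.setContent("<p>html body</p>", "text/html");
        alternative.addBodyPart(text);
        alternative.addBodyPart(html);

        // Outer multipart/mixed wraps the inner multipart: this is the "nesting".
        MimeBodyPart wrapper = new MimeBodyPart();
        wrapper.setContent(alternative);
        MimeMultipart mixed = new MimeMultipart("mixed");
        mixed.addBodyPart(wrapper);
    }
}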



--
Mark Diggory
Software Developer
Harvard MIT Data Center
http://osprey.hmdc.harvard.edu
