Here's what I've discovered. I will file a JIRA once I've finished gathering data.
In my transform plugin I use TSIOBufferCreate to create the buffer, which TSIOBufferWrite then fills with the desired response body. When a response is peer-fetched from cache, the addresses of the transformed response header values (I'm printing the addresses via TSHttpTxnTransformRespGet and other APIs) start halfway into the buffer allocated by TSIOBufferCreate. So if I write about 2 KB into it, I overwrite my transform response headers. When the response is not from cache, or when it is from cache but we're not in a cluster, this NEVER happens. It could be coincidence, but it's repeatable for response sizes varying from 5 KB to 10 MB.

I don't know how the transform response header values can share the same memory as what I get from calling TSIOBufferCreate, but they do. My concern is that at a deeper level the memory is being mismanaged. I can check in my transform plugin whether I'm about to overwrite the transform response header buffer, but what about other parts of ATS that use these buffers, or the case where multiple transforms run at the same time?

On 8/28/13 9:51 AM, "Walsh, Peter" <peter.wa...@disney.com> wrote:

>Here's an odd one.
>
>I have a transform that caches the untransformed response, so that I
>can perform the transformation for each subsequent cache hit of that
>document. This works well enough, until my plugin is in a cluster.
>When in a cluster, if the cached document is peer-fetched, the response
>headers returned to the client are corrupted. If it is not peer-fetched,
>it works fine.
>
>I've done many tests with a single node and in a cluster, and am
>convinced that being in the cluster is the key factor in this bad
>behavior.
>
>I am at a loss. Why does being part of a cluster cause this? Should I
>be handling response headers differently?