Hi,

Thank you for the details; more questions to come.
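
In the meantime, assuming the records you listed below are all you need,
you can already ask varnishlog for just those tags and group them per
request, along these lines (untested, and note that the SLT_ prefix is
dropped on the command line):

    varnishlog -g request \
        -i Timestamp,ReqStart,ReqMethod,ReqURL,ReqProtocol,RespStatus \
        -i ReqHeader,RespHeader,ReqAcct,BereqAcct,VCL_Log
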
On Tue, Mar 19, 2019 at 1:42 AM Hardik <[email protected]> wrote:
>
> Hi Dridi,
>
> Can you give me a list of log records you need to collect?
>
> SLT_Timestamp :

Do you need all timestamps or a specific metric?

> SLT_ReqStart :
> SLT_ReqMethod :
> SLT_ReqURL:
> SLT_ReqProtocol :
> SLT_RespStatus :
> SLT_ReqHeader :
> SLT_RespHeader :
> SLT_ReqAcct :
> SLT_BereqAcct :

Do you need the BereqAcct records for all transactions? Including cache
hits? This one is tricky in terms of billing.

> SLT_VCL_Log :
>
> And possibly how you are trying to group them if they come from different
> transactions?

You can do the grouping with the -g option, but that didn't go well for
you, so that's what I'm trying to figure out.

> I am reading based xid ( by FD ). Means reading full records per fd.

What does FD mean here? File descriptor? From ReqStart?

> Please let me know if any other information I can provide..
>
> If this is not related to my problem still I am curious to know how grouping
> is happening. You can point out some code or links with some details, I will
> go through.

Well, utilities like varnishlog or varnishncsa accumulate transactions in
memory via libvarnishapi (which may take a long time) and libvarnishapi
then presents them in order. So utilities don't implement this logic
themselves; they simply get the data presented to them in a callback
function.

That's where timeouts, overruns or transaction limits may result in data
loss: slow log consumers don't slow Varnish down, and Varnish isn't slowed
down by logging beyond the cost of writing the records to memory.
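
Since you asked for code pointers, here is a rough sketch of what such a
callback-based reader looks like. It is written from memory against the
Varnish 6.x headers and is untested, so treat the exact names and
signatures as approximate and check vapi/vsl.h, vapi/vsm.h and the
varnishlog sources before relying on it:

/*
 * Minimal libvarnishapi log reader: attach to the running varnishd,
 * group records per client request and print each completed group.
 * Build roughly with:
 *   cc -o logdump logdump.c $(pkg-config --cflags --libs varnishapi)
 */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#include "vapi/vsm.h"
#include "vapi/vsl.h"

/* VSLQ_Dispatch() calls this once per completed transaction group. */
static int
dump_group(struct VSL_data *vsl, struct VSL_transaction * const pt[],
    void *priv)
{
    struct VSL_transaction *t;
    int i;

    (void)priv;
    for (i = 0; (t = pt[i]) != NULL; i++) {
        printf("* vxid %ju\n", (uintmax_t)t->vxid);
        while (VSL_Next(t->c) == 1) {
            if (!VSL_Match(vsl, t->c))  /* -i/-x style filters */
                continue;
            printf("- %-14s %.*s\n",
                VSL_tags[VSL_TAG(t->c->rec.ptr)],
                (int)VSL_LEN(t->c->rec.ptr),
                VSL_CDATA(t->c->rec.ptr));
        }
    }
    return (0);
}

int
main(void)
{
    struct vsm *vsm;
    struct VSL_data *vsl;
    struct VSL_cursor *c;
    struct VSLQ *q;
    int status;

    vsm = VSM_New();
    vsl = VSL_New();
    if (VSM_Attach(vsm, -1) != 0) {
        fprintf(stderr, "VSM_Attach: %s\n", VSM_Error(vsm));
        return (1);
    }
    c = VSL_CursorVSM(vsl, vsm, VSL_COPT_TAIL | VSL_COPT_BATCH);
    if (c == NULL) {
        fprintf(stderr, "VSL_CursorVSM: %s\n", VSL_Error(vsl));
        return (1);
    }
    /* Same grouping as varnishlog -g request. */
    q = VSLQ_New(vsl, &c, VSL_g_request, NULL);

    for (;;) {
        /* Completed groups are handed to dump_group(); incomplete
         * ones stay buffered until they complete, time out or hit
         * the transaction limit. */
        status = VSLQ_Dispatch(q, dump_group, NULL);
        if (status == 0)
            usleep(10000);  /* nothing new yet, back off */
        else if (status < 0)
            break;          /* log abandoned/overrun */
    }
    return (0);
}

If you stay with the command-line tools instead, the same mechanics are
exposed as options: -g picks the grouping mode (raw, vxid, request or
session), and the buffering of incomplete groups is bounded by things
like -T (transaction end timeout) and -L (limit on incomplete
transactions kept), so those are the knobs to look at when records seem
to go missing. Check man varnishlog for the exact semantics in your
version.
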
Dridi
_______________________________________________
varnish-misc mailing list
[email protected]
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc