Ulf Lamping wrote:
> However, if no one is going to solve the current situation, tshark will 
> keep the limitations that it currently has - I don't plan to spend any 
> more effort on this topic ... if someone is seriously going to improve 
> the current situation, I'm really willing to explain devel things, but 
> not on the current level of discussion.

Sorry, I thought we were mostly discussing devel things to help us all 
understand why it was implemented the way it was, what the issues are, 
what we shouldn't try because you've already tried it, etc...

Don't get me wrong, your work is a huge step in the right 
direction--thank you!

> WHY NOT SIMPLY USE A PIPE BETWEEN DUMPCAP AND TSHARK?
> 
> Because it just won't work.
> 
> Sending everything through a pipe is not a portability issue, but has a 
> different problem: pipes are pretty limited in the number of bytes they 
> can store. If there's a network burst coming in and dumpcap pushes the 
> packets into the pipe very fast, the receiving side of the pipe probably 
> can't process the packets in the required very, very short time (which 
> is *very* likely), and packet loss is the result.

Packets should be lost going from the kernel up to dumpcap, not between 
dumpcap and *shark (unless I'm missing something: normally I would expect 
that writing to a full pipe blocks the writer rather than discarding 
data).  So how is that different from the old model, where *shark only 
read stuff from the kernel as fast as it could?

(Not that finding a way to make packet loss less of an issue is a bad 
thing, but I don't see this issue as a regression.)

> The "temporary file model" is working in Wiresharks "update list of 
> packets" mode for quite a while and is working ok.

Except (unless my understanding of that problem is incorrect) when you're 
using a ring buffer (see bug 1650).

I see two ways of solving that problem:

- keep dumpcap and *shark synchronized all the time (for example, if a
   pipe were used between the two to transfer the packets)
        - if *shark can't keep up then packets will be lost, but _when_
          they get lost really depends on when *shark falls behind
- have dumpcap and *shark synchronize only when changing files (see the
   sketch after this list)
        - in this case dumpcap would run at full speed up until changing
          files, at which point it might block for a potentially huge
          amount of time while *shark catches up.  All the packet loss
          would then happen in "bursts" at file change time.  That seems
          rather unattractive to me.
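
Just to make the second option concrete, here's a rough sketch of the 
kind of file-change handshake I have in mind: the capture side announces 
the switch over a control pipe and then blocks until the reader acks.  
The two-pipe layout and all the names are invented for illustration; 
this isn't actual dumpcap/tshark code.

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int to_reader[2], to_capture[2];  /* two pipes, one per direction */
        unsigned filenum = 2;             /* pretend we switch to file #2 */
        char ack;
        pid_t pid;

        if (pipe(to_reader) == -1 || pipe(to_capture) == -1) {
            perror("pipe");
            return 1;
        }

        pid = fork();
        if (pid == -1) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* "reader" (*shark stand-in): pretend to finish dissecting
             * the previous file, then acknowledge the switch. */
            unsigned announced;
            char ok = 1;

            if (read(to_reader[0], &announced, sizeof announced) > 0) {
                sleep(3);                 /* catching up on the old file */
                write(to_capture[1], &ok, 1);
            }
            _exit(0);
        }

        /* "capture" (dumpcap stand-in): announce the switch, then block
         * until the reader acks.  This blocking interval is exactly
         * where the packet loss would pile up in a burst. */
        write(to_reader[1], &filenum, sizeof filenum);
        fprintf(stderr, "capture: waiting for reader to ack file switch...\n");
        if (read(to_capture[0], &ack, 1) == 1)
            fprintf(stderr, "capture: ack received, continuing capture\n");
        return 0;
    }

Whether it's acceptable to let the capture side stall like that is of 
course the whole question; the sketch only shows where the stall (and 
hence the burst of loss) would land.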

Another method would be to have dumpcap create all the ring buffer files 
and have *shark delete them when it has finished with them.  That would 
avoid the problem, but it defeats the (common) purpose of using the ring 
buffer, which is to keep disk usage under some specified limit: dumpcap 
could go off and create hundreds of files while *shark is still busy 
processing the first ones.
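
For completeness, the reader-side half of that method would be something 
like the following (again purely illustrative, with a stub standing in 
for real dissection): the reader, not dumpcap, reclaims the disk space 
by unlinking each file once it's done with it.

    #include <stdio.h>
    #include <unistd.h>

    /* Stub standing in for real dissection of one capture file. */
    static int process_capture_file(const char *path)
    {
        fprintf(stderr, "processing %s ...\n", path);
        return 0;
    }

    int main(int argc, char *argv[])
    {
        int i;

        /* Process each ring buffer file in order, then unlink it.
         * Nothing here limits how many new files the capture side can
         * create in the meantime, which is the unbounded disk usage
         * problem described above. */
        for (i = 1; i < argc; i++) {
            if (process_capture_file(argv[i]) != 0)
                return 1;
            if (unlink(argv[i]) == -1) {
                perror("unlink");
                return 1;
            }
        }
        return 0;
    }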


I don't have any other ideas for solving that problem (though all this 
stuff is well outside my realm of experience).

(One could also argue that using ring buffers with "update packets in 
real time" in the GUI is a bad idea too, but...)