On 07/28/2010 07:57 AM, Andreas wrote:
> Jaap Keuter <jaap.keu...@...> writes:
>
>> Hi,
>>
>> What's your transport protocol?
>>
>> Thanks,
>> Jaap
>>
>> On Mon, 26 Jul 2010 16:29:42 +0300 (EEST), andreas.akes...@... wrote:
>>> Hello,
>>>
>>> I'm currently writing a dissector which requires packet buffering to
>>> work. The dissector more or less has to brute-force the packet stream
>>> to find the actual data, but it needs at least a dozen packets of data
>>> before it can do anything. So it doesn't know where the data begins,
>>> nor how much data it needs (there is a maximum possible length,
>>> though).
>>>
>>> Is there any built-in support for this? I was able to store the tvb
>>> buffers in a circular buffer, but I'm not quite sure what to do with
>>> the packet_info structure (I may be wrong, but it didn't seem to be on
>>> the heap, so I couldn't just store a pointer to it).
>>>
>>> Any help is appreciated!
>>>
>>> Sincerely,
>>> Andreas
>
> Hi,
>
> I'm using UDP for testing purposes, just to get the data into Wireshark.
> I have sample files of raw PCM data which I export to pcap format with
> text2pcap, inserting dummy UDP headers.
> Basically, I get 40 PCM samples per packet, from which I extract a few
> bits here and there. That's why I have to scan through a lot of packets:
> I don't know where to find the sync bits.
>
> Br,
>
> Andreas
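For the buffering question above: the tvb and packet_info handed to a dissector
are only valid for the duration of that dissection call, so rather than keeping
pointers to them, the usual approach is to copy the bytes (and whatever
packet_info fields are needed) into memory that lives as long as the capture
file, typically hung off the conversation. What follows is only a rough sketch
of that idea, not code from this thread: pcm_conv_t, MAX_BUFFERED_BYTES and the
function names are invented, and the epan calls follow the current Wireshark
API, which differs in places from the 1.x API that was current in 2010.

    /* Rough sketch only: buffer each packet's payload into per-conversation,
     * file-scope memory instead of keeping tvb/packet_info pointers.
     * pcm_conv_t, MAX_BUFFERED_BYTES and dissect_pcm() are made-up names. */
    #include "config.h"
    #include <epan/packet.h>
    #include <epan/conversation.h>

    #define MAX_BUFFERED_BYTES 4096   /* assumed upper bound on the data ever needed */

    typedef struct {
        guint8  *buf;     /* accumulated payload bytes, file-scope allocated */
        guint    len;     /* bytes collected so far */
        gboolean synced;  /* set once the sync bits have been located */
    } pcm_conv_t;

    static int proto_pcm = -1;   /* assigned in proto_register_...(), omitted here */

    static int
    dissect_pcm(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree _U_, void *data _U_)
    {
        conversation_t *conv  = find_or_create_conversation(pinfo);
        pcm_conv_t     *state = (pcm_conv_t *)conversation_get_proto_data(conv, proto_pcm);

        if (state == NULL) {
            state = wmem_new0(wmem_file_scope(), pcm_conv_t);
            state->buf = (guint8 *)wmem_alloc(wmem_file_scope(), MAX_BUFFERED_BYTES);
            conversation_add_proto_data(conv, proto_pcm, state);
        }

        /* Copy this packet's bytes out of the tvb; the copy lives as long as the
         * capture file, unlike the tvb and pinfo, which are only valid here. */
        guint avail = tvb_captured_length(tvb);
        if (!state->synced && state->len + avail <= MAX_BUFFERED_BYTES) {
            tvb_memcpy(tvb, state->buf + state->len, 0, avail);
            state->len += avail;
            /* ...once enough data has accumulated, scan state->buf for the
             * sync bits and set state->synced... */
        }

        return tvb_captured_length(tvb);
    }

Keep in mind that dissectors are rerun on random packets when the user clicks
around, so any second pass over an already-seen frame should not append to the
buffer again; checking pinfo->fd->visited (pinfo->fd->flags.visited in older
releases) is the usual guard.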
Hi,

It looks like you want to packetize a streaming protocol in a datagram
protocol. That causes inherent problems. You may want to consider packing
it in TCP, a stream-oriented protocol, which should have better support in
Wireshark. I know that RTP is a streaming datagram protocol, and it uses
specific RTP support routines in Wireshark.

Thanks,
Jaap

___________________________________________________________________________
Sent via:    Wireshark-dev mailing list <[email protected]>
Archives:    http://www.wireshark.org/lists/wireshark-dev
Unsubscribe: https://wireshark.org/mailman/options/wireshark-dev
             mailto:[email protected]?subject=unsubscribe
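To illustrate the better stream support Jaap mentions: when a protocol runs
over TCP, a dissector can let Wireshark reassemble segments into complete PDUs
with tcp_dissect_pdus(). This is a hedged sketch, not something from the
thread; the "myproto" names and the 4-byte length prefix are invented, and the
tcp_dissect_pdus() signature has changed between Wireshark releases, so the
packet-tcp.h header of the version you build against is authoritative.

    /* Hedged sketch: let the TCP layer do the reassembly via tcp_dissect_pdus().
     * The 4-byte length prefix and all "myproto" identifiers are invented. */
    #include "config.h"
    #include <epan/packet.h>
    #include "packet-tcp.h"

    #define MYPROTO_HDR_LEN 4   /* assumed: each PDU starts with a 4-byte payload length */

    /* Tell the TCP layer how long the PDU starting at 'offset' will be. */
    static guint
    get_myproto_pdu_len(packet_info *pinfo _U_, tvbuff_t *tvb, int offset, void *data _U_)
    {
        return MYPROTO_HDR_LEN + tvb_get_ntohl(tvb, offset);
    }

    /* Called once per fully reassembled PDU, regardless of how the bytes
     * were split across TCP segments. */
    static int
    dissect_myproto_pdu(tvbuff_t *tvb, packet_info *pinfo _U_, proto_tree *tree _U_, void *data _U_)
    {
        /* ...dissect one complete PDU here... */
        return tvb_captured_length(tvb);
    }

    static int
    dissect_myproto(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree, void *data)
    {
        tcp_dissect_pdus(tvb, pinfo, tree, TRUE /* reassemble */, MYPROTO_HDR_LEN,
                         get_myproto_pdu_len, dissect_myproto_pdu, data);
        return tvb_captured_length(tvb);
    }

With this arrangement the dissector never has to buffer anything itself: the
PDU dissector is only called once a complete unit is available, which is the
kind of support a UDP-based framing has to reimplement by hand.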
