kamal kc wrote:
I am trying to compress/decompress IP packets.
For this I have implemented adaptive LZW compression.
I put the code in ip_output.c and do my compression/decompression
just before the if_output() function call, so that I won't interfere with
the kernel's IP processing.
For my compression/decompression I use string tables and temporary buffers, which take about 14KB of memory per packet.

It's highly likely that the problem you are trying to solve has already been solved elsewhere; you should look at the code behind these kernel options in particular:

# The PPP_BSDCOMP option enables support for compress(1) style entire
# packet compression, the PPP_DEFLATE is for zlib/gzip style compression.

[ ... ]
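For reference, enabling those would look something like the following in a kernel configuration file (option names taken from the comment above; check the NOTES file for your FreeBSD version before relying on them):

```
options PPP_BSDCOMP     # compress(1)-style whole-packet compression
options PPP_DEFLATE     # zlib/gzip-style compression
```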
These are the memory operations I perform in my code.
Now when I run the modified kernel the behaviour is unpredictable.
The compression/decompression works fine with the expected results,
but soon the kernel crashes with a vm_fault: message.
- Is the memory requirement of 14KB per packet too high to be allocated by the kernel?
- Are there other techniques to allocate memory in the kernel without producing vm_faults?
- Am I not following the correct procedure to allocate and deallocate memory in kernel space?
- Or is the problem elsewhere?

You should allocate buffers once and reuse them, not continually free and reallocate 14KB of memory per packet.

Look at "man 9 malloc" and the output of "sysctl kern.malloc".

Perhaps you're leaking memory with each packet and running the kernel out of KVA pages, but without knowing more about the problem you are trying to solve, and without seeing more of the code you've written or at least a backtrace, it's not really useful to make random guesses.

You should be looking to get a crash dump and run kgdb on it...

--
-Chuck

_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
