Re: allocating 14KB memory per packet compression/decompression results in vm_fault

2005-11-04 Thread Peter Jeremy
[dropping -net] On Thu, 2005-Nov-03 22:56:30 -0800, kamal kc wrote: as I said before, the compression/decompression works fine, but soon the kernel would panic with one of the vm_fault: error messages. What's the exact panic and traceback? Have you enabled the various sanity checks (WITNESS,
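For reference, those sanity checks are kernel config options on a FreeBSD 5.x/6.x system; a minimal sketch of the relevant lines (add to your kernel config and rebuild; KDB/DDB are included here only because they make it possible to get the traceback being asked for):

    options INVARIANTS          # run-time consistency checks
    options INVARIANT_SUPPORT   # code needed by INVARIANTS
    options WITNESS             # lock-order verification
    options KDB                 # kernel debugger framework
    options DDB                 # interactive debugger, gives a traceback at panic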

Re: allocating 14KB memory per packet compression/decompression results in vm_fault

2005-11-04 Thread João Carlos Mendes Luis
Peter Jeremy wrote: what would be the best possible way to allocate/deallocate 14 KB of memory per packet without causing vm_faults? The most efficient way would be to statically allocate the dictionary and string tables. The downside is that you then need to serialise the [de]compression.
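A minimal sketch of that suggestion, assuming a single 14 KB work area; the names lzw_workspace, lzw_compress() and LZW_WORKSPACE_SIZE are illustrative, not from the poster's code, and the mutex is what provides the serialisation:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/mbuf.h>

    #define LZW_WORKSPACE_SIZE (14 * 1024)   /* dictionary + string tables */

    static char lzw_workspace[LZW_WORKSPACE_SIZE]; /* allocated once, statically */
    static struct mtx lzw_mtx;

    MTX_SYSINIT(lzw_mtx, &lzw_mtx, "lzw workspace", MTX_DEF);

    /* Only one thread may use the shared workspace at a time. */
    static int
    lzw_compress_serialised(struct mbuf *m)
    {
            int error;

            mtx_lock(&lzw_mtx);
            error = lzw_compress(m, lzw_workspace); /* assumed worker routine */
            mtx_unlock(&lzw_mtx);
            return (error);
    }

The trade-off is exactly as stated above: no per-packet allocation at all, but compression of concurrent packets is forced to run one at a time behind lzw_mtx.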

Re: allocating 14KB memory per packet compression/decompression results in vm_fault

2005-11-04 Thread Giorgos Keramidas
On 2005-11-03 22:56, kamal kc [EMAIL PROTECTED] wrote: for my compression/decompression I use string tables and temporary buffers which take about 14 KB of memory per packet. If you're allocating 14 KB of data just to send (approximately) 1.4 KB and then you throw away the 14 KB immediately,
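For context, the per-packet allocate/use/free pattern being criticised would normally be written with malloc(9)/free(9) roughly as below; this is a sketch, not the poster's actual code, and M_LZWBUF is a made-up malloc type:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>

    MALLOC_DEFINE(M_LZWBUF, "lzwbuf", "per-packet LZW work buffers");

    #define LZW_WORKSPACE_SIZE (14 * 1024)

    /*
     * Called once per packet.  M_NOWAIT is used because ip_output() can be
     * entered with locks held and must not sleep, so the allocation can
     * fail and has to be checked by the caller.
     */
    static void *
    lzw_workspace_get(void)
    {
            return (malloc(LZW_WORKSPACE_SIZE, M_LZWBUF, M_NOWAIT));
    }

    static void
    lzw_workspace_put(void *ws)
    {
            free(ws, M_LZWBUF);
    }

Even done correctly, this pays the allocator cost on every packet, which is the point being made in the quoted reply.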

Re: allocating 14KB memory per packet compression/decompression results in vm_fault

2005-11-04 Thread Joseph Koshy
- Am I not following the correct procedures to allocate and deallocate memory in kernel space? - Or is the problem elsewhere? You didn't say whether you've checked your code for buffer overruns. If the fault is happening in seemingly unrelated parts of the kernel when your module is
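A classic overrun in LZW code is an unchecked index into the string/code table; a defensive sketch of the kind of check being suggested (the table layout, sizes and function name here are illustrative assumptions, not taken from the poster's code):

    #include <sys/param.h>
    #include <sys/systm.h>          /* bcopy(), KASSERT() */
    #include <sys/errno.h>

    #define LZW_MAX_CODES   4096    /* e.g. 12-bit codes */
    #define LZW_MAX_STRLEN  64

    struct lzw_entry {
            u_char  str[LZW_MAX_STRLEN];
            int     len;
    };

    static struct lzw_entry lzw_table[LZW_MAX_CODES];

    /* Reject out-of-range codes instead of scribbling past the table. */
    static int
    lzw_table_append(int code, const u_char *s, int len)
    {
            if (code < 0 || code >= LZW_MAX_CODES ||
                len < 0 || len > LZW_MAX_STRLEN)
                    return (EINVAL);
            bcopy(s, lzw_table[code].str, len);
            lzw_table[code].len = len;
            return (0);
    }

With INVARIANTS enabled, corruption of this sort tends to be caught closer to its source instead of showing up later as a vm_fault in unrelated code.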

Re: allocating 14KB memory per packet compression/decompression results in vm_fault

2005-11-03 Thread Giorgos Keramidas
On 2005-11-02 18:39, kamal kc [EMAIL PROTECTED] wrote: dear everybody, I am trying to compress/decompress IP packets. For this I have implemented adaptive LZW compression. I put the code in ip_output.c and do my compression/decompression just before the if_output() function call so

Re: allocating 14KB memory per packet compression/decompression results in vm_fault

2005-11-03 Thread kamal kc
for my compression/decompression I use string tables and temporary buffers which take about 14 KB of memory per packet. If you're allocating 14 KB of data just to send (approximately) 1.4 KB and then you throw away the 14 KB immediately, it sounds terrible. Yes, that's true. Since I

allocating 14KB memory per packet compression/decompression results in vm_fault

2005-11-02 Thread kamal kc
dear everybody, I am trying to compress/decompress IP packets. For this I have implemented adaptive LZW compression. I put the code in ip_output.c and do my compression/decompression just before the if_output() function call so that I won't interfere with the IP processing of the kernel.
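Roughly, the hook described would sit in sys/netinet/ip_output.c just before the existing interface output dispatch; a simplified fragment of that placement is below. my_lzw_compress_mbuf() is a hypothetical name, and the surrounding if_output() call is paraphrased from memory of the 5.x/6.x ip_output(), which is considerably more involved than shown:

            /*
             * Hypothetical compression hook, placed just before the driver
             * hand-off in ip_output(); it may replace the mbuf chain it is
             * given, hence the pointer-to-pointer argument.
             */
            error = my_lzw_compress_mbuf(&m);
            if (error != 0)
                    goto bad;       /* fall back or drop, depending on policy */

            error = (*ifp->if_output)(ifp, m,
                (struct sockaddr *)dst, ro->ro_rt);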