Hi Dave,

Thanks for all of the help with packet sockets. I will try to be concise
but as detailed as I can; if you need more details, I will definitely
provide them.

I was able to install hashpipe with the suid bit set as you suggested
previously. So far, I can capture data for the first pass through the
circular buffer; i.e., if I have 160 frames, I capture packets in frames 0
through 159, at which point I get a segmentation fault right at the
memcpy() in the process_packet() function of the net thread.

I believe this has to do with how I am implementing the release_frame()
function in the net thread. I call release_frame() right after
process_packet(), which makes sense to me because, as I understand it from
packet_mmap.txt, after a packet is read/processed the user must zero the
frame's status field so the kernel can use that frame again.

I am using the PKT_UDP_DATA(frame) macro to acquire the pointer to the
packet payload, and the segmentation fault happens right at the memcpy() at
the end of the first pass through the buffer. Given what I've done to debug
the code, as well as the information I have acquired about TP_STATUS, I
haven't yet seen how I could be accessing memory outside the allocated
range. There is probably something I'm missing or don't understand about
the release_frame() function or something related. As of right now, it
seems as though release_frame() is freeing the memory entirely, or there
are privilege issues that I'm unable to see.
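
For reference, here is a minimal sketch of how I understand the consumption
pattern described in packet_mmap.txt, with placeholder names (ring_base,
frame_size, num_frames) rather than my actual variables. One thing I will
double-check against it is that my frame index wraps modulo the number of
frames, since a fault exactly at the end of the first pass through the ring
seems consistent with an index that keeps growing past the mapped region:

/* Sketch of the packet_mmap consumption loop; ring_base, frame_size and
 * num_frames are placeholders for the PACKET_RX_RING parameters. */
#include <linux/if_packet.h>

static void consume_ring(char *ring_base, unsigned frame_size,
                         unsigned num_frames)
{
    for (unsigned i = 0; ; i = (i + 1) % num_frames) {   /* wrap every pass */
        char *frame = ring_base + i * frame_size;
        struct tpacket_hdr *hdr = (struct tpacket_hdr *)frame;

        /* Wait until the kernel hands this frame to user space. */
        while (!(hdr->tp_status & TP_STATUS_USER))
            ;  /* real code would poll() instead of spinning */

        /* process_packet() reads the payload here; PKT_UDP_DATA(frame)
         * effectively returns frame plus an offset past the headers. */

        /* release_frame(): give the frame back to the kernel. */
        hdr->tp_status = TP_STATUS_KERNEL;
    }
}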

Hopefully, my explanation makes sense. Let me know whether you need
additional information and I can definitely provide it. Thanks again for
the help.

Mark Ruzindana

On Fri, Apr 17, 2020 at 9:00 PM David MacMahon <dav...@berkeley.edu> wrote:

> Hi, Mark,
>
> Yeah, packet sockets do require extra privileges.  The solution/workaround
> that Hashpipe uses is to install hashpipe with the suid bit set.  The init()
> functions of the threads will be called with the privileges of the suid
> user.  Then hashpipe will drop the suid privileges before invoking the
> run() functions of the threads.  If you set up the packet sockets in the
> init() function, you can then use them in the run() functions.  It's not an
> ideal solution and could be considered a security hole, but given the
> limited and generally tightly controlled environments in which Hashpipe is
> typically used, this is a working compromise.  The other option, as you
> indicated, is to do something with the CAP_NET_RAW capability, but I've not
> explored how to utilize that.  My limited understanding is that it is for
> users rather than executables, but like I said I haven't explored that
> route, so I'm not really sure what's possible there.  If you figure out
> something useful, please post here.
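>
> For illustration, a rough sketch of that pattern (the init()/run()
> signatures here are illustrative placeholders, not the exact Hashpipe
> thread API):
>
> #include <sys/socket.h>
> #include <linux/if_ether.h>
> #include <arpa/inet.h>
>
> static int pkt_fd = -1;
>
> /* Called while the suid (root) privileges are still in effect. */
> static int init(void *args)
> {
>     pkt_fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));  /* needs root/CAP_NET_RAW */
>     return (pkt_fd < 0) ? -1 : 0;
> }
>
> /* Called after hashpipe has dropped the suid privileges. */
> static void *run(void *args)
> {
>     /* pkt_fd is already open, so no extra privileges are needed here. */
>     return NULL;
> }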
>
> Cheers,
> Dave
>
> On Apr 17, 2020, at 14:25, Mark Ruzindana <ruziem...@gmail.com> wrote:
>
> Hi all,
>
> Hope you're doing fine. I was able to add packet sockets and the functions
> provided by Hashpipe in hashpipe_pktsock.h, but I get permission issues
> when trying to capture packets as a non-root user.
>
> The method I am trying to use to overcome this is owning the
> plugins/executables as root and using the setuid flag to give root
> privileges to hashpipe. At this point, I still get an 'operation not
> permitted' error when trying to open the socket. When I instead try to use
> the CAP_NET_RAW capability (setcap cap_net_raw=pe 'program'), I'm told that
> the operation is not supported.
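>
> (Written out in full, what I tried was of the form below, with
> 'hashpipe_binary' just a placeholder for the installed executable:
>
> sudo setcap cap_net_raw+ep ./hashpipe_binary
>
> Since setcap stores file capabilities in the binary's extended attributes,
> I suspect the 'operation not supported' message may simply mean the
> filesystem holding the executable doesn't support those attributes, but I
> haven't confirmed that.)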
>
> Just to be clear, I don't have any of these issues when running the
> process as root, but I'd rather have non-root users running hashpipe. How
> were you able to overcome the permission issues when capturing raw packets
> with hashpipe as a non-root user, assuming that is how you were running it?
>
> Let me know whether you need any more information or whether I'm not
> stating anything clearly.
>
> Thanks a lot for the help.
>
> Mark Ruzindana
>
> On Tue, Mar 31, 2020 at 5:08 PM Mark Ruzindana <ruziem...@gmail.com>
> wrote:
>
>> Thanks a lot for the quick responses, John and David! I really appreciate
>> it.
>>
>> I will definitely be updating the version of Hashpipe that I currently
>> have on the server as well as ensuring that the network tuning is good.
>>
>> I'm currently using the standard "socket()" function, and based on the
>> description you gave, a switch to packet sockets seems like it will
>> definitely be beneficial.
>>
>> I also currently pin the threads to the desired cores with a "-c #" on
>> the command line, but thank you for mentioning it; I might not have been
>> doing so otherwise. The NUMA info is also very helpful. I'll make sure
>> that the NUMA layout is used as optimally as possible.
>>
>> Thanks again! This was very helpful and I'll update you with the progress
>> that I make.
>>
>> Mark
>>
>>
>>
>>
>> On Tue, Mar 31, 2020 at 4:38 PM David MacMahon <dav...@berkeley.edu>
>> wrote:
>>
>>> Just to expand on John's excellent tips, Hashpipe does lock its shared
>>> memory buffers with mlock.  These buffers will have the NUMA node affinity
>>> of the thread that created them so be sure to pin the threads to the
>>> desired core or cores by preceding the thread names on the command line
>>> with a -c # (set thread affinity to a single core) or -m # (set thread
>>> affinity to multiple cores) option.  Alternatively (or additionally) you can
>>> run the entire hashpipe process with numactl.  For example...
>>>
>>> numactl --cpunodebind=1 --membind=1 hashpipe [...]
>>>
>>> ...will restrict hashpipe and all its threads to run on NUMA node 1 and
>>> all memory allocations will (to the extent possible) be made within memory
>>> that is affiliated with NUMA node 1.  You can use various tools to find out
>>> which hardware is associated with which NUMA node such as "numactl
>>> --hardware" or "lstopo".  Hashpipe includes its own such utility:
>>> "hashpipe_topology.sh".
>>>
>>> On NUMA (i.e. multi-socket) systems, each PCIe slot is associated with a
>>> specific NUMA node.  It can be beneficial to have relevant peripherals
>>> (e.g. NIC and GPU) be in PCIe slots that are on the same NUMA node.
>>>
>>> Of course, if you have a single-socket mainboard, then all this NUMA
>>> stuff is irrelevant. :P
>>>
>>> Cheers,
>>> Dave
>>>
>>> On Mar 31, 2020, at 15:04, John Ford <jmfor...@gmail.com> wrote:
>>>
>>>
>>>
>>> Hi Mark.  Since the newer version has a script called
>>> "hashpipe_irqaffinity.sh", I would think that the most expedient thing to do
>>> is to upgrade to the newer version.  It's likely to fix some or all of this.
>>>
>>> That said, there are a lot of things that you can check beyond the IRQ
>>> affinity: make sure that your network tuning is good, that your network
>>> card IRQs are handled on cores where the memory is local to that
>>> processor, and that the hashpipe threads are mapped to processor cores
>>> that are also local to that memory.  Sometimes it's counterproductive to
>>> pin processes to processor cores by themselves if they need data that is
>>> produced by a different core that's far away, NUMA-wise.  And lock all
>>> the memory in core with mlockall() or one of its friends.
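>>>
>>> For reference, that locking call is just something like this (a minimal
>>> sketch, to be called early in the process):
>>>
>>> #include <sys/mman.h>
>>> #include <stdio.h>
>>>
>>> static void lock_all_memory(void)
>>> {
>>>     /* Pin current and future allocations in RAM so they can't be paged out. */
>>>     if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
>>>         perror("mlockall");
>>> }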
>>>
>>> Good luck with it!
>>>
>>> John
>>>
>>>
>>>
>>>
>>> On Tue, Mar 31, 2020 at 12:09 PM Mark Ruzindana <ruziem...@gmail.com>
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I am fairly new to asking questions on a forum so if I need to provide
>>>> more details, please let me know.
>>>>
>>>> It's worth noting that just as I was about to send this out, I checked
>>>> and found I don't have the most recent version of HASHPIPE, which includes
>>>> hashpipe_irqaffinity.sh among other additions and modifications. So
>>>> upgrading might fix my problem, but perhaps not, and someone else may have
>>>> more insight. I will update everyone if it does.
>>>>
>>>> I am trying to reduce the number of packets lost/dropped when running
>>>> HASHPIPE on a 32-core RHEL 7 server. I have run enough tests and
>>>> diagnostics to be confident that the problem is not any HASHPIPE thread
>>>> running for too long. The percentage of packets dropped on any given scan
>>>> is between about 0.3 and 0.8%: approximately 5,000 packets dropped out of
>>>> a total of 1,650,000 in a 30-second scan. So while it's a small
>>>> percentage, the number of packets lost is still quite large. I have also
>>>> done enough tests with 'top' and 'iostat', as well as timing HASHPIPE
>>>> within the windows where no packets are dropped, to diagnose the issue
>>>> further. My colleagues and I have come to the conclusion that the kernel
>>>> is allowing other processes to interrupt HASHPIPE as it is running.
>>>>
>>>> So I have researched and run tests involving 'niceness', and I am
>>>> currently trying to configure SMP affinities and IRQ balancing, but the
>>>> changes I make to the smp_affinity files aren't having any effect. My
>>>> plan was to have the interrupts run on the 20 cores that aren't being
>>>> used by HASHPIPE. Disabling 'irqbalance' didn't do anything either, and
>>>> after restarting the machine to see whether the changes are permanent,
>>>> the system reverts to what it was.
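>>>>
>>>> For concreteness, the kind of change I have been making is of the form
>>>> below, where NN stands in for one of the NIC's IRQ numbers and the core
>>>> list is just an example of the cores not used by HASHPIPE:
>>>>
>>>> echo 12-31 > /proc/irq/NN/smp_affinity_list
>>>>
>>>> (smp_affinity_list being the list-format counterpart of the smp_affinity
>>>> bitmask files). I'm aware these writes don't persist across a reboot
>>>> unless reapplied from a startup script.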
>>>>
>>>> I might be missing something, or trying the wrong things. Has anyone
>>>> experienced this? And could you point me in the right direction if you have
>>>> any insight?
>>>>
>>>> If you need any more details, please let me know. I didn't include as
>>>> much as I could have because I wanted to keep this message a reasonable
>>>> size.
>>>>
>>>> Thanks,
>>>>
>>>> Mark Ruzindana
>>>>
>>>
>>
>
