Hi Alfredo,
In my last email, I highlighted the hash value calculated for each
packet. The hash value of all the non-first fragments (Eth->IP->Data) is
*2780252203*, whereas it is *2780252186* for the first fragments
(Eth->IP->UDP->GTP->IP->TCP). Shouldn't this value be the same for
fragments with the same source and destination IP addresses if the
clustering mechanism uses a 2-tuple?
I tried changing the PF_RING hash_pkt_cluster code to the version below. I
could see that all fragments with the same source and destination IP now
generate the same hash, but the packets still got segregated on the first run.
static inline u_int32_t hash_pkt_header(struct pfring_pkthdr *hdr,
                                        u_int32_t flags)
{
  if (hdr->extended_hdr.pkt_hash == 0) {
    hdr->extended_hdr.pkt_hash =
      hash_pkt(0, 0, hdr->extended_hdr.parsed_pkt.ip_src,
               hdr->extended_hdr.parsed_pkt.ip_dst, 0, 0);
  }
  return hdr->extended_hdr.pkt_hash;
}
While checking the PF_RING code further, I came across a piece of code that
does not seem to handle out-of-order packets correctly.
For example: suppose the first packet received is an out-of-order fragment
that is not the first fragment (fragment offset != 0). According to the code
below, PF_RING will try to retrieve an element from the cluster fragment
hash, but get_fragment_app_id() will return -1, so skb_hash is set to 0 and
the packet is added to queue 0, whereas computing the hash from ip_src,
ip_dst, and ip_fragment_id could have landed this not-first fragment on a
different queue.
if (enable_frag_coherence && fragment_not_first) {
  if (skb_hash == -1) { /* read hash once */
    skb_hash = get_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src,
                                   hdr.extended_hdr.parsed_pkt.ipv4_dst,
                                   ip_id, more_fragments);
    if (skb_hash < 0)
      skb_hash = 0;
  }
}
I changed this code so that skb_hash is generated from the packet headers
rather than being set to 0, but again with no success.
Could you please help check this? It has really put our project on hold,
since we use this clustering mechanism to scale our application.
Let me know if you need any more info.
Regards,
Gautam
On Mon, Nov 14, 2016 at 2:01 PM, Chandrika Gautam <
[email protected]> wrote:
> Hi Alfredo,
>
> There is an observation further on this.
>
> PFA for the new traces having 8 packets from same source and destination.
> On first run. They are getting segregated across pfcount different
> instances. When I send the same file again, It goes to one instance of
> pfcount.
>
>
>
>
> *Output of first run ----------------------------------------*
>
> userland/examples/pfcount -i ens2f0 -c 99 -H 2 -v 1 -m
> Using PF_RING v.6.5.0
> Capturing from ens2f0 [mac: 00:1B:21:C7:2D:6C][if_index: 6][speed:
> 10000Mb/s]
> # Device RX channels: 16
> # Polling threads: 1
> pfring_set_cluster returned 0
> Dumping statistics on /proc/net/pf_ring/stats/6442-ens2f0.2
> 15:25:05.593222239 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 ->* 49.103.84.212*:0]
> [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0][caplen
> =64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
> 15:25:05.593439521 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0]
> [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0]
> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=
> 38][payload_offset=0]
> 15:25:05.593618032 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0]
> [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0]
> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=
> 38][payload_offset=0]
>
> userland/examples/pfcount -i ens2f0 -c 99 -H 2 -v 1 -m
> Using PF_RING v.6.5.0
> Capturing from ens2f0 [mac: 00:1B:21:C7:2D:6C][if_index: 6][speed:
> 10000Mb/s]
> # Device RX channels: 16
> # Polling threads: 1
> pfring_set_cluster returned 0
> Dumping statistics on /proc/net/pf_ring/stats/6441-ens2f0.1
> 15:25:05.593070816 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0]
> [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0]
> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=
> 38][payload_offset=0]
> 15:25:05.593123086 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4]*[116.79.243.70*:2152 ->
> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443] [
> *hash=2780252186*][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
> 15:25:05.593326381 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 ->
> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443] [
> *hash=2780252186*][tos=0][tcp_seq_num=0][caplen=128][len=1518
> ][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
> 15:25:05.593529674 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 ->
> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443] [
> *hash=2780252186*][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
> 15:25:05.593776442 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 ->
> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443] [
> *hash=2780252186*][tos=0][tcp_seq_num=0][caplen=128][len=1518
> ][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
>
>
> *Output of second run ----------------------------------------*
>
>
> 15:28:03.255165805 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
> [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0]
> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=
> 38][payload_offset=0]
> 15:28:03.255217727 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 ->
> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443]
> [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
> 15:28:03.255367715 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
> [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0][
> caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38]
> [payload_offset=0]
> 15:28:03.255416304 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 ->
> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443]
> [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
> 15:28:03.255551827 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
> [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0]
> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=
> 38][payload_offset=0]
> 15:28:03.255616828 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 ->
> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443]
> [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
> 15:28:03.255765232 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
> [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0]
> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=
> 38][payload_offset=0]
> 15:28:03.255917611 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 ->
> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443]
> [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
>
>
>
>
> Regards,
> Gautam
>
> On Fri, Nov 11, 2016 at 3:42 PM, Chandrika Gautam <
> [email protected]> wrote:
>
>> My bad !!!
>>
>> I am checking this for longer run and will update.
>>
>> Thanks & Regards,
>> Gautam
>>
>> On Fri, Nov 11, 2016 at 3:33 PM, Alfredo Cardigliano <
>> [email protected]> wrote:
>>
>>> Gautam
>>> they are not all the same, you have 4 flows 199.223.102.6 ->
>>> 49.103.1.132 and 2 flows 220.159.237.103 -> 203.118.242.166
>>>
>>> Alfredo
>>>
>>> On 11 Nov 2016, at 10:51, Chandrika Gautam <
>>> [email protected]> wrote:
>>>
>>>
>>> If you check the outer src and dst IP addresses of all these 6 packets
>>> are same, then shouldn't all these 6 packets go to 1 pfcount instance if we
>>> have chosen cluster_type as cluster_per_2_flow?
>>>
>>> Regards,
>>> Gautam
>>>
>>> On Fri, Nov 11, 2016 at 3:16 PM, Alfredo Cardigliano <cardigliano@ntop.
>>> org> wrote:
>>>
>>>> This is what I am receiving, it looks correct as they are distributed
>>>> by 2-tuple:
>>>>
>>>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>>>> Using PF_RING v.6.5.0
>>>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed:
>>>> 10000Mb/s]
>>>> # Device RX channels: 1
>>>> # Polling threads: 1
>>>> pfring_set_cluster returned 0
>>>> Dumping statistics on /proc/net/pf_ring/stats/25212-eth2.14
>>>> 10:45:00.296354612 [RX][if_index=86][10:F3:11:B3:06:01 ->
>>>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152]
>>>> [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][21
>>>> 6.58.194.110:443 -> 100.83.201.244:43485]
>>>> [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offs
>>>> et=0][l3_offset=14][l4_offset=34][payload_offset=42]
>>>> 10:45:00.296358154 [RX][if_index=86][10:F3:11:B3:06:01 ->
>>>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0]
>>>> [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0]
>>>> [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset
>>>> =34][payload_offset=0]
>>>> 10:45:00.296359417 [RX][if_index=86][10:F3:11:B3:06:01 ->
>>>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152]
>>>> [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][21
>>>> 6.58.194.97:443 -> 100.83.201.244:55379]
>>>> [hash=4182140810][tos=0][tcp_seq_num=0]
>>>> [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=
>>>> 34][payload_offset=42]
>>>> 10:45:00.296361197 [RX][if_index=86][10:F3:11:B3:06:01 ->
>>>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0]
>>>> [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0]
>>>> [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset
>>>> =34][payload_offset=0]
>>>> ^CLeaving...
>>>>
>>>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>>>> Using PF_RING v.6.5.0
>>>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed:
>>>> 10000Mb/s]
>>>> # Device RX channels: 1
>>>> # Polling threads: 1
>>>> pfring_set_cluster returned 0
>>>> Dumping statistics on /proc/net/pf_ring/stats/25213-eth2.15
>>>> 10:45:00.296362749 [RX][if_index=86][00:10:DB:FF:10:01 ->
>>>> 00:00:0C:07:AC:01] [IPv4][220.159.237.103:2152 -> 203.118.242.166:2152]
>>>> [l3_proto=UDP][TEID=0x3035D0A8][tunneled_proto=TCP][IPv4][49.96.0.26:80
>>>> <http://49.96.0.26/> -> 10.160.153.151:60856]
>>>> [hash=2820071437][tos=104][tcp_seq_num=0]
>>>> [caplen=128][len=1514][eth_offset=0][l3_offset=14][l4_offset
>>>> =34][payload_offset=42]
>>>> 10:45:00.296365824 [RX][if_index=86][00:10:DB:FF:10:01 ->
>>>> 00:00:0C:07:AC:01] [vlan 210] [IPv4][220.159.237.103:0 -> 20
>>>> 3.118.242.166:0] [l3_proto=UDP][hash=2820071454][tos=104][tcp_seq_num=0]
>>>> [caplen=74][len=74][eth_offset=0][l3_offset=18][l4_offset=38
>>>> ][payload_offset=0]
>>>> ^CLeaving...
>>>>
>>>> Alfredo
>>>>
>>>> On 11 Nov 2016, at 10:41, Chandrika Gautam <
>>>> [email protected]> wrote:
>>>>
>>>> I tried with above. I found the same result one instance of pfcount
>>>> receiving 2 packets and 6 in other instance for the file shared
>>>> multiple_fragments_id35515_wo_vlan.pcap.
>>>>
>>>> Are you receiving all 6 packets in one pfcount instance ?
>>>>
>>>> Regards,
>>>> Chandrika
>>>>
>>>>
>>>
>>> _______________________________________________
>>> Ntop-misc mailing list
>>> [email protected]
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>
>>
>>
>
_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc