Hi Andras,

Many thanks for that – sorry if the outputs were misleading, but I have already 
tried all six variants of the "mls flow ipv6 <…>" command, including "mls flow 
ipv6 full" – they all produce the same result, culminating in the 
%FM-2-FLOWMASK_CONFLICT error.

I have removed all other policies off the device to rule them out, so a clean 
test / demonstration of the issue is as follows:

mls commands currently set:

mls ipv6 acl compress address unicast
mls netflow interface                      <- (there is no non-interface version of this command)
mls nde sender
mls qos
mls flow ip interface-destination-source   <- (there is no version of this command which doesn’t include the ‘interface’ keyword for ipv4)
mls flow ipv6 full
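
For completeness, the six variants I cycled through were (going from the CLI 
help on this platform, so do correct me if I've mis-remembered a keyword):

 mls flow ipv6 destination
 mls flow ipv6 destination-source
 mls flow ipv6 full
 mls flow ipv6 interface-destination
 mls flow ipv6 interface-destination-source
 mls flow ipv6 interface-full

Every one of them ends in the same conflict error once the policy is attached.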

flow commands currently set:

ip flow-export source xx
ip flow-export version 9
ip flow-export destination x.x.x.x xxxx
ip flow-top-talkers


With all the above in place, I do get the Full Flow mask as you say:

                      IPv6:       0   reserved    none
                      IPv6:       1   Full Flo    FM_IPV6_GUARDIAN
                      IPv6:       2   Null
                      IPv6:       3   reserved    none

Plus there is a space in slot 2 - however, when I try to apply my policy:

policy-map test-policy
  class class-default
    police flow mask dest-only 200000000 512000 conform-action transmit exceed-action drop

interface xx
 service-policy input test-policy

I still get: %FM-2-FLOWMASK_CONFLICT: Features configured on interface xx have 
conflicting flowmask requirements, traffic may be switched in software

My policy has to be destination-based, so I cannot change that. Apart from 
that, I think I'm trying exactly what you are describing – my apologies if I'm 
still missing the point!

Cheers!


From: Tóth András [mailto:[email protected]]
Sent: 09 December 2012 14:47
To: Robert Williams
Cc: cisco-nsp NSP
Subject: Re: [c-nsp] Multiple flow-masks

The outputs you pasted suggest that you're using the "interface-full" flowmask. 
The workaround is to use "full" flowmask instead of "interface-full" as 
mentioned in my last email. IPv6 entries consume more TCAM space than IPv4, so 
comparing them should take this fact into account.
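
In other words, something along these lines (assuming "mls flow ipv6 
interface-full" is what is currently configured on your box):

 no mls flow ipv6 interface-full
 mls flow ipv6 full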

Best regards,
Andras

On Sun, Dec 9, 2012 at 2:52 PM, Robert Williams <[email protected]> wrote:
Hi,

Thanks very much for that, I’ll have to run through everything in that document 
because the tests I’m doing suggest that it ‘should’ work, for example:

With both my policy and "mls flow ipv6 interface-full" disabled I see one 
entry, which is presumably because ‘mls qos’ is enabled:

      IPv6:       1   Intf Ful    FM_IPV6_QOS
      IPv6:       2   Null

Then if I enable only “mls flow ipv6 interface-full”, the FM_IPV6_GUARDIAN 
‘feature’ appears:

      IPv6:       1   Intf Ful    FM_IPV6_GUARDIAN FM_IPV6_QOS
      IPv6:       2   Null

As you can see there is still a gap for the policy to take the second flow mask 
and use whatever type of mask it wants.

As a test - if I disable the ipv6 flow and just enable my policy by itself, it 
goes in the second slot correctly - and uses the Destination Only mask:

      IPv6:       1   Intf Ful    FM_IPV6_QOS
      IPv6:       2   Dest onl    FM_IPV6_QOS

So, I was assuming that a combination of these two features would be acceptable, 
since they operate in different mask slots when enabled separately; I didn’t 
see why they should collide.

I am correct as far as IPv4 goes: for v4 there is no conflict warning (and the 
policy works in hardware perfectly!). However, in IPv6 that does not appear to 
be the case.

Looks like yet another unhappy IPv6 feature on the Sup-720, unless anyone can 
see a way around it that I’m missing?

As an aside, does anybody know why it is called FM_IPV6_GUARDIAN instead of 
FM_IPV6_QOS (like in v4)? I’m wondering if this difference is the reason for 
its inability to combine the two masks successfully…

Cheers!

From: Tóth András [mailto:[email protected]]
Sent: 08 December 2012 21:09
To: Robert Williams
Cc: cisco-nsp NSP
Subject: Re: [c-nsp] Multiple flow-masks

Hi Robert,

A few things to keep in mind.

With Release 12.2(33)SXI4 and later releases, when appropriate for the 
configuration of the policer, microflow policers use the interface-full flow 
mask, which can reduce flowmask conflicts. Releases earlier than Release 
12.2(33)SXI4 use the full flow mask.

The flowmask requirements of QoS, NetFlow, and NetFlow data export (NDE) might 
conflict, especially if you configure microflow policing.

http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/qos.html

To add to this, note the following restrictions/recommendations well:

The micro-flow policing full flow mask is compatible with NDE flow masks that 
are shorter than or equal to full flow (except for 
destination-source-interface).

With any micro-flow policing partial mask, an error message is displayed and 
either the micro-flow policer or NDE might get disabled.
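
To double-check which feature is holding which flowmask after each change, you 
can look at the Feature Manager state; if I remember the commands correctly 
they are:

 show fm summary
 show fm interface <name>

The first lists the installed flowmasks per protocol, the second shows the 
features (and their flowmask requirements) attached to a specific interface.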

Best regards,
Andras


On Sat, Dec 8, 2012 at 3:50 PM, Robert Williams <[email protected]> wrote:
Hi,

Unfortunately we use Netflow for an automated system we have (it doesn't need 
to accurately record everything, just the highest number of flows / packets 
etc). So I cannot just remove it, however I have made some progress.

I've tracked the problem down to the IPv6 netflow flow masks. With all netflow 
config removed I am able to add my policy-map and it works. Then, by adding the 
netflow commands back in, I can restore everything except the command:

 mls flow ipv6 <any command>

So even if I specify:

 mls flow ipv6 destination

I still get:

%FM-2-FLOWMASK_CONFLICT: Features configured on interface <name> have 
conflicting flowmask requirements, traffic may be switched in software

At this point in time, with my policy attached and working I'm showing:

                 Flowmasks:   Mask#   Type        Features
                      IPv4:       0   reserved    none
                      IPv4:       1   Intf Ful    FM_QOS Intf NDE L3 Feature
                      IPv4:       2   Dest onl    FM_QOS             <--- My policy (V4)
                      IPv4:       3   reserved    none

                      IPv6:       0   reserved    none
                      IPv6:       1   Intf Ful    FM_IPV6_QOS
                      IPv6:       2   Dest onl    FM_IPV6_QOS        <--- My policy (V6)
                      IPv6:       3   reserved    none

The command "mls flow ipv6 <anything>" just plain refuses to go active in the 
config, so if I re-send it I get the error shown above every time.

The flowmasks are correctly showing "Intf Full" and "Dest only" in slots 1 and 
2 respectively. So, why does my netflow request not attach alongside either one 
of them when it's looking for the same mask as is already active in those slots?

The policy itself is working correctly at this point, but I cannot enable IPv6 
netflow.

Can anyone help?

Robert Williams
Backline / Operations Team
Custodian DataCentre
tel: +44 (0)1622 230382
email: [email protected]
http://www.custodiandc.com/disclaimer.txt




_______________________________________________
cisco-nsp mailing list  [email protected]
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/



