Hi Everton,

I added the entry "ip pim ssm" on ra_ap0 as you suggested. I still don't
see join requests coming into the source. Below is what the configuration
looks like on the individual nodes:

Node 1 pimd.conf
-------------------------
!
! Zebra configuration saved from vty
! 2009/08/08 05:03:23
!
hostname node1
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
interface ra_sta0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!


Node 2 pimd.conf
-------------------------
!
! Zebra configuration saved from vty
! 2009/08/09 22:38:12
!
hostname node2
password zebra
enable password zebra
log stdout
!
interface br-lan
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
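As a side note, since the client application apparently cannot be told to issue the SSM join itself, a small receiver program can request the (S,G) channel directly with IP_ADD_SOURCE_MEMBERSHIP. A minimal sketch, assuming a Linux host (the struct layout below is the Linux one) and an arbitrary test port 5004, neither of which comes from this thread:

```python
import socket

GROUP = "239.255.255.250"   # group used in this thread
SOURCE = "192.168.4.60"     # source used in this thread

def ssm_membership(group, source, iface="0.0.0.0"):
    """Pack a struct ip_mreq_source in the Linux field order:
    imr_multiaddr, imr_interface, imr_sourceaddr (BSD orders it
    differently, so this packing is Linux-specific)."""
    return (socket.inet_aton(group) +
            socket.inet_aton(iface) +
            socket.inet_aton(source))

def join_ssm(group, source, port):
    """Open a UDP socket and join the source-specific channel on it,
    which makes the kernel send an IGMPv3 (S,G) membership report."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # IP_ADD_SOURCE_MEMBERSHIP is 39 on Linux; some older Python
    # builds do not expose the constant on the socket module.
    opt = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)
    sock.setsockopt(socket.IPPROTO_IP, opt,
                    ssm_membership(group, source))
    return sock
```

Calling join_ssm(GROUP, SOURCE, 5004) on the host behind node 2 should produce the same kind of source-specific IGMPv3 report to 224.0.0.22 that the "ip igmp join" workaround generates, without needing the static config entry.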

On Sun, Nov 1, 2009 at 12:44 PM, Everton Marques
<[email protected]> wrote:

> Hi,
>
> Yes, pimd should route the join request towards the source.
>
> However, you need to enable "ip pim ssm" on ra_ap0 as well.
> If you enable only "ip igmp" on an interface, pimd won't inject
> IGMP-learnt membership into the PIM protocol.
>
> Cheers,
> Everton
>
> On Sun, Nov 1, 2009 at 7:02 AM, Yoda geek <[email protected]> wrote:
> > Hi Everton,
> >
> > Thanks for the suggestions. I made the changes to the config files on
> > both nodes as you suggested. Since it is not possible for me to force
> > the client to do a source-specific join, I added the following line at
> > interface ra_ap0 on node 2 where the client is attached:
> >
> > interface ra_ap0
> > ip igmp
> > ip igmp query-interval 125
> > ip igmp query-max-response-time-dsec 100
> > ip igmp join 239.255.255.250 192.168.4.60
> >
> > I do see the source-specific IGMPv3 join for group 239.255.255.250,
> > source 192.168.4.60, addressed to 224.0.0.22 on the side of node 2.
> > However, this join request never makes it to node 1, where the source
> > is located on ra_ap0.
> > Shouldn't pimd route this join request to the node where the source is
> > attached?
> >
> > Thanks,
> >
> >
> >
> >
> > On Mon, Oct 26, 2009 at 6:44 AM, Everton Marques
> > <[email protected]> wrote:
> >>
> >> Hi,
> >>
> >> You did not mention whether you got a source-specific IGMPv3 join to
> >> the channel (S,G)=(192.168.4.60,239.255.255.250). Please notice qpimd
> >> is unable to program the multicast forwarding cache with
> >> non-source-specific groups. Usually the key issue is to instruct the
> >> receiver application to join the source-specific channel (S,G).
> >>
> >> Regarding the config, the basic rule is:
> >> 1) Enable "ip pim ssm" everywhere (on every interface that should pass
> >> mcast).
> >> 2) Enable both "ip pim ssm" and "ip igmp" on interfaces attached to
> >> the receivers (IGMPv3 hosts).
> >>
> >> An even simpler config rule to remember is to enable both commands
> >> everywhere. They should not cause any harm.
> >>
> >> Hence, if your mcast receiver is attached to Node 2 at ra_ap0, I
> >> think you will need at least the following config:
> >>
> >> !
> >> ! Node 1
> >> !
> >> interface ra_ap0
> >>  ip pim ssm
> >> interface ra_sta0
> >>  ip pim ssm
> >>
> >> !
> >> ! Node 2
> >> !
> >> interface ra_ap0
> >>  ip pim ssm
> >>  ip igmp
> >> interface ra_sta0
> >>  ip pim ssm
> >>
> >> Hope this helps,
> >> Everton
> >>
> >> On Mon, Oct 26, 2009 at 4:42 AM, Yoda geek <[email protected]>
> >> wrote:
> >> > Hi Everton & Fellow  qpimd users,
> >> >
> >> > We're trying to stream multicast video traffic between a Tversity
> >> > server and a multicast client separated by 2 nodes (node1 and
> >> > node2). Each node is running the quagga suite (version 0.99.15)
> >> > along with qpimd (version 0.158) on top of Linux 2.6.26.
> >> > Node 1 has 3 network interfaces - eth0, ap0 and ra_sta0
> >> > Node 2 has 2 network interfaces - ra_sta0 and ra_ap0
> >> > The Tversity server talks to interface ra_ap0 on Node 1 and the
> >> > multicast client talks to interface ra_ap0 on Node 2
> >> > Nodes 1 and 2 talk to each other over their ra_sta0 interfaces
> >> >
> >> > Below is a graphical depiction:
> >> >
> >> > Tversity server --> [ra_ap0] Node 1 [ra_sta0] <--> [ra_sta0] Node 2 [ra_ap0] --> Video Client
> >> >
> >> >
> >> > Node 1 pimd.conf file
> >> > ==================
> >> > !
> >> > ! Zebra configuration saved from vty
> >> > ! 2009/08/01 20:26:06
> >> > !
> >> > hostname node1
> >> > password zebra
> >> > enable password zebra
> >> > log stdout
> >> > !
> >> > interface eth0
> >> > !
> >> > interface eth1
> >> > !
> >> > interface lo
> >> > !
> >> > interface ra_ap0
> >> > ip pim ssm
> >> > ip igmp
> >> > ip igmp query-interval 125
> >> > ip igmp query-max-response-time-dsec 100
> >> > ip igmp join 239.255.255.250 192.168.4.60
> >> > !
> >> > interface ra_sta0
> >> > ip igmp
> >> > ip igmp query-interval 125
> >> > ip igmp query-max-response-time-dsec 100
> >> > !
> >> > !
> >> > ip multicast-routing
> >> > !
> >> > line vty
> >> > !
> >> >
> >> > Node 2 pimd.conf configuration file
> >> > ============================
> >> > !
> >> > ! Zebra configuration saved from vty
> >> > ! 2009/08/02 21:54:14
> >> > !
> >> > hostname node2
> >> > password zebra
> >> > enable password zebra
> >> > log stdout
> >> > !
> >> > interface eth0
> >> > !
> >> > interface eth1
> >> > !
> >> > interface lo
> >> > !
> >> > interface ra_ap0
> >> > ip igmp
> >> > ip igmp query-interval 125
> >> > ip igmp query-max-response-time-dsec 100
> >> > ip igmp join 239.255.255.250 192.168.4.60
> >> > !
> >> > interface ra_sta0
> >> > ip igmp
> >> > ip igmp query-interval 125
> >> > ip igmp query-max-response-time-dsec 100
> >> > !
> >> > !
> >> > ip multicast-routing
> >> > !
> >> > line vty
> >> > !
> >> >
> >> > From the above configuration you can see that interface ra_ap0 on
> >> > node 1 is configured to be the multicast source ("ip pim ssm").
> >> > We do see some multicast join requests in Wireshark from both the
> >> > server and the client, however no data flows. Initially we started
> >> > qpimd without the entry "igmp join ..." on either the client-side
> >> > node or the server-side node.
> >> > Looking at the node 1 configuration through "show ip igmp groups" we
> >> > didn't see the group membership for "239.255.255.250", while this
> >> > group membership was observed on node 2. I put this group membership
> >> > on both nodes to force them to join this multicast group - however
> >> > without success.
> >> >
> >> > Just to give you some background - when both client and server are
> >> > talking to the same node - say node 2, on the same interface ra_ap0
> >> > (without qpimd running) - multicast video gets served flawlessly
> >> > from the Tversity server to the client through the node.
> >> > But with the 2-node setup we aren't able to see the video streams
> >> > go through to the client.
> >> >
> >> > Could you please review the above configuration for errors, or do
> >> > you have any suggestions to resolve this issue? Any help would be
> >> > greatly appreciated.
> >> >
> >> > Thanks,
> >> >
> >> >
> >
> >
>
