Dear all, I tried it, and I always get responses from cbench like:
01:51:29.823 32 switches: fmods/sec: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms

and from pox:

INFO:openflow.of_01:[Con 30/11] Connected to 00-00-00-00-00-0b
INFO:openflow.of_01:[Con 41/24] Connected to 00-00-00-00-00-18
INFO:openflow.of_01:[Con 42/25] Connected to 00-00-00-00-00-19
INFO:openflow.of_01:[Con 43/26] Connected to 00-00-00-00-00-1a
INFO:openflow.of_01:[Con 44/27] Connected to 00-00-00-00-00-1b
INFO:openflow.of_01:[Con 45/28] Connected to 00-00-00-00-00-1c
INFO:openflow.of_01:[Con 46/29] Connected to 00-00-00-00-00-1d
INFO:openflow.of_01:[Con 47/30] Connected to 00-00-00-00-00-1e
INFO:openflow.of_01:[Con 48/31] Connected to 00-00-00-00-00-1f
INFO:openflow.of_01:[Con 49/32] Connected to 00-00-00-00-00-20
INFO:openflow.of_01:[Con 50/18] Connected to 00-00-00-00-00-12

I got the latest oflops (with the patch). I run pox as:

./pox.py log.level --DEBUG openflow.of_01 --address=localhost --port=6633

and cbench as:

./cbench -c localhost -p 6633 -m 10000 -l 10 -s 32 -M 1000000 -t -d

Sometimes I get this message from cbench:

make_tcp_connection: connect: Operation now in progress
make_nonblock_tcp_connection

and these messages from pox:

POX> INFO:openflow.of_01:[Con 2/None] Connection reset
INFO:openflow.of_01:[Con 2/None] closing connection
INFO:openflow.of_01:[Con 9/None] Connection reset
INFO:openflow.of_01:[Con 9/None] closing connection
INFO:openflow.of_01:[Con 17/None] Connection reset

Does anyone have any idea about it? Thanks!

2012/10/26 Murphy McCauley <murphy.mccau...@gmail.com>

> Thanks for looking at this, you guys. cbench has, indeed, been run on
> POX, but it was a pretty long time ago -- either so long ago that it
> predated the barrier during connection or I just hacked POX's connection
> process to work at the time and forgot about it. It'd be great to have it
> work out of the box.
>
> (And yeah, I think 8k is probably reasonable with CPython. Under PyPy,
> you can get significantly better.)
>
> -- Murphy
>
> > Thanks, that worked a treat! I added the following patch to
> > fakeswitch.c, and now I'm able to get responses from my POX controller:
> >
> > Index: cbench/fakeswitch.c
> > ===================================================================
> > --- cbench/fakeswitch.c (revision 135)
> > +++ cbench/fakeswitch.c (working copy)
> > @@ -261,6 +261,7 @@
> >      int count;
> >      struct ofp_header * ofph;
> >      struct ofp_header echo;
> > +    struct ofp_header barrier;
> >      char buf[BUFLEN];
> >      count = msgbuf_read(fs->inbuf, fs->sock);     // read any queued data
> >      if (count <= 0)
> > @@ -357,6 +358,14 @@
> >              echo.xid = ofph->xid;
> >              msgbuf_push(fs->outbuf,(char *) &echo, sizeof(echo));
> >              break;
> > +        case OFPT_BARRIER_REQUEST:
> > +            debug_msg(fs, "got barrier, sent barrier_resp");
> > +            barrier.version = OFP_VERSION;
> > +            barrier.length  = htons(sizeof(barrier));
> > +            barrier.type    = OFPT_BARRIER_REPLY;
> > +            barrier.xid     = ofph->xid;
> > +            msgbuf_push(fs->outbuf,(char *) &barrier, sizeof(barrier));
> > +            break;
> >          case OFPT_STATS_REQUEST:
> >              stats_req = (struct ofp_stats_request *) ofph;
> >              if ( ntohs(stats_req->type) == OFPST_DESC ) {
> >
> >
> > On Thu, Oct 25, 2012 at 03:50:41PM -0700, Rob Sherwood wrote:
> >> On Thu, Oct 25, 2012 at 3:41 PM, Peter Fales
> >> <peter.fa...@alcatel-lucent.com> wrote:
> >>> On Thu, Oct 25, 2012 at 02:59:12PM -0700, Rob Sherwood wrote:
> >>>> 2) it would be pretty easy (~5 lines of code) to add a barrier reply
> >>>> handler to cbench if you were so inclined.
> >>>
> >>> Thanks!
> >>>
> >>> I would like to pursue that, but I don't know anything about the cbench
> >>> code. Any suggestions on where to start looking?
> >>
> >> In fakeswitch.c:259 or so, there is the function:
> >>
> >>     void fakeswitch_handle_read(struct fakeswitch *fs)
> >>
> >> that has the openflow state machine. It should be a matter of adding
> >> a case for a barrier request and creating and sending the barrier
> >> reply.
> >> See the echo request handler for an example.
> >>
> >> Would love to get support for POX in place -- thanks!
> >>
> >> - Rob
>
> _______________________________________________
> openflow-discuss mailing list
> openflow-discuss@lists.stanford.edu
> https://mailman.stanford.edu/mailman/listinfo/openflow-discuss
>
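For anyone wanting to experiment with this outside of cbench: the whole exchange reduces to the 8-byte OpenFlow 1.0 header -- take the request, flip the type to OFPT_BARRIER_REPLY, and echo the xid back. A minimal standalone sketch (the struct layout and constants follow the OpenFlow 1.0 spec's openflow.h; `make_barrier_reply` is a hypothetical helper name, not actual cbench code):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* OpenFlow 1.0 constants (values from the OF 1.0 spec / openflow.h). */
#define OFP_VERSION           0x01
#define OFPT_BARRIER_REQUEST  18
#define OFPT_BARRIER_REPLY    19

/* Every OpenFlow 1.0 message starts with this 8-byte header. */
struct ofp_header {
    uint8_t  version;
    uint8_t  type;
    uint16_t length;   /* total message length, network byte order */
    uint32_t xid;      /* transaction id, echoed back in the reply */
};

/* Build a header-only barrier reply, mirroring what the patch above
 * does inside fakeswitch_handle_read(). */
static struct ofp_header make_barrier_reply(const struct ofp_header *req)
{
    struct ofp_header reply;
    memset(&reply, 0, sizeof reply);
    reply.version = OFP_VERSION;
    reply.type    = OFPT_BARRIER_REPLY;
    reply.length  = htons(sizeof reply);
    reply.xid     = req->xid;   /* already network order; copy verbatim */
    return reply;
}
```

With a reply like this sent for each barrier request, POX's connection handshake can complete, which is what lets cbench start reporting non-zero fmods/sec.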