It seems the last mediaproxy module change was in rev 6704. I backed down to rev 6702, and no longer see these errors.
Am I correct in my thinking that CentOS users who are not comfortable installing Python 2.5 alongside 2.4 are stuck at Mediaproxy 2.3.8 and OpenSIPS 1.6 rev 6702?

- Jeff

On May 25, 2010, at 10:54 PM, Jeff Pyle wrote:

> Dan,
>
> I increased the traffic_sampling_period to 30. I don't know if it's related, but I now receive the following error on any new relay session (the same traceback repeats for each one):
>
> Traceback (most recent call last):
>   File "/usr/lib/pymodules/python2.5/mediaproxy/relay.py", line 175, in lineReceived
>     response = self.factory.parent.got_command(self.factory.host, self.command, self.headers)
>   File "/usr/lib/pymodules/python2.5/mediaproxy/relay.py", line 387, in got_command
>     local_media = self.session_manager.update_session(dispatcher, **headers)
>   File "/usr/lib/pymodules/python2.5/mediaproxy/mediacontrol.py", line 693, in update_session
>     session = Session(self, dispatcher, call_id, from_tag, from_uri, to_tag, to_uri, cseq, user_agent, media, is_downstream, is_caller_cseq)
>   File "/usr/lib/pymodules/python2.5/mediaproxy/mediacontrol.py", line 432, in __init__
>     self.update_media(cseq, to_tag, user_agent, media_list, is_downstream, is_caller_cseq)
>   File "/usr/lib/pymodules/python2.5/mediaproxy/mediacontrol.py", line 463, in update_media
>     for media_type, media_ip, media_port, media_direction in media_list:
> ValueError: too many values to unpack
>
> This is on Mediaproxy 2.3.8. The relay machines are Debian unstable, with the Mediaproxy pre-reqs from the APT repository.
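The last frame of that traceback unpacks each media_list entry into exactly four names, so any entry carrying an extra field fails with precisely this error. Below is a minimal sketch of the failure mode and of a more tolerant unpack; the five-element entry is hypothetical (standing in for whatever extra per-stream field a newer OpenSIPS mediaproxy module revision might send), not the real dispatcher/relay wire format.

```python
# Sketch of the "ValueError: too many values to unpack" failure mode.
# The five-element entry below is hypothetical, not MediaProxy's actual format.
old_style = [("audio", "192.0.2.10", 35000, "sendrecv")]
new_style = [("audio", "192.0.2.10", 35000, "sendrecv", "extra-field")]

def update_media(media_list):
    # Mirrors the loop shown at mediacontrol.py line 463 in the traceback:
    # it binds each entry to exactly four names.
    for media_type, media_ip, media_port, media_direction in media_list:
        print("%s %s:%s %s" % (media_type, media_ip, media_port, media_direction))

def update_media_tolerant(media_list):
    # One defensive alternative: ignore any trailing fields we don't know about.
    for entry in media_list:
        media_type, media_ip, media_port, media_direction = entry[:4]
        print("%s %s:%s %s" % (media_type, media_ip, media_port, media_direction))

update_media(old_style)             # prints: audio 192.0.2.10:35000 sendrecv
# update_media(new_style)           # raises ValueError: too many values to unpack
update_media_tolerant(new_style)    # tolerates the extra field
```

If that is indeed what changed, it would be consistent with the errors disappearing after backing the mediaproxy module down to rev 6702.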
> I originally tried 2.4.2 but got some weirdness on the dispatcher side, a CentOS machine. I figured it was some python-application or gnutls version issue. I'm tapped out on the dispatcher machine because of the Python 2.4 issue in CentOS.
>
> At the same time I changed the sampling period, I also updated from OpenSIPS 1.6 rev 6617 to rev 6898. Thinking perhaps there has been some mediaproxy module change that isn't jiving with 2.3.8, I'll go back to a previous version to see if I still see the relay problems.
>
>
> - Jeff
>
>
> On May 25, 2010, at 3:19 PM, Dan Pascu wrote:
>
>>
>> On 25 May 2010, at 19:52, Jeff Pyle wrote:
>>
>>> Hi Saúl,
>>>
>>> That makes sense. I'm more busy today than curious, but thanks for the info!
>>>
>>> Is this calculation used for documentation only? Does disabling it affect any operational aspect?
>>
>> It won't affect any operational aspect. It's only used to gather statistics. If you disable it, you won't know how much data was moved through the relay by each session.
>>
>>> Would its running too slowly cause any traffic issues through the relay, particularly unscheduled disconnections from the dispatcher?
>>
>> No. It has no relation to the connections. The only downside of having it enabled when it takes too long to compute is that the function that gathers the statistics is blocking. That means that if the relay is asked to add a new session at that time, it won't do so until the statistics gathering function returns. Existing streams that belong to already established sessions won't be affected, as they are already processed inside the kernel. The only thing affected is setting up new streams, which is only delayed until the statistics gathering function finishes its job. In your case, if such a request comes, it will be delayed by approximately 11 ms, which will not affect your operations in any visible way. The message is printed as a warning. We considered that blocking the processing of requests for more than 10 ms is undesirable, but in practice you won't see any difference even if it takes 100 ms to gather the statistics.
>>
>>> I'm trying to diagnose the root causes of some issues we had this morning, and this information will be most helpful.
>>
>> If you have one-way audio or disconnections, those are in no way caused by this. As I said, once the session is established, the data is forwarded by conntrack rules in the kernel and will not be affected by any delay in response from the relay. The only negative effect you can expect if the statistics gathering time is too high is a delay in establishing the session.
>>
>>>
>>>
>>> - Jeff
>>>
>>>
>>> On May 25, 2010, at 12:19 PM, Saúl Ibarra Corretgé wrote:
>>>
>>>> Hi Jeff and Laszlo,
>>>>
>>>> On 25/5/10 4:45 PM, Jeff Pyle wrote:
>>>>> Laszlo,
>>>>>
>>>>> Good suggestion... I haven't dug into the source to see if the two are related. I've set that value to 0 to disable it. We'll see.
>>>>>
>>>>
>>>> Indeed, that setting disables that calculation. If too many streams are traversing the MediaProxy relay at the same time, the calculation could take too long (depending on the configuration setting), so it's not a good idea to do it. You may want to increase the traffic sampling period.
>>>>
>>>> In case you are curious, look at the _measure_speed function in the SessionManager class (mediacontrol.py file).
>>>> That function is called periodically, every traffic_sample_rate seconds; that's why it's not a good idea for it to take long doing the calculation.
>>>>
>>>>
>>>> Regards,
>>>>
>>>> --
>>>> Saúl Ibarra Corretgé
>>>> AG Projects
>>
>>
>> --
>> Dan


_______________________________________________
Users mailing list
[email protected]
http://lists.opensips.org/cgi-bin/mailman/listinfo/users
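To make Dan's point about the blocking statistics pass concrete, here is a minimal, hypothetical sketch in plain Twisted (not MediaProxy's actual code; the names and timings are invented, with the delay exaggerated well beyond the ~11 ms case discussed above). A LoopingCall stands in for SessionManager._measure_speed, and a command arriving while it runs is only handled after it returns; nothing here touches streams already forwarded by kernel conntrack rules.

```python
# Hypothetical sketch of a single-threaded Twisted reactor where a blocking
# periodic statistics pass delays only newly scheduled work.
import time
from twisted.internet import reactor
from twisted.internet.task import LoopingCall

SAMPLING_PERIOD = 30    # stands in for traffic_sampling_period; 0 would disable sampling
STATS_DURATION = 2.0    # exaggerated for visibility; the case above was ~11 ms

START = time.time()

def gather_statistics():
    # Stand-in for SessionManager._measure_speed(): it blocks the reactor
    # thread for as long as reading the traffic counters takes.
    print("statistics pass started at t=%.1f" % (time.time() - START))
    time.sleep(STATS_DURATION)
    print("statistics pass finished at t=%.1f" % (time.time() - START))

def handle_new_session():
    # Stand-in for handling a new-session command from the dispatcher; although
    # scheduled for t=1, it only runs once gather_statistics() has returned.
    print("new session set up at t=%.1f" % (time.time() - START))

if SAMPLING_PERIOD > 0:
    LoopingCall(gather_statistics).start(SAMPLING_PERIOD)   # first run is immediate
reactor.callLater(1, handle_new_session)
reactor.callLater(4, reactor.stop)
reactor.run()
```

In this sketch, a period of 0 simply skips scheduling the loop, mirroring how disabling traffic_sampling_period removes the statistics pass (and its warning) without affecting established sessions.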
