Hello Michael,
as advertised, I've used your new interface. I had no issues while
running it on a small scale, but when run on a cluster a few of the
computing nodes crashed with the following traceback:
Traceback (most recent call last):
....
File "erb.py", line 93, in simulate
tad.route.add('probe_route', edges)
File "simsuite/sumo/tools/traci/_route.py", line 50, in add
self._connection._sendExact()
File "simsuite/sumo/tools/traci/connection.py", line 97, in _sendExact
err = result.readString()
File "simsuite/sumo/tools/traci/storage.py", line 54, in readString
return str(self.read("!%ss" % length)[0].decode("latin1"))
File "simsuite/sumo/tools/traci/storage.py", line 38, in read
return struct.unpack(format, self._content[oldPos:self._pos])
struct.error: unpack requires a string argument of length 55363200
If I understand it right, the TraCI server returned a malformed/unexpected
message. The offending call is tad.route.add(), where tad is a Connection()
instance. I am sorry for not having more information, but this bug is rare
and I do not have the resources (for the moment at least) to re-run it all
again to tell you what was in the list of edges.
I have seen it crash on the same input in at least one case (the list of
edges is the only variable input here), so it is likely a systematic error
rather than a microwave ray from outer space randomly striking the
server :-).
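If it helps, next time I can get cluster time I will wrap the call roughly
like this, so that the offending edge list survives a crash (just a sketch;
the log file name is made up):

import json

def add_route_logged(tad, route_id, edges):
    # dump the input before the call so even a rare crash leaves it behind
    with open('probe_routes.log', 'a') as f:
        f.write('%s %s\n' % (route_id, json.dumps(list(edges))))
    tad.route.add(route_id, edges)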
Matej.
On 17.3.2016 21:44, Michael Behrisch wrote:
> Hi Matej,
> I picked up your proposal and did not put the Connection objects in the
> central dict when they are created via "connect". So calling connect or
> creating the connection object manually should be thread-safe now. I keep
> the connect function as a shortcut, but it should already be possible to
> create connection objects directly.
>
> I disagree a little about inventing a traci2 module, because we are
> fairly object oriented now and have only a little overhead for the
> backwards compatibility (without using any wrappers except for the
> simStep, getVersion and close functions). Unfortunately the changes broke
> the embedded Python functionality, but I will try to fix that soon.
>
> Best regards,
> Michael
>
> Am 17.03.2016 um 02:50 schrieb Matěj Kubička:
>> There seems to be a race condition, but it can be taken care of by
>> putting a lock on the _connections array.
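>> Something along these lines (a rough sketch; the helper names are just
>> placeholders):
>>
>> import threading
>>
>> _connections = {}
>> _connections_lock = threading.Lock()
>>
>> def _register(label, conn):
>>     # serialize all access to the shared registry
>>     with _connections_lock:
>>         _connections[label] = conn
>>
>> def _get(label):
>>     with _connections_lock:
>>         return _connections[label]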
>>
>> Without really seeing the full picture here, I propose to go a different
>> way about it. First, the business with labels seems unnecessary, since the
>> server:port identifier is as unique within a connection as any label can
>> be. Second, there doesn't seem to be any practical need for any overlap
>> between instances of different connections. Not only do you get these race
>> conditions with it, but from a stability and maintainability point of view
>> you are also better off keeping different connections as separate as
>> possible.
>>
>> So what about making traci fully instantiable: (1) the Connection
>> class would become the access point to everything; (2) the connection would
>> be created when the Connection class is instantiated, without the need for
>> traci.connect(). The concerns about backward compatibility can be
>> addressed in a way that would also prevent bloating of the code.
>> Simply create a new module such as traci2 with the new functionality and
>> adapt the old traci module so that it uses the services of traci2.
>> This would decouple the old approach from the new one nicely, and the old
>> traci module would be reduced to simple wrapper routines, which are easy
>> to maintain and don't need to be explicitly tested.
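>> Roughly what I mean, as a sketch (traci2 and its Connection class are just
>> placeholder names here):
>>
>> # traci/__init__.py -- the old module reduced to thin wrappers (sketch)
>> import traci2
>>
>> _default = None
>>
>> def init(port):
>>     # keep the old entry point, but delegate everything to traci2
>>     global _default
>>     _default = traci2.Connection(port)
>>     return _default
>>
>> def simulationStep(step=0):
>>     return _default.simulationStep(step)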
>>
>> Matej.
>>
>>
>>
>>
>> On 16.3.2016 21:43, Lockhart, Thomas G (398I) wrote:
>>>> On Mar 16, 2016, at 1:30 PM, Michael Behrisch
>>>> <[email protected]> wrote:
>>>>
>>>> Hi,
>>>> the easiest solution would be to simply change init to return a
>>>> connection. This breaks backwards compatibility but I don't know whether
>>>> anyone ever used the return value of init anyway.
>>>> What do you think?
>>> Hmm. I would try to refactor slightly by moving the connection code (mostly
>>> just the retry loop) into the module-level connect() method, have that
>>> return the Connection() object directly, and then have the module-level
>>> init() method stuff that result into the _connections array after calling
>>> connect() itself.
>>>
>>> Things would stay backward compatible for those using the old interface,
>>> but not be brittle for those using the new class-based interface.
>>>
>>> Would you like me to send along some code as an example?
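>>> Something along these lines, very roughly (a sketch from memory; the
>>> signatures are not checked against the actual module):
>>>
>>> import socket
>>> import time
>>>
>>> # Connection and _connections are the existing module-level class and dict
>>> def connect(port, numRetries=10, host="localhost"):
>>>     # the retry loop lives here; the Connection object is returned directly
>>>     for retry in range(numRetries + 1):
>>>         try:
>>>             return Connection(host, port)
>>>         except socket.error:
>>>             time.sleep(retry)
>>>     raise IOError("could not connect to sumo on port %s" % port)
>>>
>>> def init(port, numRetries=10, host="localhost", label="default"):
>>>     # old entry point: call connect() and stuff the result into _connections
>>>     _connections[label] = connect(port, numRetries, host)
>>>     return _connections[label]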
>>>
>>> - Tom
>>>
>>>> Best regards,
>>>> Michael
>>>>
>>>> Am 16.03.2016 um 16:22 schrieb Lockhart, Thomas G (398I):
>>>>> Works fine for me for the simple case of a single connection.
>>>>>
>>>>> Glancing at the code, I’m wondering if the gap between making a physical
>>>>> connection and setting the _connections array, and then looking the
>>>>> connection up again afterwards to return it from the connect() method,
>>>>> might be a problem in multi-threaded code. Could the connection be made
>>>>> instead without losing contact with the parameters in between? Perhaps
>>>>> just by using the explicit connection label to look up the connection?
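>>>>> I.e. the pattern I mean, paraphrased from memory rather than quoted (the
>>>>> exact names may differ):
>>>>>
>>>>> _connections[label] = Connection(host, port)   # connection made and stored
>>>>> # another thread may create or switch a connection in between
>>>>> return _connections[_currentLabel]             # then looked up again to return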
>>>>>
>>>>> - Tom
>>>>>
>>>>>> On Mar 15, 2016, at 11:27 PM, Michael Behrisch
>>>>>> <[email protected]> wrote:
>>>>>>
>>>>>> Hi,
>>>>>> I did a first shot on this one yesterday and I am quite happy with the
>>>>>> result so far. We now have instances of connections and instances of
>>>>>> domains (vehicle, edge, ...), which are mainly needed because they can
>>>>>> refer to a connection when a call is made, hiding it as a member. So it
>>>>>> is now possible to either call traci.init(...) and continue with the
>>>>>> usual traci.vehicle... stuff, or call conn = traci.connect(...) and do
>>>>>> conn.vehicle... Everything integrates much more nicely now with the
>>>>>> parameter and subscription API as well. The tests already run well,
>>>>>> except for the subscriptions (and I think it is only a problem with the
>>>>>> default parameters here).
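>>>>>> Schematically, both of these now work (port and 'probe' are just
>>>>>> placeholders):
>>>>>>
>>>>>> # old style: one implicit default connection
>>>>>> traci.init(port)
>>>>>> print traci.vehicle.getSpeed('probe')
>>>>>>
>>>>>> # new style: explicit connection object
>>>>>> conn = traci.connect(port)
>>>>>> print conn.vehicle.getSpeed('probe')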
>>>>>>
>>>>>> I only did single instance testing so far, so if you have time to spend
>>>>>> on this one please test extensively (also the old API with switch).
>>>>>>
>>>>>> Best regards,
>>>>>> Michael
>>>>>>
>>>>>> Am 07.01.2016 um 08:40 schrieb Jakob Erdmann:
>>>>>>> Great. I've made this into a ticket (
>>>>>>> http://sumo.dlr.de/trac.wsgi/ticket/2091) and will let you know when we
>>>>>>> start with the implementation after discussing with my colleagues.
>>>>>>>
>>>>>>> 2016-01-07 7:19 GMT+01:00 Lockhart, Thomas G (398I) <[email protected]>:
>>>>>>>
>>>>>>>> I had noticed this earlier but not written a solution. I’d be happy to
>>>>>>>> help with a conversion to class-based encapsulation or with testing.
>>>>>>>>
>>>>>>>> - Tom
>>>>>>>>
>>>>>>>>> On Jan 6, 2016, at 9:45 PM, Matěj Kubička <[email protected]> wrote:
>>>>>>>>> If it comes to that, I am sure I can find a few hours to help with it.
>>>>>>>>> The changes to do are extensive, but they affect only structure, not
>>>>>>>>> functionality. This should simplify both the conversion and testing.
>>>>>>>>>
>>>>>>>>> Matej.
>>>>>>>>>
>>>>>>>>> PS: In this particular case the interpreter lock will not have much of
>>>>>>>>> an effect on performance, as the worker threads are mostly in a
>>>>>>>>> suspended state (or at least I think they are). They should either
>>>>>>>>> sleep, issue a non-blocking send request, or process a response; they
>>>>>>>>> don't use spinlocks and they don't do any heavy computation. The real
>>>>>>>>> work is done by sumo.
>>>>>>>>>
>>>>>>>>> Anyway, threads are evil :-), see
>>>>>>>>> http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf. If
>>>>>>>>> performance is an issue, you can circumvent the interpreter lock by
>>>>>>>>> implementing the time-critical stuff in C/C++ with POSIX threads.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 6.1.2016 08:14, Jakob Erdmann wrote:
>>>>>>>>>> So far we haven't used traci in a multi-threaded application. Rather,
>>>>>>>>>> we have had a single thread of control, due to the rather tight
>>>>>>>>>> coupling of the different simulations.
>>>>>>>>>> I understand the problem for your queue/worker example and I think
>>>>>>>>>> your approach to solving it is straightforward.
>>>>>>>>>> With a global default TraciAdapter it would even be possible to
>>>>>>>>>> maintain full backward compatibility for people that use a single
>>>>>>>>>> instance (which I consider quite important).
>>>>>>>>>> Would you be able to help with converting the remaining traci
>>>>>>>>>> modules?
>>>>>>>>>>
>>>>>>>>>> regards,
>>>>>>>>>> Jakob
>>>>>>>>>>
>>>>>>>>>> PS: Note that multi-threading in the default Python implementation is
>>>>>>>>>> still not efficient when spending lots of time in Python code, due to
>>>>>>>>>> the GIL: https://wiki.python.org/moin/GlobalInterpreterLock
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2016-01-06 5:15 GMT+01:00 Matěj Kubička <[email protected]>:
>>>>>>>>>>
>>>>>>>>>> Thanks for the information, I didn't know about this.
>>>>>>>>>>
>>>>>>>>>> The method you describe on the wiki is single-resource access
>>>>>>>>>> multiplexing. Why not have a traci instance specific to each
>>>>>>>>>> connection?
>>>>>>>>>>
>>>>>>>>>> Consider this setup: I have N worker threads to which I assign
>>>>>>>>>> jobs dynamically from a single queue. The traci.switch() is
>>>>>>>>>> useless within the workers, as Python can preempt at any time.
>>>>>>>>>>
>>>>>>>>>> I can call traci.switch() before every call to anything
>>>>>>>>>> traci-related and wrap it in some global lock in order to
>>>>>>>>>> ensure proper synchronization. This way I get four statements for
>>>>>>>>>> every single call to traci. This is inelegant, to say the least.
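>>>>>>>>>> Concretely, every traci call inside a worker degenerates into
>>>>>>>>>> something like this (a sketch; traci_lock and my_label are just
>>>>>>>>>> placeholders):
>>>>>>>>>>
>>>>>>>>>> traci_lock.acquire()                      # 1. take the global lock
>>>>>>>>>> traci.switch(my_label)                    # 2. select this worker's connection
>>>>>>>>>> speed = traci.vehicle.getSpeed('probe')   # 3. the actual call
>>>>>>>>>> traci_lock.release()                      # 4. release the lock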
>>>>>>>>>>
>>>>>>>>>> How do you usually use it? Maybe there is a better way for me to
>>>>>>>>>> implement it.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Matej.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 5.1.2016 08:16, Jakob Erdmann wrote:
>>>>>>>>>>> Hello,
>>>>>>>>>>> As I wrote before it is already possible to run multiple
>>>>>>>>>>> simulations from the same script. I updated the documentation to
>>>>>>>>>>> make this more obvious in the future:
>>>>>>>>>>>
>>>>>>>>>>> http://sumo.dlr.de/wiki/TraCI/Interfacing_TraCI_from_Python#Controlling_parallel_simulations_from_the_same_TraCI_script
>>>>>>>>>>> regards,
>>>>>>>>>>> Jakob
>>>>>>>>>>>
>>>>>>>>>>> 2016-01-05 2:16 GMT+01:00 Matěj Kubička <[email protected]>:
>>>>>>>>>>>
>>>>>>>>>>> Hi Jakob (et al.),
>>>>>>>>>>> I needed the same - to run multiple sumo instances in
>>>>>>>>>>> parallel. Unfortunately, the Python bindings for TraCI do not
>>>>>>>>>>> support that, since the traci package is written as a state
>>>>>>>>>>> machine.
>>>>>>>>>>>
>>>>>>>>>>> I've worked on it a bit and adapted traci to support
>>>>>>>>>>> multiple coexisting connections. The changes are extensive,
>>>>>>>>>>> but trivial. I have wrapped the IO-related stuff in a class
>>>>>>>>>>> TraciAdapter, whose constructor is derived from what
>>>>>>>>>>> originally was traci.init(). Then I wrapped the functionality
>>>>>>>>>>> of the vehicle and route modules in classes Vehicle and Route,
>>>>>>>>>>> and I instantiate them in TraciAdapter's constructor.
>>>>>>>>>>>
>>>>>>>>>>> Except that you now have to access traci objects through
>>>>>>>>>>> instances of TraciAdapter, the interface otherwise remains
>>>>>>>>>>> the same.
>>>>>>>>>>>
>>>>>>>>>>> My experiments are limited to adding a vehicle and
>>>>>>>>>>> collecting data about it. I worked only on the parts of the
>>>>>>>>>>> package concerned. I am sending you the code as a mere proof
>>>>>>>>>>> of concept, in case you are interested in such functionality.
>>>>>>>>>>>
>>>>>>>>>>> Matej.
>>>>>>>>>>>
>>>>>>>>>>> PS: I tried to send you the full package, but our mail
>>>>>>>>>>> server blacklists zipped attachments, so I am sending you
>>>>>>>>>>> only the three affected files.
>>>>>>>>>>>
>>>>>>>>>>> PS2: a simplified example that controls two sumo instances, adds
>>>>>>>>>>> a vehicle to each, and dumps their speeds as they progress in time:
>>>>>>>>>>> import traci
>>>>>>>>>>>
>>>>>>>>>>> tad=traci.TraciAdapter(port)
>>>>>>>>>>> tad.route.add('probe_route', edges)
>>>>>>>>>>> tad.vehicle.add('probe', 'probe_route')
>>>>>>>>>>>
>>>>>>>>>>> tad1=traci.TraciAdapter(port+1)
>>>>>>>>>>> tad1.route.add('probe_route', edges)
>>>>>>>>>>> tad1.vehicle.add('probe', 'probe_route')
>>>>>>>>>>>
>>>>>>>>>>> while True:
>>>>>>>>>>>     tad.simulationStep()
>>>>>>>>>>>     tad1.simulationStep()
>>>>>>>>>>>     print tad.vehicle.getSpeed('probe'), tad1.vehicle.getSpeed('probe')
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 17.12.2015 15:54, Jakob Erdmann wrote:
>>>>>>>>>>>
>>>>>>>>>>> Yes. You can run multiple instances of sumo at the same
>>>>>>>>>>> time. It is even possible to control multiple instances
>>>>>>>>>>> from the same TraCI script, as long as you are careful
>>>>>>>>>>> with the port numbers.
>>>>>>>>>>> regards,
>>>>>>>>>>> Jakob
>>>>>>>>>>>
>>>>>>>>>>> 2015-12-17 14:39 GMT+01:00 Phuong Nguyen <[email protected]>:
>>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> I'm trying to optimize a traffic scenario using an
>>>>>>>>>>> optimization algorithm and sumo. In the optimization
>>>>>>>>>>> process, I need to call sumo to run the scenario simulation
>>>>>>>>>>> many times. Can a number of the simulations run in parallel?
>>>>>>>>>>>
>>>>>>>>>>> Thanks so much.
>>>>>>>>>>> --
>>>>>>>>>>> Ms. Nguyen Thi Mai Phuong
>>>>>>>>>>> Division of Science Management and International
>>>>>>>>>>> Relations,
>>>>>>>>>>> Department of Network and Communications,
>>>>>>>>>>> Thai Nguyen University of Information and
>>>>>>>>>>> Communication Technology,
>>>>>>>>>>> Thai Nguyen city, Thai Nguyen province, Vietnam.
>>>>>>>>>>> Email: [email protected]
>>>>>>>>>>> Tel: 0985 18 38 48
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>
>>
>>