Works fine for me for the simple case of a single connection.

Glancing at the code, I'm wondering whether the gap between making the physical 
connection and storing it in the _connections array, and then looking the 
connection up again to return it from connect(), might be a problem in 
multi-threaded code. Could the connection be handled without losing track of it 
in between, perhaps just by using the explicit connection label to look up the 
connection?
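
Something along these lines, where connect() keeps hold of the object it just
created instead of fetching it back out of _connections afterwards (Connection
and the argument names below are only stand-ins for the real internals):

    _connections = {}

    class Connection(object):
        """Placeholder for traci's actual socket/connection wrapper."""
        def __init__(self, host, port):
            self.host, self.port = host, port

    def connect(host="localhost", port=8813, label="default"):
        conn = Connection(host, port)   # make the physical connection
        _connections[label] = conn      # register it under its explicit label
        return conn                     # return the same object; no second lookup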

- Tom

> On Mar 15, 2016, at 11:27 PM, Michael Behrisch 
> <[email protected]> wrote:
> 
> Hi,
> I took a first shot at this one yesterday and I am quite happy with the
> result so far. We now have instances of connections and instances of
> domains (vehicle, edge, ...), which are mainly needed because they can
> hold a connection as a member and refer to it when a call is made.
> So it is now possible to either call
> traci.init(...) and continue with the usual traci.vehicle... stuff, or
> call conn = traci.connect(...) and use conn.vehicle... Everything also
> integrates much more nicely with the parameter and subscription API
> now. The tests already run well, except for the subscriptions (and I
> think it is only a problem with the default parameters there).
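> 
> Roughly, in code (argument lists abbreviated, getSpeed just as an example):
> 
>     # old style: module-level state, one implicit connection
>     traci.init(port=8813)
>     traci.vehicle.getSpeed("veh0")
> 
>     # new style: an explicit connection object carrying its own domains
>     conn = traci.connect(port=8814)
>     conn.vehicle.getSpeed("veh0")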
> 
> I have only done single-instance testing so far, so if you have time to
> spend on this one please test extensively (including the old API with
> switch).
> 
> Best regards,
> Michael
> 
> On 07.01.2016 at 08:40, Jakob Erdmann wrote:
>> Great. I've made this into a ticket (
>> http://sumo.dlr.de/trac.wsgi/ticket/2091) and will let you know when we
>> start with the implementation after discussing with my colleagues.
>> 
>> 2016-01-07 7:19 GMT+01:00 Lockhart, Thomas G (398I) <
>> [email protected]>:
>> 
>>> I had noticed this earlier but not written a solution. I’d be happy to
>>> help with a conversion to class-based encapsulation or with testing.
>>> 
>>> - Tom
>>> 
>>>> On Jan 6, 2016, at 9:45 PM, Matěj Kubička <[email protected]> wrote:
>>>> 
>>>> If it comes to that, I am sure I can find a few hours to help with it. The
>>>> changes to be made are extensive, but they affect only structure, not
>>>> functionality. This should simplify both the conversion and the testing.
>>>> 
>>>> Matej.
>>>> 
>>>> PS: In this particular case the interpreter lock will not have much of
>>>> an effect on performance, as the worker threads are mostly in a
>>>> suspended state (or at least I think they are). They should either
>>>> sleep, issue a non-blocking send request, or process a response; they
>>>> don't use spinlocks and they don't do any heavy computation. The real
>>>> work is done by sumo.
>>>> 
>>>> Anyway, threads are evil :-), see
>>>> http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf. If
>>>> performance is an issue, you can circumvent the interpreter lock by
>>>> implementing the time-critical parts in C/C++ with POSIX threads.
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On 6.1.2016 08:14, Jakob Erdmann wrote:
>>>>> So far we haven't used traci in a multi-threaded application; rather,
>>>>> we had a single thread of control due to the rather tight coupling of
>>>>> the different simulations.
>>>>> I understand the problem in your queue/worker example and I think
>>>>> your approach to solving it is straightforward.
>>>>> With a global default TraciAdapter it would even be possible to
>>>>> maintain full backward compatibility for people who use a single
>>>>> instance (which I consider quite important).
>>>>> Would you be able to help with converting the remaining traci modules?
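>>>>> For example, roughly (sketch only; it assumes the TraciAdapter class
>>>>> from your patch, and the old module-level calls simply delegate to a
>>>>> default instance):
>>>>> 
>>>>>     _default = None                       # module-level default adapter
>>>>> 
>>>>>     def init(port=8813):
>>>>>         # old entry point now just creates the default adapter
>>>>>         global _default
>>>>>         _default = TraciAdapter(port)
>>>>> 
>>>>>     def simulationStep():
>>>>>         # old module-level call, forwarded to the default adapter
>>>>>         return _default.simulationStep()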
>>>>> 
>>>>> regards,
>>>>> Jakob
>>>>> 
>>>>> PS: Note that multi-threading in the default Python implementation is
>>>>> still not efficient when lots of time is spent in Python code, due to the
>>>>> GIL: https://wiki.python.org/moin/GlobalInterpreterLock
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 2016-01-06 5:15 GMT+01:00 Matěj Kubička
>>>>> <[email protected]
>>>>> <mailto:[email protected]>>:
>>>>> 
>>>>>   Thanks for the information, I didn't know about this.
>>>>> 
>>>>>   The method you describe on the wiki is multiplexed access to a
>>>>>   single resource. Why not have a traci instance specific to each
>>>>>   connection?
>>>>> 
>>>>>   Consider this setup: I have N worker threads to which I assign
>>>>>   jobs dynamically from a single queue. traci.switch() is useless
>>>>>   within the workers, as Python can preempt a thread at any time.
>>>>> 
>>>>>   I could call traci.switch() before every call to anything
>>>>>   traci-related and wrap it in some global lock in order to ensure
>>>>>   proper synchronization, but that way I end up with four statements
>>>>>   for every single call to traci, which is inelegant, to say the least.
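>>>>> 
>>>>>   Roughly, for a single getSpeed call (the lock is mine, not part of
>>>>>   traci):
>>>>> 
>>>>>       import threading
>>>>>       import traci
>>>>> 
>>>>>       traci_lock = threading.Lock()   # one global lock shared by all workers
>>>>> 
>>>>>       def get_speed(label, veh_id):
>>>>>           with traci_lock:            # serialize every traci access
>>>>>               traci.switch(label)     # point the module at this worker's sumo
>>>>>               return traci.vehicle.getSpeed(veh_id)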
>>>>> 
>>>>>   How do you usually use it? Maybe there is a better way for me to
>>>>>   implement it.
>>>>> 
>>>>>   Thanks,
>>>>>   Matej.
>>>>> 
>>>>> 
>>>>>   On 5.1.2016 08:16, Jakob Erdmann wrote:
>>>>>>   Hello,
>>>>>>   As I wrote before it is already possible to run multiple
>>>>>>   simulations from the same script. I updated the documentation to
>>>>>>   make this more obvious in the future:
>>>>>>   http://sumo.dlr.de/wiki/TraCI/Interfacing_TraCI_from_Python#Controlling_parallel_simulations_from_the_same_TraCI_script
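>>>>>> 
>>>>>>   In short, something like this (the ports and labels are just examples):
>>>>>> 
>>>>>>       import traci
>>>>>> 
>>>>>>       traci.init(port=8813, label="sim0")   # first running sumo instance
>>>>>>       traci.init(port=8814, label="sim1")   # second running sumo instance
>>>>>> 
>>>>>>       traci.switch("sim0")
>>>>>>       traci.simulationStep()
>>>>>>       traci.switch("sim1")
>>>>>>       traci.simulationStep()
>>>>>> 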
>>>>>>   regards,
>>>>>>   Jakob
>>>>>> 
>>>>>>   2016-01-05 2:16 GMT+01:00 Matěj Kubička
>>>>>>   <[email protected]
>>>>>>   <mailto:[email protected]>>:
>>>>>> 
>>>>>>       Hi Jakob (et al.),
>>>>>>       I needed the same thing: to run multiple sumo instances in
>>>>>>       parallel. Unfortunately, the Python bindings for TraCI do not
>>>>>>       support that, since the traci package is written as a state
>>>>>>       machine.
>>>>>> 
>>>>>>       I've worked on it a bit and adapted traci to support
>>>>>>       multiple coexistent connections. The changes are extensive,
>>>>>>       but trivial. I wrapped the IO-related stuff in a class
>>>>>>       TraciAdapter, whose constructor is derived from what was
>>>>>>       originally traci.init(). Then I wrapped the functionality of
>>>>>>       the vehicle and route modules in classes Vehicle and Route,
>>>>>>       which are instantiated in TraciAdapter's constructor.
>>>>>> 
>>>>>>       Apart from the fact that you now access traci objects through
>>>>>>       instances of TraciAdapter, the interface remains the same.
>>>>>> 
>>>>>>       My experiments are limited to adding a vehicle and collecting
>>>>>>       data about it, and I only worked on the parts of the package
>>>>>>       concerned. I am sending you the code as a mere proof of
>>>>>>       concept, in case you are interested in such functionality.
>>>>>> 
>>>>>>       Matej.
>>>>>> 
>>>>>>       PS: I tried to send you the full package, but our mail server
>>>>>>       blacklists zipped attachments, so I am sending you the three
>>>>>>       affected files only.
>>>>>> 
>>>>>>       PS2: a simplified example that controls two sumo instances,
>>>>>>       adds a vehicle to each, and dumps their speeds as they progress
>>>>>>       in time:
>>>>>> 
>>>>>>       import traci
>>>>>> 
>>>>>>       tad=traci.TraciAdapter(port)
>>>>>>       tad.route.add('probe_route', edges)
>>>>>>       tad.vehicle.add('probe', 'probe_route')
>>>>>> 
>>>>>>       tad1=traci.TraciAdapter(port+1)
>>>>>>       tad1.route.add('probe_route', edges)
>>>>>>       tad1.vehicle.add('probe', 'probe_route')
>>>>>> 
>>>>>>       while True:
>>>>>>           tad.simulationStep()
>>>>>>           tad1.simulationStep()
>>>>>>           print tad.vehicle.getSpeed('probe'), tad1.vehicle.getSpeed('probe')
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>       On 17.12.2015 15:54, Jakob Erdmann wrote:
>>>>>> 
>>>>>>           Yes. You can run multiple instances of sumo at the same
>>>>>>           time. It is even possible to control multiple instances
>>>>>>           from the same TraCI script as long as you are careful
>>>>>>           with the port numbers.
>>>>>>           regards,
>>>>>>           Jakob
>>>>>> 
>>>>>>           2015-12-17 14:39 GMT+01:00 Phuong Nguyen
>>>>>>           <[email protected]
>>>>>>           <mailto:[email protected]>>:
>>>>>> 
>>>>>>               Hi,
>>>>>> 
>>>>>>               I'm trying to optimize a traffic scenario using an
>>>>>>               optimization algorithm and sumo. In the optimization
>>>>>>               process, I need to call sumo to run the scenario
>>>>>>               simulation many times. Can a number of the
>>>>>>               simulations run in parallel?
>>>>>> 
>>>>>>               Thanks so much.
>>>>>>               --
>>>>>>               Ms. Nguyen Thi Mai Phuong
>>>>>>               Division of Science Management and International
>>>>>>               Relations,
>>>>>>               Department of Network and Communications,
>>>>>>               Thai Nguyen University of Information and
>>>>>>               Communication Technology,
>>>>>>               Thai Nguyen city, Thai Nguyen province, Vietnam.
>>>>>>               Email: [email protected]
>>>>>>               Tel: 0985 18 38 48