Hello Sayan,

thanks for looking into the different methods. One thing to keep in mind here is
that in most cases we observe the environment at a specific time and, based on
that observation, perform a specific action. So I don't think the
producer-consumer pattern is necessary here: there is only one producer and only
one consumer, and each step depends on the previous one, observe -> action ->
observe -> action -> observe -> action -> ...
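
For illustration, here is a minimal sketch of such a strictly sequential loop on
the agent side, using a ZeroMQ REQ/REP pair as you suggested (the endpoint, the
message keys and the placeholder policy are made up for the example):

    import zmq

    # Agent side: request an observation, send back an action, repeat.
    # Endpoint and message format are illustrative only.
    context = zmq.Context()
    socket = context.socket(zmq.REQ)
    socket.connect("tcp://127.0.0.1:5555")

    for step in range(100):
        # Ask the simulator for the current observation.
        socket.send_json({"command": "observe"})
        observation = socket.recv_json()

        # Placeholder policy: derive some action from the observation.
        action = [0.0] * len(observation.get("state", []))

        # Send the action and wait for the simulator to apply it.
        socket.send_json({"command": "act", "action": action})
        socket.recv_json()

Since a REQ socket blocks until the matching reply arrives, the loop is strictly
sequential, which matches the observe -> action pattern above without any extra
threads or queues.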

> For the simulation, would Gazebo be a good choice? Its merits include inbuilt
> support for communication over sockets (using ProtoBuf), and support for
> advanced 3D graphics and physics engines.

I think that could be an option, yes. Another option would be
https://github.com/openai/gym/tree/master/gym/envs/robotics; we should take a
closer look at each framework and evaluate which makes the most sense in our
case. Ideally we would use the simulator only to train the methods based on a
camera image, which has the advantage that you could easily transfer the result
to other setups, since all you need is a camera.
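
Just to give an idea of what the gym robotics route would look like (assuming
the gym package with the robotics environments installed; FetchReach-v1 is one
of the shipped examples and the random policy is only a placeholder):

    import gym

    # One of the robotics environments that ship with gym.
    env = gym.make("FetchReach-v1")
    observation = env.reset()

    for step in range(50):
        # Grab a camera image of the scene instead of the low-level state,
        # so the same pipeline could later be fed from a real camera.
        image = env.render(mode="rgb_array")

        # Placeholder policy: sample a random action.
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        if done:
            observation = env.reset()

    env.close()

Note that these environments are built on MuJoCo, which is an extra dependency
we would have to factor into the comparison with Gazebo.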

Let me know what you think.

Thanks,
Marcus

> On 2. Mar 2018, at 10:47, Sayan Goswami <goswami.saya...@gmail.com> wrote:
> 
> Hi Marcus,
> 
> I have tested a couple of methods of interprocess communication over the week.
> Using TCP sockets is very fast indeed, and libraries like ZeroMQ help
> implement them with nearly negligible effort.
> Another method that I tried was a producer-consumer queue. The
> producer and the consumer run on different threads: the producer adds
> consumables to a processing queue, and the consumer then processes the items
> on this queue in FIFO order. This method is a little bit faster than using
> sockets, as there is no protocol overhead involved.
> As per your suggestion, I went through a few popular methods of interprocess
> communication (namely named pipes, sockets, and producer-consumer queues); a
> link to the repo containing them is here: https://github.com/Sayan98/ipc.
> Using flask (i.e. HTTP) would be a bad fit.
> My apologies for my oversight.
> 
> For the simulation, would Gazebo be a good choice? Its merits include inbuilt
> support for communication over sockets (using ProtoBuf), and support for
> advanced 3D graphics and physics engines.
> 
> Sincerely,
> Sayan Goswami
> 
> 
> On Tue, Feb 27, 2018 at 3:50 AM, Marcus Edel <marcus.e...@fu-berlin.de> wrote:
> Hello Sayan,
> 
> Thanks for the feedback, I like the general idea, but I'm not sure about
> flask. While writing https://github.com/zoq/gym_tcp_api I also tested HTTP,
> and it was way slower than using a socket; I'm not sure flask is faster, I
> guess we will have to test it out. HTTP is probably easier to use since you
> could use a simple server as a backend; on the other hand, our usage isn't
> exactly what HTTP is commonly used for. But this is easy to test out, we could
> just come up with a simple example and run some tests. Let me know what you
> think.
> 
> Thanks,
> Marcus
> 
>> On 26. Feb 2018, at 03:53, Sayan Goswami <goswami.saya...@gmail.com> wrote:
>> 
>> Hi Marcus,
>> 
>> Using Python (with OpenGL) for writing the simulator seems like a better
>> option to me, primarily because it will allow for more flexibility. Graphics
>> libraries like pygame should provide a good starting point for the
>> simulation. As far as communication between the simulation and the agent
>> goes, I was thinking of having an HTTP server (using flask) to serialize
>> data and transfer it between the learning agent and the server. The agent
>> can poll the server at time intervals for new content and process it.
>> Having used the libraries above for numerous personal projects, I am quite
>> comfortable with them.
>> Faster communication could be made possible using persistent sockets instead
>> of polling an HTTP endpoint.
>> Purpose-built software like ROS seems to be a tangible alternative too, but,
>> as per your suggestion in the other thread that most of the available
>> compute should be utilised for the learning algorithm, a pure Python
>> implementation should not only be performant but also easily maintainable
>> and installable.
>> 
>> I would love to hear your thoughts on the ideas above. Thank you for your
>> time.
>> 
>> Best,
>> Sayan
>> 
>> 
>> Sayan Goswami
>> Undergraduate Student (UG-II)
>> Dept. of Electronics & Telecomm. Engineering
>> Jadavpur University
>> 
>> On Sun, Feb 25, 2018 at 7:09 PM, Marcus Edel <marcus.e...@fu-berlin.de> wrote:
>> Hello Sayan,
>> 
>> welcome and thanks for getting in touch.
>> 
>>> My research interests lie in the field of Deep Learning (specifically Deep
>>> Reinforcement Learning). I have been exploring deep neural network
>>> architectures for the past year and a half, and I am familiar with deep
>>> learning. I have completed numerous massive open online courses on the
>>> field with close to 100% grades in each. I am presently a community Course
>>> Mentor for deeplearning.ai's course on Convolutional Neural Networks. I am
>>> also quite comfortable using Processing (having used it to explore
>>> generative art).
>> 
>> Using Processing is one idea; there are various possibilities, each with its
>> own strengths. So if you would like to implement the simulator in plain
>> Python using OpenGL, that is also an option we should take a closer look at.
>> 
>>> I have already compiled the codebase from source and have run the tests.
>>> However, as is mentioned in the proposal documentation, I am unable to find
>>> issues pertaining to this specific project. It would be great if a mentor 
>>> could
>>> guide me in the right direction.
>> 
>> We are trying to publish more project-related issues over the next few days.
>> 
>> I hope this was helpful; let me know if I should clarify anything.
>> 
>> Thanks,
>> Marcus
>> 
>>> On 25. Feb 2018, at 05:57, Sayan Goswami <goswami.saya...@gmail.com> wrote:
>>> 
>>> Hi,
>>> 
>>> I am Sayan Goswami, a sophomore undergraduate student at Jadavpur
>>> University, India. I would like to work on the "Robotic Arm" project as
>>> part of GSoC '18.
>>> 
>>> My research interests lie in the field of Deep Learning (specifically Deep
>>> Reinforcement Learning). I have been exploring deep neural network
>>> architectures for the past year and a half, and I am familiar with deep
>>> learning. I have completed numerous massive open online courses on the
>>> field with close to 100% grades in each. I am presently a community Course
>>> Mentor for deeplearning.ai's course on Convolutional Neural Networks. I am
>>> also quite comfortable using Processing (having used it to explore
>>> generative art).
>>> 
>>> I am presently going through the mlpack methods API and getting familiar 
>>> with the codebase. 
>>> 
>>> I have already compiled the codebase from source and have run the tests. 
>>> However, as is mentioned in the proposal documentation, I am unable to find 
>>> issues pertaining to this specific project. It would be great if a mentor 
>>> could guide me in the right direction.
>>> 
>>> I have attached my resume for your kind perusal.
>>> 
>>> Looking forward to hearing back soon. Thank you for your time.
>>> 
>>> Resume: https://goo.gl/6jLhru 
>>> <https://drive.google.com/open?id=0BwAJkZuRJT6tWmN3NHdSM0t6TGs>
>>> 
>>> 
>>> Sincerely,
>>> Sayan Goswami
>>> 
>> 
> 
> 

_______________________________________________
mlpack mailing list
mlpack@lists.mlpack.org
http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack
