Hello Andrei,

Thanks for the update. I don't have anything to add; it sounds totally reasonable
to me. As an overall timeline, this could definitely work.

Thanks,
Marcus

> On 19. Mar 2020, at 13:07, Andrei M <[email protected]> wrote:
> 
> Hello again,
> 
> Thank you for the feedback.
> 
> After a longer thought process, I decided I would like to implement the 
> DeepLabv3+ model for semantic segmentation as part of the ANN models project. 
> This implies several phases of implementation, and this is the split I propose:
> 
> Step 1:
> Implement a dataloader for a semantic segmentation dataset: This will be 
> either Pascal VOC 2012 or ADE20K.
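> 
> As a rough sketch of what I mean (PyTorch-style, using the standard VOC 
> directory layout; the final mlpack version would load the data into Armadillo 
> matrices instead):
> 
>   import os
>   import numpy as np
>   from PIL import Image
>   from torch.utils.data import Dataset
> 
>   class VOCSegmentation(Dataset):
>       """Minimal Pascal VOC 2012 loader that returns (image, mask) pairs."""
>       def __init__(self, root, split="train"):
>           # 'root' is assumed to follow the standard VOC layout: JPEGImages/,
>           # SegmentationClass/ and ImageSets/Segmentation/<split>.txt.
>           list_file = os.path.join(root, "ImageSets/Segmentation", split + ".txt")
>           with open(list_file) as f:
>               self.ids = [line.strip() for line in f]
>           self.root = root
> 
>       def __len__(self):
>           return len(self.ids)
> 
>       def __getitem__(self, idx):
>           name = self.ids[idx]
>           image = Image.open(os.path.join(self.root, "JPEGImages", name + ".jpg"))
>           mask = Image.open(os.path.join(self.root, "SegmentationClass", name + ".png"))
>           # Image as a float CHW array in [0, 1], mask as integer class labels.
>           image = np.asarray(image, dtype=np.float32).transpose(2, 0, 1) / 255.0
>           mask = np.asarray(mask, dtype=np.int64)
>           return image, mask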
> 
> Step 2:
> Implement an Xception model backbone with atrous depth-wise separable 
> convolutions. According to the original paper, this is the backbone that 
> yields the best performance, outperforming the ResNet-101 backbone.
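> 
> The core building block here is the atrous (dilated) depthwise separable 
> convolution; a minimal sketch of the idea in PyTorch terms (not the final 
> mlpack code, channel counts are placeholders):
> 
>   import torch.nn as nn
> 
>   class AtrousSeparableConv(nn.Module):
>       """Dilated depthwise 3x3 convolution followed by a 1x1 pointwise one."""
>       def __init__(self, in_ch, out_ch, rate):
>           super().__init__()
>           self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=rate,
>                                      dilation=rate, groups=in_ch, bias=False)
>           self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
>           self.bn = nn.BatchNorm2d(out_ch)
>           self.relu = nn.ReLU(inplace=True)
> 
>       def forward(self, x):
>           return self.relu(self.bn(self.pointwise(self.depthwise(x))))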
> 
> Step 3:
> a. Implement the encoder architecture of the model, which is a DeepLabv3 that 
> uses the previously mentioned Xception backbone. This task also includes 
> building the atrous spatial pyramid pooling (ASPP) module (a rough sketch of 
> the ASPP module follows below).
> b. Implement the decoder architecture, a simple convolutional module that 
> refines the segmentation results of the encoder.
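> 
> For reference, the ASPP module is essentially parallel atrous convolutions at 
> different rates plus pooled image-level features, concatenated and projected 
> by a 1x1 convolution; a rough PyTorch-style sketch (rates 6/12/18 as in the 
> paper for output stride 16, batch norm omitted for brevity):
> 
>   import torch
>   import torch.nn as nn
>   import torch.nn.functional as F
> 
>   class ASPP(nn.Module):
>       """Atrous Spatial Pyramid Pooling: a 1x1 conv, three atrous 3x3 convs
>       and global-average-pooled features, concatenated and projected."""
>       def __init__(self, in_ch, out_ch=256, rates=(6, 12, 18)):
>           super().__init__()
>           self.branches = nn.ModuleList(
>               [nn.Conv2d(in_ch, out_ch, 1, bias=False)] +
>               [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
>                for r in rates])
>           self.image_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1),
>                                           nn.Conv2d(in_ch, out_ch, 1, bias=False))
>           self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False)
> 
>       def forward(self, x):
>           h, w = x.shape[2:]
>           feats = [branch(x) for branch in self.branches]
>           pooled = F.interpolate(self.image_pool(x), size=(h, w),
>                                  mode="bilinear", align_corners=False)
>           return self.project(torch.cat(feats + [pooled], dim=1))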
> 
> Step 4:
> Train and test the implemented model on the selected dataset, then compare 
> the results with those reported in the paper. Visualize the results and 
> create relevant plots and statistics.
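> 
> For the comparison with the paper, the main metric would be mean IoU; a small 
> sketch of how I would compute it from predicted and ground-truth label maps 
> (NumPy, with 255 assumed to be the VOC ignore label):
> 
>   import numpy as np
> 
>   def mean_iou(pred, target, num_classes, ignore_index=255):
>       """Mean intersection-over-union from flattened integer label maps."""
>       valid = target != ignore_index
>       pred, target = pred[valid], target[valid]
>       # Confusion matrix: rows are ground-truth classes, columns are predictions.
>       conf = np.bincount(target * num_classes + pred,
>                          minlength=num_classes ** 2).reshape(num_classes, num_classes)
>       intersection = np.diag(conf)
>       union = conf.sum(axis=0) + conf.sum(axis=1) - intersection
>       iou = intersection / np.maximum(union, 1)
>       # Average only over classes that actually appear.
>       return iou[union > 0].mean()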
> 
> That would be a shorter version of my proposal.
> 
> Best,
> Andrei
> 
> On Wed, 11 Mar 2020 at 23:35, Marcus Edel <[email protected]> wrote:
> Hello Andrei,
> 
> 1. RL: I've taken a more in-depth look at the reinforcement learning module. 
> The DQN, Double DQN, and prioritized replay are already implemented, so as 
> part of Rainbow the remaining components are dueling networks, multi-step 
> learning, distributional RL, and noisy nets. Therefore, I suggest finishing 
> the implementation of the Rainbow DQN and then an implementation of the ACKTR 
> algorithm.
> 
> Sounds totally reasonable to me.
> 
> 2. Applications of ANN: Implementing a U-Net or DeepLabv3 architecture for
> semantic segmentation.
> 
> I like both models; it's also good that you mentioned you'd like to focus on 
> either U-Net or DeepLabv3.
> 
> I would like to know if the ideas above would be enough for a summer project
> for each of the two sections.
> 
> Definitely; a big part of each project is documentation and testing, and
> writing good tests takes time.
> 
> Let me know if I should clarify anything further.
> 
> Thanks,
> Marcus
> 
>> On 10. Mar 2020, at 15:50, Andrei M <[email protected]> wrote:
>> 
>> Hello,
>> 
>> Thank you for the response.
>> 
>> I've been thinking more about the ideas for GSoC and I've narrowed them down 
>> to the top two I'd like to work on: reinforcement learning or applications of 
>> ANN. (I'll only select one for the final proposal.)
>> 
>> 1. RL: I've taken a more in-depth look at the reinforcement learning module. 
>> The DQN, Double DQN, and prioritized replay are already implemented, so as 
>> part of Rainbow the remaining components are dueling networks, multi-step 
>> learning, distributional RL, and noisy nets. Therefore, I suggest finishing 
>> the implementation of the Rainbow DQN and then an implementation of the 
>> ACKTR algorithm.
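>> 
>> To make the dueling networks part concrete: the head splits into value and 
>> advantage streams and combines them as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a). 
>> A small PyTorch-style sketch of the idea (hidden sizes are just placeholders):
>> 
>>   import torch.nn as nn
>> 
>>   class DuelingHead(nn.Module):
>>       """Dueling DQN head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
>>       def __init__(self, feature_dim, num_actions):
>>           super().__init__()
>>           self.value = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU(),
>>                                      nn.Linear(128, 1))
>>           self.advantage = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU(),
>>                                          nn.Linear(128, num_actions))
>> 
>>       def forward(self, features):
>>           v = self.value(features)
>>           a = self.advantage(features)
>>           return v + a - a.mean(dim=1, keepdim=True)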
>> 
>> 2. Applications of ANN: Implementing a U-Net or DeepLabv3 architecture for 
>> semantic segmentation.
>> 
>> I would like to know if the ideas above would be enough for a summer 
>> project for each of the two sections.
>> 
>> Thank you,
>> Andrei
>> 
>> On Mon, Mar 9, 2020 at 1:22 AM Marcus Edel <[email protected]> wrote:
>> Hello Andrei,
>> 
>> welcome and thanks for your interest. Looks like you have already 
>> brainstormed about the ideas, that's great. I think each method you proposed 
>> makes sense. There is already a PR open for PPO 
>> (https://github.com/mlpack/mlpack/pull/1912), which is very close to being 
>> merged, so I think you can remove that from the list.
>> 
>> Also, I think both ideas could be combined, for example if you add a new 
>> layer to the codebase. That said, we don't have project priorities, so you 
>> are free to go with anything you find interesting.
>> 
>> Let me know if I should clarify anything.
>> 
>> Thanks,
>> Marcus
>> 
>>> On 6. Mar 2020, at 15:47, Andrei M <[email protected]> wrote:
>>> 
>>> Hello,
>>> 
>>> I'm a second-year master's student in the field of artificial 
>>> intelligence. I've been thinking about applying to Google Summer of Code 
>>> this summer, and mlpack is the project I want to work on.
>>> 
>>> I've spent the last few weeks getting familiar with the codebase and 
>>> writing some code for a new feature (a loss function that wasn't 
>>> implemented). There are several ideas on the list that piqued my interest, 
>>> and I consider them equally interesting: reinforcement learning, essential 
>>> deep learning modules, application of ANN algorithms implemented in mlpack, 
>>> and improvisation and implementation of ANN modules.
>>> 
>>> I think these ideas would fit me well, since I've been implementing 
>>> neural networks such as DQN, Double DQN, dueling networks, GANs, and 
>>> several others in PyTorch, and I've also kept in touch with state-of-the-art 
>>> research in various fields, like the ones mentioned above. Therefore, I 
>>> think I would equally enjoy working on the reinforcement learning path and 
>>> on bringing in features and modules that are present in other libraries, 
>>> like PyTorch.
>>> 
>>> Below are some summaries of the ideas I'm thinking about:
>>> 
>>> Reinforcement learning: Here I would like to work on Rainbow and Proximal 
>>> Policy Optimization, train and test them on different environments, and 
>>> empirically show their advantages and disadvantages (for example, how 
>>> Double DQN can reduce the overestimation problem that appears in DQN; a 
>>> small sketch of the two targets follows after this list).
>>> 
>>> Application of ANN algorithms implemented in mlpack: For this idea, two 
>>> options come to mind: the first is implementing a sequence-to-sequence 
>>> model for language translation, and the other consists of implementing 
>>> U-Net-like architectures, which are usually employed for segmentation 
>>> tasks or depth prediction.
>>> 
>>> Essential deep learning modules: The plan I propose for this idea is to 
>>> implement some of the GAN architectures that aren't in mlpack yet, starting 
>>> from the earlier types, like conditional GANs and InfoGAN, then advancing 
>>> to more modern ones, trying to reproduce and visualize the results shown 
>>> in the papers they were presented in.
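>>> 
>>> To illustrate the Double DQN point from the reinforcement learning idea 
>>> above, the only change versus DQN is that the online network selects the 
>>> next action while the target network evaluates it (PyTorch-style sketch, 
>>> all names are placeholders):
>>> 
>>>   def dqn_target(reward, next_state, done, target_net, gamma=0.99):
>>>       # Standard DQN: the target network both selects and evaluates the
>>>       # greedy action, which tends to overestimate Q-values.
>>>       next_q = target_net(next_state).max(dim=1).values
>>>       return reward + gamma * (1 - done) * next_q
>>> 
>>>   def double_dqn_target(reward, next_state, done, online_net, target_net,
>>>                         gamma=0.99):
>>>       # Double DQN: the online network selects the action, the target
>>>       # network evaluates it, which reduces the overestimation bias.
>>>       best_action = online_net(next_state).argmax(dim=1, keepdim=True)
>>>       next_q = target_net(next_state).gather(1, best_action).squeeze(1)
>>>       return reward + gamma * (1 - done) * next_q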
>>> 
>>> I would also like to know which features are high priority for mlpack and 
>>> whether you have any suggestions on what I should propose to match these 
>>> priorities.
>>> 
>>> Also, can more ideas be proposed in a single application?
>>> Any feedback and suggestions are appreciated.
>>> 
>>> Best,
>>> Andrei
>> 
> 

_______________________________________________
mlpack mailing list
[email protected]
http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack
