Dear Dr. Pasechnik,

Thank you for the prompt reply. Yes, I read the paper and understood how 
Diophantine approximation is used to bound the bit sizes of the 
intermediate iterates by a certain polynomial in the input size. I also 
agree that, instead of implementing Diophantine approximation, we can 
simply bound the intermediate iterates by rounding them to an appropriate 
precision. I will start working on the implementation in earnest after 
13 March, as I have exams until then; I hope that won't be a problem.
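
To illustrate why rounding controls the bit growth, here is a minimal 
Python sketch. The iteration map and the precision value are made up for 
the example (they are not the actual IPM iterates of the paper); it only 
shows that exact rational arithmetic blows up while rounding to a fixed 
number of fractional bits keeps the representation bounded:

```python
from fractions import Fraction

def round_to_prec(x, prec):
    """Round a rational to the nearest multiple of 2**-prec, so its
    denominator never exceeds prec (+1) bits."""
    return Fraction(round(x * 2**prec), 2**prec)

# Toy rational iteration x -> x^2 + 1/7 (illustrative only).
exact = rounded = Fraction(1, 3)
for _ in range(12):
    exact = exact * exact + Fraction(1, 7)
    rounded = round_to_prec(rounded * rounded + Fraction(1, 7), 64)

# Exact arithmetic roughly doubles the denominator's bit size each
# step, while rounding keeps it bounded by the chosen precision.
print(exact.denominator.bit_length())    # thousands of bits
print(rounded.denominator.bit_length())  # at most 65 bits
```

The same trade-off (precision versus speed/memory) would apply when the 
iterates are the matrices of the interior point method.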

Sincerely,
Yash

On Monday, March 5, 2018 at 1:44:42 PM UTC+1, Dima Pasechnik wrote:
>
> Dear Yash,
>
> On Monday, March 5, 2018 at 11:46:02 AM UTC, Yash Patel wrote:
>>
>> Dear Dr. Pasechnik and Dr. Forets,
>>
>> I am Yash Patel, pursuing an M.Sc. in CS at the University of Bonn; my 
>> primary areas of focus are approximation algorithms, combinatorial 
>> optimization and complexity theory. I am currently teaching myself SOS 
>> optimization from the course material 
>> <http://www.sumofsquares.org/public/index.html> available online, as I 
>> would like to write my master's thesis on this topic. I am fascinated by 
>> its ability to capture a modern algorithmic viewpoint that is amenable 
>> to computation and deeply rooted in semidefinite programming. 
>>
>> I would like to extend SageMath's semidefinite programming (SDP) module 
>> by making it my GSoC project. Polynomial optimization is NP-hard, and 
>> several researchers have proposed approximation procedures based on 
>> semidefinite relaxations. Such relaxations can be constructed by 
>> representing non-negative polynomials as SOS polynomials, together with 
>> the dual theory of moments (moment matrices). Implementing these 
>> representations would therefore be a valuable addition to the SDP 
>> module. 
>>
>> Secondly, it would be great to interface more SDP solvers (such as 
>> SDPA, SCS, Ncpol2sdpa, SDPT3, Mosek, Gurobi, Xpress, SCIP-SDP) to 
>> SageMath, as this would make the module more robust and let the 
>> SageMath community choose a solver according to their specific needs. 
>> Although many SDP solvers use preconditioning to improve numerical 
>> stability, large input coefficients can still cause inaccuracy even 
>> after preconditioning. This is known as ill-conditioning, and it makes 
>> it difficult to obtain even a suboptimal solution with off-the-shelf 
>> solvers. For example, Roux et al. 
>> <https://hal.archives-ouvertes.fr/hal-01358703/document> show that 
>> SDPT3 is more robust to ill-conditioning than SeDuMi on their example. 
>> Hence, it would be worthwhile to implement a prototype of the algorithm 
>> of de Klerk et al. <https://arxiv.org/pdf/1507.03549.pdf>, a short-step 
>> primal interior point method (IPM) that incorporates Diophantine 
>> approximation at each iteration to bound the bit sizes of the 
>> intermediate iterates; this methodology could potentially enhance the 
>> solver's performance. It would also be interesting to see how the 
>> solver performs with more practical variants of IPM. For example, one 
>> could implement the inner-iteration preconditioned Krylov subspace 
>> method for IPM proposed by Cui et al. 
>> <https://arxiv.org/pdf/1604.07491.pdf> to overcome the 
>> ill-conditioning problem. Currently their code is tuned solely for LP, 
>> but further investigation could extend their method to quadratic 
>> programming and SDP.
>>
>> I look forward to hearing from either of you for further guidance on 
>> what I should be doing besides getting familiar with the codebase 
>> (especially the SDP module) and git. 
>>
>
> For instance, you might indeed start by implementing the de 
> Klerk-Vallentin algorithm as a backend to Sage's SDP frontend.
> Naturally, you would need arbitrary precision floating point 
> numbers---available in Sage, which provides interfaces to MPFR (
> http://www.mpfr.org/) and to Arb (http://arblib.org/). I am not sure 
> which would be more suitable (Arb is much newer and in active 
> development; MPFR is a mature project). Also, instead of Diophantine 
> approximation you could just round to an appropriate precision (I hope 
> you understand why the rounding is needed: arithmetic operations 
> naturally produce increasingly long numbers, and one needs to control 
> this growth for a suitable compromise between precision and 
> speed/memory consumption).
>
> It seems that such a project (first a prototype in Python, then use Cython 
> for speeding up) would naturally be one GSoC project.
>
> HTH
> Dima
>
>
>> Sincerely,
>> Yash  
>>
>> *The best theory is inspired by practice. The best practice is inspired 
>> by theory. -Donald Knuth*
>>
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"sage-gsoc" group.