Re: [sympy] Proposal draft - Probabilities project

2020-03-30 Thread Basilis Kalos


On Sunday, March 29, 2020 at 9:22:19 PM UTC+3, czgdp1807 wrote:
Hi and once again thank you very much for your comments!

> Thanks for pointing out. This is because the Markov chain in the example is 
> reducible and therefore it doesn't have a unique solution to sP = s. Have 
> you described algorithms (a rough idea or any paper/document where such 
> algorithms are given) in your proposal to find communication classes of 
> reducible Markov chains? Would it be possible to use Tarjan's algorithm or 
> Kosaraju's algorithm for finding strongly connected components using the 
> transition matrix (acting as the adjacency matrix)? 
>
  I had an algorithm described in my application to find communication 
classes of reducible Markov chains, but using Tarjan's or Kosaraju's 
algorithm seems a much better idea. For Markov chains in particular, 
Kosaraju's algorithm looks like a great option, as it is tailored to 
directed graphs. We could potentially also use it to derive structural 
properties of the chain other than reducibility, such as periodicity and 
recurrence. I was also thinking that, by viewing the chain as a linked 
list, we could use Floyd's tortoise-and-hare algorithm to find cycles in 
the list and derive the periodicity that way. Does that sound like an 
applicable idea to you?
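
To make the idea concrete, here is a rough sketch of my own (not existing 
SymPy code) of finding the communication classes with Kosaraju's two-pass 
algorithm, run on the directed graph induced by the transition matrix 
(an edge i -> j whenever T[i, j] is nonzero):

    # Sketch only: assumes a finite chain with explicit (non-symbolic)
    # transition probabilities, so that the test T[i, j] != 0 is decidable.
    from sympy import Matrix, Rational

    def communication_classes(T):
        """Strongly connected components (communication classes) of the
        chain with transition matrix T, as sorted lists of state indices."""
        n = T.shape[0]
        succ = [[j for j in range(n) if T[i, j] != 0] for i in range(n)]
        pred = [[j for j in range(n) if T[j, i] != 0] for i in range(n)]
        visited, order = [False] * n, []

        def dfs(i, adj, out):
            # Iterative DFS appending nodes in order of finishing time.
            visited[i] = True
            stack = [(i, iter(adj[i]))]
            while stack:
                node, it = stack[-1]
                for nxt in it:
                    if not visited[nxt]:
                        visited[nxt] = True
                        stack.append((nxt, iter(adj[nxt])))
                        break
                else:
                    stack.pop()
                    out.append(node)

        for i in range(n):            # pass 1: finishing times on the graph
            if not visited[i]:
                dfs(i, succ, order)
        visited = [False] * n
        classes = []
        for i in reversed(order):     # pass 2: DFS on the reversed graph
            if not visited[i]:
                component = []
                dfs(i, pred, component)
                classes.append(sorted(component))
        return classes

    T = Matrix([[Rational(1, 2), Rational(1, 2), 0, 0],
                [Rational(1, 2), Rational(1, 2), 0, 0],
                [0, 0, Rational(1, 4), Rational(3, 4)],
                [0, 0, Rational(1, 5), Rational(4, 5)]])
    print(communication_classes(T))   # the two classes {0, 1} and {2, 3}

Periodicity could then be derived per class, e.g. as the gcd of the lengths 
of the cycles found inside it, which is where the tortoise-and-hare idea 
could come in.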

> There are three phases for GSoC and a community bonding period at the 
> start. Please plan work such that most of the coding part lies in those 
> three phases. Currently, your proposal has a plan for four phases; it is 
> not clear which phase corresponds to community bonding. Also, more code 
> (not necessarily clean and fully working) can be added in the 
> implementation parts.
>
Thank you very much for your help! I have updated my proposal. I hope it 
looks better now.

Thank you!

All the best


On Sun, Mar 29, 2020 at 10:52 PM Basilis Kalos wrote:
>
>> Hi, thank you very much for your comments!
>> On Friday, March 27, 2020 at 7:08:38 PM UTC+2, czgdp1807 wrote:
>>>
>>> Can you please provide some examples from other CAS(Wolfram, etc.) for 
>>> the features which you plan to implement? 
>>>
>> An example from Wolfram, for the features I want to implement, is here 
>> 
>>
>>  
>>
>>> > If we tried to find the limiting distribution of that chain by 
>>> calculating the stable distribution, it may not exist. By dividing the 
>>> chain into classes T and C and calculating the stable distributions of 
>>> these classes, we could find the limiting distribution.
>>>
>>> It would be better to understand if some examples are given where the 
>>> current code fails.
>>>  
>>>
>>  An example would be the following: let T = Matrix([[1/2, 1/2, 0, 0], [1/2, 
>> 1/2, 0, 0], [0, 0, 1/4, 3/4], [0, 0, 1/5, 4/5]]) be the transition matrix 
>> of a chain on the state space (1, 2, 3, 4). In that case a unique stable 
>> distribution doesn't exist, which means that neither does the limiting 
>> distribution (for example, for both p1 = [0.5, 0.5, 0, 0] and p2 = [0, 0, 
>> 0.5, 0.5] it is true that p1*T = p1 and p2*T = p2). The current code, 
>> when asked for the limiting distribution, will return [0.5, 0.5, 0, 0], 
>> which is incorrect (for example, if the chain starts from state 3 it can 
>> never reach state 1 or state 2). 
>>
>>>
>>> > The third functionality can be very useful to calculate P(Eq(X(j), 2), 
>>> Eq(X(1), 3)) for every j, as this is equal to the sum of the probabilities 
>>> of all walks of length j-1 from state 3 to state 2. Moreover, it could be 
>>> used by the first two functionalities. For example, to find all states j 
>>> that belong to the same class as i, we will have to find walks from state i 
>>> to state j and from state j to state i.
>>>
>>> Any examples where the current code fails or is inefficient? Can you 
>>> please provide a rough time complexity analysis of the algorithm you plan 
>>> to implement and compare it with the current code?
>>>
>> I am sorry, I misread your previous comments. The current algorithm 
>> is perfectly adequate for calculating P(Eq(X(j), 2), Eq(X(1), 3)). I 
>> thought that you were asking for P(Eq(X(j), 2), Eq(X(1), i)) for any i, 
>> meaning the probability of moving from state i to state 2 in j steps, 
>> without knowing for sure that the chain will start from state i. For 
>> something like that we will have to use the initial distribution, which is 
>> not yet implemented. Sorry again for my mistake.
>>
>>  I have also updated my proposal and posted it to the mailing list. I 
>> followed your suggestions and removed the graph theory parts. I am also 
>> sending you a link here. I would really appreciate any comments and suggestions.
>>
>>
>> https://docs.google.com/document/d/1yn6LcXUE70FnSpQ7KQ3KbAwzuzLVtlSHF1sJO0E54tg/edit?usp=sharing
>>
>> Thank you.
>>
>>
>>> On Thu, Mar 26, 2020 at 5:19 PM Basilis Kalos  
>>> wrote:
>>>
 Hi all, thank

Re: [sympy] Proposal draft - Probabilities project

2020-03-29 Thread Gagandeep Singh (B17CS021)
>  An example would be the following: let T = Matrix([[1/2, 1/2, 0, 0], [1/2,
> 1/2, 0, 0], [0, 0, 1/4, 3/4], [0, 0, 1/5, 4/5]]) be the transition matrix
> of a chain on the state space (1, 2, 3, 4). In that case a unique stable
> distribution doesn't exist, which means that neither does the limiting
> distribution (for example, for both p1 = [0.5, 0.5, 0, 0] and p2 = [0, 0,
> 0.5, 0.5] it is true that p1*T = p1 and p2*T = p2). The current code,
> when asked for the limiting distribution, will return [0.5, 0.5, 0, 0],
> which is incorrect (for example, if the chain starts from state 3 it can
> never reach state 1 or state 2).

Thanks for pointing out. This is because the Markov chain in the example is
reducible and therefore it doesn't have a unique solution to sP = s. Have
you described algorithms (a rough idea or any paper/document where such
algorithms are given) in your proposal to find communication classes of
reducible Markov chains? Would it be possible to use Tarjan's algorithm or
Kosaraju's algorithm for finding strongly connected components using the
transition matrix (acting as the adjacency matrix)?

> For something like that we will have to use the initial distribution
which is not yet implemented. Sorry again for my mistake

What all is needed to implement it? What should the result of
`P(Eq(X(j), 2), Eq(X(1), i))` look like? It would be better to discuss this
on the issue tracker.
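
For concreteness, here is one possible shape of the answer, as a quick
sketch only (the initial distribution `pi0` below is an assumption, not
something the current API provides):

    # Sketch: j-step probabilities from the transition matrix via matrix
    # powers, optionally weighted by an assumed initial distribution pi0.
    from sympy import Matrix, Rational

    T = Matrix([[Rational(1, 2), Rational(1, 2), 0, 0],
                [Rational(1, 2), Rational(1, 2), 0, 0],
                [0, 0, Rational(1, 4), Rational(3, 4)],
                [0, 0, Rational(1, 5), Rational(4, 5)]])

    pi0 = Matrix([[Rational(1, 4)] * 4])    # assumed distribution of X(1)
    j = 4
    Tj = T**(j - 1)                         # (j-1)-step transition matrix

    p_cond = [Tj[i, 2] for i in range(4)]   # P(X(j) = 2 | X(1) = i), each i
    p_marg = (pi0 * Tj)[2]                  # P(X(j) = 2) once pi0 is fixed
    print(p_cond, p_marg)

So for a symbolic i the result could stay conditional (an entry of
T**(j-1)), while a known initial distribution would allow returning the
weighted, unconditional probability instead.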

There are three phases for GSoC and a community bonding period at the
start. Please plan work such that most of the coding part lies in those
three phases. Currently, your proposal has a plan for four phases; it is
not clear which phase corresponds to community bonding. Also, more code
(not necessarily clean and fully working) can be added in the
implementation parts.

On Sun, Mar 29, 2020 at 10:52 PM Basilis Kalos 
wrote:

> Hi, thank you very much for your comments!
> On Friday, March 27, 2020 at 7:08:38 PM UTC+2, czgdp1807 wrote:
>>
>> Can you please provide some examples from other CAS(Wolfram, etc.) for
>> the features which you plan to implement?
>>
> An example from Wolfram, for the features I want to implement, is here
> 
>
>
>
>> > If we tried to find the limiting distribution of that chain by
>> calculating the stable distribution, it may not exist. By dividing the
>> chain into classes T and C and calculating the stable distributions of
>> these classes, we could find the limiting distribution.
>>
>> It would be better to understand if some examples are given where the
>> current code fails.
>>
>>
>  An example would be the following: let T = Matrix([[1/2, 1/2, 0, 0], [1/2,
> 1/2, 0, 0], [0, 0, 1/4, 3/4], [0, 0, 1/5, 4/5]]) be the transition matrix
> of a chain on the state space (1, 2, 3, 4). In that case a unique stable
> distribution doesn't exist, which means that neither does the limiting
> distribution (for example, for both p1 = [0.5, 0.5, 0, 0] and p2 = [0, 0,
> 0.5, 0.5] it is true that p1*T = p1 and p2*T = p2). The current code,
> when asked for the limiting distribution, will return [0.5, 0.5, 0, 0],
> which is incorrect (for example, if the chain starts from state 3 it can
> never reach state 1 or state 2).
>
>>
>> > The third functionality can be very useful to calculate P(Eq(X(j), 2),
>> Eq(X(1), 3)) for every j, as this is equal to the sum of the probabilities
>> of all walks of length j-1 from state 3 to state 2. Moreover, it could be
>> used by the first two functionalities. For example, to find all states j
>> that belong to the same class as i, we will have to find walks from state i
>> to state j and from state j to state i.
>>
>> Any examples where the current code fails or is inefficient? Can you
>> please provide a rough time complexity analysis of the algorithm you plan
>> to implement and compare it with the current code?
>>
> I am sorry, I misread your previous comments. The current algorithm
> is perfectly adequate for calculating P(Eq(X(j), 2), Eq(X(1), 3)). I
> thought that you were asking for P(Eq(X(j), 2), Eq(X(1), i)) for any i,
> meaning the probability of moving from state i to state 2 in j steps,
> without knowing for sure that the chain will start from state i. For
> something like that we will have to use the initial distribution, which is
> not yet implemented. Sorry again for my mistake.
>
>  I have also updated my proposal and posted it to the mailing list. I
> followed your suggestions and removed the graph theory parts. I am also
> sending you a link here. I would really appreciate any comments and suggestions.
>
>
> https://docs.google.com/document/d/1yn6LcXUE70FnSpQ7KQ3KbAwzuzLVtlSHF1sJO0E54tg/edit?usp=sharing
>
> Thank you.
>
>
>> On Thu, Mar 26, 2020 at 5:19 PM Basilis Kalos 
>> wrote:
>>
>>> Hi all, thank you very much for your useful comments!
>>>
>>>
>>> The functionalities I was suggesting, are mainly intended to be
>>> implemented for the Markov chains modules in the stochastic processes
>

Re: [sympy] Proposal draft - Probabilities project

2020-03-29 Thread Basilis Kalos
Hi, thank you very much for your comments!
On Friday, March 27, 2020 at 7:08:38 PM UTC+2, czgdp1807 wrote:
>
> Can you please provide some examples from other CAS(Wolfram, etc.) for the 
> features which you plan to implement? 
>
An example from Wolfram, for the features I want to implement, is here 


 

> > If we tried to find the limiting distribution of that chain by 
> calculating the stable distribution, it may not exist. By dividing the 
> chain into classes T and C and calculating the stable distributions of 
> these classes, we could find the limiting distribution.
>
> It would be better to understand if some examples are given where the 
> current code fails.
>  
>
 An example would be the following: let T = Matrix([[1/2, 1/2, 0, 0], [1/2, 
1/2, 0, 0], [0, 0, 1/4, 3/4], [0, 0, 1/5, 4/5]]) be the transition matrix 
of a chain on the state space (1, 2, 3, 4). In that case a unique stable 
distribution doesn't exist, which means that neither does the limiting 
distribution (for example, for both p1 = [0.5, 0.5, 0, 0] and p2 = [0, 0, 
0.5, 0.5] it is true that p1*T = p1 and p2*T = p2). The current code, 
when asked for the limiting distribution, will return [0.5, 0.5, 0, 0], 
which is incorrect (for example, if the chain starts from state 3 it can 
never reach state 1 or state 2). 
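
To make the proposed fix concrete, here is a minimal sketch of my own (not 
existing SymPy code) of what "solve per closed class" would look like: 
restrict T to each class and solve s*T_c = s with sum(s) = 1 there, instead 
of solving on the whole reducible matrix.

    from sympy import Matrix, Rational, symbols, linsolve, eye

    T = Matrix([[Rational(1, 2), Rational(1, 2), 0, 0],
                [Rational(1, 2), Rational(1, 2), 0, 0],
                [0, 0, Rational(1, 4), Rational(3, 4)],
                [0, 0, Rational(1, 5), Rational(4, 5)]])

    def stationary_of_class(T, states):
        """Stationary distribution of the chain restricted to a closed class."""
        Tc = T[states, states]                   # sub-matrix on the class
        k = len(states)
        s = Matrix(1, k, list(symbols(f's0:{k}')))
        eqs = list(s * (Tc - eye(k))) + [sum(s) - 1]
        (sol,) = linsolve(eqs, list(s))
        return sol

    print(stationary_of_class(T, [0, 1]))   # (1/2, 1/2)
    print(stationary_of_class(T, [2, 3]))   # (4/19, 15/19)

The limiting behaviour could then be reported per class (here, states 1-2 
and states 3-4 in the 1-based numbering above) rather than as a single 
incorrect vector for the whole chain.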

>
> > The third functionality can be very useful to calculate P(Eq(X(j), 2), 
> Eq(X(1), 3)) for every j, as this is equal to the sum of the probabilities 
> of all walks of length j-1 from state 3 to state 2. Moreover, it could be 
> used by the first two functionalities. For example, to find all states j 
> that belong to the same class as i, we will have to find walks from state i 
> to state j and from state j to state i.
>
> Any examples where the current code fails or is inefficient? Can you 
> please provide a rough time complexity analysis of the algorithm you plan 
> to implement and compare it with the current code?
>
I am sorry, I misread your previous comments. The current algorithm 
is perfectly adequate for calculating P(Eq(X(j), 2), Eq(X(1), 3)). I 
thought that you were asking for P(Eq(X(j), 2), Eq(X(1), i)) for any i, 
meaning the probability of moving from state i to state 2 in j steps, 
without knowing for sure that the chain will start from state i. For 
something like that we will have to use the initial distribution, which is 
not yet implemented. Sorry again for my mistake.

 I have also updated my proposal and posted it to the mailing list. I 
followed your suggestions and removed the graph theory parts. I am also 
sending you a link here. I would really appreciate any comments and suggestions.

https://docs.google.com/document/d/1yn6LcXUE70FnSpQ7KQ3KbAwzuzLVtlSHF1sJO0E54tg/edit?usp=sharing

Thank you.


> On Thu, Mar 26, 2020 at 5:19 PM Basilis Kalos wrote:
>
>> Hi all, thank you very much for your useful comments!  
>>
>>
>> The functionalities I was suggesting are mainly intended to be 
>> implemented for the Markov chain module in the stochastic processes 
>> package, and they are:
>>
>>
>>    - Checking for periodicity and reducibility.
>>
>>    - In the case that the chain is reducible, finding the 
>>    communication classes of the chain and their types, as well as the 
>>    probabilities of moving from one class to another.
>>
>>    - Finding walks from one state to another.
>>
>>    For example, let’s say that we have a discrete Markov chain with a finite 
>> state space. We could use the first two functionalities described above to 
>> implement the central theorem of Markov chains. More specifically, if we 
>> implement the first one and find the chain to be both aperiodic and 
>> irreducible, then we can be sure that the stable distribution exists. The 
>> code as it is now implemented already has a function to find the 
>> stable distribution, but it does not always succeed. The advantage of the 
>> proposed solution is that it will offer a more concrete way to establish the 
>> existence of a stable distribution or, in the case that the code cannot 
>> derive a solution, to inform the user that it still exists.
>>
>>
>>   Furthermore, as the algorithm is now, it calculates only the stable 
>> distribution of the whole chain. However, it might be the case that the 
>> chain is not irreducible, and so a stable distribution for the whole chain 
>> cannot exist. By implementing the second functionality described above, we 
>> could divide the chain into communication classes. Continuing, we would be 
>> able to apply the central theorem to each one of these classes (as they are 
>> by definition irreducible) and find a stable distribution for each one of 
>> them.
>>
>>
>>   Let’s say for example that the chain consists of two communication 
>> classes: an open class T and a closed class C. If we tried to find the 
>> limiting distribution of that chain by ca

Re: [sympy] Proposal draft - Probabilities project

2020-03-27 Thread Gagandeep Singh (B17CS021)
Hi,

Can you please provide some examples from other CAS(Wolfram, etc.) for the
features which you plan to implement?

> If we tried to find the limiting distribution of that chain by
calculating the stable distribution, it may not exist. By dividing the
chain into classes T and C and calculating the stable distributions of
these classes, we could find the limiting distribution.

It would be better to understand if some examples are given where the
current code fails.


> The third functionality can be very useful to calculate P(Eq(X(j), 2),
Eq(X(1), 3)) for every j, as this is equal to the sum of the probabilities
of all walks of length j-1 from state 3 to state 2. Moreover, it could be
used by the first two functionalities. For example, to find all states j
that belong to the same class as i, we will have to find walks from state i
to state j and from state j to state i.

Any examples where the current code fails or is inefficient? Can you please
provide a rough time complexity analysis of the algorithm you plan to
implement and compare it with the current code?


On Thu, Mar 26, 2020 at 5:19 PM Basilis Kalos 
wrote:

> Hi all, thank you very much for your useful comments!
>
>
> The functionalities I was suggesting are mainly intended to be
> implemented for the Markov chain module in the stochastic processes
> package, and they are:
>
>
>    - Checking for periodicity and reducibility.
>
>    - In the case that the chain is reducible, finding the
>    communication classes of the chain and their types, as well as the
>    probabilities of moving from one class to another.
>
>    - Finding walks from one state to another.
>
>    For example, let’s say that we have a discrete Markov chain with a finite
> state space. We could use the first two functionalities described above to
> implement the central theorem of Markov chains. More specifically, if we
> implement the first one and find the chain to be both aperiodic and
> irreducible, then we can be sure that the stable distribution exists. The
> code as it is now implemented already has a function to find the
> stable distribution, but it does not always succeed. The advantage of the
> proposed solution is that it will offer a more concrete way to establish the
> existence of a stable distribution or, in the case that the code cannot
> derive a solution, to inform the user that it still exists.
>
>
>   Furthermore, as the algorithm is now, it calculates only the stable
> distribution of the whole chain. However, it might be the case that the
> chain is not irreducible, and so a stable distribution for the whole chain
> cannot exist. By implementing the second functionality described above, we
> could divide the chain into communication classes. Continuing, we would be
> able to apply the central theorem to each one of these classes (as they are
> by definition irreducible) and find a stable distribution for each one of
> them.
>
>
>   Let’s say for example that the chain consists of two communication
> classes: an open class T and a closed class C. If we tried to find the
> limiting distribution of that chain by calculating the stable distribution,
> it may not exist. By dividing the chain into classes T and C and calculating
> the stable distributions of these classes, we could find the limiting
> distribution.
>
>
>   The third functionality can be very useful to calculate P(Eq(X(j), 2),
> Eq(X(1), 3)) for every j, as this is equal to the sum of the probabilities
> of all walks of length j-1 from state 3 to state 2. Moreover, it could be
> used by the first two functionalities. For example, to find all states j
> that belong to the same class as i, we will have to find walks from state i
> to state j and from state j to state i.
>
>
>   Lastly, I’m not suggesting changing the API to define chains through
> graphs, but making a module that will implement these functionalities for a
> chain as already defined by the current API. Moreover, if we define a
> chain with the matrix K = [[a, 1-a], [1-b, b]], where a and b are symbols,
> then the networkX package cannot work with it. By implementing the
> algorithms described above, we would gain the ability to work with
> transition probabilities that are defined by symbols.
>
>
>
>    These functionalities do not have to live in a separate graph module;
> they can easily be part of the current stochastic processes package.
>
> Thank you very much for the comments and your suggestions!
> All the best.
>
> On Thursday, March 26, 2020 at 9:56:45 AM UTC+2, Gagandeep Singh
> (B17CS021) wrote:
>>
>> Currently, Markov chains use a transition matrix with T[i][j] representing
>> the probability of the process going to state j from state i. In fact,
>> that's what we expect the user to give as the essential information for
>> Markov chains, on the basis of which computations will be performed. Most of
>> the mathematical literature on Markov chains (which I have come across) uses

Re: [sympy] Proposal draft - Probabilities project

2020-03-26 Thread Basilis Kalos
 

Hi all, thank you very much for your useful comments!  


The functionalities I was suggesting are mainly intended to be implemented 
for the Markov chain module in the stochastic processes package, and they 
are:


   - Checking for periodicity and reducibility.

   - In the case that the chain is reducible, finding the 
   communication classes of the chain and their types, as well as the 
   probabilities of moving from one class to another.

   - Finding walks from one state to another.

   For example, let’s say that we have a discrete Markov chain with a finite 
state space. We could use the first two functionalities described above to 
implement the central theorem of Markov chains. More specifically, if we 
implement the first one and find the chain to be both aperiodic and 
irreducible, then we can be sure that the stable distribution exists. The 
code as it is now implemented already has a function to find the stable 
distribution, but it does not always succeed. The advantage of the proposed 
solution is that it will offer a more concrete way to establish the 
existence of a stable distribution or, in the case that the code cannot 
derive a solution, to inform the user that it still exists.


  Furthermore, as the algorithm is now, it calculates only the stable 
distribution of the whole chain. However, it might be the case that the 
chain is not irreducible, and so a stable distribution for the whole chain 
cannot exist. By implementing the second functionality described above, we 
could divide the chain into communication classes. Continuing, we would be 
able to apply the central theorem to each one of these classes (as they are 
by definition irreducible) and find a stable distribution for each one of 
them.


  Let’s say for example that the chain consists of two communication 
classes: an open class T and a closed class C. If we tried to find the 
limiting distribution of that chain by calculating the stable distribution, 
it may not exist. By dividing the chain into classes T and C and calculating 
the stable distributions of these classes, we could find the limiting 
distribution.


  The third functionality can be very useful to calculate P(Eq(X(j), 2), 
Eq(X(1), 3)) for every j, as this is equal to the sum of the probabilities 
of all walks of length j-1 from state 3 to state 2. Moreover, it could be 
used by the first two functionalities. For example, to find all states j 
that belong to the same class as i, we will have to find walks from state i 
to state j and from state j to state i.
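
As a quick sanity check of that identity (just an illustration; nothing 
here is existing SymPy API), summing the probabilities of all walks of 
length j-1 between two states reproduces the corresponding entry of the 
matrix power T**(j-1):

    from itertools import product
    from sympy import Matrix, Rational

    T = Matrix([[Rational(1, 2), Rational(1, 2), 0, 0],
                [Rational(1, 2), Rational(1, 2), 0, 0],
                [0, 0, Rational(1, 4), Rational(3, 4)],
                [0, 0, Rational(1, 5), Rational(4, 5)]])

    j, start, target = 4, 0, 1          # 0-based state indices
    total = 0
    for middle in product(range(4), repeat=j - 2):   # interior states
        walk = (start,) + middle + (target,)
        p = 1
        for u, v in zip(walk, walk[1:]):
            p *= T[u, v]                # probability of this particular walk
        total += p

    assert total == (T**(j - 1))[start, target]
    print(total)                        # 1/2 for this chain

The brute-force enumeration is exponential in j, which is why the matrix 
power (or, where it exists, the stable distribution) is the practical way 
to compute it.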


  Lastly, I’m not suggesting changing the API to define chains through 
graphs, but making a module that will implement these functionalities for a 
chain as already defined by the current API. Moreover, if we define a 
chain with the matrix K = [[a, 1-a], [1-b, b]], where a and b are symbols, 
then the networkX package cannot work with it. By implementing the 
algorithms described above, we would gain the ability to work with 
transition probabilities that are defined by symbols. 
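
As a small illustration of why the symbolic case matters (a sketch of my 
own, assuming 0 < a, b < 1 so the chain is irreducible and aperiodic), the 
stationary distribution of K can be obtained in closed form with SymPy:

    from sympy import Matrix, symbols, linsolve, eye

    a, b = symbols('a b', positive=True)
    K = Matrix([[a, 1 - a], [1 - b, b]])

    s0, s1 = symbols('s0 s1')
    eqs = list(Matrix([[s0, s1]]) * (K - eye(2))) + [s0 + s1 - 1]
    (sol,) = linsolve(eqs, [s0, s1])
    print(sol)   # mathematically ((1 - b)/(2 - a - b), (1 - a)/(2 - a - b))

A purely numeric graph library has no way to carry a and b through such a 
computation, which is the gap the proposed functions would fill.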

  

   These functionalities do not have to live in a separate graph module; 
they can easily be part of the current stochastic processes package.   

Thank you very much for the comments and your suggestions! 
All the best.

On Thursday, March 26, 2020 at 9:56:45 AM UTC+2, Gagandeep Singh 
(B17CS021) wrote:
>
> Currently, Markov chains use a transition matrix with T[i][j] representing 
> the probability of the process going to state j from state i. In fact, 
> that's what we expect the user to give as the essential information for 
> Markov chains, on the basis of which computations will be performed. Most of 
> the mathematical literature on Markov chains (which I have come across) uses 
> the transition matrix as the base for carrying out computations. So, what are 
> the compelling reasons to change the public API of Markov chains to accept 
> graphs and not transition matrices? Which graph algorithms will be useful 
> for which test cases in `sympy.stats.tests.test_stochastic_process.py`? 
> How will a graph representation (especially the adjacency list representation, 
> as the adjacency matrix and transition matrix are no different) help in 
> improving the current code? 
>
> Which stochastic processes will benefit from adjacency list representation 
> of graphs? Explanations will help in getting a deeper understanding.
>
> Any papers which can show how to handle the queries, using graphs, at 
> https://github.com/sympy/sympy/blob/7485df5e724ad6d9df0e0a8acaa5d861d9a35077/sympy/stats/tests/test_stochastic_process.py#L62-L71
>
> How will we go about continuous stochastic processes using graphs? Say, 
> finding `P(Eq(X(1.5), 2), Eq(X(1), 3))`?
>
>
> On Thu, Mar 26, 2020 at 6:28 AM Aaron Meurer wrote:
>
>> If you are implementing a graph thing that cannot be done by networkx
>> due to its symbolic nature, then you should go into more detail about
>> what this will look like and what the im

Re: [sympy] Proposal draft - Probabilities project

2020-03-26 Thread Gagandeep Singh (B17CS021)
Currently, Markov chains use a transition matrix with T[i][j] representing
the probability of the process going to state j from state i. In fact,
that's what we expect the user to give as the essential information for
Markov chains, on the basis of which computations will be performed. Most of
the mathematical literature on Markov chains (which I have come across) uses
the transition matrix as the base for carrying out computations. So, what are
the compelling reasons to change the public API of Markov chains to accept
graphs and not transition matrices? Which graph algorithms will be useful
for which test cases in `sympy.stats.tests.test_stochastic_process.py`? How
will a graph representation (especially the adjacency list representation,
as the adjacency matrix and transition matrix are no different) help in
improving the current code?

Which stochastic processes will benefit from adjacency list representation
of graphs? Explanations will help in getting a deeper understanding.

Any papers which can show how to handle the queries, using graphs, at
https://github.com/sympy/sympy/blob/7485df5e724ad6d9df0e0a8acaa5d861d9a35077/sympy/stats/tests/test_stochastic_process.py#L62-L71

How will we go about continuous stochastic processes using graphs? Say,
finding `P(Eq(X(1.5), 2), Eq(X(1), 3))`?


On Thu, Mar 26, 2020 at 6:28 AM Aaron Meurer  wrote:

> If you are implementing a graph thing that cannot be done by networkx
> due to its symbolic nature, then you should go into more detail about
> what this will look like and what the implementation will be. Note
> that if you simply mean edge or node labels, networkx lets you use any
> Python object as a label, including SymPy expressions.
>
> Also you should make it clear how the graph algorithms will enable
> computations in statistics, not just the other way around. I don't
> think we should have a graph of a Markov chain just for the sake of
> having it, but if some statistical computation requires
> constructing and computing something on a graph then it can make
> sense.
>
> Aaron Meurer
>
> On Wed, Mar 25, 2020 at 6:16 PM Basilis Kalos 
> wrote:
> >
> > Hi! Thank you very much for your comments!
> >
> >   Yes, I believe that the algorithms I was going to implement from
> graph theory are going to be very useful for the stochastic processes. The
> NetworkX package does not support symbolic expressions, and so
> implementation of some functions from graph theory, mainly for directed
> graphs, would be really useful (for example, in the case that the
> transition matrix of a Markov chain has symbolic expressions).
> >
> >   Moreover, the module I was suggesting will only have the functions
> that are going to be used by ‘’stats’’. If it’s still ok, I will take
> special care to make that really clear in my proposal. Any more suggestions
> will be really appreciated.
> >
> >
> > Thank you!
> >
> >
> >
> > On Wednesday, March 25, 2020 at 10:53:35 PM UTC+2, Aaron Meurer wrote:
> >>
> >> Is the graph theory part needed to implement the stats ideas? In
> >> general, graph theory is out of scope for SymPy (see
> >> https://github.com/sympy/sympy/wiki/GSoC-2020-Ideas#non-ideas). If
> >> some graph theory algorithms are required to do symbolics, that is
> >> fine, but you should make that clear in the proposal.
> >>
> >> Aaron Meurer
> >>
> >> On Wed, Mar 25, 2020 at 2:46 PM Basilis Kalos 
> wrote:
> >> >
> >> > Hello all!
> >> >
> >> >   I'm sending you the first draft of my application. Any comments and
> suggestions will be greatly appreciated.
> >> >
> >> > Thank you!
> >> >
> >> >
> https://docs.google.com/document/d/1J2N4DhXXlFj_YQ-isqIwvl1zzIg-OaC87RLIGgHTbgY/edit?usp=sharing
> >> >


-- 
With regards,
Gagandeep Singh
Github - https://github.com/czgdp1807/
Linkedin - https://www.linkedin.com/in/czgdp1807/



Re: [sympy] Proposal draft - Probabilities project

2020-03-25 Thread Aaron Meurer
If you are implementing a graph thing that cannot be done by networkx
due to its symbolic nature, then you should go into more detail about
what this will look like and what the implementation will be. Note
that if you simply mean edge or node labels, networkx lets you use any
Python object as a label, including SymPy expressions.
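
For instance, nothing special is needed just to store symbolic weights
(a minimal illustration with networkx and SymPy, not tied to any SymPy
stats API):

    import networkx as nx
    from sympy import symbols

    a, b = symbols('a b')
    G = nx.DiGraph()
    G.add_edge(0, 0, weight=a)          # transition probabilities as
    G.add_edge(0, 1, weight=1 - a)      # symbolic edge attributes
    G.add_edge(1, 1, weight=b)
    G.add_edge(1, 0, weight=1 - b)
    print(G[0][1]['weight'])            # 1 - a, a SymPy expression

The open question is which algorithms have to *compute* with those
symbolic weights, not the labelling itself.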

Also you should make it clear how the graph algorithms will enable
computations in statistics, not just the other way around. I don't
think we should have a graph of a Markov chain just for the sake of
having it, but if some statistical computation requires
constructing and computing something on a graph then it can make
sense.

Aaron Meurer

On Wed, Mar 25, 2020 at 6:16 PM Basilis Kalos  wrote:
>
> Hi! Thank you very much for your comments!
>
>   Yes, I believe that the algorithms I was going to implement from 
> graph theory are going to be very useful for the stochastic processes. The 
> NetworkX package does not support symbolic expressions, and so implementation 
> of some functions from graph theory, mainly for directed graphs, would be really 
> useful (for example, in the case that the transition matrix of a Markov chain 
> has symbolic expressions).
>
>   Moreover, the module I was suggesting will only have the functions that are 
> going to be used by ‘’stats’’. If it’s still ok, I will take special care to 
> make that really clear in my proposal. Any more suggestions will be really 
> appreciated.
>
>
> Thank you!
>
>
>
> On Wednesday, March 25, 2020 at 10:53:35 PM UTC+2, Aaron Meurer wrote:
>>
>> Is the graph theory part needed to implement the stats ideas? In
>> general, graph theory is out of scope for SymPy (see
>> https://github.com/sympy/sympy/wiki/GSoC-2020-Ideas#non-ideas). If
>> some graph theory algorithms are required to do symbolics, that is
>> fine, but you should make that clear in the proposal.
>>
>> Aaron Meurer
>>
>> On Wed, Mar 25, 2020 at 2:46 PM Basilis Kalos  wrote:
>> >
>> > Hello all!
>> >
>> >   I'm sending you the first draft of my application. Any comments and 
>> > suggestions will be greatly appreciated.
>> >
>> > Thank you!
>> >
>> > https://docs.google.com/document/d/1J2N4DhXXlFj_YQ-isqIwvl1zzIg-OaC87RLIGgHTbgY/edit?usp=sharing
>> >


Re: [sympy] Proposal draft - Probabilities project

2020-03-25 Thread Basilis Kalos
 

Hi! Thank you very much for your comments!

  Yes, I believe that the algorithms I was going to implement from 
graph theory are going to be very useful for the stochastic processes. The 
NetworkX package does not support symbolic expressions, and so 
implementation of some functions from graph theory, mainly for directed 
graphs, would be really useful (for example, in the case that the 
transition matrix of a Markov chain has symbolic expressions).

  Moreover, the module I was suggesting will only have the functions that 
are going to be used by ‘’stats’’. If it’s still ok, I will take special 
care to make that really clear in my proposal. Any more suggestions will be 
really appreciated.


Thank you!


On Wednesday, March 25, 2020 at 10:53:35 PM UTC+2, Aaron Meurer wrote:
>
> Is the graph theory part needed to implement the stats ideas? In 
> general, graph theory is out of scope for SymPy (see 
> https://github.com/sympy/sympy/wiki/GSoC-2020-Ideas#non-ideas). If 
> some graph theory algorithms are required to do symbolics, that is 
> fine, but you should make that clear in the proposal. 
>
> Aaron Meurer 
>
> On Wed, Mar 25, 2020 at 2:46 PM Basilis Kalos wrote: 
> > 
> > Hello all! 
> > 
> >   I'm sending you the first draft of my application. Any comments and 
> suggestions will be greatly appreciated. 
> > 
> > Thank you! 
> > 
> > 
> https://docs.google.com/document/d/1J2N4DhXXlFj_YQ-isqIwvl1zzIg-OaC87RLIGgHTbgY/edit?usp=sharing
>  
> > 


Re: [sympy] Proposal draft - Probabilities project

2020-03-25 Thread Aaron Meurer
Is the graph theory part needed to implement the stats ideas? In
general, graph theory is out of scope for SymPy (see
https://github.com/sympy/sympy/wiki/GSoC-2020-Ideas#non-ideas). If
some graph theory algorithms are required to do symbolics, that is
fine, but you should make that clear in the proposal.

Aaron Meurer

On Wed, Mar 25, 2020 at 2:46 PM Basilis Kalos  wrote:
>
> Hello all!
>
>   I'm sending you the first draft of my application. Any comments and 
> suggestions will be greatly appreciated.
>
> Thank you!
>
> https://docs.google.com/document/d/1J2N4DhXXlFj_YQ-isqIwvl1zzIg-OaC87RLIGgHTbgY/edit?usp=sharing
>