Re: [julia-users] Is the master algorithm on the roadmap?

2016-09-02 Thread Kevin Liu
Thanks Tim. It's frustrating to see the community has very little 
experience with MLN; after all, this is the smartest group of people I know 
in computer science. Okay, the focus here will be on code. 

On Friday, September 2, 2016 at 1:16:56 PM UTC-3, Viral Shah wrote:
>
> I agree with John here. This is totally unacceptable, and is making the 
> experience poorer for others.
>
> -viral
>
> On Friday, September 2, 2016 at 8:48:44 PM UTC+5:30, John Myles White 
> wrote:
>>
>> May I also point out to the My settings button on your top right corner > 
>>> My topic email subscriptions > Unsubscribe from this thread, which would've 
>>> spared you the message.
>>
>>
>> I'm sorry, but this kind of attitude is totally unacceptable, Kevin. I've 
>> tolerated your misuse of the mailing list, but it is not acceptable for you 
>> to imply that others are behaving inappropriately when they complain about 
>> your unequivocal misuse of the mailing list.
>>
>>  --John 
>>
>> On Friday, September 2, 2016 at 7:23:27 AM UTC-7, Kevin Liu wrote:
>>>
>>> May I also point out to the My settings button on your top right corner 
>>> > My topic email subscriptions > Unsubscribe from this thread, which 
>>> would've spared you the message.
>>>
>>> On Friday, September 2, 2016 at 11:19:42 AM UTC-3, Kevin Liu wrote:

 Hello Chris. Have you been applying relational learning to your Neural 
 Crest Migration Patterns in Craniofacial Development research project? It 
 could enhance your insights. 

 On Friday, September 2, 2016 at 6:18:15 AM UTC-3, Chris Rackauckas 
 wrote:
>
> This entire thread is a trip... a trip which is not really relevant to 
> julia-users. You may want to share these musings in the form of a blog 
> instead of posting them here.
>
> On Friday, September 2, 2016 at 1:41:03 AM UTC-7, Kevin Liu wrote:
>>
>> Princeton's post: 
>> http://www.nytimes.com/2016/08/28/world/europe/france-burkini-bikini-ban.html?_r=1
>>
>> Only logic saves us from paradox. - Minsky
>>
>> On Thursday, August 25, 2016 at 10:18:27 PM UTC-3, Kevin Liu wrote:
>>>
>>> Tim Holy, I am watching your keynote speech at JuliaCon 2016 where 
>>> you mention the best optimization is not doing the computation at all. 
>>>
>>> Domingos talks about that in his book, where an efficient kind of 
>>> learning is by analogy, with no model at all, and how numerous 
>>> scientific 
>>> discoveries have been made that way, e.g. Bohr's analogy of the solar 
>>> system to the atom. Analogizers learn by hypothesizing that entities 
>>> with 
>>> similar known properties have similar unknown ones. 
>>>
>>> MLN can reproduce structure mapping, which is the more powerful type 
>>> of analogy, that can make inferences from one domain (solar system) to 
>>> another (atom). This can be done by learning formulas that don't refer 
>>> to 
>>> any of the specific relations in the source domain (general formulas). 
>>>
>>> Seth and Tim have been helping me a lot with putting the pieces 
>>> together for MLN in the repo I created 
>>> , and more help is 
>>> always welcome. I would like to write MLN in idiomatic Julia. My 
>>> question 
>>> at the moment to you and the community is how to keep mappings of 
>>> first-order harmonic functions type-stable in Julia? I am just 
>>> getting acquainted with the type field. 
>>>
>>> On Tuesday, August 9, 2016 at 9:02:25 AM UTC-3, Kevin Liu wrote:

 Helping me separate the process in parts and priorities would be a 
 lot of help. 

 On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:
>
> Tim Holy, what if I could tap into the well of knowledge that you 
> are to speed up things? Can you imagine if every learner had to start 
> without priors? 
>
> > On Aug 9, 2016, at 07:06, Tim Holy  wrote: 
> > 
> > I'd recommend starting by picking a very small project. For 
> example, fix a bug 
> > or implement a small improvement in a package that you already 
> find useful or 
> > interesting. That way you'll get some guidance while making a 
> positive 
> > contribution; once you know more about julia, it will be easier 
> to see your 
> > way forward. 
> > 
> > Best, 
> > --Tim 
> > 
> >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
> >> I have no idea where to start and where to finish. Founders' 
> help would be 
> >> wonderful. 
> >> 
> >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu 
> wrote: 
> >>> After which I have to code Felix into Julia, a relational 
> optimizer for 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-09-02 Thread Viral Shah
I agree with John here. This is totally unacceptable, and is making the 
experience poorer for others.

-viral

On Friday, September 2, 2016 at 8:48:44 PM UTC+5:30, John Myles White wrote:
>
> May I also point out to the My settings button on your top right corner > 
>> My topic email subscriptions > Unsubscribe from this thread, which would've 
>> spared you the message.
>
>
> I'm sorry, but this kind of attitude is totally unacceptable, Kevin. I've 
> tolerated your misuse of the mailing list, but it is not acceptable for you 
> to imply that others are behaving inappropriately when they complain about 
> your unequivocal misuse of the mailing list.
>
>  --John 
>
> On Friday, September 2, 2016 at 7:23:27 AM UTC-7, Kevin Liu wrote:
>>
>> May I also point out to the My settings button on your top right corner > 
>> My topic email subscriptions > Unsubscribe from this thread, which would've 
>> spared you the message.
>>
>> On Friday, September 2, 2016 at 11:19:42 AM UTC-3, Kevin Liu wrote:
>>>
>>> Hello Chris. Have you been applying relational learning to your Neural 
>>> Crest Migration Patterns in Craniofacial Development research project? It 
>>> could enhance your insights. 
>>>
>>> On Friday, September 2, 2016 at 6:18:15 AM UTC-3, Chris Rackauckas wrote:

 This entire thread is a trip... a trip which is not really relevant to 
 julia-users. You may want to share these musings in the form of a blog 
 instead of posting them here.

 On Friday, September 2, 2016 at 1:41:03 AM UTC-7, Kevin Liu wrote:
>
> Princeton's post: 
> http://www.nytimes.com/2016/08/28/world/europe/france-burkini-bikini-ban.html?_r=1
>
> Only logic saves us from paradox. - Minsky
>
> On Thursday, August 25, 2016 at 10:18:27 PM UTC-3, Kevin Liu wrote:
>>
>> Tim Holy, I am watching your keynote speech at JuliaCon 2016 where 
>> you mention the best optimization is not doing the computation at all. 
>>
>> Domingos talks about that in his book, where an efficient kind of 
>> learning is by analogy, with no model at all, and how numerous 
>> scientific 
>> discoveries have been made that way, e.g. Bohr's analogy of the solar 
>> system to the atom. Analogizers learn by hypothesizing that entities 
>> with 
>> similar known properties have similar unknown ones. 
>>
>> MLN can reproduce structure mapping, which is the more powerful type 
>> of analogy, that can make inferences from one domain (solar system) to 
>> another (atom). This can be done by learning formulas that don't refer 
>> to 
>> any of the specific relations in the source domain (general formulas). 
>>
>> Seth and Tim have been helping me a lot with putting the pieces 
>> together for MLN in the repo I created 
>> , and more help is 
>> always welcome. I would like to write MLN in idiomatic Julia. My 
>> question 
>> at the moment to you and the community is how to keep mappings of 
>> first-order harmonic functions type-stable in Julia? I am just 
>> getting acquainted with the type field. 
>>
>> On Tuesday, August 9, 2016 at 9:02:25 AM UTC-3, Kevin Liu wrote:
>>>
>>> Helping me separate the process in parts and priorities would be a 
>>> lot of help. 
>>>
>>> On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:

 Tim Holy, what if I could tap into the well of knowledge that you 
 are to speed up things? Can you imagine if every learner had to start 
 without priors? 

 > On Aug 9, 2016, at 07:06, Tim Holy  wrote: 
 > 
 > I'd recommend starting by picking a very small project. For 
 example, fix a bug 
 > or implement a small improvement in a package that you already 
 find useful or 
 > interesting. That way you'll get some guidance while making a 
 positive 
 > contribution; once you know more about julia, it will be easier 
 to see your 
 > way forward. 
 > 
 > Best, 
 > --Tim 
 > 
 >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
 >> I have no idea where to start and where to finish. Founders' 
 help would be 
 >> wonderful. 
 >> 
 >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu 
 wrote: 
 >>> After which I have to code Felix into Julia, a relational 
 optimizer for 
 >>> statistical inference with Tuffy <
 http://i.stanford.edu/hazy/tuffy/> 
 >>> inside, for enterprise settings. 
 >>> 
  On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu 
 wrote: 
  Can I get tips on bringing Alchemy's optimized Tuffy 
   in Java to Julia while 
 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-09-02 Thread Tim Wheeler
@kevin: I seriously doubt that anyone has ill wishes concerning your desire 
to implement a great MLN package for Julia. The problem is that you don't 
seem to have a clear enough understanding of how to go about solving it, and 
no one here has been willing to divert their work towards this project. The 
MLN project is grand, and if done properly it could be of use to people, but 
it takes a lot of time and expertise to pull off.

We have difficulty helping you because the issues are non-specific and the 
questions are not about julia per se - they are about the entire MLN project 
as a whole. I am sure the community would be happy to answer specific, 
focused questions.

Roping people in on Github using the @-call doesn't help either. That 
should only be used sparingly for calling attention to a discussion when it 
is really relevant.

I wish you the best of luck but please be respectful of everyone's time and 
attention.

On Friday, September 2, 2016 at 8:22:37 AM UTC-7, Chris Rackauckas wrote:
>
> Two things. One, yes I am using a form of relational learning on that 
> project. Interesting stuff is coming out of it. But two, I was just trying 
> to help. You're clearly abusing the mailing list and are probably going to 
> get banned if you don't stop. I suggest blogging these musings, and keeping 
> the mailing list for discussions about Julia code.
>
> On Friday, September 2, 2016 at 7:23:27 AM UTC-7, Kevin Liu wrote:
>>
>> May I also point out to the My settings button on your top right corner > 
>> My topic email subscriptions > Unsubscribe from this thread, which would've 
>> spared you the message.
>>
>> On Friday, September 2, 2016 at 11:19:42 AM UTC-3, Kevin Liu wrote:
>>>
>>> Hello Chris. Have you been applying relational learning to your Neural 
>>> Crest Migration Patterns in Craniofacial Development research project? It 
>>> could enhance your insights. 
>>>
>>> On Friday, September 2, 2016 at 6:18:15 AM UTC-3, Chris Rackauckas wrote:

 This entire thread is a trip... a trip which is not really relevant to 
 julia-users. You may want to share these musings in the form of a blog 
 instead of posting them here.

 On Friday, September 2, 2016 at 1:41:03 AM UTC-7, Kevin Liu wrote:
>
> Princeton's post: 
> http://www.nytimes.com/2016/08/28/world/europe/france-burkini-bikini-ban.html?_r=1
>
> Only logic saves us from paradox. - Minsky
>
> On Thursday, August 25, 2016 at 10:18:27 PM UTC-3, Kevin Liu wrote:
>>
>> Tim Holy, I am watching your keynote speech at JuliaCon 2016 where 
>> you mention the best optimization is not doing the computation at all. 
>>
>> Domingos talks about that in his book, where an efficient kind of 
>> learning is by analogy, with no model at all, and how numerous 
>> scientific 
>> discoveries have been made that way, e.g. Bohr's analogy of the solar 
>> system to the atom. Analogizers learn by hypothesizing that entities 
>> with 
>> similar known properties have similar unknown ones. 
>>
>> MLN can reproduce structure mapping, which is the more powerful type 
>> of analogy, that can make inferences from one domain (solar system) to 
>> another (atom). This can be done by learning formulas that don't refer 
>> to 
>> any of the specific relations in the source domain (general formulas). 
>>
>> Seth and Tim have been helping me a lot with putting the pieces 
>> together for MLN in the repo I created 
>> , and more help is 
>> always welcome. I would like to write MLN in idiomatic Julia. My 
>> question 
>> at the moment to you and the community is how to keep mappings of 
>> first-order harmonic functions type-stable in Julia? I am just 
>> getting acquainted with the type field. 
>>
>> On Tuesday, August 9, 2016 at 9:02:25 AM UTC-3, Kevin Liu wrote:
>>>
>>> Helping me separate the process in parts and priorities would be a 
>>> lot of help. 
>>>
>>> On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:

 Tim Holy, what if I could tap into the well of knowledge that you 
 are to speed up things? Can you imagine if every learner had to start 
 without priors? 

 > On Aug 9, 2016, at 07:06, Tim Holy  wrote: 
 > 
 > I'd recommend starting by picking a very small project. For 
 example, fix a bug 
 > or implement a small improvement in a package that you already 
 find useful or 
 > interesting. That way you'll get some guidance while making a 
 positive 
 > contribution; once you know more about julia, it will be easier 
 to see your 
 > way forward. 
 > 
 > Best, 
 > --Tim 
 > 
 >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-09-02 Thread Kevin Liu
The explicit message here, with no implicit one, is that the unsubscribe 
option exists for a reason. 

On Sep 2, 2016, at 12:18, John Myles White  wrote:

>> May I also point out to the My settings button on your top right corner > My 
>> topic email subscriptions > Unsubscribe from this thread, which would've 
>> spared you the message.
> 
> I'm sorry, but this kind of attitude is totally unacceptable, Kevin. I've 
> tolerated your misuse of the mailing list, but it is not acceptable for you 
> to imply that others are behaving inappropriately when they complain about 
> your unequivocal misuse of the mailing list.
> 
>  --John 
> 
>> On Friday, September 2, 2016 at 7:23:27 AM UTC-7, Kevin Liu wrote:
>> May I also point out to the My settings button on your top right corner > My 
>> topic email subscriptions > Unsubscribe from this thread, which would've 
>> spared you the message.
>> 
>>> On Friday, September 2, 2016 at 11:19:42 AM UTC-3, Kevin Liu wrote:
>>> Hello Chris. Have you been applying relational learning to your Neural 
>>> Crest Migration Patterns in Craniofacial Development research project? It 
>>> could enhance your insights. 
>>> 
 On Friday, September 2, 2016 at 6:18:15 AM UTC-3, Chris Rackauckas wrote:
 This entire thread is a trip... a trip which is not really relevant to 
 julia-users. You may want to share these musings in the form of a blog 
 instead of posting them here.
 
> On Friday, September 2, 2016 at 1:41:03 AM UTC-7, Kevin Liu wrote:
> Princeton's post: 
> http://www.nytimes.com/2016/08/28/world/europe/france-burkini-bikini-ban.html?_r=1
> 
> Only logic saves us from paradox. - Minsky
> 
>> On Thursday, August 25, 2016 at 10:18:27 PM UTC-3, Kevin Liu wrote:
>> Tim Holy, I am watching your keynote speech at JuliaCon 2016 where you 
>> mention the best optimization is not doing the computation at all. 
>> 
>> Domingos talks about that in his book, where an efficient kind of 
>> learning is by analogy, with no model at all, and how numerous 
>> scientific discoveries have been made that way, e.g. Bohr's analogy of 
>> the solar system to the atom. Analogizers learn by hypothesizing that 
>> entities with similar known properties have similar unknown ones. 
>> 
>> MLN can reproduce structure mapping, which is the more powerful type of 
>> analogy, that can make inferences from one domain (solar system) to 
>> another (atom). This can be done by learning formulas that don't refer 
>> to any of the specific relations in the source domain (general 
>> formulas). 
>> 
>> Seth and Tim have been helping me a lot with putting the pieces together 
>> for MLN in the repo I created, and more help is always welcome. I would 
>> like to write MLN in idiomatic Julia. My question at the moment to you 
>> and the community is how to keep mappings of first-order harmonic 
>> functions type-stable in Julia? I am just getting acquainted with the 
>> type field. 
>> 
>>> On Tuesday, August 9, 2016 at 9:02:25 AM UTC-3, Kevin Liu wrote:
>>> Helping me separate the process in parts and priorities would be a lot 
>>> of help. 
>>> 
 On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:
 Tim Holy, what if I could tap into the well of knowledge that you are 
 to speed up things? Can you imagine if every learner had to start 
 without priors? 
 
 > On Aug 9, 2016, at 07:06, Tim Holy  wrote: 
 > 
 > I'd recommend starting by picking a very small project. For example, 
 > fix a bug 
 > or implement a small improvement in a package that you already find 
 > useful or 
 > interesting. That way you'll get some guidance while making a 
 > positive 
 > contribution; once you know more about julia, it will be easier to 
 > see your 
 > way forward. 
 > 
 > Best, 
 > --Tim 
 > 
 >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
 >> I have no idea where to start and where to finish. Founders' help 
 >> would be 
 >> wonderful. 
 >> 
 >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote: 
 >>> After which I have to code Felix into Julia, a relational 
 >>> optimizer for 
 >>> statistical inference with Tuffy 
 >>>  
 >>> inside, for enterprise settings. 
 >>> 
  On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote: 
  Can I get tips on bringing Alchemy's optimized Tuffy 
   in Java to Julia while 
  showing the 
  best of Julia? I am going for the most correct way, even if it 
  

Re: [julia-users] Is the master algorithm on the roadmap?

2016-09-02 Thread Chris Rackauckas
Two things. One, yes I am using a form of relational learning on that 
project. Interesting stuff is coming out of it. But two, I was just trying 
to help. You're clearly abusing the mailing list and are probably going to 
get banned if you don't stop. I suggest blogging these musings, and keeping 
the mailing list for discussions about Julia code.

On Friday, September 2, 2016 at 7:23:27 AM UTC-7, Kevin Liu wrote:
>
> May I also point out to the My settings button on your top right corner > 
> My topic email subscriptions > Unsubscribe from this thread, which would've 
> spared you the message.
>
> On Friday, September 2, 2016 at 11:19:42 AM UTC-3, Kevin Liu wrote:
>>
>> Hello Chris. Have you been applying relational learning to your Neural 
>> Crest Migration Patterns in Craniofacial Development research project? It 
>> could enhance your insights. 
>>
>> On Friday, September 2, 2016 at 6:18:15 AM UTC-3, Chris Rackauckas wrote:
>>>
>>> This entire thread is a trip... a trip which is not really relevant to 
>>> julia-users. You may want to share these musings in the form of a blog 
>>> instead of posting them here.
>>>
>>> On Friday, September 2, 2016 at 1:41:03 AM UTC-7, Kevin Liu wrote:

 Princeton's post: 
 http://www.nytimes.com/2016/08/28/world/europe/france-burkini-bikini-ban.html?_r=1

 Only logic saves us from paradox. - Minsky

 On Thursday, August 25, 2016 at 10:18:27 PM UTC-3, Kevin Liu wrote:
>
> Tim Holy, I am watching your keynote speech at JuliaCon 2016 where you 
> mention the best optimization is not doing the computation at all. 
>
> Domingos talks about that in his book, where an efficient kind of 
> learning is by analogy, with no model at all, and how numerous scientific 
> discoveries have been made that way, e.g. Bohr's analogy of the solar 
> system to the atom. Analogizers learn by hypothesizing that entities with 
> similar known properties have similar unknown ones. 
>
> MLN can reproduce structure mapping, which is the more powerful type 
> of analogy, that can make inferences from one domain (solar system) to 
> another (atom). This can be done by learning formulas that don't refer to 
> any of the specific relations in the source domain (general formulas). 
>
> Seth and Tim have been helping me a lot with putting the pieces 
> together for MLN in the repo I created 
> , and more help is always 
> welcome. I would like to write MLN in idiomatic Julia. My question at the 
> moment to you and the community is how to keep mappings of first-order 
> harmonic functions type-stable in Julia? I am just getting acquainted 
> with 
> the type field. 
>
> On Tuesday, August 9, 2016 at 9:02:25 AM UTC-3, Kevin Liu wrote:
>>
>> Helping me separate the process in parts and priorities would be a 
>> lot of help. 
>>
>> On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:
>>>
>>> Tim Holy, what if I could tap into the well of knowledge that you 
>>> are to speed up things? Can you imagine if every learner had to start 
>>> without priors? 
>>>
>>> > On Aug 9, 2016, at 07:06, Tim Holy  wrote: 
>>> > 
>>> > I'd recommend starting by picking a very small project. For 
>>> example, fix a bug 
>>> > or implement a small improvement in a package that you already 
>>> find useful or 
>>> > interesting. That way you'll get some guidance while making a 
>>> positive 
>>> > contribution; once you know more about julia, it will be easier to 
>>> see your 
>>> > way forward. 
>>> > 
>>> > Best, 
>>> > --Tim 
>>> > 
>>> >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
>>> >> I have no idea where to start and where to finish. Founders' help 
>>> would be 
>>> >> wonderful. 
>>> >> 
>>> >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu 
>>> wrote: 
>>> >>> After which I have to code Felix into Julia, a relational 
>>> optimizer for 
>>> >>> statistical inference with Tuffy <
>>> http://i.stanford.edu/hazy/tuffy/> 
>>> >>> inside, for enterprise settings. 
>>> >>> 
>>>  On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu 
>>> wrote: 
>>>  Can I get tips on bringing Alchemy's optimized Tuffy 
>>>   in Java to Julia while 
>>> showing the 
>>>  best of Julia? I am going for the most correct way, even if it 
>>> means 
>>>  coding 
>>>  Tuffy into C and Julia. 
>>>  
>>> > On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu 
>>> wrote: 
>>> > I'll try to build it, compare it, and show it to you guys. I 
>>> offered to 
>>> > do this as work. I am waiting to see if they will accept it. 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-09-02 Thread John Myles White

>
> May I also point out to the My settings button on your top right corner > 
> My topic email subscriptions > Unsubscribe from this thread, which would've 
> spared you the message.


I'm sorry, but this kind of attitude is totally unacceptable, Kevin. I've 
tolerated your misuse of the mailing list, but it is not acceptable for you 
to imply that others are behaving inappropriately when they complain about 
your unequivocal misuse of the mailing list.

 --John 

On Friday, September 2, 2016 at 7:23:27 AM UTC-7, Kevin Liu wrote:
>
> May I also point out to the My settings button on your top right corner > 
> My topic email subscriptions > Unsubscribe from this thread, which would've 
> spared you the message.
>
> On Friday, September 2, 2016 at 11:19:42 AM UTC-3, Kevin Liu wrote:
>>
>> Hello Chris. Have you been applying relational learning to your Neural 
>> Crest Migration Patterns in Craniofacial Development research project? It 
>> could enhance your insights. 
>>
>> On Friday, September 2, 2016 at 6:18:15 AM UTC-3, Chris Rackauckas wrote:
>>>
>>> This entire thread is a trip... a trip which is not really relevant to 
>>> julia-users. You may want to share these musings in the form of a blog 
>>> instead of posting them here.
>>>
>>> On Friday, September 2, 2016 at 1:41:03 AM UTC-7, Kevin Liu wrote:

 Princeton's post: 
 http://www.nytimes.com/2016/08/28/world/europe/france-burkini-bikini-ban.html?_r=1

 Only logic saves us from paradox. - Minsky

 On Thursday, August 25, 2016 at 10:18:27 PM UTC-3, Kevin Liu wrote:
>
> Tim Holy, I am watching your keynote speech at JuliaCon 2016 where you 
> mention the best optimization is not doing the computation at all. 
>
> Domingos talks about that in his book, where an efficient kind of 
> learning is by analogy, with no model at all, and how numerous scientific 
> discoveries have been made that way, e.g. Bohr's analogy of the solar 
> system to the atom. Analogizers learn by hypothesizing that entities with 
> similar known properties have similar unknown ones. 
>
> MLN can reproduce structure mapping, which is the more powerful type 
> of analogy, that can make inferences from one domain (solar system) to 
> another (atom). This can be done by learning formulas that don't refer to 
> any of the specific relations in the source domain (general formulas). 
>
> Seth and Tim have been helping me a lot with putting the pieces 
> together for MLN in the repo I created 
> , and more help is always 
> welcome. I would like to write MLN in idiomatic Julia. My question at the 
> moment to you and the community is how to keep mappings of first-order 
> harmonic functions type-stable in Julia? I am just getting acquainted 
> with 
> the type field. 
>
> On Tuesday, August 9, 2016 at 9:02:25 AM UTC-3, Kevin Liu wrote:
>>
>> Helping me separate the process in parts and priorities would be a 
>> lot of help. 
>>
>> On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:
>>>
>>> Tim Holy, what if I could tap into the well of knowledge that you 
>>> are to speed up things? Can you imagine if every learner had to start 
>>> without priors? 
>>>
>>> > On Aug 9, 2016, at 07:06, Tim Holy  wrote: 
>>> > 
>>> > I'd recommend starting by picking a very small project. For 
>>> example, fix a bug 
>>> > or implement a small improvement in a package that you already 
>>> find useful or 
>>> > interesting. That way you'll get some guidance while making a 
>>> positive 
>>> > contribution; once you know more about julia, it will be easier to 
>>> see your 
>>> > way forward. 
>>> > 
>>> > Best, 
>>> > --Tim 
>>> > 
>>> >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
>>> >> I have no idea where to start and where to finish. Founders' help 
>>> would be 
>>> >> wonderful. 
>>> >> 
>>> >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu 
>>> wrote: 
>>> >>> After which I have to code Felix into Julia, a relational 
>>> optimizer for 
>>> >>> statistical inference with Tuffy <
>>> http://i.stanford.edu/hazy/tuffy/> 
>>> >>> inside, for enterprise settings. 
>>> >>> 
>>>  On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu 
>>> wrote: 
>>>  Can I get tips on bringing Alchemy's optimized Tuffy 
>>>   in Java to Julia while 
>>> showing the 
>>>  best of Julia? I am going for the most correct way, even if it 
>>> means 
>>>  coding 
>>>  Tuffy into C and Julia. 
>>>  
>>> > On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu 
>>> wrote: 
>>> > I'll try to build it, compare it, and show it 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-09-02 Thread Kevin Liu
May I also point out to the My settings button on your top right corner > 
My topic email subscriptions > Unsubscribe from this thread, which would've 
spared you the message.

On Friday, September 2, 2016 at 11:19:42 AM UTC-3, Kevin Liu wrote:
>
> Hello Chris. Have you been applying relational learning to your Neural 
> Crest Migration Patterns in Craniofacial Development research project? It 
> could enhance your insights. 
>
> On Friday, September 2, 2016 at 6:18:15 AM UTC-3, Chris Rackauckas wrote:
>>
>> This entire thread is a trip... a trip which is not really relevant to 
>> julia-users. You may want to share these musings in the form of a blog 
>> instead of posting them here.
>>
>> On Friday, September 2, 2016 at 1:41:03 AM UTC-7, Kevin Liu wrote:
>>>
>>> Princeton's post: 
>>> http://www.nytimes.com/2016/08/28/world/europe/france-burkini-bikini-ban.html?_r=1
>>>
>>> Only logic saves us from paradox. - Minsky
>>>
>>> On Thursday, August 25, 2016 at 10:18:27 PM UTC-3, Kevin Liu wrote:

 Tim Holy, I am watching your keynote speech at JuliaCon 2016 where you 
 mention the best optimization is not doing the computation at all. 

 Domingos talks about that in his book, where an efficient kind of 
 learning is by analogy, with no model at all, and how numerous scientific 
 discoveries have been made that way, e.g. Bohr's analogy of the solar 
 system to the atom. Analogizers learn by hypothesizing that entities with 
 similar known properties have similar unknown ones. 

 MLN can reproduce structure mapping, which is the more powerful type of 
 analogy, that can make inferences from one domain (solar system) to 
 another 
 (atom). This can be done by learning formulas that don't refer to any of 
 the specific relations in the source domain (general formulas). 

 Seth and Tim have been helping me a lot with putting the pieces 
 together for MLN in the repo I created 
 , and more help is always 
 welcome. I would like to write MLN in idiomatic Julia. My question at the 
 moment to you and the community is how to keep mappings of first-order 
 harmonic functions type-stable in Julia? I am just getting acquainted with 
 the type field. 

 On Tuesday, August 9, 2016 at 9:02:25 AM UTC-3, Kevin Liu wrote:
>
> Helping me separate the process in parts and priorities would be a lot 
> of help. 
>
> On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:
>>
>> Tim Holy, what if I could tap into the well of knowledge that you are 
>> to speed up things? Can you imagine if every learner had to start 
>> without 
>> priors? 
>>
>> > On Aug 9, 2016, at 07:06, Tim Holy  wrote: 
>> > 
>> > I'd recommend starting by picking a very small project. For 
>> example, fix a bug 
>> > or implement a small improvement in a package that you already find 
>> useful or 
>> > interesting. That way you'll get some guidance while making a 
>> positive 
>> > contribution; once you know more about julia, it will be easier to 
>> see your 
>> > way forward. 
>> > 
>> > Best, 
>> > --Tim 
>> > 
>> >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
>> >> I have no idea where to start and where to finish. Founders' help 
>> would be 
>> >> wonderful. 
>> >> 
>> >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote: 
>> >>> After which I have to code Felix into Julia, a relational 
>> optimizer for 
>> >>> statistical inference with Tuffy <
>> http://i.stanford.edu/hazy/tuffy/> 
>> >>> inside, for enterprise settings. 
>> >>> 
>>  On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu 
>> wrote: 
>>  Can I get tips on bringing Alchemy's optimized Tuffy 
>>   in Java to Julia while 
>> showing the 
>>  best of Julia? I am going for the most correct way, even if it 
>> means 
>>  coding 
>>  Tuffy into C and Julia. 
>>  
>> > On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote: 
>> > I'll try to build it, compare it, and show it to you guys. I 
>> offered to 
>> > do this as work. I am waiting to see if they will accept it. 
>> > 
>> >> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan 
>> Karpinski wrote: 
>> >> Kevin, as previously requested by Isaiah, please take this to 
>> some 
>> >> other forum or maybe start a blog. 
>> >> 
>> >>> On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  
>> wrote: 
>> >>> Symmetry-based learning, Domingos, 2014 
>> >>> 
>> https://www.microsoft.com/en-us/research/video/symmetry-based-learning 
>> >>> / 
>> >>> 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-09-02 Thread Kevin Liu
Hello Chris. Have you been applying relational learning to your Neural 
Crest Migration Patterns in Craniofacial Development research project? It 
could enhance your insights. 

On Friday, September 2, 2016 at 6:18:15 AM UTC-3, Chris Rackauckas wrote:
>
> This entire thread is a trip... a trip which is not really relevant to 
> julia-users. You may want to share these musings in the form of a blog 
> instead of posting them here.
>
> On Friday, September 2, 2016 at 1:41:03 AM UTC-7, Kevin Liu wrote:
>>
>> Princeton's post: 
>> http://www.nytimes.com/2016/08/28/world/europe/france-burkini-bikini-ban.html?_r=1
>>
>> Only logic saves us from paradox. - Minsky
>>
>> On Thursday, August 25, 2016 at 10:18:27 PM UTC-3, Kevin Liu wrote:
>>>
>>> Tim Holy, I am watching your keynote speech at JuliaCon 2016 where you 
>>> mention the best optimization is not doing the computation at all. 
>>>
>>> Domingos talks about that in his book, where an efficient kind of 
>>> learning is by analogy, with no model at all, and how numerous scientific 
>>> discoveries have been made that way, e.g. Bohr's analogy of the solar 
>>> system to the atom. Analogizers learn by hypothesizing that entities with 
>>> similar known properties have similar unknown ones. 
>>>
>>> MLN can reproduce structure mapping, which is the more powerful type of 
>>> analogy, that can make inferences from one domain (solar system) to another 
>>> (atom). This can be done by learning formulas that don't refer to any of 
>>> the specific relations in the source domain (general formulas). 
>>>
>>> Seth and Tim have been helping me a lot with putting the pieces together 
>>> for MLN in the repo I created 
>>> , and more help is always 
>>> welcome. I would like to write MLN in idiomatic Julia. My question at the 
>>> moment to you and the community is how to keep mappings of first-order 
>>> harmonic functions type-stable in Julia? I am just getting acquainted with 
>>> the type field. 
>>>
>>> On Tuesday, August 9, 2016 at 9:02:25 AM UTC-3, Kevin Liu wrote:

 Helping me separate the process in parts and priorities would be a lot 
 of help. 

 On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:
>
> Tim Holy, what if I could tap into the well of knowledge that you are 
> to speed up things? Can you imagine if every learner had to start without 
> priors? 
>
> > On Aug 9, 2016, at 07:06, Tim Holy  wrote: 
> > 
> > I'd recommend starting by picking a very small project. For example, 
> fix a bug 
> > or implement a small improvement in a package that you already find 
> useful or 
> > interesting. That way you'll get some guidance while making a 
> positive 
> > contribution; once you know more about julia, it will be easier to 
> see your 
> > way forward. 
> > 
> > Best, 
> > --Tim 
> > 
> >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
> >> I have no idea where to start and where to finish. Founders' help 
> would be 
> >> wonderful. 
> >> 
> >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote: 
> >>> After which I have to code Felix into Julia, a relational 
> optimizer for 
> >>> statistical inference with Tuffy <
> http://i.stanford.edu/hazy/tuffy/> 
> >>> inside, for enterprise settings. 
> >>> 
>  On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote: 
>  Can I get tips on bringing Alchemy's optimized Tuffy 
>   in Java to Julia while 
> showing the 
>  best of Julia? I am going for the most correct way, even if it 
> means 
>  coding 
>  Tuffy into C and Julia. 
>  
> > On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote: 
> > I'll try to build it, compare it, and show it to you guys. I 
> offered to 
> > do this as work. I am waiting to see if they will accept it. 
> > 
> >> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski 
> wrote: 
> >> Kevin, as previously requested by Isaiah, please take this to 
> some 
> >> other forum or maybe start a blog. 
> >> 
> >>> On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  
> wrote: 
> >>> Symmetry-based learning, Domingos, 2014 
> >>> 
> https://www.microsoft.com/en-us/research/video/symmetry-based-learning 
> >>> / 
> >>> 
> >>> Approach 2: Deep symmetry networks generalize convolutional 
> neural 
> >>> networks by tying parameters and pooling over an arbitrary 
> symmetry 
> >>> group, 
> >>> not just the translation group. In preliminary experiments, 
> they 
> >>> outperformed convnets on a digit recognition task. 
> >>> 
> 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-09-02 Thread Chris Rackauckas
This entire thread is a trip... a trip which is not really relevant to 
julia-users. You may want to share these musings in the form of a blog 
instead of posting them here.

On Friday, September 2, 2016 at 1:41:03 AM UTC-7, Kevin Liu wrote:
>
> Princeton's post: 
> http://www.nytimes.com/2016/08/28/world/europe/france-burkini-bikini-ban.html?_r=1
>
> Only logic saves us from paradox. - Minsky
>
> On Thursday, August 25, 2016 at 10:18:27 PM UTC-3, Kevin Liu wrote:
>>
>> Tim Holy, I am watching your keynote speech at JuliaCon 2016 where you 
>> mention the best optimization is not doing the computation at all. 
>>
>> Domingos talks about that in his book, where an efficient kind of 
>> learning is by analogy, with no model at all, and how numerous scientific 
>> discoveries have been made that way, e.g. Bohr's analogy of the solar 
>> system to the atom. Analogizers learn by hypothesizing that entities with 
>> similar known properties have similar unknown ones. 
>>
>> MLN can reproduce structure mapping, which is the more powerful type of 
>> analogy, that can make inferences from one domain (solar system) to another 
>> (atom). This can be done by learning formulas that don't refer to any of 
>> the specific relations in the source domain (general formulas). 
>>
>> Seth and Tim have been helping me a lot with putting the pieces together 
>> for MLN in the repo I created 
>> , and more help is always 
>> welcome. I would like to write MLN in idiomatic Julia. My question at the 
>> moment to you and the community is how to keep mappings of first-order 
>> harmonic functions type-stable in Julia? I am just getting acquainted with 
>> the type field. 
>>
>> On Tuesday, August 9, 2016 at 9:02:25 AM UTC-3, Kevin Liu wrote:
>>>
>>> Helping me separate the process in parts and priorities would be a lot 
>>> of help. 
>>>
>>> On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:

 Tim Holy, what if I could tap into the well of knowledge that you are 
 to speed up things? Can you imagine if every learner had to start without 
 priors? 

 > On Aug 9, 2016, at 07:06, Tim Holy  
 wrote: 
 > 
 > I'd recommend starting by picking a very small project. For example, 
 fix a bug 
 > or implement a small improvement in a package that you already find 
 useful or 
 > interesting. That way you'll get some guidance while making a 
 positive 
 > contribution; once you know more about julia, it will be easier to 
 see your 
 > way forward. 
 > 
 > Best, 
 > --Tim 
 > 
 >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
 >> I have no idea where to start and where to finish. Founders' help 
 would be 
 >> wonderful. 
 >> 
 >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote: 
 >>> After which I have to code Felix into Julia, a relational optimizer 
 for 
 >>> statistical inference with Tuffy  

 >>> inside, for enterprise settings. 
 >>> 
  On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote: 
  Can I get tips on bringing Alchemy's optimized Tuffy 
   in Java to Julia while 
 showing the 
  best of Julia? I am going for the most correct way, even if it 
 means 
  coding 
  Tuffy into C and Julia. 
  
 > On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote: 
 > I'll try to build it, compare it, and show it to you guys. I 
 offered to 
 > do this as work. I am waiting to see if they will accept it. 
 > 
 >> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski 
 wrote: 
 >> Kevin, as previously requested by Isaiah, please take this to 
 some 
 >> other forum or maybe start a blog. 
 >> 
 >>> On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  
 wrote: 
 >>> Symmetry-based learning, Domingos, 2014 
 >>> 
 https://www.microsoft.com/en-us/research/video/symmetry-based-learning 
 >>> / 
 >>> 
 >>> Approach 2: Deep symmetry networks generalize convolutional 
 neural 
 >>> networks by tying parameters and pooling over an arbitrary 
 symmetry 
 >>> group, 
 >>> not just the translation group. In preliminary experiments, 
 they 
 >>> outperformed convnets on a digit recognition task. 
 >>> 
  On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu 
 wrote: 
  Minsky died of a cerebral hemorrhage at the age of 88.[40] 
   
 Ray 
  Kurzweil  says he 
 was 
  contacted by the cryonics 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-09-02 Thread Kevin Liu
Princeton's 
post: 
http://www.nytimes.com/2016/08/28/world/europe/france-burkini-bikini-ban.html?_r=1

Only logic saves us from paradox. - Minsky

On Thursday, August 25, 2016 at 10:18:27 PM UTC-3, Kevin Liu wrote:
>
> Tim Holy, I am watching your keynote speech at JuliaCon 2016 where you 
> mention the best optimization is not doing the computation at all. 
>
> Domingos talks about that in his book, where an efficient kind of learning 
> is by analogy, with no model at all, and how numerous scientific 
> discoveries have been made that way, e.g. Bohr's analogy of the solar 
> system to the atom. Analogizers learn by hypothesizing that entities with 
> similar known properties have similar unknown ones. 
>
> MLN can reproduce structure mapping, which is the more powerful type of 
> analogy, that can make inferences from one domain (solar system) to another 
> (atom). This can be done by learning formulas that don't refer to any of 
> the specific relations in the source domain (general formulas). 
>
> Seth and Tim have been helping me a lot with putting the pieces together 
> for MLN in the repo I created , 
> and 
> more help is always welcome. I would like to write MLN in idiomatic Julia. 
> My question at the moment to you and the community is how to keep mappings 
> of first-order harmonic functions type-stable in Julia? I am just 
> getting acquainted with the type field. 
>
> On Tuesday, August 9, 2016 at 9:02:25 AM UTC-3, Kevin Liu wrote:
>>
>> Helping me separate the process in parts and priorities would be a lot of 
>> help. 
>>
>> On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:
>>>
>>> Tim Holy, what if I could tap into the well of knowledge that you are to 
>>> speed up things? Can you imagine if every learner had to start without 
>>> priors? 
>>>
>>> > On Aug 9, 2016, at 07:06, Tim Holy  wrote: 
>>> > 
>>> > I'd recommend starting by picking a very small project. For example, 
>>> fix a bug 
>>> > or implement a small improvement in a package that you already find 
>>> useful or 
>>> > interesting. That way you'll get some guidance while making a positive 
>>> > contribution; once you know more about julia, it will be easier to see 
>>> your 
>>> > way forward. 
>>> > 
>>> > Best, 
>>> > --Tim 
>>> > 
>>> >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
>>> >> I have no idea where to start and where to finish. Founders' help 
>>> would be 
>>> >> wonderful. 
>>> >> 
>>> >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote: 
>>> >>> After which I have to code Felix into Julia, a relational optimizer 
>>> for 
>>> >>> statistical inference with Tuffy  
>>>
>>> >>> inside, for enterprise settings. 
>>> >>> 
>>>  On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote: 
>>>  Can I get tips on bringing Alchemy's optimized Tuffy 
>>>   in Java to Julia while showing 
>>> the 
>>>  best of Julia? I am going for the most correct way, even if it 
>>> means 
>>>  coding 
>>>  Tuffy into C and Julia. 
>>>  
>>> > On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote: 
>>> > I'll try to build it, compare it, and show it to you guys. I 
>>> offered to 
>>> > do this as work. I am waiting to see if they will accept it. 
>>> > 
>>> >> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski 
>>> wrote: 
>>> >> Kevin, as previously requested by Isaiah, please take this to 
>>> some 
>>> >> other forum or maybe start a blog. 
>>> >> 
>>> >>> On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  
>>> wrote: 
>>> >>> Symmetry-based learning, Domingos, 2014 
>>> >>> 
>>> https://www.microsoft.com/en-us/research/video/symmetry-based-learning 
>>> >>> / 
>>> >>> 
>>> >>> Approach 2: Deep symmetry networks generalize convolutional 
>>> neural 
>>> >>> networks by tying parameters and pooling over an arbitrary 
>>> symmetry 
>>> >>> group, 
>>> >>> not just the translation group. In preliminary experiments, they 
>>> >>> outperformed convnets on a digit recognition task. 
>>> >>> 
>>>  On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote: 
>>>  Minsky died of a cerebral hemorrhage at the age of 88.[40] 
>>>   Ray 
>>>  Kurzweil  says he 
>>> was 
>>>  contacted by the cryonics organization Alcor Life Extension 
>>>  Foundation 
>>>   
>>>
>>>  seeking 
>>>  Minsky's body.[41] 
>>>  <
>>> https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-Kurzweil-41> 
>>>  Kurzweil believes that Minsky was cryonically preserved by 
>>> Alcor and 
>>>  

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-25 Thread Kevin Liu
Tim Holy, I am watching your keynote speech at JuliaCon 2016, where you 
mention that the best optimization is not doing the computation at all. 

Domingos talks about that in his book: an efficient kind of learning is 
learning by analogy, with no model at all, and numerous scientific 
discoveries have been made that way, e.g. Bohr's analogy of the solar system 
to the atom. Analogizers learn by hypothesizing that entities with similar 
known properties have similar unknown ones. 

MLN can reproduce structure mapping, the more powerful type of analogy, 
which can make inferences from one domain (the solar system) to another (the 
atom). This can be done by learning formulas that don't refer to any of the 
specific relations in the source domain (general formulas), as in the sketch 
below. 
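
A rough, hypothetical sketch of that idea in Julia (this is not the actual 
design in the repo; the type names and clause shape are assumptions made for 
illustration): the basic unit of an MLN is a weighted first-order formula, 
and a general formula mentions only logical variables, so the same rule can 
be grounded in the solar-system domain or in the atom domain.

    # One atom of a first-order formula, e.g. Revolves(x, y)
    struct Atom
        predicate::Symbol
        args::Vector{Symbol}   # logical variables only, no domain constants
    end

    # A first-order clause with a real-valued weight: the building block of an MLN
    struct WeightedFormula
        weight::Float64
        antecedents::Vector{Atom}
        consequent::Atom
    end

    # Revolves(x, y) => Attracts(y, x) names no relation specific to the source
    # domain, so it grounds equally well on planets/sun or electrons/nucleus.
    general_rule = WeightedFormula(1.5,
        [Atom(:Revolves, [:x, :y])],
        Atom(:Attracts, [:y, :x]))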

Seth and Tim have been helping me a lot with putting the pieces together 
for MLN in the repo I created, and more help is always welcome. I would like 
to write MLN in idiomatic Julia. My question at the moment to you and the 
community is how to keep mappings of first-order harmonic functions 
type-stable in Julia; I am just getting acquainted with the type system. 
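
On the type-stability question, a minimal sketch under assumed conditions 
(the mappings are ordinary Julia callables; the names below are made up for 
illustration): the container itself can stay loosely typed as long as a 
function barrier sits between the lookup and the hot loop, so the inner work 
specializes on the concrete function type.

    # Storage is loosely typed (values are just `Function`); calls made
    # directly through it are dynamically dispatched.
    const HARMONICS = Dict{Symbol,Function}(:h1 => x -> sin(2π * x))

    # Function barrier: the looked-up callable is passed to a separate function,
    # which the compiler then specializes on the concrete type of `f`.
    apply_map(f, xs) = map(f, xs)
    evaluate(name, xs) = apply_map(HARMONICS[name], xs)

    evaluate(:h1, 0:0.25:1.0)   # the inner `map` runs type-stably

If every mapping shares one call signature, a concretely typed wrapper (for 
example FunctionWrappers.jl) is another option, but the barrier above is 
often enough.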

On Tuesday, August 9, 2016 at 9:02:25 AM UTC-3, Kevin Liu wrote:
>
> Helping me separate the process in parts and priorities would be a lot of 
> help. 
>
> On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:
>>
>> Tim Holy, what if I could tap into the well of knowledge that you are to 
>> speed up things? Can you imagine if every learner had to start without 
>> priors? 
>>
>> > On Aug 9, 2016, at 07:06, Tim Holy  wrote: 
>> > 
>> > I'd recommend starting by picking a very small project. For example, 
>> fix a bug 
>> > or implement a small improvement in a package that you already find 
>> useful or 
>> > interesting. That way you'll get some guidance while making a positive 
>> > contribution; once you know more about julia, it will be easier to see 
>> your 
>> > way forward. 
>> > 
>> > Best, 
>> > --Tim 
>> > 
>> >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
>> >> I have no idea where to start and where to finish. Founders' help 
>> would be 
>> >> wonderful. 
>> >> 
>> >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote: 
>> >>> After which I have to code Felix into Julia, a relational optimizer 
>> for 
>> >>> statistical inference with Tuffy  
>> >>> inside, for enterprise settings. 
>> >>> 
>>  On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote: 
>>  Can I get tips on bringing Alchemy's optimized Tuffy 
>>   in Java to Julia while showing 
>> the 
>>  best of Julia? I am going for the most correct way, even if it means 
>>  coding 
>>  Tuffy into C and Julia. 
>>  
>> > On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote: 
>> > I'll try to build it, compare it, and show it to you guys. I 
>> offered to 
>> > do this as work. I am waiting to see if they will accept it. 
>> > 
>> >> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski 
>> wrote: 
>> >> Kevin, as previously requested by Isaiah, please take this to some 
>> >> other forum or maybe start a blog. 
>> >> 
>> >>> On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  
>> wrote: 
>> >>> Symmetry-based learning, Domingos, 2014 
>> >>> 
>> https://www.microsoft.com/en-us/research/video/symmetry-based-learning 
>> >>> / 
>> >>> 
>> >>> Approach 2: Deep symmetry networks generalize convolutional 
>> neural 
>> >>> networks by tying parameters and pooling over an arbitrary 
>> symmetry 
>> >>> group, 
>> >>> not just the translation group. In preliminary experiments, they 
>> >>> outperformed convnets on a digit recognition task. 
>> >>> 
>>  On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote: 
>>  Minsky died of a cerebral hemorrhage at the age of 88.[40] 
>>   Ray 
>>  Kurzweil  says he 
>> was 
>>  contacted by the cryonics organization Alcor Life Extension 
>>  Foundation 
>>   
>>  seeking 
>>  Minsky's body.[41] 
>>  <
>> https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-Kurzweil-41> 
>>  Kurzweil believes that Minsky was cryonically preserved by Alcor 
>> and 
>>  will be revived by 2045.[41] 
>>  <
>> https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-Kurzweil-41> 
>>  Minsky 
>>  was a member of Alcor's Scientific Advisory Board 
>>  .[42] 
>>  <
>> https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-AlcorBoard-42> 
>>  

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-09 Thread Kevin Liu
Helping me separate the process in parts and priorities would be a lot of 
help. 

On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:
>
> Tim Holy, what if I could tap into the well of knowledge that you are to 
> speed up things? Can you imagine if every learner had to start without 
> priors? 
>
> > On Aug 9, 2016, at 07:06, Tim Holy  wrote: 
> > 
> > I'd recommend starting by picking a very small project. For example, fix 
> a bug 
> > or implement a small improvement in a package that you already find 
> useful or 
> > interesting. That way you'll get some guidance while making a positive 
> > contribution; once you know more about julia, it will be easier to see 
> your 
> > way forward. 
> > 
> > Best, 
> > --Tim 
> > 
> >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
> >> I have no idea where to start and where to finish. Founders' help would 
> be 
> >> wonderful. 
> >> 
> >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote: 
> >>> After which I have to code Felix into Julia, a relational optimizer 
> for 
> >>> statistical inference with Tuffy  
> >>> inside, for enterprise settings. 
> >>> 
>  On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote: 
>  Can I get tips on bringing Alchemy's optimized Tuffy 
>   in Java to Julia while showing 
> the 
>  best of Julia? I am going for the most correct way, even if it means 
>  coding 
>  Tuffy into C and Julia. 
>  
> > On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote: 
> > I'll try to build it, compare it, and show it to you guys. I offered 
> to 
> > do this as work. I am waiting to see if they will accept it. 
> > 
> >> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski 
> wrote: 
> >> Kevin, as previously requested by Isaiah, please take this to some 
> >> other forum or maybe start a blog. 
> >> 
> >>> On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  
> wrote: 
> >>> Symmetry-based learning, Domingos, 2014 
> >>> 
> https://www.microsoft.com/en-us/research/video/symmetry-based-learning 
> >>> / 
> >>> 
> >>> Approach 2: Deep symmetry networks generalize convolutional neural 
> >>> networks by tying parameters and pooling over an arbitrary 
> symmetry 
> >>> group, 
> >>> not just the translation group. In preliminary experiments, they 
> >>> outperformed convnets on a digit recognition task. 
> >>> 
>  On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote: 
>  Minsky died of a cerebral hemorrhage at the age of 88.[40] 
>   Ray 
>  Kurzweil  says he 
> was 
>  contacted by the cryonics organization Alcor Life Extension 
>  Foundation 
>   
>  seeking 
>  Minsky's body.[41] 
>  <
> https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-Kurzweil-41> 
>  Kurzweil believes that Minsky was cryonically preserved by Alcor 
> and 
>  will be revived by 2045.[41] 
>  <
> https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-Kurzweil-41> 
>  Minsky 
>  was a member of Alcor's Scientific Advisory Board 
>  .[42] 
>  <
> https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-AlcorBoard-42> 
>  In 
>  keeping with their policy of protecting privacy, Alcor will 
> neither 
>  confirm 
>  nor deny that Alcor has cryonically preserved Minsky.[43] 
>   
>  
>  We better do a good job. 
>  
> > On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote: 
> > *So, I think in the next 20 years (2003), if we can get rid of 
> all 
> > of the traditional approaches to artificial intelligence, like 
> > neural nets 
> > and genetic algorithms and rule-based systems, and just turn our 
> > sights a 
> > little bit higher to say, can we make a system that can use all 
> > those 
> > things for the right kind of problem? Some problems are good for 
> > neural 
> > nets; we know that others, neural nets are hopeless on them. 
> Genetic 
> > algorithms are great for certain things; I suspect I know what 
> > they're bad 
> > at, and I won't tell you. (Laughter)*  - Minsky, founder of 
> CSAIL 
> > MIT 
> > 
> > *Those programmers tried to find the single best way to 
> represent 
> > knowledge - Only Logic protects us from paradox.* - Minsky (see 
> > attachment from his 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-09 Thread Kevin Liu
Tim Holy, what if I could tap into the well of knowledge that you are to speed 
up things? Can you imagine if every learner had to start without priors? 

> On Aug 9, 2016, at 07:06, Tim Holy  wrote:
> 
> I'd recommend starting by picking a very small project. For example, fix a 
> bug 
> or implement a small improvement in a package that you already find useful or 
> interesting. That way you'll get some guidance while making a positive 
> contribution; once you know more about julia, it will be easier to see your 
> way forward.
> 
> Best,
> --Tim
> 
>> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote:
>> I have no idea where to start and where to finish. Founders' help would be
>> wonderful.
>> 
>>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote:
>>> After which I have to code Felix into Julia, a relational optimizer for
>>> statistical inference with Tuffy 
>>> inside, for enterprise settings.
>>> 
 On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote:
 Can I get tips on bringing Alchemy's optimized Tuffy
  in Java to Julia while showing the
 best of Julia? I am going for the most correct way, even if it means
 coding
 Tuffy into C and Julia.
 
> On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote:
> I'll try to build it, compare it, and show it to you guys. I offered to
> do this as work. I am waiting to see if they will accept it.
> 
>> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski wrote:
>> Kevin, as previously requested by Isaiah, please take this to some
>> other forum or maybe start a blog.
>> 
>>> On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  wrote:
>>> Symmetry-based learning, Domingos, 2014
>>> https://www.microsoft.com/en-us/research/video/symmetry-based-learning
>>> /
>>> 
>>> Approach 2: Deep symmetry networks generalize convolutional neural
>>> networks by tying parameters and pooling over an arbitrary symmetry
>>> group,
>>> not just the translation group. In preliminary experiments, they
>>> outperformed convnets on a digit recognition task.
>>> 
 On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote:
Minsky died of a cerebral hemorrhage at the age of 88.[40] Ray Kurzweil says 
he was contacted by the cryonics organization Alcor Life Extension Foundation 
seeking Minsky's body.[41] Kurzweil believes that Minsky was cryonically 
preserved by Alcor and will be revived by 2045.[41] Minsky was a member of 
Alcor's Scientific Advisory Board.[42] In keeping with their policy of 
protecting privacy, Alcor will neither confirm nor deny that Alcor has 
cryonically preserved Minsky.[43]
 
 
 We better do a good job.
 
> On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote:
> *So, I think in the next 20 years (2003), if we can get rid of all
> of the traditional approaches to artificial intelligence, like
> neural nets
> and genetic algorithms and rule-based systems, and just turn our
> sights a
> little bit higher to say, can we make a system that can use all
> those
> things for the right kind of problem? Some problems are good for
> neural
> nets; we know that others, neural nets are hopeless on them. Genetic
> algorithms are great for certain things; I suspect I know what
> they're bad
> at, and I won't tell you. (Laughter)*  - Minsky, founder of CSAIL
> MIT
> 
> *Those programmers tried to find the single best way to represent
> knowledge - Only Logic protects us from paradox.* - Minsky (see
> attachment from his lecture)
> 
>> On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote:
>> Markov Logic Network is being used for the continuous development
>> of drugs to cure cancer at MIT's CanceRX ,
>> on
>> DARPA's largest AI project to date, Personalized Assistant that
>> Learns (PAL) , progenitor of Siri. One of
>> Alchemy's largest applications to date was 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-09 Thread Tim Holy
I'd recommend starting by picking a very small project. For example, fix a bug 
or implement a small improvement in a package that you already find useful or 
interesting. That way you'll get some guidance while making a positive 
contribution; once you know more about julia, it will be easier to see your 
way forward.

Best,
--Tim

On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote:
> I have no idea where to start and where to finish. Founders' help would be
> wonderful.
> 
> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote:
> > After which I have to code Felix into Julia, a relational optimizer for
> > statistical inference with Tuffy 
> > inside, for enterprise settings.
> > 
> > On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote:
> >> Can I get tips on bringing Alchemy's optimized Tuffy
> >>  in Java to Julia while showing the
> >> best of Julia? I am going for the most correct way, even if it means
> >> coding
> >> Tuffy into C and Julia.
> >> 
> >> On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote:
> >>> I'll try to build it, compare it, and show it to you guys. I offered to
> >>> do this as work. I am waiting to see if they will accept it.
> >>> 
> >>> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski wrote:
>  Kevin, as previously requested by Isaiah, please take this to some
>  other forum or maybe start a blog.
>  
>  On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  wrote:
> > Symmetry-based learning, Domingos, 2014
> > https://www.microsoft.com/en-us/research/video/symmetry-based-learning
> > /
> > 
> > Approach 2: Deep symmetry networks generalize convolutional neural
> > networks by tying parameters and pooling over an arbitrary symmetry
> > group,
> > not just the translation group. In preliminary experiments, they
> > outperformed convnets on a digit recognition task.
> > 
> > On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote:
> >> Minsky died of a cerebral hemorrhage at the age of 88.[40]
> >>  Ray
> >> Kurzweil  says he was
> >> contacted by the cryonics organization Alcor Life Extension
> >> Foundation
> >> 
> >> seeking
> >> Minsky's body.[41]
> >> 
> >> Kurzweil believes that Minsky was cryonically preserved by Alcor and
> >> will be revived by 2045.[41]
> >> 
> >> Minsky
> >> was a member of Alcor's Scientific Advisory Board
> >> .[42]
> >> 
> >> In
> >> keeping with their policy of protecting privacy, Alcor will neither
> >> confirm
> >> nor deny that Alcor has cryonically preserved Minsky.[43]
> >> 
> >> 
> >> We better do a good job.
> >> 
> >> On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote:
> >>> *So, I think in the next 20 years (2003), if we can get rid of all
> >>> of the traditional approaches to artificial intelligence, like
> >>> neural nets
> >>> and genetic algorithms and rule-based systems, and just turn our
> >>> sights a
> >>> little bit higher to say, can we make a system that can use all
> >>> those
> >>> things for the right kind of problem? Some problems are good for
> >>> neural
> >>> nets; we know that others, neural nets are hopeless on them. Genetic
> >>> algorithms are great for certain things; I suspect I know what
> >>> they're bad
> >>> at, and I won't tell you. (Laughter)*  - Minsky, founder of CSAIL
> >>> MIT
> >>> 
> >>> *Those programmers tried to find the single best way to represent
> >>> knowledge - Only Logic protects us from paradox.* - Minsky (see
> >>> attachment from his lecture)
> >>> 
> >>> On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote:
>  Markov Logic Network is being used for the continuous development
>  of drugs to cure cancer at MIT's CanceRX ,
>  on
>  DARPA's largest AI project to date, Personalized Assistant that
>  Learns (PAL) , progenitor of Siri. One of
>  Alchemy's largest applications to date was to learn a semantic
>  network
>  (knowledge graph as Google calls it) from the web.
>  
>  Some on Probabilistic Inductive Logic Programming / Probabilistic
>  Logic Programming / Statistical Relational Learning from 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-08 Thread Kevin Liu
I have no idea where to start and where to finish. Founders' help would be 
wonderful. 

On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote:
>
> After which I have to code Felix into Julia, a relational optimizer for 
> statistical inference with Tuffy  
> inside, for enterprise settings.
>
> On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote:
>>
>> Can I get tips on bringing Alchemy's optimized Tuffy 
>>  in Java to Julia while showing the 
>> best of Julia? I am going for the most correct way, even if it means coding 
>> Tuffy into C and Julia.
>>
>> On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote:
>>>
>>> I'll try to build it, compare it, and show it to you guys. I offered to 
>>> do this as work. I am waiting to see if they will accept it. 
>>>
>>> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski wrote:

 Kevin, as previously requested by Isaiah, please take this to some 
 other forum or maybe start a blog.

 On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  wrote:

> Symmetry-based learning, Domingos, 2014 
> https://www.microsoft.com/en-us/research/video/symmetry-based-learning/
>
> Approach 2: Deep symmetry networks generalize convolutional neural 
> networks by tying parameters and pooling over an arbitrary symmetry 
> group, 
> not just the translation group. In preliminary experiments, they 
> outperformed convnets on a digit recognition task. 
>
> On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote:
>>
>> Minsky died of a cerebral hemorrhage at the age of 88.[40] 
>>  Ray 
>> Kurzweil  says he was 
>> contacted by the cryonics organization Alcor Life Extension 
>> Foundation 
>>  seeking 
>> Minsky's body.[41] 
>>  
>> Kurzweil 
>> believes that Minsky was cryonically preserved by Alcor and will be 
>> revived 
>> by 2045.[41] 
>>  
>> Minsky 
>> was a member of Alcor's Scientific Advisory Board 
>> .[42] 
>>  In 
>> keeping with their policy of protecting privacy, Alcor will neither 
>> confirm 
>> nor deny that Alcor has cryonically preserved Minsky.[43] 
>>  
>>
>> We better do a good job. 
>>
>> On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote:
>>>
>>> *So, I think in the next 20 years (2003), if we can get rid of all 
>>> of the traditional approaches to artificial intelligence, like neural 
>>> nets 
>>> and genetic algorithms and rule-based systems, and just turn our sights 
>>> a 
>>> little bit higher to say, can we make a system that can use all those 
>>> things for the right kind of problem? Some problems are good for neural 
>>> nets; we know that others, neural nets are hopeless on them. Genetic 
>>> algorithms are great for certain things; I suspect I know what they're 
>>> bad 
>>> at, and I won't tell you. (Laughter)*  - Minsky, founder of CSAIL 
>>> MIT
>>>
>>> *Those programmers tried to find the single best way to represent 
>>> knowledge - Only Logic protects us from paradox.* - Minsky (see 
>>> attachment from his lecture)
>>>
>>> On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote:

 Markov Logic Network is being used for the continuous development 
 of drugs to cure cancer at MIT's CanceRX , on 
 DARPA's largest AI project to date, Personalized Assistant that 
 Learns (PAL) , progenitor of Siri. One of 
 Alchemy's largest applications to date was to learn a semantic network 
 (knowledge graph as Google calls it) from the web. 

 Some on Probabilistic Inductive Logic Programming / Probabilistic 
 Logic Programming / Statistical Relational Learning from CSAIL 
 
  (my 
 understanding is Alchemy does PILP from entailment, proofs, and 
 interpretation)

 The MIT Probabilistic Computing Project (where there is Picture, an 
 extension of Julia, for computer vision; Watch the video from Vikash) 
 

 Probabilistic programming could do for Bayesian ML what Theano has 
 done for 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-08 Thread Kevin Liu
After which I have to code Felix into Julia, a relational optimizer for 
statistical inference with Tuffy  
inside, for enterprise settings.

On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote:
>
> Can I get tips on bringing Alchemy's optimized Tuffy 
>  in Java to Julia while showing the 
> best of Julia? I am going for the most correct way, even if it means coding 
> Tuffy into C and Julia.
>
> On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote:
>>
>> I'll try to build it, compare it, and show it to you guys. I offered to 
>> do this as work. I am waiting to see if they will accept it. 
>>
>> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski wrote:
>>>
>>> Kevin, as previously requested by Isaiah, please take this to some other 
>>> forum or maybe start a blog.
>>>
>>> On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  wrote:
>>>
 Symmetry-based learning, Domingos, 2014 
 https://www.microsoft.com/en-us/research/video/symmetry-based-learning/

 Approach 2: Deep symmetry networks generalize convolutional neural 
 networks by tying parameters and pooling over an arbitrary symmetry group, 
 not just the translation group. In preliminary experiments, they 
 outperformed convnets on a digit recognition task. 

 On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote:
>
> Minsky died of a cerebral hemorrhage at the age of 88.[40] 
>  Ray 
> Kurzweil  says he was 
> contacted by the cryonics organization Alcor Life Extension Foundation 
>  seeking 
> Minsky's body.[41] 
>  
> Kurzweil 
> believes that Minsky was cryonically preserved by Alcor and will be 
> revived 
> by 2045.[41] 
>  
> Minsky 
> was a member of Alcor's Scientific Advisory Board 
> .[42] 
>  In 
> keeping with their policy of protecting privacy, Alcor will neither 
> confirm 
> nor deny that Alcor has cryonically preserved Minsky.[43] 
>  
>
> We better do a good job. 
>
> On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote:
>>
>> *So, I think in the next 20 years (2003), if we can get rid of all of 
>> the traditional approaches to artificial intelligence, like neural nets 
>> and 
>> genetic algorithms and rule-based systems, and just turn our sights a 
>> little bit higher to say, can we make a system that can use all those 
>> things for the right kind of problem? Some problems are good for neural 
>> nets; we know that others, neural nets are hopeless on them. Genetic 
>> algorithms are great for certain things; I suspect I know what they're 
>> bad 
>> at, and I won't tell you. (Laughter)*  - Minsky, founder of CSAIL MIT
>>
>> *Those programmers tried to find the single best way to represent 
>> knowledge - Only Logic protects us from paradox.* - Minsky (see 
>> attachment from his lecture)
>>
>> On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote:
>>>
>>> Markov Logic Network is being used for the continuous development of 
>>> drugs to cure cancer at MIT's CanceRX , on 
>>> DARPA's largest AI project to date, Personalized Assistant that 
>>> Learns (PAL) , progenitor of Siri. One of 
>>> Alchemy's largest applications to date was to learn a semantic network 
>>> (knowledge graph as Google calls it) from the web. 
>>>
>>> Some on Probabilistic Inductive Logic Programming / Probabilistic 
>>> Logic Programming / Statistical Relational Learning from CSAIL 
>>> 
>>>  (my 
>>> understanding is Alchemy does PILP from entailment, proofs, and 
>>> interpretation)
>>>
>>> The MIT Probabilistic Computing Project (where there is Picture, an 
>>> extension of Julia, for computer vision; Watch the video from Vikash) 
>>> 
>>>
>>> Probabilistic programming could do for Bayesian ML what Theano has 
>>> done for neural networks. 
>>>  - Ferenc Huszár
>>>
>>> Picture doesn't appear to be open-source, even though its Paper is 
>>> available. 
>>>
>>> I'm in the process of comparing the Picture Paper and Alchemy code 
>>> and 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-08 Thread Kevin Liu
Can I get tips on bringing Alchemy's optimized Tuffy 
 in Java to Julia while showing the best 
of Julia? I am going for the most correct way, even if it means coding 
Tuffy into C and Julia.

On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote:
>
> I'll try to build it, compare it, and show it to you guys. I offered to do 
> this as work. I am waiting to see if they will accept it. 
>
> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski wrote:
>>
>> Kevin, as previously requested by Isaiah, please take this to some other 
>> forum or maybe start a blog.
>>
>> On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  wrote:
>>
>>> Symmetry-based learning, Domingos, 2014 
>>> https://www.microsoft.com/en-us/research/video/symmetry-based-learning/
>>>
>>> Approach 2: Deep symmetry networks generalize convolutional neural 
>>> networks by tying parameters and pooling over an arbitrary symmetry group, 
>>> not just the translation group. In preliminary experiments, they 
>>> outperformed convnets on a digit recognition task. 
>>>
>>> On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote:

 Minsky died of a cerebral hemorrhage at the age of 88.[40] 
  Ray Kurzweil 
  says he was contacted by 
 the cryonics organization Alcor Life Extension Foundation 
  seeking 
 Minsky's body.[41] 
  
 Kurzweil 
 believes that Minsky was cryonically preserved by Alcor and will be 
 revived 
 by 2045.[41] 
  Minsky 
 was a member of Alcor's Scientific Advisory Board 
 .[42] 
  In 
 keeping with their policy of protecting privacy, Alcor will neither 
 confirm 
 nor deny that Alcor has cryonically preserved Minsky.[43] 
  

 We better do a good job. 

 On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote:
>
> *So, I think in the next 20 years (2003), if we can get rid of all of 
> the traditional approaches to artificial intelligence, like neural nets 
> and 
> genetic algorithms and rule-based systems, and just turn our sights a 
> little bit higher to say, can we make a system that can use all those 
> things for the right kind of problem? Some problems are good for neural 
> nets; we know that others, neural nets are hopeless on them. Genetic 
> algorithms are great for certain things; I suspect I know what they're 
> bad 
> at, and I won't tell you. (Laughter)*  - Minsky, founder of CSAIL MIT
>
> *Those programmers tried to find the single best way to represent 
> knowledge - Only Logic protects us from paradox.* - Minsky (see 
> attachment from his lecture)
>
> On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote:
>>
>> Markov Logic Network is being used for the continuous development of 
>> drugs to cure cancer at MIT's CanceRX , on 
>> DARPA's largest AI project to date, Personalized Assistant that 
>> Learns (PAL) , progenitor of Siri. One of 
>> Alchemy's largest applications to date was to learn a semantic network 
>> (knowledge graph as Google calls it) from the web. 
>>
>> Some on Probabilistic Inductive Logic Programming / Probabilistic 
>> Logic Programming / Statistical Relational Learning from CSAIL 
>> 
>>  (my 
>> understanding is Alchemy does PILP from entailment, proofs, and 
>> interpretation)
>>
>> The MIT Probabilistic Computing Project (where there is Picture, an 
>> extension of Julia, for computer vision; Watch the video from Vikash) 
>> 
>>
>> Probabilistic programming could do for Bayesian ML what Theano has 
>> done for neural networks. 
>>  - Ferenc Huszár
>>
>> Picture doesn't appear to be open-source, even though its Paper is 
>> available. 
>>
>> I'm in the process of comparing the Picture Paper and Alchemy code 
>> and would like to have an open-source PILP from Julia that combines the 
>> best of both. 
>>
>> On Wednesday, August 3, 2016 at 5:01:02 PM UTC-3, Christof Stocker 
>> wrote:
>>>
>>> This sounds like it could be a great contribution. I shall keep a 
>>> curious eye on your progress
>>>
>>> Am Mittwoch, 3. August 2016 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-07 Thread Kevin Liu
I'll try to build it, compare it, and show it to you guys. I offered to do 
this as work. I am waiting to see if they will accept it. 

On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski wrote:
>
> Kevin, as previously requested by Isaiah, please take this to some other 
> forum or maybe start a blog.
>
> On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  > wrote:
>
>> Symmetry-based learning, Domingos, 2014 
>> https://www.microsoft.com/en-us/research/video/symmetry-based-learning/
>>
>> Approach 2: Deep symmetry networks generalize convolutional neural 
>> networks by tying parameters and pooling over an arbitrary symmetry group, 
>> not just the translation group. In preliminary experiments, they 
>> outperformed convnets on a digit recognition task. 
>>
>> On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote:
>>>
>>> Minsky died of a cerebral hemorrhage at the age of 88.[40] 
>>>  Ray Kurzweil 
>>>  says he was contacted by 
>>> the cryonics organization Alcor Life Extension Foundation 
>>>  seeking 
>>> Minsky's body.[41] 
>>>  
>>> Kurzweil 
>>> believes that Minsky was cryonically preserved by Alcor and will be revived 
>>> by 2045.[41] 
>>>  Minsky 
>>> was a member of Alcor's Scientific Advisory Board 
>>> .[42] 
>>>  In 
>>> keeping with their policy of protecting privacy, Alcor will neither confirm 
>>> nor deny that Alcor has cryonically preserved Minsky.[43] 
>>>  
>>>
>>> We better do a good job. 
>>>
>>> On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote:

 *So, I think in the next 20 years (2003), if we can get rid of all of 
 the traditional approaches to artificial intelligence, like neural nets 
 and 
 genetic algorithms and rule-based systems, and just turn our sights a 
 little bit higher to say, can we make a system that can use all those 
 things for the right kind of problem? Some problems are good for neural 
 nets; we know that others, neural nets are hopeless on them. Genetic 
 algorithms are great for certain things; I suspect I know what they're bad 
 at, and I won't tell you. (Laughter)*  - Minsky, founder of CSAIL MIT

 *Those programmers tried to find the single best way to represent 
 knowledge - Only Logic protects us from paradox.* - Minsky (see 
 attachment from his lecture)

 On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote:
>
> Markov Logic Network is being used for the continuous development of 
> drugs to cure cancer at MIT's CanceRX , on 
> DARPA's largest AI project to date, Personalized Assistant that 
> Learns (PAL) , progenitor of Siri. One of 
> Alchemy's largest applications to date was to learn a semantic network 
> (knowledge graph as Google calls it) from the web. 
>
> Some on Probabilistic Inductive Logic Programming / Probabilistic 
> Logic Programming / Statistical Relational Learning from CSAIL 
> 
>  (my 
> understanding is Alchemy does PILP from entailment, proofs, and 
> interpretation)
>
> The MIT Probabilistic Computing Project (where there is Picture, an 
> extension of Julia, for computer vision; Watch the video from Vikash) 
> 
>
> Probabilistic programming could do for Bayesian ML what Theano has 
> done for neural networks. 
>  - Ferenc Huszár
>
> Picture doesn't appear to be open-source, even though its Paper is 
> available. 
>
> I'm in the process of comparing the Picture Paper and Alchemy code and 
> would like to have an open-source PILP from Julia that combines the best 
> of 
> both. 
>
> On Wednesday, August 3, 2016 at 5:01:02 PM UTC-3, Christof Stocker 
> wrote:
>>
>> This sounds like it could be a great contribution. I shall keep a 
>> curious eye on your progress
>>
> On Wednesday, August 3, 2016 at 21:53:54 UTC+2, Kevin Liu wrote:
>>>
>>> Thanks for the advice Cristof. I am only interested in people 
>>> wanting to code it in Julia, from R by Domingos. The algo has been 
>>> successfully applied in many areas, even though there are many other 
>>> areas 
>>> remaining. 
>>>
>>> On Wed, Aug 3, 2016 at 4:45 PM, Christof Stocker <
>>> stocker@gmail.com> 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-07 Thread Stefan Karpinski
Kevin, as previously requested by Isaiah, please take this to some other
forum or maybe start a blog.

On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  wrote:

> Symmetry-based learning, Domingos, 2014 https://www.microsoft.com/en-
> us/research/video/symmetry-based-learning/
>
> Approach 2: Deep symmetry networks generalize convolutional neural
> networks by tying parameters and pooling over an arbitrary symmetry group,
> not just the translation group. In preliminary experiments, they
> outperformed convnets on a digit recognition task.
>
> On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote:
>>
>> Minsky died of a cerebral hemorrhage at the age of 88.[40]
>>  Ray Kurzweil
>>  says he was contacted by
>> the cryonics organization Alcor Life Extension Foundation
>>  seeking
>> Minsky's body.[41]
>>  Kurzweil
>> believes that Minsky was cryonically preserved by Alcor and will be revived
>> by 2045.[41]
>>  Minsky
>> was a member of Alcor's Scientific Advisory Board
>> .[42]
>>  In
>> keeping with their policy of protecting privacy, Alcor will neither confirm
>> nor deny that Alcor has cryonically preserved Minsky.[43]
>> 
>>
>> We better do a good job.
>>
>> On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote:
>>>
>>> *So, I think in the next 20 years (2003), if we can get rid of all of
>>> the traditional approaches to artificial intelligence, like neural nets and
>>> genetic algorithms and rule-based systems, and just turn our sights a
>>> little bit higher to say, can we make a system that can use all those
>>> things for the right kind of problem? Some problems are good for neural
>>> nets; we know that others, neural nets are hopeless on them. Genetic
>>> algorithms are great for certain things; I suspect I know what they're bad
>>> at, and I won't tell you. (Laughter)*  - Minsky, founder of CSAIL MIT
>>>
>>> *Those programmers tried to find the single best way to represent
>>> knowledge - Only Logic protects us from paradox.* - Minsky (see
>>> attachment from his lecture)
>>>
>>> On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote:

 Markov Logic Network is being used for the continuous development of
 drugs to cure cancer at MIT's CanceRX , on
 DARPA's largest AI project to date, Personalized Assistant that Learns
 (PAL) , progenitor of Siri. One of Alchemy's
 largest applications to date was to learn a semantic network (knowledge
 graph as Google calls it) from the web.

 Some on Probabilistic Inductive Logic Programming / Probabilistic Logic
 Programming / Statistical Relational Learning from CSAIL
 
  (my
 understanding is Alchemy does PILP from entailment, proofs, and
 interpretation)

 The MIT Probabilistic Computing Project (where there is Picture, an
 extension of Julia, for computer vision; Watch the video from Vikash)
 

 Probabilistic programming could do for Bayesian ML what Theano has done
 for neural networks. 
 - Ferenc Huszár

 Picture doesn't appear to be open-source, even though its Paper is
 available.

 I'm in the process of comparing the Picture Paper and Alchemy code and
 would like to have an open-source PILP from Julia that combines the best of
 both.

 On Wednesday, August 3, 2016 at 5:01:02 PM UTC-3, Christof Stocker
 wrote:
>
> This sounds like it could be a great contribution. I shall keep a
> curious eye on your progress
>
> On Wednesday, August 3, 2016 at 21:53:54 UTC+2, Kevin Liu wrote:
>>
>> Thanks for the advice Cristof. I am only interested in people wanting
>> to code it in Julia, from R by Domingos. The algo has been successfully
>> applied in many areas, even though there are many other areas remaining.
>>
>> On Wed, Aug 3, 2016 at 4:45 PM, Christof Stocker <
>> stocker@gmail.com> wrote:
>>
>>> Hello Kevin,
>>>
>>> Enthusiasm is a good thing and you should hold on to that. But to
>>> save yourself some headache or disappointment down the road I advice a
>>> level head. Nothing is really as bluntly obviously solved as it may 
>>> seems
>>> at first glance after listening to brilliant people explain things. A
>>> physics professor of mine 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-06 Thread Kevin Liu
Symmetry-based learning, Domingos, 2014 
https://www.microsoft.com/en-us/research/video/symmetry-based-learning/

Approach 2: Deep symmetry networks generalize convolutional neural networks 
by tying parameters and pooling over an arbitrary symmetry group, not just 
the translation group. In preliminary experiments, they outperformed 
convnets on a digit recognition task. 
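To make the idea concrete, here is a minimal Julia sketch (toy code, hypothetical names, not the paper's implementation) of pooling one filter's response over the group of 90-degree rotations rather than only translations:

# Toy sketch: pool one filter's response over the C4 rotation group,
# generalizing the translation-only pooling of an ordinary convnet.
rotations(A) = (A, rotl90(A), rot180(A), rotr90(A))     # group elements acting on a patch
response(kernel, patch) = sum(kernel .* patch)          # plain correlation score

# Symmetry pooling: best response over all group elements, so the result is
# invariant to rotating the input patch by any multiple of 90 degrees.
symmetry_pool(kernel, patch) = maximum(response(kernel, g) for g in rotations(patch))

kernel, patch = rand(5, 5), rand(5, 5)
symmetry_pool(kernel, patch) == symmetry_pool(kernel, rotl90(patch))   # true

Swapping `rotations` for translations recovers ordinary max-pooling; swapping in a richer group gives the generalization described above.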

On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote:
>
> Minsky died of a cerebral hemorrhage at the age of 88.[40] 
>  Ray Kurzweil 
>  says he was contacted by the 
> cryonics organization Alcor Life Extension Foundation 
>  seeking 
> Minsky's body.[41] 
>  Kurzweil 
> believes that Minsky was cryonically preserved by Alcor and will be revived 
> by 2045.[41] 
>  Minsky 
> was a member of Alcor's Scientific Advisory Board 
> .[42] 
>  In 
> keeping with their policy of protecting privacy, Alcor will neither confirm 
> nor deny that Alcor has cryonically preserved Minsky.[43] 
>  
>
> We better do a good job. 
>
> On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote:
>>
>> *So, I think in the next 20 years (2003), if we can get rid of all of the 
>> traditional approaches to artificial intelligence, like neural nets and 
>> genetic algorithms and rule-based systems, and just turn our sights a 
>> little bit higher to say, can we make a system that can use all those 
>> things for the right kind of problem? Some problems are good for neural 
>> nets; we know that others, neural nets are hopeless on them. Genetic 
>> algorithms are great for certain things; I suspect I know what they're bad 
>> at, and I won't tell you. (Laughter)*  - Minsky, founder of CSAIL MIT
>>
>> *Those programmers tried to find the single best way to represent 
>> knowledge - Only Logic protects us from paradox.* - Minsky (see 
>> attachment from his lecture)
>>
>> On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote:
>>>
>>> Markov Logic Network is being used for the continuous development of 
>>> drugs to cure cancer at MIT's CanceRX , on 
>>> DARPA's largest AI project to date, Personalized Assistant that Learns 
>>> (PAL) , progenitor of Siri. One of Alchemy's 
>>> largest applications to date was to learn a semantic network (knowledge 
>>> graph as Google calls it) from the web. 
>>>
>>> Some on Probabilistic Inductive Logic Programming / Probabilistic Logic 
>>> Programming / Statistical Relational Learning from CSAIL 
>>>  
>>> (my 
>>> understanding is Alchemy does PILP from entailment, proofs, and 
>>> interpretation)
>>>
>>> The MIT Probabilistic Computing Project (where there is Picture, an 
>>> extension of Julia, for computer vision; Watch the video from Vikash) 
>>> 
>>>
>>> Probabilistic programming could do for Bayesian ML what Theano has done 
>>> for neural networks.  - 
>>> Ferenc Huszár
>>>
>>> Picture doesn't appear to be open-source, even though its Paper is 
>>> available. 
>>>
>>> I'm in the process of comparing the Picture Paper and Alchemy code and 
>>> would like to have an open-source PILP from Julia that combines the best of 
>>> both. 
>>>
>>> On Wednesday, August 3, 2016 at 5:01:02 PM UTC-3, Christof Stocker wrote:

 This sounds like it could be a great contribution. I shall keep a 
 curious eye on your progress

 Am Mittwoch, 3. August 2016 21:53:54 UTC+2 schrieb Kevin Liu:
>
> Thanks for the advice Cristof. I am only interested in people wanting 
> to code it in Julia, from R by Domingos. The algo has been successfully 
> applied in many areas, even though there are many other areas remaining. 
>
> On Wed, Aug 3, 2016 at 4:45 PM, Christof Stocker <
> stocker@gmail.com> wrote:
>
>> Hello Kevin,
>>
>> Enthusiasm is a good thing and you should hold on to that. But to 
>> save yourself some headache or disappointment down the road I advice a 
>> level head. Nothing is really as bluntly obviously solved as it may 
>> seems 
>> at first glance after listening to brilliant people explain things. A 
>> physics professor of mine once told me that one of the (he thinks) most 
>> malicious factors to his past students progress where overstated 
>> results/conclusions by other researches (such as premature announcements 
>> from CERN). I am 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-05 Thread Kevin Liu
Minsky died of a cerebral hemorrhage at the age of 88.[40] 
 Ray Kurzweil 
 says he was contacted by the 
cryonics organization Alcor Life Extension Foundation 
 seeking 
Minsky's body.[41] 
 Kurzweil 
believes that Minsky was cryonically preserved by Alcor and will be revived 
by 2045.[41] 
 Minsky 
was a member of Alcor's Scientific Advisory Board 
.[42] 
 In 
keeping with their policy of protecting privacy, Alcor will neither confirm 
nor deny that Alcor has cryonically preserved Minsky.[43] 
 

We better do a good job. 

On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote:
>
> *So, I think in the next 20 years (2003), if we can get rid of all of the 
> traditional approaches to artificial intelligence, like neural nets and 
> genetic algorithms and rule-based systems, and just turn our sights a 
> little bit higher to say, can we make a system that can use all those 
> things for the right kind of problem? Some problems are good for neural 
> nets; we know that others, neural nets are hopeless on them. Genetic 
> algorithms are great for certain things; I suspect I know what they're bad 
> at, and I won't tell you. (Laughter)*  - Minsky, founder of CSAIL MIT
>
> *Those programmers tried to find the single best way to represent 
> knowledge - Only Logic protects us from paradox.* - Minsky (see 
> attachment from his lecture)
>
> On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote:
>>
>> Markov Logic Network is being used for the continuous development of 
>> drugs to cure cancer at MIT's CanceRX , on 
>> DARPA's largest AI project to date, Personalized Assistant that Learns 
>> (PAL) , progenitor of Siri. One of Alchemy's 
>> largest applications to date was to learn a semantic network (knowledge 
>> graph as Google calls it) from the web. 
>>
>> Some on Probabilistic Inductive Logic Programming / Probabilistic Logic 
>> Programming / Statistical Relational Learning from CSAIL 
>>  
>> (my 
>> understanding is Alchemy does PILP from entailment, proofs, and 
>> interpretation)
>>
>> The MIT Probabilistic Computing Project (where there is Picture, an 
>> extension of Julia, for computer vision; Watch the video from Vikash) 
>> 
>>
>> Probabilistic programming could do for Bayesian ML what Theano has done 
>> for neural networks.  - 
>> Ferenc Huszár
>>
>> Picture doesn't appear to be open-source, even though its Paper is 
>> available. 
>>
>> I'm in the process of comparing the Picture Paper and Alchemy code and 
>> would like to have an open-source PILP from Julia that combines the best of 
>> both. 
>>
>> On Wednesday, August 3, 2016 at 5:01:02 PM UTC-3, Christof Stocker wrote:
>>>
>>> This sounds like it could be a great contribution. I shall keep a 
>>> curious eye on your progress
>>>
>>> Am Mittwoch, 3. August 2016 21:53:54 UTC+2 schrieb Kevin Liu:

 Thanks for the advice Cristof. I am only interested in people wanting 
 to code it in Julia, from R by Domingos. The algo has been successfully 
 applied in many areas, even though there are many other areas remaining. 

 On Wed, Aug 3, 2016 at 4:45 PM, Christof Stocker  wrote:

> Hello Kevin,
>
> Enthusiasm is a good thing and you should hold on to that. But to save 
> yourself some headache or disappointment down the road I advice a level 
> head. Nothing is really as bluntly obviously solved as it may seems at 
> first glance after listening to brilliant people explain things. A 
> physics 
> professor of mine once told me that one of the (he thinks) most malicious 
> factors to his past students progress where overstated 
> results/conclusions 
> by other researches (such as premature announcements from CERN). I am no 
> mathematician, but as far as I can judge is the no free lunch theorem of 
> pure mathematical nature and not something induced empirically. These 
> kind 
> of results are not that easily to get rid of. If someone (especially an 
> expert) states such a theorem will prove wrong I would be inclined to 
> believe that he is not talking about literally, but instead is just 
> trying 
> to make a point about a more or less practical implication.
>
>
> Am Mittwoch, 3. August 2016 21:27:05 UTC+2 schrieb 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-05 Thread Kevin Liu
Markov Logic Networks are being used for the continuous development of drugs 
to cure cancer at MIT's CanceRX, on DARPA's largest AI project to date, 
Personalized Assistant that Learns (PAL), progenitor of Siri. One of Alchemy's 
largest applications to date was to learn a semantic network (knowledge graph 
as Google calls it) from the web. 

Some material on Probabilistic Inductive Logic Programming / Probabilistic 
Logic Programming / Statistical Relational Learning from CSAIL (my 
understanding is Alchemy does PILP from entailment, proofs, and 
interpretation)

The MIT Probabilistic Computing Project (where there is Picture, an 
extension of Julia, for computer vision; watch the video from Vikash) 

Probabilistic programming could do for Bayesian ML what Theano has done for 
neural networks. - Ferenc Huszár

Picture doesn't appear to be open-source, even though its paper is 
available. 

I'm in the process of comparing the Picture Paper and Alchemy code and 
would like to have an open-source PILP from Julia that combines the best of 
both. 

On Wednesday, August 3, 2016 at 5:01:02 PM UTC-3, Christof Stocker wrote:
>
> This sounds like it could be a great contribution. I shall keep a curious 
> eye on your progress
>
> On Wednesday, August 3, 2016 at 21:53:54 UTC+2, Kevin Liu wrote:
>>
>> Thanks for the advice Cristof. I am only interested in people wanting to 
>> code it in Julia, from R by Domingos. The algo has been successfully 
>> applied in many areas, even though there are many other areas remaining. 
>>
>> On Wed, Aug 3, 2016 at 4:45 PM, Christof Stocker  
>> wrote:
>>
>>> Hello Kevin,
>>>
>>> Enthusiasm is a good thing and you should hold on to that. But to save 
>>> yourself some headache or disappointment down the road I advice a level 
>>> head. Nothing is really as bluntly obviously solved as it may seems at 
>>> first glance after listening to brilliant people explain things. A physics 
>>> professor of mine once told me that one of the (he thinks) most malicious 
>>> factors to his past students progress where overstated results/conclusions 
>>> by other researches (such as premature announcements from CERN). I am no 
>>> mathematician, but as far as I can judge is the no free lunch theorem of 
>>> pure mathematical nature and not something induced empirically. These kind 
>>> of results are not that easily to get rid of. If someone (especially an 
>>> expert) states such a theorem will prove wrong I would be inclined to 
>>> believe that he is not talking about literally, but instead is just trying 
>>> to make a point about a more or less practical implication.
>>>
>>>
>>> Am Mittwoch, 3. August 2016 21:27:05 UTC+2 schrieb Kevin Liu:

 The Markov logic network represents a probability distribution over the 
 states of a complex system (i.e. a cell), comprised of entities, where 
 logic formulas encode the dependencies between them. 

 On Wednesday, August 3, 2016 at 4:19:09 PM UTC-3, Kevin Liu wrote:
>
> Alchemy is like an inductive Turing machine, to be programmed to learn 
> broadly or restrictedly.
>
> The logic formulas from rules through which it represents can be 
> inconsistent, incomplete, or even incorrect-- the learning and 
> probabilistic reasoning will correct them. The key point is that Alchemy 
> doesn't have to learn from scratch, proving Wolpert and Macready's no 
> free 
> lunch theorem wrong by performing well on a variety of classes of 
> problems, 
> not just some.
>
> On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin Liu wrote:
>>
>> Hello Community, 
>>
>> I'm in the last pages of Pedro Domingos' book, the Master Algo, one 
>> of two recommended by Bill Gates to learn about AI. 
>>
>> From the book, I understand all learners have to represent, evaluate, 
>> and optimize. There are many types of learners that do this. What 
>> Domingos 
>> does is generalize these three parts, (1) using Markov Logic Network to 
>> represent, (2) posterior probability to evaluate, and (3) genetic search 
>> with gradient descent to optimize. The posterior can be replaced for 
>> another accuracy measure when it is easier, as genetic search replaced 
>> by 
>> hill climbing. Where there are 15 popular options for representing, 
>> evaluating, and optimizing, Domingos generalized them into three 
>> options. 
>> The idea is to have one unified learner for any application. 
>>
>> There is code already done in R https://alchemy.cs.washington.edu/. 
>> My question: anybody in the community vested in coding it into Julia?
>>
>> Thanks. Kevin
>>
>> On Friday, June 3, 2016 at 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-03 Thread Christof Stocker
This sounds like it could be a great contribution. I shall keep a curious 
eye on your progress

On Wednesday, August 3, 2016 at 21:53:54 UTC+2, Kevin Liu wrote:
>
> Thanks for the advice Cristof. I am only interested in people wanting to 
> code it in Julia, from R by Domingos. The algo has been successfully 
> applied in many areas, even though there are many other areas remaining. 
>
> On Wed, Aug 3, 2016 at 4:45 PM, Christof Stocker  > wrote:
>
>> Hello Kevin,
>>
>> Enthusiasm is a good thing and you should hold on to that. But to save 
>> yourself some headache or disappointment down the road I advice a level 
>> head. Nothing is really as bluntly obviously solved as it may seems at 
>> first glance after listening to brilliant people explain things. A physics 
>> professor of mine once told me that one of the (he thinks) most malicious 
>> factors to his past students progress where overstated results/conclusions 
>> by other researches (such as premature announcements from CERN). I am no 
>> mathematician, but as far as I can judge is the no free lunch theorem of 
>> pure mathematical nature and not something induced empirically. These kind 
>> of results are not that easily to get rid of. If someone (especially an 
>> expert) states such a theorem will prove wrong I would be inclined to 
>> believe that he is not talking about literally, but instead is just trying 
>> to make a point about a more or less practical implication.
>>
>>
>> On Wednesday, August 3, 2016 at 21:27:05 UTC+2, Kevin Liu wrote:
>>>
>>> The Markov logic network represents a probability distribution over the 
>>> states of a complex system (i.e. a cell), comprised of entities, where 
>>> logic formulas encode the dependencies between them. 
>>>
>>> On Wednesday, August 3, 2016 at 4:19:09 PM UTC-3, Kevin Liu wrote:

 Alchemy is like an inductive Turing machine, to be programmed to learn 
 broadly or restrictedly.

 The logic formulas from rules through which it represents can be 
 inconsistent, incomplete, or even incorrect-- the learning and 
 probabilistic reasoning will correct them. The key point is that Alchemy 
 doesn't have to learn from scratch, proving Wolpert and Macready's no free 
 lunch theorem wrong by performing well on a variety of classes of 
 problems, 
 not just some.

 On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin Liu wrote:
>
> Hello Community, 
>
> I'm in the last pages of Pedro Domingos' book, the Master Algo, one of 
> two recommended by Bill Gates to learn about AI. 
>
> From the book, I understand all learners have to represent, evaluate, 
> and optimize. There are many types of learners that do this. What 
> Domingos 
> does is generalize these three parts, (1) using Markov Logic Network to 
> represent, (2) posterior probability to evaluate, and (3) genetic search 
> with gradient descent to optimize. The posterior can be replaced for 
> another accuracy measure when it is easier, as genetic search replaced by 
> hill climbing. Where there are 15 popular options for representing, 
> evaluating, and optimizing, Domingos generalized them into three options. 
> The idea is to have one unified learner for any application. 
>
> There is code already done in R https://alchemy.cs.washington.edu/. 
> My question: anybody in the community vested in coding it into Julia?
>
> Thanks. Kevin
>
> On Friday, June 3, 2016 at 3:44:09 PM UTC-3, Kevin Liu wrote:
>>
>> https://github.com/tbreloff/OnlineAI.jl/issues/5
>>
>> On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:
>>>
>>> I plan to write Julia for the rest of my life... given it remains 
>>> suitable. I am still reading all of Colah's material on nets. I ran 
>>> Mocha.jl a couple weeks ago and was very happy to see it work. Thanks 
>>> for 
>>> jumping in and telling me about OnlineAI.jl, I will look into it once I 
>>> am 
>>> ready. From a quick look, perhaps I could help and learn by building a 
>>> very 
>>> clear documentation of it. Would really like to see Julia a leap ahead 
>>> of 
>>> other languages, and plan to contribute heavily to it, but at the 
>>> moment am 
>>> still getting introduced to CS, programming, and nets at the basic 
>>> level. 
>>>
>>> On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:

 Kevin: computers that program themselves is a concept which is much 
 closer to reality than most would believe, but julia-users isn't 
 really the 
 best place for this speculation. If you're actually interested in 
 writing 
 code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we 
 might tackle code generation using a neural framework I'm working on. 

 On Friday, June 3, 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-03 Thread Kevin Liu
Thanks for the advice, Christof. I am only interested in people wanting to
code it in Julia, from R by Domingos. The algo has been successfully
applied in many areas, even though there are many other areas remaining.

On Wed, Aug 3, 2016 at 4:45 PM, Christof Stocker  wrote:

> Hello Kevin,
>
> Enthusiasm is a good thing and you should hold on to that. But to save
> yourself some headache or disappointment down the road I advice a level
> head. Nothing is really as bluntly obviously solved as it may seems at
> first glance after listening to brilliant people explain things. A physics
> professor of mine once told me that one of the (he thinks) most malicious
> factors to his past students progress where overstated results/conclusions
> by other researches (such as premature announcements from CERN). I am no
> mathematician, but as far as I can judge is the no free lunch theorem of
> pure mathematical nature and not something induced empirically. These kind
> of results are not that easily to get rid of. If someone (especially an
> expert) states such a theorem will prove wrong I would be inclined to
> believe that he is not talking about literally, but instead is just trying
> to make a point about a more or less practical implication.
>
>
> On Wednesday, August 3, 2016 at 21:27:05 UTC+2, Kevin Liu wrote:
>>
>> The Markov logic network represents a probability distribution over the
>> states of a complex system (i.e. a cell), comprised of entities, where
>> logic formulas encode the dependencies between them.
>>
>> On Wednesday, August 3, 2016 at 4:19:09 PM UTC-3, Kevin Liu wrote:
>>>
>>> Alchemy is like an inductive Turing machine, to be programmed to learn
>>> broadly or restrictedly.
>>>
>>> The logic formulas from rules through which it represents can be
>>> inconsistent, incomplete, or even incorrect-- the learning and
>>> probabilistic reasoning will correct them. The key point is that Alchemy
>>> doesn't have to learn from scratch, proving Wolpert and Macready's no free
>>> lunch theorem wrong by performing well on a variety of classes of problems,
>>> not just some.
>>>
>>> On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin Liu wrote:

 Hello Community,

 I'm in the last pages of Pedro Domingos' book, the Master Algo, one of
 two recommended by Bill Gates to learn about AI.

 From the book, I understand all learners have to represent, evaluate,
 and optimize. There are many types of learners that do this. What Domingos
 does is generalize these three parts, (1) using Markov Logic Network to
 represent, (2) posterior probability to evaluate, and (3) genetic search
 with gradient descent to optimize. The posterior can be replaced for
 another accuracy measure when it is easier, as genetic search replaced by
 hill climbing. Where there are 15 popular options for representing,
 evaluating, and optimizing, Domingos generalized them into three options.
 The idea is to have one unified learner for any application.

 There is code already done in R https://alchemy.cs.washington.edu/. My
 question: anybody in the community vested in coding it into Julia?

 Thanks. Kevin

 On Friday, June 3, 2016 at 3:44:09 PM UTC-3, Kevin Liu wrote:
>
> https://github.com/tbreloff/OnlineAI.jl/issues/5
>
> On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:
>>
>> I plan to write Julia for the rest of my life... given it remains
>> suitable. I am still reading all of Colah's material on nets. I ran
>> Mocha.jl a couple weeks ago and was very happy to see it work. Thanks for
>> jumping in and telling me about OnlineAI.jl, I will look into it once I 
>> am
>> ready. From a quick look, perhaps I could help and learn by building a 
>> very
>> clear documentation of it. Would really like to see Julia a leap ahead of
>> other languages, and plan to contribute heavily to it, but at the moment 
>> am
>> still getting introduced to CS, programming, and nets at the basic level.
>>
>> On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
>>>
>>> Kevin: computers that program themselves is a concept which is much
>>> closer to reality than most would believe, but julia-users isn't really 
>>> the
>>> best place for this speculation. If you're actually interested in 
>>> writing
>>> code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we
>>> might tackle code generation using a neural framework I'm working on.
>>>
>>> On Friday, June 3, 2016, Kevin Liu  wrote:
>>>
 If Andrew Ng who cited Gates, and Gates who cited Domingos (who did
 not lecture at Google with a TensorFlow question in the end), were
 unsuccessful penny traders, Julia was a language for web design, and 
 the
 tribes in the video didn't actually solve 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-03 Thread Christof Stocker
Hello Kevin,

Enthusiasm is a good thing and you should hold on to that. But to save 
yourself some headache or disappointment down the road I advise a level 
head. Nothing is really as obviously solved as it may seem at first 
glance after listening to brilliant people explain things. A physics 
professor of mine once told me that one of the (he thinks) most malicious 
factors in his past students' progress was overstated results/conclusions 
by other researchers (such as premature announcements from CERN). I am no 
mathematician, but as far as I can judge the no free lunch theorem is of a 
pure mathematical nature and not something induced empirically. These 
kinds of results are not that easy to get rid of. If someone (especially 
an expert) states that such a theorem will prove wrong, I would be 
inclined to believe that he is not speaking literally, but is instead just 
trying to make a point about a more or less practical implication.
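(For reference, Wolpert and Macready's result can be stated as: for any two algorithms $a_1$ and $a_2$ and any number of evaluations $m$, $\sum_f P(d_m^y \mid f, m, a_1) = \sum_f P(d_m^y \mid f, m, a_2)$, i.e. summed over all objective functions $f$ the distribution of observed values is identical for every algorithm, so above-average performance on one class of problems is paid for on another.)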

On Wednesday, August 3, 2016 at 21:27:05 UTC+2, Kevin Liu wrote:
>
> The Markov logic network represents a probability distribution over the 
> states of a complex system (i.e. a cell), comprised of entities, where 
> logic formulas encode the dependencies between them. 
>
> On Wednesday, August 3, 2016 at 4:19:09 PM UTC-3, Kevin Liu wrote:
>>
>> Alchemy is like an inductive Turing machine, to be programmed to learn 
>> broadly or restrictedly.
>>
>> The logic formulas from rules through which it represents can be 
>> inconsistent, incomplete, or even incorrect-- the learning and 
>> probabilistic reasoning will correct them. The key point is that Alchemy 
>> doesn't have to learn from scratch, proving Wolpert and Macready's no free 
>> lunch theorem wrong by performing well on a variety of classes of problems, 
>> not just some.
>>
>> On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin Liu wrote:
>>>
>>> Hello Community, 
>>>
>>> I'm in the last pages of Pedro Domingos' book, the Master Algo, one of 
>>> two recommended by Bill Gates to learn about AI. 
>>>
>>> From the book, I understand all learners have to represent, evaluate, 
>>> and optimize. There are many types of learners that do this. What Domingos 
>>> does is generalize these three parts, (1) using Markov Logic Network to 
>>> represent, (2) posterior probability to evaluate, and (3) genetic search 
>>> with gradient descent to optimize. The posterior can be replaced for 
>>> another accuracy measure when it is easier, as genetic search replaced by 
>>> hill climbing. Where there are 15 popular options for representing, 
>>> evaluating, and optimizing, Domingos generalized them into three options. 
>>> The idea is to have one unified learner for any application. 
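A minimal Julia sketch of that three-part split (hypothetical names, not Domingos' code; each piece is meant to be swappable, e.g. the posterior for another accuracy measure, or genetic search for hill climbing):

# Toy sketch: a learner factored into representation, evaluation, and optimization.
struct Learner{R,E,O}
    represent::R   # parameters -> hypothesis (e.g. an MLN)
    evaluate::E    # (hypothesis, data) -> score (e.g. posterior probability)
    optimize::O    # (score function, starting point) -> best parameters
end

fit(l::Learner, data; init = 0.0) = l.optimize(θ -> l.evaluate(l.represent(θ), data), init)

# Trivial instantiation: hypothesis = θ, score = negative squared error, hill climbing.
function hillclimb(score, θ; step = 0.1, iters = 1000)
    for _ in 1:iters
        cand = θ + step * (rand() - 0.5)
        score(cand) > score(θ) && (θ = cand)   # keep the candidate only if it scores better
    end
    return θ
end

learner = Learner(θ -> θ, (h, data) -> -sum((h .- data) .^ 2), hillclimb)
fit(learner, [1.0, 1.2, 0.9])   # climbs toward the data mean (about 1.03)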
>>>
>>> There is code already done in R https://alchemy.cs.washington.edu/. My 
>>> question: anybody in the community vested in coding it into Julia?
>>>
>>> Thanks. Kevin
>>>
>>> On Friday, June 3, 2016 at 3:44:09 PM UTC-3, Kevin Liu wrote:

 https://github.com/tbreloff/OnlineAI.jl/issues/5

 On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:
>
> I plan to write Julia for the rest of my life... given it remains 
> suitable. I am still reading all of Colah's material on nets. I ran 
> Mocha.jl a couple weeks ago and was very happy to see it work. Thanks for 
> jumping in and telling me about OnlineAI.jl, I will look into it once I 
> am 
> ready. From a quick look, perhaps I could help and learn by building a 
> very 
> clear documentation of it. Would really like to see Julia a leap ahead of 
> other languages, and plan to contribute heavily to it, but at the moment 
> am 
> still getting introduced to CS, programming, and nets at the basic level. 
>
> On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
>>
>> Kevin: computers that program themselves is a concept which is much 
>> closer to reality than most would believe, but julia-users isn't really 
>> the 
>> best place for this speculation. If you're actually interested in 
>> writing 
>> code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we 
>> might tackle code generation using a neural framework I'm working on. 
>>
>> On Friday, June 3, 2016, Kevin Liu  wrote:
>>
>>> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did 
>>> not lecture at Google with a TensorFlow question in the end), were 
>>> unsuccessful penny traders, Julia was a language for web design, and 
>>> the 
>>> tribes in the video didn't actually solve problems, perhaps this would 
>>> be a 
>>> wildly off-topic, speculative discussion. But these statements couldn't 
>>> be 
>>> farther from the truth. In fact, if I had known about this video some 
>>> months ago I would've understood better on how to solve a problem I was 
>>> working on.  
>>>
>>> For the founders of Julia: 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-03 Thread Kevin Liu
Alchemy is also less expensive and less opaque than Watson's meta-learning: 
'I believe you have prostate cancer because the decision tree, the genetic 
algorithm, and Naïve Bayes say so, although the multilayer perceptron and 
the SVM disagree.'

On Wednesday, August 3, 2016 at 4:36:52 PM UTC-3, Kevin Liu wrote:
>
> Another important cool thing I think is worth noting: he added the 
> possibility of weights to rules (attachment). Each line is equivalent to a 
> desired conclusion. 
>
> On Wednesday, August 3, 2016 at 4:27:05 PM UTC-3, Kevin Liu wrote:
>>
>> The Markov logic network represents a probability distribution over the 
>> states of a complex system (i.e. a cell), comprised of entities, where 
>> logic formulas encode the dependencies between them. 
>>
>> On Wednesday, August 3, 2016 at 4:19:09 PM UTC-3, Kevin Liu wrote:
>>>
>>> Alchemy is like an inductive Turing machine, to be programmed to learn 
>>> broadly or restrictedly.
>>>
>>> The logic formulas from rules through which it represents can be 
>>> inconsistent, incomplete, or even incorrect-- the learning and 
>>> probabilistic reasoning will correct them. The key point is that Alchemy 
>>> doesn't have to learn from scratch, proving Wolpert and Macready's no free 
>>> lunch theorem wrong by performing well on a variety of classes of problems, 
>>> not just some.
>>>
>>> On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin Liu wrote:

 Hello Community, 

 I'm in the last pages of Pedro Domingos' book, the Master Algo, one of 
 two recommended by Bill Gates to learn about AI. 

 From the book, I understand all learners have to represent, evaluate, 
 and optimize. There are many types of learners that do this. What Domingos 
 does is generalize these three parts, (1) using Markov Logic Network to 
 represent, (2) posterior probability to evaluate, and (3) genetic search 
 with gradient descent to optimize. The posterior can be replaced for 
 another accuracy measure when it is easier, as genetic search replaced by 
 hill climbing. Where there are 15 popular options for representing, 
 evaluating, and optimizing, Domingos generalized them into three options. 
 The idea is to have one unified learner for any application. 

 There is code already done in R https://alchemy.cs.washington.edu/. My 
 question: anybody in the community vested in coding it into Julia?

 Thanks. Kevin

 On Friday, June 3, 2016 at 3:44:09 PM UTC-3, Kevin Liu wrote:
>
> https://github.com/tbreloff/OnlineAI.jl/issues/5
>
> On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:
>>
>> I plan to write Julia for the rest of my life... given it remains 
>> suitable. I am still reading all of Colah's material on nets. I ran 
>> Mocha.jl a couple weeks ago and was very happy to see it work. Thanks 
>> for 
>> jumping in and telling me about OnlineAI.jl, I will look into it once I 
>> am 
>> ready. From a quick look, perhaps I could help and learn by building a 
>> very 
>> clear documentation of it. Would really like to see Julia a leap ahead 
>> of 
>> other languages, and plan to contribute heavily to it, but at the moment 
>> am 
>> still getting introduced to CS, programming, and nets at the basic 
>> level. 
>>
>> On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
>>>
>>> Kevin: computers that program themselves is a concept which is much 
>>> closer to reality than most would believe, but julia-users isn't really 
>>> the 
>>> best place for this speculation. If you're actually interested in 
>>> writing 
>>> code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we 
>>> might tackle code generation using a neural framework I'm working on. 
>>>
>>> On Friday, June 3, 2016, Kevin Liu  wrote:
>>>
 If Andrew Ng who cited Gates, and Gates who cited Domingos (who did 
 not lecture at Google with a TensorFlow question in the end), were 
 unsuccessful penny traders, Julia was a language for web design, and 
 the 
 tribes in the video didn't actually solve problems, perhaps this would 
 be a 
 wildly off-topic, speculative discussion. But these statements 
 couldn't be 
 farther from the truth. In fact, if I had known about this video some 
 months ago I would've understood better on how to solve a problem I 
 was 
 working on.  

 For the founders of Julia: I understand your tribe is mainly CS. 
 This master algorithm, as you are aware, would require collaboration 
 with 
 other tribes. Just citing the obvious. 

 On Friday, June 3, 2016 at 10:21:25 AM UTC-3, 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-03 Thread Kevin Liu
Another important point worth noting: he added the possibility of attaching 
weights to rules (attachment). Each line is equivalent to a desired 
conclusion. 
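The attachment isn't reproduced here, but as a rough illustration in Julia 
(the rule strings and weights below are made up for the example and have 
nothing to do with Alchemy's actual syntax), weighted rules might look like:

struct WeightedRule
    weight::Float64      # strength of the constraint; an infinite weight would be a hard rule
    formula::String      # the first-order formula, kept as an opaque string here
end

rules = [
    WeightedRule(1.5, "Smokes(x) => Cancer(x)"),
    WeightedRule(1.1, "Friends(x, y) & Smokes(x) => Smokes(y)"),
]

for r in rules
    println(r.weight, "\t", r.formula)
end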

On Wednesday, August 3, 2016 at 4:27:05 PM UTC-3, Kevin Liu wrote:
>
> The Markov logic network represents a probability distribution over the 
> states of a complex system (i.e. a cell), comprised of entities, where 
> logic formulas encode the dependencies between them. 
>
> On Wednesday, August 3, 2016 at 4:19:09 PM UTC-3, Kevin Liu wrote:
>>
>> Alchemy is like an inductive Turing machine, to be programmed to learn 
>> broadly or restrictedly.
>>
>> The logic formulas from rules through which it represents can be 
>> inconsistent, incomplete, or even incorrect-- the learning and 
>> probabilistic reasoning will correct them. The key point is that Alchemy 
>> doesn't have to learn from scratch, proving Wolpert and Macready's no free 
>> lunch theorem wrong by performing well on a variety of classes of problems, 
>> not just some.
>>
>> On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin Liu wrote:
>>>
>>> Hello Community, 
>>>
>>> I'm in the last pages of Pedro Domingos' book, the Master Algo, one of 
>>> two recommended by Bill Gates to learn about AI. 
>>>
>>> From the book, I understand all learners have to represent, evaluate, 
>>> and optimize. There are many types of learners that do this. What Domingos 
>>> does is generalize these three parts, (1) using Markov Logic Network to 
>>> represent, (2) posterior probability to evaluate, and (3) genetic search 
>>> with gradient descent to optimize. The posterior can be replaced for 
>>> another accuracy measure when it is easier, as genetic search replaced by 
>>> hill climbing. Where there are 15 popular options for representing, 
>>> evaluating, and optimizing, Domingos generalized them into three options. 
>>> The idea is to have one unified learner for any application. 
>>>
>>> There is code already done in R https://alchemy.cs.washington.edu/. My 
>>> question: anybody in the community vested in coding it into Julia?
>>>
>>> Thanks. Kevin
>>>
>>> On Friday, June 3, 2016 at 3:44:09 PM UTC-3, Kevin Liu wrote:

 https://github.com/tbreloff/OnlineAI.jl/issues/5

 On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:
>
> I plan to write Julia for the rest of me life... given it remains 
> suitable. I am still reading all of Colah's material on nets. I ran 
> Mocha.jl a couple weeks ago and was very happy to see it work. Thanks for 
> jumping in and telling me about OnlineAI.jl, I will look into it once I 
> am 
> ready. From a quick look, perhaps I could help and learn by building a 
> very 
> clear documentation of it. Would really like to see Julia a leap ahead of 
> other languages, and plan to contribute heavily to it, but at the moment 
> am 
> still getting introduced to CS, programming, and nets at the basic level. 
>
> On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
>>
>> Kevin: computers that program themselves is a concept which is much 
>> closer to reality than most would believe, but julia-users isn't really 
>> the 
>> best place for this speculation. If you're actually interested in 
>> writing 
>> code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we 
>> might tackle code generation using a neural framework I'm working on. 
>>
>> On Friday, June 3, 2016, Kevin Liu  wrote:
>>
>>> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did 
>>> not lecture at Google with a TensorFlow question in the end), were 
>>> unsuccessful penny traders, Julia was a language for web design, and 
>>> the 
>>> tribes in the video didn't actually solve problems, perhaps this would 
>>> be a 
>>> wildly off-topic, speculative discussion. But these statements couldn't 
>>> be 
>>> farther from the truth. In fact, if I had known about this video some 
>>> months ago I would've understood better on how to solve a problem I was 
>>> working on.  
>>>
>>> For the founders of Julia: I understand your tribe is mainly CS. 
>>> This master algorithm, as you are aware, would require collaboration 
>>> with 
>>> other tribes. Just citing the obvious. 
>>>
>>> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:

 There could be parts missing as Domingos mentions, but induction, 
 backpropagation, genetic programming, probabilistic inference, and 
 SVMs 
 working together-- what's speculative about the improved versions of 
 these? 

 Julia was made for AI. Isn't it time for a consolidated view on how 
 to reach it? 

 On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>
> This is not a forum for wildly off-topic, 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-03 Thread Kevin Liu
The Markov logic network represents a probability distribution over the 
states of a complex system (e.g. a cell) composed of entities, where logic 
formulas encode the dependencies between them. 
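As a toy sketch of that distribution in Julia (this is the generic log-linear 
form, not Alchemy itself, and the domain, formula, and weight are invented 
for illustration): the probability of a world x is proportional to 
exp(sum over formulas of w_i * n_i(x)), where n_i(x) counts the true 
groundings of formula i in that world.

w = [1.5]                                    # weight of the single formula "Smokes(x) => Cancer(x)"
n(smokes, cancer) = [(smokes && !cancer) ? 0 : 1]   # true-grounding counts, one entry per formula

worlds = [(s, c) for s in (false, true) for c in (false, true)]
scores = [exp(sum(w .* n(s, c))) for (s, c) in worlds]
Z = sum(scores)                              # partition function over all four worlds
probs = scores ./ Z

for ((s, c), p) in zip(worlds, probs)
    println("Smokes=$s Cancer=$c  P=$p")
end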

On Wednesday, August 3, 2016 at 4:19:09 PM UTC-3, Kevin Liu wrote:
>
> Alchemy is like an inductive Turing machine, to be programmed to learn 
> broadly or restrictedly.
>
> The logic formulas from rules through which it represents can be 
> inconsistent, incomplete, or even incorrect-- the learning and 
> probabilistic reasoning will correct them. The key point is that Alchemy 
> doesn't have to learn from scratch, proving Wolpert and Macready's no free 
> lunch theorem wrong by performing well on a variety of classes of problems, 
> not just some.
>
> On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin Liu wrote:
>>
>> Hello Community, 
>>
>> I'm in the last pages of Pedro Domingos' book, the Master Algo, one of 
>> two recommended by Bill Gates to learn about AI. 
>>
>> From the book, I understand all learners have to represent, evaluate, and 
>> optimize. There are many types of learners that do this. What Domingos does 
>> is generalize these three parts, (1) using Markov Logic Network to 
>> represent, (2) posterior probability to evaluate, and (3) genetic search 
>> with gradient descent to optimize. The posterior can be replaced for 
>> another accuracy measure when it is easier, as genetic search replaced by 
>> hill climbing. Where there are 15 popular options for representing, 
>> evaluating, and optimizing, Domingos generalized them into three options. 
>> The idea is to have one unified learner for any application. 
>>
>> There is code already done in R https://alchemy.cs.washington.edu/. My 
>> question: anybody in the community vested in coding it into Julia?
>>
>> Thanks. Kevin
>>
>> On Friday, June 3, 2016 at 3:44:09 PM UTC-3, Kevin Liu wrote:
>>>
>>> https://github.com/tbreloff/OnlineAI.jl/issues/5
>>>
>>> On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:

 I plan to write Julia for the rest of me life... given it remains 
 suitable. I am still reading all of Colah's material on nets. I ran 
 Mocha.jl a couple weeks ago and was very happy to see it work. Thanks for 
 jumping in and telling me about OnlineAI.jl, I will look into it once I am 
 ready. From a quick look, perhaps I could help and learn by building a 
 very 
 clear documentation of it. Would really like to see Julia a leap ahead of 
 other languages, and plan to contribute heavily to it, but at the moment 
 am 
 still getting introduced to CS, programming, and nets at the basic level. 

 On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
>
> Kevin: computers that program themselves is a concept which is much 
> closer to reality than most would believe, but julia-users isn't really 
> the 
> best place for this speculation. If you're actually interested in writing 
> code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we 
> might tackle code generation using a neural framework I'm working on. 
>
> On Friday, June 3, 2016, Kevin Liu  wrote:
>
>> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did 
>> not lecture at Google with a TensorFlow question in the end), were 
>> unsuccessful penny traders, Julia was a language for web design, and the 
>> tribes in the video didn't actually solve problems, perhaps this would 
>> be a 
>> wildly off-topic, speculative discussion. But these statements couldn't 
>> be 
>> farther from the truth. In fact, if I had known about this video some 
>> months ago I would've understood better on how to solve a problem I was 
>> working on.  
>>
>> For the founders of Julia: I understand your tribe is mainly CS. This 
>> master algorithm, as you are aware, would require collaboration with 
>> other 
>> tribes. Just citing the obvious. 
>>
>> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>>>
>>> There could be parts missing as Domingos mentions, but induction, 
>>> backpropagation, genetic programming, probabilistic inference, and SVMs 
>>> working together-- what's speculative about the improved versions of 
>>> these? 
>>>
>>> Julia was made for AI. Isn't it time for a consolidated view on how 
>>> to reach it? 
>>>
>>> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:

 This is not a forum for wildly off-topic, speculative discussion.

 Take this to Reddit, Hacker News, etc.


 On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  
 wrote:

> I am wondering how Julia fits in with the unified tribes
>
> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ
>
> 

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-03 Thread Kevin Liu
Alchemy is like an inductive Turing machine that can be programmed to learn 
broadly or narrowly.

The logic formulas it uses as its representation can be inconsistent, 
incomplete, or even incorrect; the learning and probabilistic reasoning 
will correct them. The key point is that Alchemy doesn't have to learn from 
scratch, countering Wolpert and Macready's no-free-lunch theorem by 
performing well on a wide variety of problem classes, not just some.
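One hedged sketch of how that correction can happen (this is the generic 
log-linear weight-learning gradient, not Alchemy's actual learner, and the 
toy rule and data are invented): the gradient of the log-likelihood with 
respect to a formula's weight is the observed count of its true groundings 
minus the expected count under the model, so a rule the data contradicts has 
its weight pushed down.

nsat(s, c) = (s && !c) ? 0.0 : 1.0           # 1 when "Smokes => Cancer" holds in a world

const WORLDS = [(s, c) for s in (false, true) for c in (false, true)]

function learn_weight(data; w = 2.0, eta = 0.5, steps = 200)
    for _ in 1:steps
        scores   = [exp(w * nsat(s, c)) for (s, c) in WORLDS]
        Z        = sum(scores)
        expected = sum(nsat(s, c) * exp(w * nsat(s, c)) for (s, c) in WORLDS) / Z
        observed = sum(nsat(s, c) for (s, c) in data) / length(data)
        w += eta * (observed - expected)     # gradient ascent on the average log-likelihood
    end
    return w
end

# Data that contradicts the rule half the time: the weight falls from the
# overconfident starting value of 2.0 toward (in fact below) zero.
data = [(true, true), (true, false), (true, false), (false, false)]
println("learned weight = ", learn_weight(data))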

On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin Liu wrote:
>
> Hello Community, 
>
> I'm in the last pages of Pedro Domingos' book, the Master Algo, one of two 
> recommended by Bill Gates to learn about AI. 
>
> From the book, I understand all learners have to represent, evaluate, and 
> optimize. There are many types of learners that do this. What Domingos does 
> is generalize these three parts, (1) using Markov Logic Network to 
> represent, (2) posterior probability to evaluate, and (3) genetic search 
> with gradient descent to optimize. The posterior can be replaced for 
> another accuracy measure when it is easier, as genetic search replaced by 
> hill climbing. Where there are 15 popular options for representing, 
> evaluating, and optimizing, Domingos generalized them into three options. 
> The idea is to have one unified learner for any application. 
>
> There is code already done in R https://alchemy.cs.washington.edu/. My 
> question: anybody in the community vested in coding it into Julia?
>
> Thanks. Kevin
>
> On Friday, June 3, 2016 at 3:44:09 PM UTC-3, Kevin Liu wrote:
>>
>> https://github.com/tbreloff/OnlineAI.jl/issues/5
>>
>> On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:
>>>
>>> I plan to write Julia for the rest of me life... given it remains 
>>> suitable. I am still reading all of Colah's material on nets. I ran 
>>> Mocha.jl a couple weeks ago and was very happy to see it work. Thanks for 
>>> jumping in and telling me about OnlineAI.jl, I will look into it once I am 
>>> ready. From a quick look, perhaps I could help and learn by building a very 
>>> clear documentation of it. Would really like to see Julia a leap ahead of 
>>> other languages, and plan to contribute heavily to it, but at the moment am 
>>> still getting introduced to CS, programming, and nets at the basic level. 
>>>
>>> On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:

 Kevin: computers that program themselves is a concept which is much 
 closer to reality than most would believe, but julia-users isn't really 
 the 
 best place for this speculation. If you're actually interested in writing 
 code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we 
 might tackle code generation using a neural framework I'm working on. 

 On Friday, June 3, 2016, Kevin Liu  wrote:

> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did 
> not lecture at Google with a TensorFlow question in the end), were 
> unsuccessful penny traders, Julia was a language for web design, and the 
> tribes in the video didn't actually solve problems, perhaps this would be 
> a 
> wildly off-topic, speculative discussion. But these statements couldn't 
> be 
> farther from the truth. In fact, if I had known about this video some 
> months ago I would've understood better on how to solve a problem I was 
> working on.  
>
> For the founders of Julia: I understand your tribe is mainly CS. This 
> master algorithm, as you are aware, would require collaboration with 
> other 
> tribes. Just citing the obvious. 
>
> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>>
>> There could be parts missing as Domingos mentions, but induction, 
>> backpropagation, genetic programming, probabilistic inference, and SVMs 
>> working together-- what's speculative about the improved versions of 
>> these? 
>>
>> Julia was made for AI. Isn't it time for a consolidated view on how 
>> to reach it? 
>>
>> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>>>
>>> This is not a forum for wildly off-topic, speculative discussion.
>>>
>>> Take this to Reddit, Hacker News, etc.
>>>
>>>
>>> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:
>>>
 I am wondering how Julia fits in with the unified tribes

 mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ

 https://www.youtube.com/watch?v=B8J4uefCQMc

>>>
>>>

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-03 Thread Kevin Liu
Hello Community, 

I'm in the last pages of Pedro Domingos' book, The Master Algorithm, one of 
two recommended by Bill Gates for learning about AI. 

From the book, I understand all learners have to represent, evaluate, and 
optimize. There are many types of learners that do this. What Domingos does 
is generalize these three parts: (1) a Markov logic network to represent, 
(2) the posterior probability to evaluate, and (3) genetic search with 
gradient descent to optimize. The posterior can be replaced by another 
accuracy measure when that is easier, just as genetic search can be 
replaced by hill climbing. Where there are some 15 popular options for 
representing, evaluating, and optimizing, Domingos generalizes them into 
these three. The idea is to have one unified learner for any application. 
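For what the outer loop of that three-part split might look like in Julia, 
here is a skeletal sketch with every name invented for illustration: 
Candidate stands in for the representation, score for the evaluation step 
(a posterior or any accuracy measure), and plain hill climbing stands in 
for the optimizer.

# Candidate "structures" are just weighted formula strings here; a real MLN
# representation would ground and weight first-order formulas.
struct Candidate
    formulas::Vector{String}
    weights::Vector{Float64}
end

# Hypothetical evaluation: higher is better (e.g. a posterior or held-out accuracy).
score(c::Candidate, data) = -sum(abs, c.weights .- 1.0)   # toy stand-in objective

# Hypothetical neighborhood move: nudge one weight.
function neighbor(c::Candidate)
    w = copy(c.weights)
    i = rand(1:length(w))
    w[i] += 0.1 * randn()
    return Candidate(c.formulas, w)
end

function hill_climb(c::Candidate, data; steps = 1_000)
    best, best_score = c, score(c, data)
    for _ in 1:steps
        cand = neighbor(best)
        s = score(cand, data)
        if s > best_score
            best, best_score = cand, s
        end
    end
    return best
end

start = Candidate(["Smokes(x) => Cancer(x)"], [0.0])
println(hill_climb(start, nothing).weights)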

There is existing code at https://alchemy.cs.washington.edu/. My question: 
is anybody in the community interested in porting it to Julia?

Thanks. Kevin

On Friday, June 3, 2016 at 3:44:09 PM UTC-3, Kevin Liu wrote:
>
> https://github.com/tbreloff/OnlineAI.jl/issues/5
>
> On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:
>>
>> I plan to write Julia for the rest of me life... given it remains 
>> suitable. I am still reading all of Colah's material on nets. I ran 
>> Mocha.jl a couple weeks ago and was very happy to see it work. Thanks for 
>> jumping in and telling me about OnlineAI.jl, I will look into it once I am 
>> ready. From a quick look, perhaps I could help and learn by building a very 
>> clear documentation of it. Would really like to see Julia a leap ahead of 
>> other languages, and plan to contribute heavily to it, but at the moment am 
>> still getting introduced to CS, programming, and nets at the basic level. 
>>
>> On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
>>>
>>> Kevin: computers that program themselves is a concept which is much 
>>> closer to reality than most would believe, but julia-users isn't really the 
>>> best place for this speculation. If you're actually interested in writing 
>>> code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we 
>>> might tackle code generation using a neural framework I'm working on. 
>>>
>>> On Friday, June 3, 2016, Kevin Liu  wrote:
>>>
 If Andrew Ng who cited Gates, and Gates who cited Domingos (who did not 
 lecture at Google with a TensorFlow question in the end), were 
 unsuccessful 
 penny traders, Julia was a language for web design, and the tribes in the 
 video didn't actually solve problems, perhaps this would be a wildly 
 off-topic, speculative discussion. But these statements couldn't be 
 farther 
 from the truth. In fact, if I had known about this video some months ago I 
 would've understood better on how to solve a problem I was working on.  

 For the founders of Julia: I understand your tribe is mainly CS. This 
 master algorithm, as you are aware, would require collaboration with other 
 tribes. Just citing the obvious. 

 On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>
> There could be parts missing as Domingos mentions, but induction, 
> backpropagation, genetic programming, probabilistic inference, and SVMs 
> working together-- what's speculative about the improved versions of 
> these? 
>
> Julia was made for AI. Isn't it time for a consolidated view on how to 
> reach it? 
>
> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>>
>> This is not a forum for wildly off-topic, speculative discussion.
>>
>> Take this to Reddit, Hacker News, etc.
>>
>>
>> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:
>>
>>> I am wondering how Julia fits in with the unified tribes
>>>
>>> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ
>>>
>>> https://www.youtube.com/watch?v=B8J4uefCQMc
>>>
>>
>>

Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Kevin Liu
https://github.com/tbreloff/OnlineAI.jl/issues/5

On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:
>
> I plan to write Julia for the rest of me life... given it remains 
> suitable. I am still reading all of Colah's material on nets. I ran 
> Mocha.jl a couple weeks ago and was very happy to see it work. Thanks for 
> jumping in and telling me about OnlineAI.jl, I will look into it once I am 
> ready. From a quick look, perhaps I could help and learn by building a very 
> clear documentation of it. Would really like to see Julia a leap ahead of 
> other languages, and plan to contribute heavily to it, but at the moment am 
> still getting introduced to CS, programming, and nets at the basic level. 
>
> On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
>>
>> Kevin: computers that program themselves is a concept which is much 
>> closer to reality than most would believe, but julia-users isn't really the 
>> best place for this speculation. If you're actually interested in writing 
>> code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we 
>> might tackle code generation using a neural framework I'm working on. 
>>
>> On Friday, June 3, 2016, Kevin Liu  wrote:
>>
>>> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did not 
>>> lecture at Google with a TensorFlow question in the end), were unsuccessful 
>>> penny traders, Julia was a language for web design, and the tribes in the 
>>> video didn't actually solve problems, perhaps this would be a wildly 
>>> off-topic, speculative discussion. But these statements couldn't be farther 
>>> from the truth. In fact, if I had known about this video some months ago I 
>>> would've understood better on how to solve a problem I was working on.  
>>>
>>> For the founders of Julia: I understand your tribe is mainly CS. This 
>>> master algorithm, as you are aware, would require collaboration with other 
>>> tribes. Just citing the obvious. 
>>>
>>> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:

 There could be parts missing as Domingos mentions, but induction, 
 backpropagation, genetic programming, probabilistic inference, and SVMs 
 working together-- what's speculative about the improved versions of 
 these? 

 Julia was made for AI. Isn't it time for a consolidated view on how to 
 reach it? 

 On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>
> This is not a forum for wildly off-topic, speculative discussion.
>
> Take this to Reddit, Hacker News, etc.
>
>
> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:
>
>> I am wondering how Julia fits in with the unified tribes
>>
>> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ
>>
>> https://www.youtube.com/watch?v=B8J4uefCQMc
>>
>
>

Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Kevin Liu
I plan to write Julia for the rest of my life... given it remains suitable. 
I am still reading all of Colah's material on nets. I ran Mocha.jl a couple 
of weeks ago and was very happy to see it work. Thanks for jumping in and 
telling me about OnlineAI.jl; I will look into it once I am ready. From a 
quick look, perhaps I could help and learn by writing very clear 
documentation for it. I would really like to see Julia take a leap ahead of 
other languages, and I plan to contribute heavily to it, but at the moment 
I am still getting introduced to CS, programming, and nets at a basic level. 

On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
>
> Kevin: computers that program themselves is a concept which is much closer 
> to reality than most would believe, but julia-users isn't really the best 
> place for this speculation. If you're actually interested in writing code, 
> I'm happy to discuss in OnlineAI.jl. I was thinking about how we might 
> tackle code generation using a neural framework I'm working on. 
>
> On Friday, June 3, 2016, Kevin Liu  wrote:
>
>> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did not 
>> lecture at Google with a TensorFlow question in the end), were unsuccessful 
>> penny traders, Julia was a language for web design, and the tribes in the 
>> video didn't actually solve problems, perhaps this would be a wildly 
>> off-topic, speculative discussion. But these statements couldn't be farther 
>> from the truth. In fact, if I had known about this video some months ago I 
>> would've understood better on how to solve a problem I was working on.  
>>
>> For the founders of Julia: I understand your tribe is mainly CS. This 
>> master algorithm, as you are aware, would require collaboration with other 
>> tribes. Just citing the obvious. 
>>
>> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>>>
>>> There could be parts missing as Domingos mentions, but induction, 
>>> backpropagation, genetic programming, probabilistic inference, and SVMs 
>>> working together-- what's speculative about the improved versions of these? 
>>>
>>> Julia was made for AI. Isn't it time for a consolidated view on how to 
>>> reach it? 
>>>
>>> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:

 This is not a forum for wildly off-topic, speculative discussion.

 Take this to Reddit, Hacker News, etc.


 On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:

> I am wondering how Julia fits in with the unified tribes
>
> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ
>
> https://www.youtube.com/watch?v=B8J4uefCQMc
>



Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Kevin Liu
I plan to write Julia for the rest of my life... given it remains suitable. 
I am still reading all of Colah's material on nets. I ran Mocha.jl a couple 
of weeks ago and was very happy to see it work. Thanks for jumping in and 
telling me about OnlineAI.jl; I will look into it once I am ready. From a 
quick look, perhaps I could help and learn by writing very clear 
documentation for it. I would really like to see Julia take a leap ahead of 
other languages, and I plan to contribute heavily to it, but at the moment 
I am still getting introduced to CS, programming, and nets at a basic level. 

On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
>
> Kevin: computers that program themselves is a concept which is much closer 
> to reality than most would believe, but julia-users isn't really the best 
> place for this speculation. If you're actually interested in writing code, 
> I'm happy to discuss in OnlineAI.jl. I was thinking about how we might 
> tackle code generation using a neural framework I'm working on. 
>
> On Friday, June 3, 2016, Kevin Liu  wrote:
>
>> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did not 
>> lecture at Google with a TensorFlow question in the end), were unsuccessful 
>> penny traders, Julia was a language for web design, and the tribes in the 
>> video didn't actually solve problems, perhaps this would be a wildly 
>> off-topic, speculative discussion. But these statements couldn't be farther 
>> from the truth. In fact, if I had known about this video some months ago I 
>> would've understood better on how to solve a problem I was working on.  
>>
>> For the founders of Julia: I understand your tribe is mainly CS. This 
>> master algorithm, as you are aware, would require collaboration with other 
>> tribes. Just citing the obvious. 
>>
>> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>>>
>>> There could be parts missing as Domingos mentions, but induction, 
>>> backpropagation, genetic programming, probabilistic inference, and SVMs 
>>> working together-- what's speculative about the improved versions of these? 
>>>
>>> Julia was made for AI. Isn't it time for a consolidated view on how to 
>>> reach it? 
>>>
>>> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:

 This is not a forum for wildly off-topic, speculative discussion.

 Take this to Reddit, Hacker News, etc.


 On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:

> I am wondering how Julia fits in with the unified tribes
>
> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ
>
> https://www.youtube.com/watch?v=B8J4uefCQMc
>



Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Tom Breloff
Kevin: computers that program themselves is a concept which is much closer
to reality than most would believe, but julia-users isn't really the best
place for this speculation. If you're actually interested in writing code,
I'm happy to discuss in OnlineAI.jl. I was thinking about how we might
tackle code generation using a neural framework I'm working on.

On Friday, June 3, 2016, Kevin Liu  wrote:

> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did not
> lecture at Google with a TensorFlow question in the end), were unsuccessful
> penny traders, Julia was a language for web design, and the tribes in the
> video didn't actually solve problems, perhaps this would be a wildly
> off-topic, speculative discussion. But these statements couldn't be farther
> from the truth. In fact, if I had known about this video some months ago I
> would've understood better on how to solve a problem I was working on.
>
> For the founders of Julia: I understand your tribe is mainly CS. This
> master algorithm, as you are aware, would require collaboration with other
> tribes. Just citing the obvious.
>
> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>>
>> There could be parts missing as Domingos mentions, but induction,
>> backpropagation, genetic programming, probabilistic inference, and SVMs
>> working together-- what's speculative about the improved versions of these?
>>
>> Julia was made for AI. Isn't it time for a consolidated view on how to
>> reach it?
>>
>> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>>>
>>> This is not a forum for wildly off-topic, speculative discussion.
>>>
>>> Take this to Reddit, Hacker News, etc.
>>>
>>>
>>> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:
>>>
 I am wondering how Julia fits in with the unified tribes

 mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ

 https://www.youtube.com/watch?v=B8J4uefCQMc

>>>
>>>


Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Kevin Liu
Correction, for the founders: your tribe is mainly Symbolists?

On Friday, June 3, 2016 at 10:36:01 AM UTC-3, Kevin Liu wrote:
>
> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did not 
> lecture at Google with a TensorFlow question in the end), were unsuccessful 
> penny traders, Julia was a language for web design, and the tribes in the 
> video didn't actually solve problems, perhaps this would be a wildly 
> off-topic, speculative discussion. But these statements couldn't be farther 
> from the truth. In fact, if I had known about this video some months ago I 
> would've understood better on how to solve a problem I was working on.  
>
> For the founders of Julia: I understand your tribe is mainly CS. This 
> master algorithm, as you are aware, would require collaboration with other 
> tribes. Just citing the obvious. 
>
> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>>
>> There could be parts missing as Domingos mentions, but induction, 
>> backpropagation, genetic programming, probabilistic inference, and SVMs 
>> working together-- what's speculative about the improved versions of these? 
>>
>> Julia was made for AI. Isn't it time for a consolidated view on how to 
>> reach it? 
>>
>> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>>>
>>> This is not a forum for wildly off-topic, speculative discussion.
>>>
>>> Take this to Reddit, Hacker News, etc.
>>>
>>>
>>> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:
>>>
 I am wondering how Julia fits in with the unified tribes

 mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ

 https://www.youtube.com/watch?v=B8J4uefCQMc

>>>
>>>

Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Kevin Liu
If Andrew Ng (who cited Gates) and Gates (who cited Domingos) were 
unsuccessful penny traders, if Domingos had not lectured at Google (with a 
TensorFlow question at the end), if Julia were a language for web design, 
and if the tribes in the video didn't actually solve problems, then perhaps 
this would be a wildly off-topic, speculative discussion. But these 
statements couldn't be farther from the truth. In fact, if I had known 
about this video some months ago, I would have better understood how to 
solve a problem I was working on.

For the founders of Julia: I understand your tribe is mainly CS. This 
master algorithm, as you are aware, would require collaboration with the 
other tribes. Just stating the obvious. 

On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>
> There could be parts missing as Domingos mentions, but induction, 
> backpropagation, genetic programming, probabilistic inference, and SVMs 
> working together-- what's speculative about the improved versions of these? 
>
> Julia was made for AI. Isn't it time for a consolidated view on how to 
> reach it? 
>
> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>>
>> This is not a forum for wildly off-topic, speculative discussion.
>>
>> Take this to Reddit, Hacker News, etc.
>>
>>
>> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:
>>
>>> I am wondering how Julia fits in with the unified tribes
>>>
>>> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ
>>>
>>> https://www.youtube.com/watch?v=B8J4uefCQMc
>>>
>>
>>

Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Kevin Liu
There could be parts missing, as Domingos mentions, but with induction, 
backpropagation, genetic programming, probabilistic inference, and SVMs 
working together, what's speculative about the improved versions of these? 

Julia was made for AI. Isn't it time for a consolidated view on how to 
reach it? 

On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>
> This is not a forum for wildly off-topic, speculative discussion.
>
> Take this to Reddit, Hacker News, etc.
>
>
> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  > wrote:
>
>> I am wondering how Julia fits in with the unified tribes
>>
>> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ
>>
>> https://www.youtube.com/watch?v=B8J4uefCQMc
>>
>
>

Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-02 Thread Isaiah Norton
This is not a forum for wildly off-topic, speculative discussion.

Take this to Reddit, Hacker News, etc.


On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:

> I am wondering how Julia fits in with the unified tribes
>
> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ
>
> https://www.youtube.com/watch?v=B8J4uefCQMc
>


[julia-users] Is the master algorithm on the roadmap?

2016-06-02 Thread Kevin Liu
I am wondering how Julia fits in with the unified tribes

mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ

https://www.youtube.com/watch?v=B8J4uefCQMc