Ivan,


It's actually quite impressive that you managed to come up with some new 
stuff working on your own!

I found a research paper on "symbolic AI" as opposed to "neural networks" by 
someone at the University of Missouri here in town, which is pretty cool. 
Maybe I will get to meet with them. 
I am interested in learning what the top-down approach has to offer, but I 
have a gut feeling that bottom-up is what will interest me most. 

You keep giving me great avenues to explore, which is going to save me a 
large amount of time and is very much appreciated. Excited to read up on 
OpenAI GPT-2. 
Yeah, Sophia is great. She is what really got me thinking we were making 
good progress, and I was stoked to find out she was based on OpenCog. 

The conversational aspect is very important to me, but I have never really 
considered chatbots to be actual AI. Perhaps I am mistaken, and perhaps 
it's a situation of differing platforms that will be mashed together with 
others later, lol.  I just talked to Mitsuku. It will be interesting to 
learn about how she is programmed. She is crazy fast. However, it would be 
amazing to watch an AI learn language on its own through interaction, which is 
probably Hollywood talking, but I have a lot of reading to do still just to 
be able to sort science fiction from reality. 
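
Out of curiosity about how pattern-based bots like Mitsuku might work under 
the hood, I sketched a toy pattern-matching responder in Python. The rules 
and replies below are entirely made up for illustration, and real AIML is 
far richer than this, but it shows the basic idea of ordered pattern/reply 
rules where the first match wins:

```python
import re

# Toy AIML-style chatbot: ordered (pattern, reply) rules, first match wins.
# Patterns and replies here are made up for illustration only.
RULES = [
    (r"\bhello\b|\bhi\b", "Hello! What would you like to talk about?"),
    (r"\bmy name is (\w+)", r"Nice to meet you, \1."),
    (r"\b(ai|artificial intelligence)\b", "AI is a big topic. Top-down or bottom-up?"),
]

def respond(message):
    """Return the reply of the first rule whose pattern matches the message."""
    for pattern, reply in RULES:
        match = re.search(pattern, message.lower())
        if match:
            # expand() substitutes captured groups (like \1) into the reply
            return match.expand(reply)
    return "I'm not sure I follow. Tell me more."

print(respond("Hi there"))          # greeting rule fires
print(respond("My name is Jason"))  # captured name is echoed back
```

Obviously nothing like real intelligence, which I suppose is exactly the 
point I was making about chatbots.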

It is highly important for me to strive towards AI that will 
learn/grow/evolve in a human-like way. This is a large part of the dream 
that motivates me, and likely calls for the bottom-up approach. 

I will try and drop you a line whenever I come across or have a good list 
of things relating to "generative artificial NN in combination with 
partially supervised learning NN." It would make me feel like I was actually 
helping in some way as opposed to just being ignorant, lol. Which would be 
great. Also, that was the first thing I began looking into, and I took a short 
internet class on programming NNs. I'm studying Python currently to be able 
to work on it. 
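
As a first exercise in that direction, here is the kind of minimal NumPy 
sketch I have been playing with: a single logistic neuron trained by plain 
gradient descent to learn the AND function. It is just a toy of my own, not 
anyone's published method:

```python
import numpy as np

# Toy example: one logistic neuron learning AND via gradient descent
# on the cross-entropy loss. Purely illustrative, not a real framework.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])  # AND truth table

w = rng.normal(size=2)               # weights, small random init
b = 0.0                              # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    pred = sigmoid(X @ w + b)        # forward pass over all 4 inputs
    grad = pred - y                  # d(loss)/d(logit) for cross-entropy
    w -= 0.5 * (X.T @ grad) / len(y) # gradient step on weights
    b -= 0.5 * grad.mean()           # gradient step on bias

print(np.round(sigmoid(X @ w + b)))  # approaches [0, 0, 0, 1]
```

A long way from an AI learning language on its own, but it helps me see 
what "training" actually means at the smallest scale.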

Thanks Ivan I hope we can talk more on this. 

Jason


On Thursday, March 7, 2019 at 4:26:45 PM UTC-6, Ivan V. wrote:
>
> Jason,
>
> did you mean this the other way around maybe? 
>>
>> "things work more like a bit of investment for a loads of results."
>>
>> like loads of investment for a bit of results? I hope not, but I have a 
>> feeling you did, lol.
>>
>
> Unfortunately, from my own experience (I'm self-taught), it is a lot of 
> work for a minor innovation (yet to be seen). After all this trouble, I 
> regret I didn't spend more time on learning existing methods instead of 
> trying to invent my own stuff, which was mostly reinventing the wheel. But 
> I got some new stuff, so it isn't a complete waste of time.
>
> If you opt for symbolic AI (the top-down approach to modelling an 
> artificial mind), like I did, there exist: lambda calculus, different 
> flavors of mathematical logic (propositional, predicate, higher-order, 
> fuzzy, Dr. Ben Goertzel's probabilistic logic networks used in the very 
> OpenCog, and so on - ordered by complexity - also see Stanford's excellent 
> basic introduction to logic <http://intrologic.stanford.edu/public/index.php>), 
> then there is intuitionistic logic, Martin-Löf's type theory, Thierry 
> Coquand's calculus of constructions, and God knows what more there exists 
> that I'm not aware of. I find Wikipedia very helpful for constructing a 
> general overview, then I deep-dive into googled research papers on the 
> subjects I find interesting.
>
> If you opt for artificial neural networks (the bottom-up approach to 
> modelling an artificial mind), I'm afraid I'm not of much use, but I'd 
> put my bets on generative artificial NNs in combination with partially 
> supervised learning NNs. Recently I found this field very promising, and I 
> want to make myself find the time to check it out more thoroughly.
>
> You may also like genetic algorithms, if you like the natural evolutionary 
> approach. There might be more ideas in the natural emergence of Earthlings 
> than I thought at first.
>
> yeah after googling the subjects you mentioned, if I am not mistaken,  it 
>> sounds like we are not quite there yet.
>>
>
> You never know what's just around the corner. The brand-new OpenAI GPT-2 
> model released these days just astonished me. I imagine that training 
> it on research papers, instead of on Reddit posts, could actually make an 
> excellent artificial scientist. It could be amazing and very inspirational 
> work.
>
> Also, did you check out some videos of the "Sophia" robot interacting with 
> humans? She is based on the OpenCog architecture, but I don't know the details. 
> She appears to perform some reasoning inference not found in similar 
> projects.
>
> But if you are just after a chit-chat machine, you might want to check out 
> the wide chatbot collection. There are even specialized programming 
> languages for building chatbots (like AIML), and some chatbots (like the 
> award-winning "Mitsuku") are very impressive embodiments of 
> conversation-carrying machines. I'd call them hopeful beginnings of AI, 
> but there is a lot of room for improvement.
>
>
> On Thu, Mar 7, 2019 at 22:25 JRTA <[email protected]> 
> wrote:
>
>> yeah after googling the subjects you mentioned, if I am not mistaken,  it 
>> sounds like we are not quite there yet. 
>>
>> On Thursday, March 7, 2019 at 1:35:39 PM UTC-6, Ivan V. wrote:
>>>
>>> > Can this be done?
>>>
>>> Not without hard work and a lot of learning (manuals, research papers, 
>>> and books). The time for this learning is measured in decades. If you are 
>>> serious about AI, schedule the next decade or two for it; you'll know better 
>>> what to do after all that time. You can start by googling "symbolic 
>>> AI" as opposed to "neural networks". Plenty of materials and ideas out 
>>> there, on the web. But I'm warning you, that knowledge beast has thousands 
>>> of heads, and you have to be heavily motivated to sustain your research. 
>>> As you slowly climb in your learning quest, your vision of AI will 
>>> shape into something you might be able to use in the real world. And don't 
>>> forget, at least thousands of people with very high academic degrees are 
>>> pursuing the same idea you have. If you want to contribute, prepare for a 
>>> lot of work for a modest contribution. Only if you have some special 
>>> abilities do things work more like a bit of investment for loads of 
>>> results. But I haven't met anyone like that in my whole life.
>>>
>>> If this sounds like too much for you, then buy some popcorn, sit back, and 
>>> enjoy the show. Things have just begun to get interesting 
>>> <https://www.askskynet.com/>, and it took more than half a century to 
>>> get where we all are now.
>>>
>>> Be well,
>>> Ivan V.
>>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/066031e0-e747-4a13-b6a9-a951f5d3bd49%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
