Jason, did you mean this the other way around, maybe?

> "things work more like a bit of investment for a loads of results."

like loads of investment for a bit of results? I hope not, but I have a feeling you did, lol.
Unfortunately, from my own experience (I'm self-taught), it is a lot of work for a minor innovation (yet to be seen). After all this trouble, I regret that I didn't spend more time learning existing methods instead of trying to invent my own stuff, which was mostly reinventing the wheel. But I did get some new results, so it wasn't a complete waste of time.

If you opt for symbolic AI (the top-down approach to modelling an artificial mind), like I did, there exist: lambda calculus; different flavors of mathematical logic (propositional, predicate, higher-order, fuzzy, Dr. Ben Goertzel's probabilistic logic networks used in OpenCog itself, and so on, roughly ordered by complexity; also see Stanford's excellent introduction to logic at <http://intrologic.stanford.edu/public/index.php>); then there is intuitionistic logic, Martin-Löf's type theory, Thierry Coquand's calculus of constructions, and who knows what else that I'm not aware of. I find Wikipedia very helpful for constructing a general overview, and then I deep-dive into googled research papers on the subjects I find interesting.

If you opt for artificial neural networks (the bottom-up approach to modelling an artificial mind), I'm afraid I'm not of much use, but I'd put my bets on generative neural networks in combination with partially supervised learning. I recently found this field very promising, and I want to make time to check it out more thoroughly.

You may also like genetic algorithms, if you like the natural evolutionary approach. There might be more ideas in the natural emergence of Earthlings than I thought at first.

> yeah after googling the subjects you mentioned, if I am not mistaken, it
> sounds like we are not quite there yet.

You never know what's just around the corner. The brand-new OpenAI GPT-2 model released these days just astonished me. I imagine that training it on research papers, instead of on Reddit posts, could actually make an excellent artificial scientist.
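To give a concrete flavor of the symbolic, rule-based approach mentioned above, here is a minimal sketch of forward chaining over propositional Horn rules. The rule set and fact names are made up for illustration; this is a toy, not how OpenCog's PLN actually works.

```python
def forward_chain(facts, rules):
    """Derive every fact reachable from `facts` via `rules`.

    `rules` is a list of (premises, conclusion) pairs; a rule fires
    when all of its premises are already known. Iterate until no rule
    adds anything new (a fixed point).
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# Hypothetical toy knowledge base:
rules = [
    ({"rains"}, "ground_wet"),
    ({"ground_wet", "freezing"}, "icy"),
]
print(sorted(forward_chain({"rains", "freezing"}, rules)))
# ['freezing', 'ground_wet', 'icy', 'rains']
```

The same fixed-point idea scales up (with much more machinery) to the predicate and probabilistic logics listed above; the toy version just makes the "inference as rule application" intuition tangible.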
It could be amazing and very inspirational work.

Also, did you check out some videos of the "Sophia" robot interacting with humans? She is based on the OpenCog architecture, though I don't know the details. She appears to conduct some reasoning inference not found in similar projects.

But if you are just after a chit-chat machine, you might want to check out the wide collection of chatbots. There are even specialized programming languages for building chatbots (like AIML), and some chatbots (like the online award-winning "Mitsuku") are very impressive embodiments of conversation-carrying machines. I'd call them hopeful beginnings of AI, but there is a lot of room for improvement.

Thu, Mar 7, 2019 at 22:25, JRTA <[email protected]> wrote:

> yeah after googling the subjects you mentioned, if I am not mistaken, it
> sounds like we are not quite there yet.
>
> On Thursday, March 7, 2019 at 1:35:39 PM UTC-6, Ivan V. wrote:
>>
>> > Can this be done?
>>
>> Not without hard work and a lot of learning (manuals, research papers,
>> and books). The time for this learning is measured in decades. If you are
>> serious about AI, schedule the next decade or two for it; you'll be smarter
>> about what to do after all that time. You can start by googling "symbolic
>> AI" as opposed to "neural networks". There are plenty of materials and
>> ideas out there on the web. But I'm warning you: that knowledge beast has
>> thousands of heads, and you have to be heavily motivated to sustain your
>> research. As you slowly climb in your learning quest, your vision of AI
>> will take shape into something you might be able to use in the real world.
>> And don't forget, at least thousands of people with very high academic
>> degrees are pursuing the same idea you have. If you want to contribute,
>> prepare for a lot of work for a modest contribution. Only if you have some
>> special abilities do things work more like a bit of investment for loads
>> of results. But I haven't met anyone like that in my whole life.
>>
>> If this sounds like too much for you, then buy some popcorn, sit back,
>> and enjoy the show. Things are just beginning to get interesting
>> <https://www.askskynet.com/>, and it took more than half a century to
>> get where we all are now.
>>
>> Be well,
>> Ivan V.
>>

--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAB5%3Dj6UWYrc29-fh__LjGFjZLSoGOMqQEUkAq4bukaLRyeYtAg%40mail.gmail.com.
