https://gist.github.com/LemonAndroid/7a5f2f521d0e0aa2f8ec8dcce28dc904#file-pain-iq-2-rb
```ruby
class PAINDOTIQ
  PAINFUL_THOUGHTS = {
    "nuking the whole world" => [0, 20, 300],
    "making all people stop smoking" => [20, 22, 30],
    "creating better medicine for headache" => [10, 20, 30]
    # NEEDS 100 total
  }

  def train
    painful_thought = PAINFUL_THOUGHTS
      .search_via_algolia(GoogleNews.new.local_index)
    @neural_network.train(
      painful_thought.text,                       # padded input of 100 chars; padding char is " " (\s)
      painful_thought.one_hot_encoding,           # google "one hot encoding"
      painful_thought.time,                       # time of the Google News item (UTC timestamp)
      painful_thought.google_news_headline.text,  # also padded input of 100 chars
      painful_thought.google_news_headline.ohe    # one-hot encoding of Google News via keywords
    ).fit(ask_humans_via_email(painful_thought))
    # This starts a pipeline to train the model via human input. The human
    # operators receive a nicely formatted email and are asked to rate the
    # pain of the thought on a scale from 0-100. They are given 3 options
    # and can provide a 4th free-form one.
  end
end

PAINDOTIQ.new.train
```
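The gist calls `one_hot_encoding` without defining it. As a minimal sketch of what that could mean for the 100-character space-padded input described in the comments, assuming a fixed lowercase-plus-space alphabet (the alphabet and method name are illustrative, not from the gist):

```ruby
# Minimal one-hot encoder over a fixed character alphabet.
# ALPHABET and the method name are illustrative assumptions.
ALPHABET = (" " + ("a".."z").to_a.join).chars.freeze

def one_hot_encode(text, width: 100)
  padded = text.ljust(width)[0, width]  # pad with " " to 100 chars, then truncate
  padded.chars.map do |ch|
    vec = Array.new(ALPHABET.size, 0)
    idx = ALPHABET.index(ch)
    vec[idx] = 1 if idx                 # unknown chars become all-zero vectors
    vec
  end
end

encoding = one_hot_encode("pain")
```

Each input character becomes a 27-element vector with a single 1, so the padded 100-char string encodes to a 100x27 matrix, which is the usual shape a character-level network would consume.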
> On 5. Aug 2019, at 19:02, Mike Archbold <[email protected]> wrote:
>
> On 8/5/19, Matt Mahoney <[email protected]
> <mailto:[email protected]>> wrote:
>> Narrow AI doesn't grow into AGI. AGI is lots of narrow AI specialists put
>> together. Nobody in an organization can do what the organization does.
>> Every member either knows one specific task well, or can refer you to
>> someone who does. Kind of like the organization of the structures of your
>> brain.
>
> Traditionally that is the way I thought about it, and that has made
> sense. But as I continue to develop my ~AGI base program, it's
> increasingly apparent that the more narrow AIs the base could handle
> (in the future, not that far yet!) the smarter the whole would
> potentially get, and the easier it is to add new narrow functions
> which are increasingly less narrow. I'm trying to advance my
> theory along with the code.
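Mike's "base program that hosts many narrow AIs" can be sketched as a registry where narrow skills plug into a common base and the base dispatches each task to whichever specialist claims it. All class and method names below are hypothetical illustrations, not Mike's actual code:

```ruby
# Hypothetical sketch: narrow skills registered under a common base.
# Each skill declares what it can handle; the base dispatches to the
# first skill that claims the task.
class NarrowSkill
  attr_reader :name

  def initialize(name, matcher, &handler)
    @name = name
    @matcher = matcher   # predicate deciding whether this skill applies
    @handler = handler
  end

  def handles?(task)
    @matcher.call(task)
  end

  def run(task)
    @handler.call(task)
  end
end

class AGIBase
  def initialize
    @skills = []
  end

  def register(skill)
    @skills << skill
  end

  def perform(task)
    skill = @skills.find { |s| s.handles?(task) }
    skill ? skill.run(task) : "no specialist for: #{task}"
  end
end

base = AGIBase.new
base.register(NarrowSkill.new("arithmetic", ->(t) { t.match?(/\d+ \+ \d+/) }) { |t|
  t.scan(/\d+/).map(&:to_i).sum.to_s
})
```

The `find`-based dispatch mirrors Matt's point above: each member either handles the task itself or the base keeps looking for a specialist who does.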
>
>
>>
>> On Sun, Aug 4, 2019, 10:39 PM Manuel Korfmann <[email protected]> wrote:
>>
>>> Shout out to Stefan for being so real here
>>>
>>>
>>> signed
>>>
>>> The realist
>>>
>>> On 4. Aug 2019, at 23:54, Stefan Reich via AGI <[email protected]>
>>> wrote:
>>>
>>>> Maybe the best interface in the early going from one "narrow AGI" to
>>>> another would be somewhere between an oversimplified English with
>>>> some structure and maybe JSON
>>>
>>> Yes. It's called agi.blue. Here's your JSON:
>>>
>>> [
>>>   {"a":"AGI","b":"means","c":"artificial general intelligence","slice":""}
>>> ]
>>>
>>>
>>> http://agi.blue/bot/centralIndexGrab?q=agi
>>>
>>> Cheers // https://www.youtube.com/watch?v=bwUE_XJMbok
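The triple-style JSON above can be consumed with nothing but the standard library. This is a sketch that assumes only the payload shape shown in the example, not any documented agi.blue API:

```ruby
require "json"

# Parse agi.blue-style (a, b, c) triples into a nested lookup table,
# so facts["AGI"]["means"] answers "what does AGI mean?".
payload = '[{"a":"AGI","b":"means","c":"artificial general intelligence","slice":""}]'

triples = JSON.parse(payload)
facts = triples.each_with_object(Hash.new { |h, k| h[k] = {} }) do |t, acc|
  acc[t["a"]][t["b"]] = t["c"]
end
```

A subject-predicate-object table like this is about as close to "oversimplified English with some structure" as JSON gets, which is presumably the point.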
>>>
>>>
>>>
>>> On Sun, 4 Aug 2019 at 20:38, Mike Archbold <[email protected]> wrote:
>>>
>>>> On 8/2/19, Secretary of Trades <[email protected]> wrote:
>>>>> 1), 2) and 5) nouns extracted for phraseology.
>>>>>
>>>>> Unidentified "F" Objects in 3), 4).
>>>>>
>>>>>
>>>>> Judgment is a highly intelligent procedure; it shouldn't be treated
>>>>> the same as discrimination or recognition.
>>>>> Instead of 4): it should be able to use the existing communication
>>>>> space in the usual manners.
>>>>
>>>> Ideally one "narrow AGI" should be able to communicate as you put it
>>>> in the "existing communication space in the usual manners." I think
>>>> Goertzel's structure calls for the use of a common cognitive
>>>> structure, ie., a common data structure in practical terms. Maybe the
>>>> best interface in the early going from one "narrow AGI" to another
>>>> would be somewhere between an oversimplified English with some
>>>> structure and maybe JSON? I don't know. It is an issue that is on my
>>>> mind, since many people are attempting their own clean slate AGI
>>>> nucleus.
>>>>
>>>> Mike A
>>>>
>>>>> And most of all, AI should be active (or usable) in offline
>>>>> computing systems.
>>>>>
>>>>>
>>>>> On 01.08.2019 22:40, Mike Archbold wrote:
>>>>>> I like this editorial but I'm not sure "Narrow AGI" is the best
>>>>>> label.
>>>>>> At the moment I don't have a better name for it though. I mean, I
>>>>>> agree in principle but it's like somebody saying "X is a liberal
>>>>>> conservative." X might really be so, but it might be that... oh hell,
>>>>>> why don't we just call it "AI"?
>>>>>>
>>>>>> Really, all technology performs some function. A function is kind of
>>>>>> intrinsically narrow. Real estate sales, radio advertising, wire
>>>>>> transfer, musical composition...In that light, all technology is
>>>>>> narrow for its function.
>>>>>>
>>>>>> The difficulty with AGI is: it doesn't understand, reason, and judge
>>>>>> as a human can, at a human level. But I think that a narrow AGI app
>>>>>> is still a narrow function! Thus narrow AGI is what is going on: a
>>>>>> narrow function, because all technology is basically narrow; we need
>>>>>> it to do something specific. What narrow AI really is, is just much
>>>>>> better good-old-fashioned programs that do something better at a
>>>>>> human level.
>>>>>>
>>>>>> My opinion is a "narrow AGI" would need:
>>>>>>
>>>>>> 1) increased common sense: the ability to form rudimentary
>>>>>> understanding, reasoning, and judging, pushing the boundary toward
>>>>>> human level
>>>>>> 2) can perform some function, some narrow function (all functions
>>>>>> are narrow, it seems) very well, continually approaching human-level
>>>>>> competence
>>>>>> 3) Can handle wide variations in cases (DL level fuzzy pattern
>>>>>> matching, patternism)
>>>>>> 4) USES A COMMON BASE WITH OTHER NARROW AGIs which gets more
>>>>>> competent
>>>>>> 5) Becomes increasingly easier to specialize
>>>>>>
>>>>>>
>>>>>> Mike A
>>>>>>
>>>>>> On 8/1/19, Costi Dumitrescu <[email protected]> wrote:
>>>>>>> So Mars gets conquered by AI robots. What Tensor Flaw is so
>>>>>>> intelligent about surgery or proving math theorems?
>>>>>>>
>>>>>>> Bias?
>>>>>>>
>>>>>>>
>>>>>>> On 01.08.2019 13:16, Ben Goertzel wrote:
>>>>>>>> https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce
>>>>>> ------------------------------------------
>>>>>> Artificial General Intelligence List: AGI
>>>>>> Permalink:
>>>>>>
>>>>>> https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-M3aff1fc5fb106c331c3ce13e
>>>>>> Delivery options: https://agi.topicbox.com/groups/agi/subscription
>>>
>>>
>>> --
>>> Stefan Reich
>>> BotCompany.de <http://botcompany.de/> // Java-based operating systems
>>>
>>>
>>> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
>>> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
>>> participants <https://agi.topicbox.com/groups/agi/members> + delivery
>>> options <https://agi.topicbox.com/groups/agi/subscription> Permalink
>>> <https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-Me92e7653bb5f058bcf222ecc>
>
> ------------------------------------------
> Artificial General Intelligence List: AGI
> Permalink:
> https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-Md1e6155ebda471da77c90f36
> Delivery options: https://agi.topicbox.com/groups/agi/subscription
------------------------------------------
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-Mb9003e53b1e24b6961952909
Delivery options: https://agi.topicbox.com/groups/agi/subscription