Ok, I'm interested.

On Wed, Mar 9, 2022 at 6:05 AM Linas Vepstas <[email protected]> wrote:

> Hi Ivan,
>
> On Mon, Mar 7, 2022 at 6:49 AM Ivan V. <[email protected]> wrote:
>
>> Linas,
>>
>> I'm giving it some thought...
>>
>> For words similar to other words, or words near other words, directed
>> cyclic graphs with a few thousand unique nodes (spanning infinitely,
>> because they are cyclic) seem the most promising food for thought. If you
>> could make CogServer (or an analogous app) output the node-relations set, I
>> could try to show them in a browser. A search-for-word feature would
>> probably also apply, alongside the already-seen browsing by dragging nodes
>> around to navigate them.
>>
>
> I could set this up. It would be the standard cogserver. I'm paranoid
> about being hacked, so I would set it up somewhere "secret" where only you
> can get to it.  The API would be that you give it a word, and get back a
> list of edges attached to that word. Each edge has a weight.  At the far
> end of the edge is another word.  I can give you either the "raw API" as I
> use it, or I can write some easy-to-use wrappers for it.  The raw API looks
> like this:
>
> ; Some initial setup, "magic incantations"
> (define star-obj (foo bar baz))
>
> ; Get a list of all "left"-connected atoms
> (define atom-list (star-obj 'left-stars (Word "bicycle")))
>
> ; The above returns a scheme list. To get the weight, you would say
> (define some-atom (list-ref atom-list 42))  ; the atom at index 42
>
> ; get the floating point number at index 3 (indexes are zero-based)
> ; stored under the key (Predicate "key to whiz bang") on that atom
> (cog-value-ref (cog-get-value some-atom (Predicate "key to whiz bang")) 3)
>
> That's it. The atom-list will be either a list of WordNodes or a list of
> WordClassNodes, or other atoms; it depends.  We can keep it simple, or
> general ...
>
> The value system hangs lists of floating point numbers on atoms. The
> (Predicate "key to whiz bang") is just the key to some specific list of
> numbers.  Not all atoms have all keys; this is all very data-set-specific.
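> For example (a rough sketch; which keys actually exist depends on the
> dataset):
>
> ; fetch the whole list of floats stored under one key, as a scheme list
> (define weights
>    (cog-value->list
>       (cog-get-value some-atom (Predicate "key to whiz bang"))))
>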
>
> As I said, I can wrap up the above in an even simpler API.
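> Roughly like this (a hypothetical sketch; the wrapper name and the
> choice of weight key are made up):
>
> ; return a list of (atom . weight) pairs for everything attached to WORD
> (define (related-words WORD)
>    (map
>       (lambda (atom)
>          (cons atom
>             (cog-value-ref
>                (cog-get-value atom (Predicate "key to whiz bang")) 0)))
>       (star-obj 'left-stars (Word WORD))))
>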
>
>>
>> For skip-gram disjunct queries, I'm not completely sure how their output
>> data is interrelated. But if you can make the queries output
>> parent-child data, the rest would be easy.
>>
>
> Disjuncts are just atoms.
>
> Consider the sentence "Ivan threw the ball".  The disjunct for the verb
> "threw" is conceptually "threw: ivan- & ball+", which says that Ivan is on
> the left and ball is on the right.   Written in Atomese, it is this rather
> verbose tree:
>
> (Section (Word "throw")
>      (ConnectorList
>           (Connector (Word "Ivan") (ConnectorDir "-"))
>           (Connector (Word "ball") (ConnectorDir "+")) ))
>
> which is painfully verbose ... the "conceptual" version is much easier to
> read and understand.
>
> Anyway, the above will have various keys on it, holding the count, the log
> probability, and assorted other numbers that are "interesting".
>
> The query for the above would be the same as before, except that you would
> get back a list of these Sections.
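>
> And since a Section is just a tree of atoms, the parent-child data you
> asked about falls right out. A rough sketch of flattening one Section
> into (parent . child) word pairs, using the standard outgoing-set
> accessor:
>
> ; Given a Section, return a list of (germ-word . connector-word) pairs.
> (define (section->edges SECTION)
>    (let* ((parts (cog-outgoing-set SECTION))
>           (germ (car parts))       ; e.g. (Word "throw")
>           (conseq (cadr parts)))   ; the ConnectorList
>       (map
>          (lambda (connector)
>             ; each Connector holds a word and a direction; keep the word
>             (cons germ (car (cog-outgoing-set connector))))
>          (cog-outgoing-set conseq))))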
>
> If you are interested & ready to do this, let me know. It would take me a
> few days to set it up.
>
>>
>> - ivan -
>>
>> On Sun, Mar 6, 2022 at 8:30 PM Linas Vepstas <[email protected]>
>> wrote:
>>
>>>
>>>
>>> On Sat, Mar 5, 2022 at 3:08 PM Ivan V. <[email protected]> wrote:
>>>
>>>>
>>>> The logical step would be to prepare a CogServer instance filled with
>>>> those millions of atoms, keep it always running, and then query only what
>>>> is of current interest to forward to a browser.
>>>>
>>>
>>> Yes, exactly.
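>>>
>>> On my end, the setup would be roughly this (the path and the choice of
>>> storage backend here are just placeholders):
>>>
>>> (use-modules (opencog) (opencog persist) (opencog persist-rocks))
>>> (use-modules (opencog cogserver))
>>>
>>> ; open the on-disk dataset and pull it into the atomspace
>>> (cog-open (RocksStorageNode "rocks:///path/to/dataset"))
>>> (load-atomspace)
>>>
>>> ; start listening for network queries
>>> (start-cogserver)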
>>>
>>>
>>>> Anyway, who would browse millions of atoms all at once? One might
>>>> only be interested in some subset of it, and if that subset can be measured
>>>> in thousands of atoms,
>>>>
>>>
>>> Or even just hundreds.  Or dozens.
>>>
>>>
>>>> Do you have a basic idea of the kind of visualization you'd like to
>>>> have? And what user interactions would it need to be successful?
>>>>
>>>
>>> That's the hard question. It's hard to find good answers. I need your
>>> help finding good answers. Here are some ideas.  For example, given one
>>> word, find all the other words "related" to it. Order the list by the
>>> strength-of-relationship (and maybe show only the top 20). There are
>>> various different ways of defining "relatedness". One is to ask for all
>>> words that occur nearby, in "typical" text.  So, for example, if you ask
>>> about "bicycle", you might get back "bicycle wheel", "bicycle seat", "ride
>>> bicycle", "own bicycle".  Another is to ask for "similar" words; you
>>> might get back "car", "horse", "bus", "motorcycle".   A third query would
>>> return skip-gram-like "disjuncts", of the form "ride * bicycle to *" or "*
>>> was on * bicycle" or "* travelled by bicycle * on foot" -- stuff like
>>> that.  These are all fairly low-level relationships between words, and are
>>> the kind of datasets I have right now, today.
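>>>
>>> To be concrete, the first query might look roughly like this (the
>>> names are hypothetical; assume some `neighbors` call that returns
>>> (word . weight) pairs for a given word):
>>>
>>> (use-modules (srfi srfi-1))   ; for `take`
>>>
>>> ; the 20 strongest relations for a word, by descending weight
>>> (define (top-20 WORD)
>>>    (define ranked
>>>       (sort (neighbors WORD)
>>>          (lambda (a b) (> (cdr a) (cdr b)))))
>>>    (take ranked (min 20 (length ranked))))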
>>>
>>> My long-term goal and vision is to create a complex, sophisticated network
>>> of information. Given that network, how can it be visualized, how can it be
>>> queried?  A classic answer would be a school-child homework assignment:
>>> "write 5 sentences about bicycles".   This would be a random walk through
>>> the knowledge network, converting that random walk into grammatically
>>> correct sentences (we're talking about how to do this in link-grammar, in a
>>> different email thread. It's hard.)
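>>>
>>> The walk itself is easy; it's the conversion to grammatical sentences
>>> that is hard. A minimal sketch of the walk (again assuming the
>>> hypothetical `neighbors` call from above):
>>>
>>> ; Take up to N random steps from a starting word, hopping to a random
>>> ; neighbor each time; return the list of visited words.
>>> (define (random-walk WORD N)
>>>    (define nbrs (neighbors WORD))
>>>    (if (or (zero? N) (null? nbrs))
>>>       (list WORD)
>>>       (let ((next (car (list-ref nbrs (random (length nbrs))))))
>>>          (cons WORD (random-walk next (- N 1))))))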
>>>
>>> Is there a way of visualizing this kind of random walk? Showing the
>>> local graph of things related to bicycles?
>>>
>>> So the meta-problem is: given a network of knowledge, how does one
>>> interact with it? How does one visualize it? How does one make it do
>>> things? If I pluck the word "bicycle" like a guitar string, how can I hear,
>>> see the vibrations of the network?
>>>
>>> -- Linas
>>>
>
>
> --
> Patrick: Are they laughing at us?
> Sponge Bob: No, Patrick, they are laughing next to us.
