Hi Craig,

On Mon, Apr 11, 2022 at 7:59 AM Craig Bosco <[email protected]> wrote:

>  @Linas,
> I must admit -- it took me a few days to read and really consume the
> Grammar Induction PDF. Some of these topics may seem unapproachable to
> "outsiders", and I am sure some people struggle (like me) or don't believe
> they have the capability to fully ingest and comprehend. Or, perhaps more
> importantly, contribute. I've finished reading your PDF once and have a few
> thoughts I will share now (before I read it again, and then again). I have
> some other thoughts about audio cognition that I will save for a follow up
> email.
>

Thanks!  I tried to make it as readable as I could.  If you have
specific questions or comments, let me know.

>
> My first takeaway relates to general "project management" and productivity
> tools. It's going to be very difficult for people to swap in, pick up work,
> and delegate tasks, unless there's some sort of common operational picture
> that we can all refer to. Github works great as a platform for programmers,
> but even Github can be confusing at times. I apologize if I've missed
> something, but I am thinking a free kanban-like board (such as Airtable or
> Monday.com) would help a small & distributed team quickly see items that
> need work, and reduces the "barrier to entry" for all those lurkers out
> there. I've started to identify a few things that we could tackle,
> specifically: corpus for audio and visual data. It seems like this
> direction is the most needed from your end, but correct me if I'm wrong.
>

I don't know how to respond. You're a software dev manager, and I assume an
experienced software dev.  In my work experience, it takes a new developer
a few weeks to get oriented enough to be able to start making
contributions. It takes them at least a year before they become generally
familiar with the project as a whole. I don't know how to short-circuit
that. Or rather, I don't believe it's possible to short-circuit that.

Four or five embedded emails below, I mentioned that one possibility is to
commercialize the AtomSpace. I think this could be done by a more-or-less
conventional software+marketing team, following well-understood development
management styles.  If that's really the goal, then the place to start
would be to work up a marketing-and-competitive-advantage assessment. Who
are the competitors? We need to do most of what they can do, and offer some
killer features that no one else has. I think the AtomSpace has such killer
features.  The AtomSpace does NOT have a product planner who can assess
what the market wants, and can negotiate with development on how to get
that. Do you know of any good product planners?

Once there's a plan in place, then yes, you can dole out coding tasks to
assorted programmers. At least, that's how it works when everyone is
*paid*.  For a volunteer project ... hoo boy ... famously, it's all about
"scratching that itch".  So: what do *you*, personally, want to do?  I'm
sure we can find something you'd like to do that would be useful.  But
what works for you won't work for the next guy.

... the above is *completely different* from the stuff described in the PDF.
The PDF is a science research proposal. It's about doing something
that's never been done before. It will require a lot of tinkering. That's
an utterly different skill set. Different personality type. (In my
experience, a personality type that will quit and find other employment if
asked to use Airtable or Monday.com. They'll roll their eyes, leave, and not
even say goodbye. "If you can't say something polite, don't say anything at
all.")


> Another observation I've had is that recent advancement in BI tools for
> analytics have led to an explosion of Business Analysts, allowing people
> without technical skills to leverage powerful ML in drag & drop interfaces
> that are easy to pick up. I think that adding something like an ETL GUI to
> Atomspace/OpenCog would accelerate this growth process by allowing
> "experimenters" to drop in&out modules and observe results. This might be a
> pie-in-the-sky wish but I think this sort of enablement will spark another
> explosion of data science growth.
>

Err, the AtomSpace is a graph database, comparable to other graph
databases, which, as far as I know, don't have drag-n-drop GUIs on them. I
dunno. Maybe they do. That's why some kind of competitive analysis would
need to be done.
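As an aside, for anyone following along who hasn't looked at the AtomSpace: a labelled graph edge is stored as a nested s-expression of typed Atoms. Here's a minimal sketch in plain Python; the two helper functions are hypothetical, written only to show the shape of the data (the real system is accessed through its Scheme or Python bindings):

```python
# Hypothetical helpers, just to illustrate the shape of AtomSpace data.
def node(atom_type, name):
    """Render a named node, e.g. (ConceptNode "Alice")."""
    return f'({atom_type} "{name}")'

def link(atom_type, *children):
    """Render a link joining other atoms."""
    return f'({atom_type} ' + " ".join(children) + ')'

# The labelled edge "Alice knows Bob", in the standard Atomese encoding:
edge = link("EvaluationLink",
            node("PredicateNode", "knows"),
            link("ListLink",
                 node("ConceptNode", "Alice"),
                 node("ConceptNode", "Bob")))
print(edge)
# (EvaluationLink (PredicateNode "knows") (ListLink (ConceptNode "Alice") (ConceptNode "Bob")))
```

That nesting is the whole trick: edges are themselves atoms, so they can be the endpoints of other edges, which is what makes the thing a hypergraph rather than a plain property graph.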

The stuff in the PDF is a science project; there's nothing there that can
be wrapped in a GUI; it's too early for that.

Perhaps MOSES could be wrapped in a GUI. I dunno. I suspect, presume, that
conventional, mainstream ML can do what MOSES does, better, cheaper,
faster.  Maybe. I dunno.  Beats me. ML is a commercially mature,
multi-billion-dollar industry; it's not the land of cowboys and yahoos that
it used to be.  MOSES is still very interesting, but for completely
different reasons, reasons that commercial users don't care about.

--linas




> I will re-read your paper and follow up with some additional thoughts from
> my professional audio experience later in the week...
>
> Kind regards,
> Craig
>
> On Fri, Apr 8, 2022 at 6:51 PM nugi nugroho <[email protected]> wrote:
>
>> Could it be possible to make some 3d model of an object, Isolate it,
>> create another corpus for object recognition by inputting the video
>> rendering result to the system to be able to recognize the object in 2d. I
>> wonder if this is possible without inputting a 3d model to the new corpus
>> and just the blank video input and by some iteration it will be able to
>> identify the isolated object in some noisy environment. If this is possible
>> then the system could theoretically learn to recognize objects from a
>> youtube video by passing the video and the subtitle(for now the subtitle is
>> created from a neural network since I have no idea how to create the audio
>> corpus). Usually the cooking channels from youtube are explaining about the
>> ingredients like apples and moving them as they cook it, so I think that I
>> could be used as the input to teach the model. I know this is a huge and
>> very difficult thing to do.
>>
>> The thing that I know is I don't know a lot about how to use this
>> technique for computer vision, I was still dumb. I still don't have any
>> idea how to isolate each object in the video since this shouldn't be
>> hardcoded but learned.
>>
>> On Fri, 8 Apr 2022 at 16:06, nugi nugroho <[email protected]>
>> wrote:
>>
>>> I still do not understand the system completely, but I wonder if this
>>> system is theoretically capable of creating 3d models from video input that
>>> the cameramen rotate around an object at a certain angle. I was thinking of
>>> creating a corpus capable of converting video to 3d models for further
>>> processing by reasoning algorithms. I think AGI needs to have the skills to
>>> think at least about the 3d world(just my naive assumption though).  This
>>> was the first idea that popped into my head but, well, I still need to
>>> prepare for my university exam so I can't learn faster than my current pace
>>> and my current skills are not sufficient to realize that idea for now. I
>>> hope that I can make some contribution to the project at the end of this
>>> year, but I cannot promise.
>>>
>>> On Wed, 6 Apr 2022 at 01:04, Linas Vepstas <
>>> [email protected]> wrote:
>>>
>>>> Hi Craig, (and Ivan)
>>>>
>>>> Replying publicly to a private email:
>>>>
>>>> On Mon, Apr 4, 2022 at 4:00 PM Craig Bosco <[email protected]>
>>>> wrote:
>>>>
>>>>> the only way forward is to crowdsource work and ideas.
>>>>>
>>>> ...
>>>>
>>>>> everyone will ultimately benefit from the OpenCog platform as it gains
>>>>> ease-of-use and sophistication.
>>>>>
>>>>
>>>> The OpenCog "platform" is both broad and deep; to discuss all aspects
>>>> of it would be boiling the ocean. Unless, that is, you want to work on deep
>>>> and basic infrastructure.  One such would be converting the AtomSpace into
>>>> a commercially viable platform that ordinary developers would want to use
>>>> on a day-by-day basis. Having this would attract public attention, although
>>>> it would not much advance the overall AGI research goals.
>>>>
>>>> One way to convert the AtomSpace into a commercially viable product
>>>> would be to allow it to store generic JSON or similar (generic
>>>> s-expressions, generic YAML, or even generic python or a json-like subset
>>>> of python. Or all of the above.) This is "commercially appealing" because
>>>> there already is a company that does this (grakn.ai, but since renamed
>>>> to some other name I can't recall) and there are several other
>>>> graph-database companies that offer something similar.  I've taken some
>>>> small steps in this direction, but abandoned them as they seemed like a
>>>> distraction from the main topic of AGI research.
>>>>
>>>> The above might be appealing because it is a fairly well-defined,
>>>> clear-cut project. It does not require arcane theory, or deep
>>>> experimentation. It's mostly a matter of roll-up-your-sleeves and write
>>>> code, which is exactly the kind of thing most programmers enjoy. Take a
>>>> sketch, and turn it into a polished product.
>>>>
>>>> As to AGI research: the stuff I'm working on now is very theory-laden
>>>> and complex; I now realize that I should not much expect anyone to follow,
>>>> although a shout out to Amir who continues to surprise me regularly. He's
>>>> on the right track.
>>>>
>>>> As to AGI research that you or Ivan could work on (... if only Ivan
>>>> stopped skimming emails, and actually paid attention to what was written in
>>>> them...) there is brand-new green-field development on audio and video
>>>> processing.  Green-field, in that not much code has been written, and so
>>>> you don't have to modify a large complex existing code-base. It does,
>>>> however, require interfacing into large and complex existing systems. The
>>>> path is fairly straight-forward; see attached PDF. The work, however, is
>>>> definitely challenging: it will require some hard thinking and lots of
>>>> work. It's not "just programming", it's architecture and exploration.  Some
>>>> of that work is grunt-work, e.g. collecting a suitable corpus of images.
>>>> Some is just painful: running CPU-intensive jobs for days on end.
>>>>
>>>> PDF:
>>>> https://github.com/opencog/learn/blob/master/learn-lang-diary/agi-2022/grammar-induction.pdf
>>>>
>>>> -- Linas
>>>>
>>>> --
>>>> You received this message because you are subscribed to the Google
>>>> Groups "opencog" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an email to [email protected].
>>>> To view this discussion on the web visit
>>>> https://groups.google.com/d/msgid/opencog/CAHrUA37aWVAPcguURrd7xZZmJUuP7Czb%2Bu4NsbhU1dF6D1y4_Q%40mail.gmail.com
>>>> <https://groups.google.com/d/msgid/opencog/CAHrUA37aWVAPcguURrd7xZZmJUuP7Czb%2Bu4NsbhU1dF6D1y4_Q%40mail.gmail.com?utm_medium=email&utm_source=footer>
>>>> .
>>>>
>>
>
>


-- 
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
