Hi Linas,
Thank you for the example:
I think this again helps visualize my further questions:
How is this additional conceptual knowledge harvested in a way that mimics
human thinking on one hand (i.e. it deploys context adequately, establishes
relevant abstractions, and, generally, creates
On Fri, Apr 21, 2017 at 10:12 AM, Daniel Gross wrote:
> In context A one morphism may hold, in context B another -- and you
> indicated two kinds of contexts: domains (swimming, rowing) and a
> human-introspective, value-laden interpretive context.
To return to Alex's
On Thu, Apr 20, 2017 at 4:34 PM, AT wrote:
> It is quite obvious we are not really in OpenCog territory here
Why would you say that? The current coding task, in opencog, is to write
the code that can perform the things that you describe, that I talk about.
Although all
On Thu, Apr 20, 2017 at 2:37 PM, Daniel Gross wrote:
> So I wonder: where is the meaning in this kind of machine -- if the
> semantic graph is actually constructed out of the machine-learned parse of
> natural-language text, without a predefined mapping to a semantic graph
>
Hi Linas,
I think your "morphism" example is very interesting, and just to emphasize a
key insight -- context.
In context A one morphism may hold, in context B another -- and you
indicated two kinds of contexts: domains (swimming, rowing) and a
human-introspective, value-laden interpretive
Ivan,
what I wanted to say is that meaning depends not only on the language, but
also on the person, and it changes over time. Most people agree on the
meanings of most words, most of the time, but not always. The best example is
the slang of some subculture. If the subculture is a gang, there might
>
> Hi Ivan,
> I think it is best if you can spend a bit of time working on a few
> representative examples that show what you can do with your embedded
> language. AI discussions tend to get very abstract, very quickly :-), so to
> ground ourselves as "engineers" it is best to talk by way of examples. This
Hi Ivan,
I think it is best if you can spend a bit of time working on a few
representative examples that show what you can do with your embedded language.
AI discussions tend to get very abstract, very quickly :-), so to ground
ourselves as "engineers" it is best to talk by way of examples. This helps
Daniel/Ivan,
It is quite obvious we are not really in OpenCog territory here, but what
your discussion is hinting at is that you will need your own theory of
meaning, or a theory of the meaning of meaning. At the conceptual level my
approach begins where Linas left off, i.e. there is no meaning
Mr. Daniel Gross,
I'm afraid I'm going to leave the juicy AGI details to the AGI developers (not
to say it is an easy part, far from it). I decided to be just a technical
guy, in case anyone is interested in my low-level solution: a programming
language that, with equal ease (or difficulty), solves application
Hi Ivan,
thank you for your response.
Pattern matching is a very general-purpose mechanism -- in my mind the key
questions are:
what governs the language for pattern description, and the semantics of how
patterns match with inputs;
what governs the language of transformational rules, triggered
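Those two questions can be made concrete with a toy sketch (the '?'-variable pattern language and the rule format below are invented for illustration, not any particular system): a tiny pattern matcher, plus a transformational rule that fires when its pattern matches an input.

```python
# Toy sketch, not a real system: patterns are nested tuples, and
# strings starting with "?" are variables.

def match(pattern, expr, bindings=None):
    """Match a nested-tuple pattern against an expression, returning
    the variable bindings on success, or None on failure."""
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:
            return bindings if bindings[pattern] == expr else None
        bindings[pattern] = expr
        return bindings
    if isinstance(pattern, tuple) and isinstance(expr, tuple):
        if len(pattern) != len(expr):
            return None
        for p, e in zip(pattern, expr):
            bindings = match(p, e, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == expr else None

def rewrite(template, bindings):
    """Instantiate a rule's output template with the match bindings."""
    if isinstance(template, str) and template.startswith("?"):
        return bindings[template]
    if isinstance(template, tuple):
        return tuple(rewrite(t, bindings) for t in template)
    return template

# A transformational rule, triggered when its pattern matches:
rule = (("likes", "?x", "?y"), ("liked-by", "?y", "?x"))
expr = ("likes", "john", "mary")
print(rewrite(rule[1], match(rule[0], expr)))
# -> ('liked-by', 'mary', 'john')
```

The point of the sketch is that both questions must be answered by design decisions: the pattern language fixes what can be described, and the matching semantics (here, structural equality with consistent variable bindings) fixes when a rule triggers.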
Hey Daniel, great to see someone interested in AGI :)
How about us, humans -- I mean, how do we think? I'm not trying to mimic
our neural networks; I took another, top-down approach, somewhere in between,
but let's observe ourselves as a thinking example. Do we see how our thoughts
are formed? I think that we
Not to forget, languages A, B, C and D from the previous post could all be
different domains of the same language.
2017-04-20 20:53 GMT+02:00 Ivan Vodišek :
> Yes Linas, thank you for your response. That is why there is no exclusively
> definite interpretation of any expression.
Yes Linas, thank you for your response. That is why there is no exclusively
definite interpretation of any expression. The expression "space" can be
translated to numerous meanings, with each meaning having its own, slightly
different interpretation in its own language. If we think about
"Multiverse",
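As a toy illustration of Ivan's point (the domains and glosses below are invented, not from any lexicon): one expression mapped to several interpretations, each valid only relative to a target language/domain whose meanings we already know.

```python
# Invented toy data: one expression, many domain-relative readings.
interpretations = {
    "space": {
        "typography": "a blank character separating words",
        "astronomy": "the region beyond Earth's atmosphere",
        "mathematics": "a set equipped with extra structure",
    },
}

def interpret(expression, domain):
    """Meaning as a relation: expression -> its reading in a given
    domain; there is no exclusively definite reading across domains."""
    return interpretations.get(expression, {}).get(
        domain, "no definite interpretation")

print(interpret("space", "astronomy"))
print(interpret("space", "cooking"))  # -> no definite interpretation
```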
Ivan, I mostly agree (superficially) with most of what you are saying, but:
I notice you avoid or over-simplify the issues mentioned in the Wikipedia
article "Upper ontology". The points are twofold: different human beings
have subtly different "upper ontologies", and they tend to change over time,
On Thu, Apr 20, 2017 at 11:19 AM, Daniel Gross wrote:
> Hi Linas,
>
> Thank you for your responses, and the pointer.
>
> It seems to me that your example further pinpoints my question:
>
> A quasi-linear walk through a semantic network is essentially a
> constructed
Hi all :)
May I say a few words about semantics? In my work on describing knowledge,
I've concluded that the semantics (meaning) of an expression is merely an
abstract concept of thought that relates the expression to its
interpretation in another (or the same) language for which we already know
Hi Linas,
Thank you for your responses, and the pointer.
It seems to me that your example further pinpoints my question:
A quasi-linear walk through a semantic network is essentially a constructed
structure (or path) through the use of grammar, to get at a possible
reading of a sentence
On Wed, Apr 19, 2017 at 12:23 AM, Daniel Gross wrote:
> Hi Linas,
>
> How do you propose to learn an ontology from the data --
>
The simplest approach is to simply read English-language sentences that
encode an ontology: for example, an early version of MIT ConceptNet
Semantics and syntax are two different things. Syntax allows you to parse
sentences. Semantics is more about how concepts inter-relate to each other
-- a network. A sentence tends to be a quasi-linearized walk through such
a network. For example, take a look at the "deep" and the "surface"
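A hedged sketch of that picture (the network, relations, and words below are invented toy data, not OpenCog code): a semantic network as a labeled graph, with a sentence read off as a quasi-linearized walk through it.

```python
# Invented toy data: a semantic network as (node, relation) -> node.
network = {
    ("dog", "chases"): "cat",
    ("cat", "is-a"): "animal",
    ("dog", "is-a"): "animal",
}

def linearize(start, relations):
    """Follow a sequence of relations from a start node and read off
    the visited nodes and edge labels as a flat word sequence --
    a quasi-linearized walk through the network."""
    words, node = [start], start
    for rel in relations:
        node = network[(node, rel)]
        words.extend([rel, node])
    return " ".join(words)

print(linearize("dog", ["chases"]))  # -> dog chases cat
```

Different walks through the same network yield different "surface" sentences from one underlying "deep" structure, which is the distinction the example gestures at.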
As I see it, the meaning of a word can be understood as the fuzzy set
of patterns in which that word is involved...
Some of these will be purely language-internal patterns (as
highlighted by Saussure and other structuralist linguists way back
when), others will be patterns associating the word
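The fuzzy-set picture above can be sketched concretely (the word, patterns, and counts below are fabricated toy data): membership of each pattern in the word's meaning is graded by how often the word occurs in it.

```python
# Fabricated toy counts of how often a word occurs in each pattern.
from collections import Counter

occurrences = Counter({
    ("bank", "account"): 6,
    ("bank", "of the river"): 2,
    ("bank", "robbery"): 2,
})

def fuzzy_meaning(word):
    """Return {pattern: membership degree in [0, 1]} for the word,
    normalized over every pattern the word participates in -- a
    fuzzy set representing the word's meaning."""
    pats = {p: n for (w, p), n in occurrences.items() if w == word}
    total = sum(pats.values())
    return {p: n / total for p, n in pats.items()}

print(fuzzy_meaning("bank")["account"])  # -> 0.6
```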
Hi Ben,
Thank you for your response. I started reading the paper and was wondering
if you could help me clarify a confusion i apparently have when it comes to
the meaning of meaning:
How is linguistic meaning connected to human embodied meaning that we would
call human (or AGI)
We have a probabilistic logic engine (PLN) which works on (optionally
probabilistically labeled) logic expressions. This logic engine
can also help with extracting semantic information from natural
language or perceptual observations. However, it's best used together
with other methods that
Hi Linas,
How do you propose to learn an ontology from the data? Also, what purpose
would the learned ontology serve, in your opinion? Or, stated differently,
in what way are you thinking to engender higher-level cognitive
capabilities via machine-learned bundled neuron (and implicit
On Tue, Apr 18, 2017 at 3:22 PM, Alex wrote:
> Maybe we can solve the problem of modelling classes (and using OO and
> UML notions for knowledge representation) with the following (pseudo)code:
>
> - We can define a ConceptNode "Object", that consists of the set or
>
Hmmm...
Instead of
***
- We can require that any instance is inherited from the concrete class:
ExtensionalInheritanceLink
    invoice_no_2314
    VATInvoice
***
I would think to say
MemberLink
    invoice_no_2314
    VATInvoice
DefiniteLink
    invoice_no_2314
(where the latter indicates
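In plain set-theoretic terms, the distinction being drawn is element-of (MemberLink) versus subset-of (inheritance). A minimal Python sketch, where the broader "Invoice" concept and its second instance are assumptions invented for illustration:

```python
# Sketch of the distinction above: MemberLink says an instance is an
# *element* of a concept, while inheritance says one concept is a
# *subset* of another. "Invoice" and the second instance are invented.
VATInvoice = {"invoice_no_2314"}
Invoice = VATInvoice | {"invoice_no_9999"}  # hypothetical superset

# MemberLink: element-of -- the instance belongs to the concrete class.
print("invoice_no_2314" in VATInvoice)  # -> True

# Inheritance: subset-of -- every VATInvoice is also an Invoice.
print(VATInvoice <= Invoice)  # -> True
```

The reason to keep the two relations apart is that they compose differently: subset-of chains transitively between classes, while element-of relates an individual to a class and does not.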
On Mon, Apr 17, 2017 at 1:34 AM, Alex wrote:
> To be honest, I am a bit afraid to include in my thesis a project that uses
> term logic - it is just a fragment of monadic predicate logic, and it was
> decided some 150 years ago that more extensive logics (full predicate
On Sun, Apr 16, 2017 at 6:34 PM, Alex wrote:
> Well, the mentioned book has a chapter about inheritance, but it is in no
> way connected with the terms of intensional and extensional inheritance.
> So, this book is not usable.
>
Sure, it's usable. Opencog inheritance is