Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Lukasz Stafiniak
On Sat, Jan 10, 2009 at 11:02 PM, Ben Goertzel b...@goertzel.org wrote:
 On a related note, even a very fine powder of very low friction feels
 different to water - how can you capture the sensation of water using beads
 and blocks of a reasonably large size?

 The objective of a CogDevWorld such as BlocksNBeadsWorld is explicitly
 **not** to precisely simulate the sensations of being in the real
 world.

 My question to you is: What important cognitive ability is drastically
 more easily developable given a world that contains a distinction
 between fluids and various sorts of bead-conglomerates?

The objection is not valid insofar as it equates beads with dry powder:
certain forms of adhesion among the beads give a good approximation to
fluids. You can have your hand "wet" with sticky beads, etc.

The model feels underspecified to me, but I'm OK with that; the ideas
get conveyed. It doesn't seem fair to insist that no fluid dynamics is
modeled, though ;-)

Best regards.




Re: [agi] Direct communication between minds

2009-01-06 Thread Lukasz Stafiniak
Yes, modality tapping as a target for BCI...
Inner speech can be accessed this way (via the phonological loop).

The experiments here are less function-discriminative, I think: they
address the "what" (classification) rather than the "how"
(reverse-engineering the imagination).

http://scienceblogs.com/developingintelligence/2009/01/svms_decode_intentions_the_sta.php

For the paper I want to write, I'm interested in the language aspect
of all this (and especially the AGI-related one).

What is the place of inner speech (the system's communication with
itself, amenable to episodic memory) in a cognitive architecture, and
what is its relation to thought in general? Is the inner-speech
language exactly the same as the language used for communication with
others (e.g. in its semantics)? How can the transfer of
context-delineation be enhanced?


2009/1/5 Abram Demski abramdem...@gmail.com:
 Lukasz,

 I think the most realistic near-term (next 10/20 years?) telepathy
 technology will deal with sensory modalities, rather than high-level
 concepts. In particular, it doesn't seem unrealistic to directly
 interface the phonological loop of two people, or similarly the
 visuospatial sketchpad. These low-level areas of the brain probably
 speak the same language from person to person. This would allow
 people to exchange any sounds or images that they imagined. I can
 think of some obvious questions.

 -How is it turned on/off, and how is it directed at particular people?
 -Is it possible to control which sounds/images are transferred, or
 would it be a dump of everything currently in working memory?
 -Can sounds and images coming in from the environment be easily shut out?
 -Would it be too unpleasant to have sounds or images forced upon one's
 imagination? Would current thoughts be erased, et cetera?

 --Abram





Re: [agi] Direct communication between minds

2009-01-06 Thread Lukasz Stafiniak
On Tue, Jan 6, 2009 at 12:34 PM, Lukasz Stafiniak lukst...@gmail.com wrote:

 The experiments here are less function-discriminative, I think: they
 address the "what" (classification) rather than the "how"
 (reverse-engineering the imagination).

 http://scienceblogs.com/developingintelligence/2009/01/svms_decode_intentions_the_sta.php

But the ability to transfer classifiers from one person to another
without any re-training is really impressive. They suggest there are
generic morphological thought patterns.

 For the paper I want to write, I'm interested in the language aspect
 of all this (and especially the AGI-related one).





[agi] Direct communication between minds

2009-01-04 Thread Lukasz Stafiniak
Hi,

I plan to write a paper for a local CogSci conference, and a
super-cool subject that comes to my mind is that of minds
communicating through means more capable than what we're accustomed
to. Is a revolution in language possible? Brainstorming:

- Every now and then you hear that new developments could lead to
telepathic devices (a naive post by the ever-famous Freeman Dyson
here: http://www.edge.org/q2009/q09_3.html "RADIOTELEPATHY, THE
DIRECT COMMUNICATION OF FEELINGS AND THOUGHT FROM BRAIN TO BRAIN")

- A promise that AGIs could communicate with their "raw thoughts"
using their internal knowledge representation

- A look from (AI-oriented) language semantics perspective: the
dichotomy of the universal ontology approach (ontological semantics)
vs. the lexicon-grounded semantics (multi-layered semantic networks)

- Perhaps what I need to look at is the architecture of
communication: a cascade (with feedback links) of the following
stages (a rough type sketch follows the list):

-- the intent to express / to interpret (let's stick with the sender side)

-- the activation of the concepts to express

-- the crystallization of the message (this can occur online while
expressing); it's a selection-abstraction phase that takes into
account the whole effect of the message on the receiver

-- the selection of proper expressive means (for example, the stream
of words or gestures)

- it's language everywhere: when we describe a process
scientifically, we do it in some language; when we then engineer the
process, it seems to us to have its true internal language; with AI,
this is the KR, and the semantics are provided by mind dynamics;
therefore there is an internal language to every mind, separate from
the communication language, and thus the problem of expressing oneself
becomes one of translation; I'm sure a whole bunch of academic
philosophy deals with this (like the late Wittgenstein dismissing the
translation idea)

- A look at a cognitive architecture and how aspects of its operation
can be communicated across different minds that instantiate it:
referential meanings, complex concepts, emotions
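Here is the promised rough type sketch of the cascade, in OCaml
(purely illustrative; all the names and the fuel-bounded feedback are
my own assumptions, not part of any existing system):

type intent = { goal : string; audience : string }
type concept = string                        (* an activated concept label *)
type message = { content : concept list; framing : string }
type expression = Words of string list | Gestures of string list

(* a feedback link: crystallization may send a revised intent upstream *)
type feedback = Accept of message | Revise of intent

let rec cascade ?(fuel = 3) activate crystallize express (i : intent) =
  let concepts = activate i in               (* activation of concepts *)
  match crystallize concepts with            (* crystallization of the message *)
  | Accept m -> Some (express m)             (* selection of expressive means *)
  | Revise i' when fuel > 0 ->
    cascade ~fuel:(fuel - 1) activate crystallize express i'
  | Revise _ -> None                         (* give up after repeated feedback *)

The point of the sketch is only that each stage narrows the sender's
state, and that feedback from crystallization can revise the intent.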

Could you share your thoughts? I'd appreciate relevant pointers.

Happy New Year,
Łukasz Stafiniak




[agi] [GoogleTech Talk] Case-based reasoning for game AI

2008-12-30 Thread Lukasz Stafiniak
The lecture is actually about more than just CBR.
I recommend watching it if you're bored; it's really entertaining :-)

http://machineslikeus.com/news/video-case-based-reasoning-game-ai

Bits of it seem similar to what Novamente is working on.
Ambitious, but with an engineering rather than an AGI-focused spirit.




[agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2008-12-29 Thread Lukasz Stafiniak
http://www.sciencedaily.com/releases/2008/12/081224215542.htm

Nothing surprising ;-)




[agi] Re: [OpenCog] Transfer learning

2008-12-18 Thread Lukasz Stafiniak
I'd like to propose two somewhat related papers:

Graph-Based Domain Mapping for Transfer Learning in General Games
Gregory Kuhlmann, Peter Stone
http://www.cs.utexas.edu/~pstone/Papers/bib2html/b2hd-ECML07-rulegraphs.html

Intrinsically Motivated Reinforcement Learning
Nuttapong Chentanez & Andrew G. Barto & Satinder Singh
http://www.eecs.umich.edu/~baveja/Papers/FinalNIPSIMRL.pdf

On Tue, Dec 16, 2008 at 3:57 PM, Ben Goertzel b...@goertzel.org wrote:

 I just read an interesting (somewhat mathy) paper on transfer learning,
 and put the link here

 http://www.opencog.org/wiki/Transfer_Learning

 ben





[agi] Re: [OpenCog] Re: What is the role of MOSES in Novamente and Open Cog?-----was---- internship opportunity at Google (Mountain View, CA)

2008-12-17 Thread Lukasz Stafiniak
Speaking on a very abstract level, MOSES could ultimately be developed
so that it explores what you could call "constraint relaxation"
strategies: combo trees are built such that they meet the constraints
optimally, with more-or-less random exploration of different
trade-offs. Pure MOSES (the current version) starts with no knowledge
and learns along the way; you could also call the learned knowledge
"constraints", especially once it is expressed in a
transfer-friendly declarative way.

Everything is a constraint.

On Wed, Dec 17, 2008 at 4:23 PM, Ed Porter ewpor...@msn.com wrote:

 My point was that parallel constraint relaxation would appear usable
 much like a genetic algorithm, except that in a Novamente system it would
 have the ability to take advantage of much of the relevant world knowledge,
 and knowledge about how best to reason from world knowledge, that is
 contained in the hypergraph, when proposing solutions.



 Your acknowledgement that in WebMind the hypergraph was used without MOSES
 implies your agreement that the hypergraph could be used for exploring the
 possible solution space of various problems, and for doing creative thinking.



 My REAL main point was that, from my reading about Combo and MOSES in your
 2007 Novamente book, and from reading one of Moshe's long papers about it,
 MOSES seems to take too little advantage of all the rich, complex hierarchical
 and generalization knowledge contained in the hypergraph --- although it was
 clear to me that there would be ways in which it could be modified to do
 so.






Re: [agi] internship opportunity at Google (Mountain View, CA)

2008-12-16 Thread Lukasz Stafiniak
Oh my, I was very tired the other day (as my English there
shows...)! I'm sorry for spamming the list.

On Mon, Dec 15, 2008 at 11:41 PM, Lukasz Stafiniak lukst...@gmail.com wrote:
 I am initially interested, but please consider other candidates as






Re: [agi] internship opportunity at Google (Mountain View, CA)

2008-12-15 Thread Lukasz Stafiniak
I am initially interested, but please consider other candidates as
well... I haven't decided yet (and I have some preconditions: it's
overseas, I need to finish writing a draft of my PhD thesis first,
etc.).

On Mon, Dec 15, 2008 at 7:32 PM, Moshe Looks madscie...@google.com wrote:

  * Learning procedural abstractions
  * Adapting estimation-of-distribution algorithms to program evolution
  * Applying plop to various interesting data sets
  * Adapting plop to do natural language processing or image processing
  * Better mechanisms for exploiting background knowledge in program evolution

I'm generally interested in these topics...


  * Functional programming experience (esp. Lisp, but ML, Haskell, or
 even the functional style of C++ count too)

OCaml has been my programming language of choice for quite a long
time; I have some recent Haskell experience, and quite a lot of Emacs
Lisp experience. I don't know Common Lisp yet.

  * Experience with evolutionary computation or stochastic local search
 (esp. estimation-of-distribution algorithms and/or genetic
 programming)

I'm teaching classes in Evolutionary Algorithms (and the lecturer puts
emphasis on EDAs), I've taught classes in Data Mining, and I'll be
teaching classes in AI in the spring (an introductory course). I have
some experience with implementing GP and simulated annealing.

  * Open-source contributor

Not many success stories or collaborations...
I've only implemented a rich type system for Speagram, and
pmwiki-mode for Emacs.
http://pmwiki-mode.sourceforge.net/wiki/ (oops, the site is currently
down; http://pmwiki-mode.cvs.sourceforge.net/viewvc/pmwiki-mode/pmwiki-mode/).




[agi] Transfer Learning

2008-12-02 Thread Lukasz Stafiniak
Given the recent appearance of transfer learning here, it deserves a
separate thread.
Nice link: http://www.cs.utexas.edu/~lilyanam/TL/ Transfer Learning
Reading Group at UT Austin
Seems to be very AGI-friendly... The top link on the reading list
seems a good one to start with:
http://ftp.cs.wisc.edu/machine-learning/shavlik-group/torrey.aaai08.pdf
"Transfer in Reinforcement Learning via Markov Logic Networks"
(sounds NM/OpenCogPrime-ish)




Re: [agi] General musings on AI, humans and empathy...

2008-11-08 Thread Lukasz Stafiniak
A beautiful post, Ben. Thank you.

On Sun, Nov 9, 2008 at 12:44 AM, Ben Goertzel [EMAIL PROTECTED] wrote:


 http://multiverseaccordingtoben.blogspot.com/2008/11/in-search-of-machines-of-loving-grace.html

 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]




[agi] Re: Two Remarkable Computational Competencies of the SGA

2008-10-29 Thread Lukasz Stafiniak
OK, it's just a Compact Genetic Algorithm -- genetic-drift kind of
stuff. A nice read, but very simple (subsumed by any serious EDA). It
says you can do simple pattern mining by just looking at the
distribution, without complex statistics.
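Roughly, the cGA it builds on looks like this (a minimal OCaml sketch
of my own; the one-max fitness and the parameter values are arbitrary
assumptions):

let cga ~len ~pop_size ~generations fitness =
  let p = Array.make len 0.5 in                  (* per-bit probabilities *)
  let sample () = Array.init len (fun i -> Random.float 1.0 < p.(i)) in
  for _i = 1 to generations do
    let a = sample () and b = sample () in
    let winner, loser = if fitness a >= fitness b then (a, b) else (b, a) in
    let step = 1.0 /. float_of_int pop_size in
    Array.iteri (fun i w ->
      (* shift each disagreeing bit's probability towards the winner *)
      if w <> loser.(i) then
        p.(i) <- (if w then min 1.0 (p.(i) +. step)
                  else max 0.0 (p.(i) -. step)))
      winner
  done;
  p

let () =
  Random.self_init ();
  let one_max bits = Array.fold_left (fun n b -> if b then n + 1 else n) 0 bits in
  let p = cga ~len:16 ~pop_size:50 ~generations:2000 one_max in
  Array.iter (Printf.printf "%.2f ") p;          (* the learned distribution *)
  print_newline ()

The pattern-mining point is then just that the evolved probability
vector p is itself the model you inspect -- per-bit frequencies, no
complex statistics.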

On Wed, Oct 29, 2008 at 8:13 PM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 Very relevant even if you don't agree. Too much rhetoric though (it's
 not really that earth-shaking). I haven't made up my mind yet.

 http://evoadaptation.wordpress.com/2008/10/18/new-manuscript-two-remarkable-computational-competencies-of-the-simple-genetic-algorithm/





Re: [agi] a mathematical explanation of AI algorithms?

2008-10-08 Thread Lukasz Stafiniak
The more recent work by G. E. Hinton, brought up here by Ed Porter, is
very interesting mathematically (if you go into the details of trying
to argue why it works -- probabilistic modeling a la graphical
models).

On Thu, Oct 9, 2008 at 12:32 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 For neural nets, Daniel Amit had a good book in the 80's reviewing the
 dynamics
 of attractor neural nets ...
 On Wed, Oct 8, 2008 at 6:25 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:

 Read an introductory text on machine learning to get up to speed --
 it's the math of AI, and there's lots of it. Statistics, information
 theory. It's an important perspective from which to look at less well
 understood hacks, to feel the underlying structure.




[agi] Fwd: Usage-based Computational Language Acquisition

2008-09-14 Thread Lukasz Stafiniak
Found this while grepping through tons of posts on linguistlist
(19.2502). Could be of interest.

Date: Wed, 13 Aug 2008 13:39:46
From: Kerstin Fischer [EMAIL PROTECTED]
Subject: Usage-based Computational Language Acquisition

Full Title: Usage-based Computational Language Acquisition

Date: 28-Jul-2009 - 03-Aug-2009
Location: Berkeley, CA, USA
Contact Person: Kerstin Fischer
Meeting Email: [EMAIL PROTECTED]

Linguistic Field(s): Cognitive Science; Computational Linguistics; Language
Acquisition

Subject Language(s): English (eng)

Call Deadline: 07-Sep-2008

Meeting Description:

Usage-based models of language acquisition: computational perspectives
Theme Session at ICLC 11, Berkeley, CA.

Date: July 28-August 3, 2009

Organizers: Kerstin Fischer & Arne Zeschel, University of Southern Denmark

Call for Papers

Theme Session Description:
Usage-based approaches to language acquisition have not only produced many
valuable insights in the field of child language studies (cf. Tomasello 2003 and
Goldberg 2006 for overviews), but have also helped to corroborate important
assumptions of emergentist theories of language in general (cf. Dabrowska 2005).
In line with basic tenets of Cognitive Linguistics, these approaches emphasize
the key role of communicative and experiential grounding in language use and
language structure, and seek to explain its acquisition in terms of general
(i.e., non-specialized) cognitive principles and mechanisms as far as possible.
At the same time, explicit, testable models of how these principles and
mechanisms are implemented in the context of grounded construction learning are
only beginning to be developed (cf. Bod, to appear).

The purpose of this workshop is to bring together language acquisition
researchers from linguistics, psychology and computer science who work on such
models in order to discuss how usage-based constructionist accounts of language
acquisition can benefit from such research. Topics will include, but are not
restricted to:

- cognitive capacities that constitute prerequisites for normal child language
acquisition (cf. Tomasello et al. 2005, Tomasello 2006) and how they can be
accommodated in language learning simulations (e.g., Steels and Kaplan 2002);
- the basic mechanisms and psycholinguistic plausibility of different approaches
to automatic construction learning (e.g., Chang & Maia 2001; Batali 2002; Steels
2004; Dominey and Boucher 2005);
- the kinds of semantic representations that grounded language learning
experiments or simulations should draw on (Bergen & Chang 2005; Feldman 2006);
- the way in which the acquisition of particular constructions may be grounded
in the previous acquisition of certain other constructions (Johnson 2001;
Morris, Cottrell & Elman 2000; Abbot-Smith & Behrens 2006); and, finally,
- ways of accommodating useful notions from Cognitive Linguistics in
computational models of language processing and acquisition (cf. Chang et al.
2002).

The session will compare different approaches to automatic construction learning
and consider the extent to which they can inform usage-based accounts of child
language acquisition. In that, it seeks to bridge the gap between kindred
research in Cognitive Linguistics and related areas of Cognitive Science, and to
provide a forum for discussing important challenges for future research on
emergentist models of language.

Submission Procedure:
Abstracts should be:
- 500 words max
- submitted in .rtf or .doc format
- turned in by Sept 7th at the latest
- accompanied by an e-mail specifying the title of the paper, name(s) of
author(s), affiliation and a contact e-mail address
- sent to [EMAIL PROTECTED] and [EMAIL PROTECTED]

Please note that both the theme session proposal itself and the individual
contributions will undergo independent reviewing by the ICLC program committee.




Re: [agi] Any further comments from lurkers??? [WAS do we need a stronger politeness code on this list?]

2008-08-03 Thread Lukasz Stafiniak
So far I've been in favor of mailing lists, because of their:

- push rather than pull mechanism (all news in one reader)
- filtering and/or threading (client-side)

Admittedly, the threading (by subject lines) that Gmail provides me is
short-spanned, and forums share with wikis some of the top-down
self-organization of knowledge that mailing lists completely lack.

On Sun, Aug 3, 2008 at 8:53 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 The function you're describing as being carried out by an FAQ, would be
 served by a forum similar to the ImmInst fora, actually.

 ben

 On Sun, Aug 3, 2008 at 2:49 PM, Joseph Henry [EMAIL PROTECTED] wrote:

 I have seen very good and productive threads on this list, but they tend
 to be the exception. Hence I mostly just delete the items from the list, and
 follow the occasional thread that looks interesting or involves people who
 have posted more reasonable items in the past

 Yeah, that is typically what I do as well. Only a small number of threads
 ever make it past the 2-3 day lifespan in my inbox.

 I like the idea of the centralized FAQ (which I remember seeing the
 beginning of a while back). Not because I would point people to it, but
 rather I would find it useful to see others (or myself) pointed to it as I
 am still playing a long game of catch up.

 I think the FAQ should also include areas for people like Tintner to
 explain their theories in FULL detail to prevent any more confusion,
  arguments, or alienation. (Let's just put that to rest, guys...)




Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Lukasz Stafiniak
On Tue, Jul 15, 2008 at 8:01 AM, Brad Paulsen [EMAIL PROTECTED] wrote:

 The terms "forward-chaining" and "backward-chaining", when used to refer to
 reasoning strategies, have absolutely nothing to do with temporal
 dependencies or levels of reasoning.  These two terms refer simply, and
 only, to the algorithms used to evaluate if/then rules in a rule base
 (RB).  In the FWC algorithm, the "if" part is evaluated and, if TRUE, the
 "then" part is added to the FWC engine's output.  In the BWC algorithm, the
 "then" part is evaluated and, if TRUE, the "if" part is added to the BWC
 engine's output.  It is rare, but some systems use both FWC and BWC.

 That's it.  Period.  No other denotations or connotations apply.

Curiously, the definition given by Abram Demski is the only one I had
been aware of until yesterday (I believe it's the one used among
theorem-proving people). Let's see what googling says on forward
chaining:

1. (Wikipedia)

2. http://www.amzi.com/ExpertSystemsInProlog/05forward.htm
A large number of expert systems require the use of forward chaining,
or data driven inference. [...]
Data driven expert systems are different from the goal driven, or
backward chaining systems seen in the previous chapters.
The goal driven approach is practical when there are a reasonable
number of possible final answers, as in the case of a diagnostic or
identification system. The system methodically tries to prove or
disprove each possible answer, gathering the needed information as it
goes.
The data driven approach is practical when combinatorial explosion
creates a seemingly infinite number of possible right answers, such as
possible configurations of a machine.

3. http://ai.eecs.umich.edu/cogarch0/common/prop/chain.html
Forward-chaining implies that upon assertion of new knowledge, all
relevant inductive and deductive rules are fired exhaustively,
effectively making all knowledge about the current state explicit
within the state. Forward chaining may be regarded as progress from a
known state (the original knowledge) towards a goal state(s).
Backward-chaining by an architecture means that no rules are fired
upon assertion of new knowledge. When an unknown predicate about a
known piece of knowledge is detected in an operator's condition list,
all rules relevant to the knowledge in question are fired until the
question is answered or until quiescence. Thus, backward chaining
systems normally work from a goal state back to the original state.

4. http://www.ontotext.com/inference/reasoning_strategies.html
* Forward-chaining: to start from the known facts and to perform
the inference in an inductive fashion. This kind of reasoning can have
diverse objectives, for instance: to compute the inferred closure; to
answer a particular query; to infer a particular sort of knowledge
(e.g. the class taxonomy); etc.
* Backward-chaining: to start from a particular fact or from a
query and by means of using deductive reasoning to try to verify that
fact or to obtain all possible results of the query. Typically, the
reasoner decomposes the fact into simpler facts that can be found in
the knowledge base or transforms it into alternative facts that can be
proven applying further recursive transformations. 
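In code, the contrast looks roughly like this (a toy OCaml sketch of
my own, not any of the engines quoted above; the two-rule base is made
up):

type rule = { premises : string list; conclusion : string }

let rules = [
  { premises = ["rain"]; conclusion = "wet_ground" };
  { premises = ["wet_ground"]; conclusion = "slippery" };
]

(* forward chaining: fire every rule whose premises hold, until fixpoint *)
let forward facts rules =
  let rec loop facts =
    let fresh =
      List.filter_map (fun r ->
        if List.for_all (fun p -> List.mem p facts) r.premises
           && not (List.mem r.conclusion facts)
        then Some r.conclusion else None)
        rules
    in
    if fresh = [] then facts
    else loop (List.sort_uniq compare fresh @ facts)
  in
  loop facts

(* backward chaining: to prove a goal, find it among the facts, or find
   a rule concluding it and prove all its premises *)
let rec backward facts rules goal =
  List.mem goal facts
  || List.exists (fun r ->
       r.conclusion = goal
       && List.for_all (backward facts rules) r.premises)
       rules

let () =
  assert (List.mem "slippery" (forward ["rain"] rules));
  assert (backward ["rain"] rules "slippery")

Both directions derive the same consequences; they differ in whether
the process is driven by the data (FWC) or by the goal (BWC), which is
what all four definitions above converge on.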






Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Lukasz Stafiniak
On Mon, Jun 30, 2008 at 8:07 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 By the way, just wanted to point out a beautifully simple example - perhaps 
 the simplest - of an irreducibility in complex systems.

 Individual molecular interactions are symmetric in time, they work the same 
 forwards and backwards. Yet diffusion, which is nothing more than the 
 aggregate of molecular interactions, is asymmetric. Figure that one out.

This is just statistical mechanics. The interesting thing is that we
make an opportunistic assumption that any colliding particles are
independent before the collision (this introduces the time arrow),
which is then empirically confirmed by the fact that the derived
properties agree with the phenomenological theory of entropy.

P.S. The biggest issue that spoiled my joy in reading "Permutation
City" is that you cannot simulate dynamical systems (= numerically
solve differential equations) out of order: you need to know time t to
compute time t+1 (or, alternatively, you need to know t+2). The same
goes for space: I presume you need to know x-1, x, x+1 to compute the
next step at x.
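Concretely (a minimal OCaml sketch of my own; the diffusion
coefficient and the fixed boundaries are arbitrary assumptions):

(* one explicit finite-difference step of the 1-D heat equation:
   the new u at x needs the old u at x-1, x and x+1 *)
let step ~alpha u =
  let n = Array.length u in
  Array.init n (fun x ->
    if x = 0 || x = n - 1 then u.(x)         (* fixed boundary values *)
    else u.(x) +. alpha *. (u.(x - 1) -. 2.0 *. u.(x) +. u.(x + 1)))

(* states must be produced in temporal order: u at t+1 needs u at t *)
let simulate ~alpha ~steps u0 =
  let rec go t u = if t = steps then u else go (t + 1) (step ~alpha u) in
  go 0 u0

let () =
  let u0 = Array.init 11 (fun x -> if x = 5 then 1.0 else 0.0) in
  let u = simulate ~alpha:0.1 ~steps:100 u0 in
  Array.iter (Printf.printf "%.3f ") u;
  print_newline ()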




[agi] Adaptivity in Hybrid Cognitive Systems Osnabruck PhD program

2008-05-28 Thread Lukasz Stafiniak
I'm not affiliated but I've found this interesting.
They seem to have 8 positions for PhD students:
http://www.cogsci.uni-osnabrueck.de/PhD/GK/
Their research program is really worth checking out:
http://www.cogsci.uni-osnabrueck.de/PhD/GK/research/body.html




Re: [agi] More Info Please

2008-05-26 Thread Lukasz Stafiniak
On Mon, May 26, 2008 at 1:26 PM, Mark Waser [EMAIL PROTECTED] wrote:

 What I'd rather do instead is see if we can get a .NET parallel track
 started over the next few months, see if we can get everything ported, and
 see the relative productivity between the two paths.  That would provide a
 provably true answer to the debate.

There are also sane languages using the C++ object model
(http://felix-lang.org/). And there is Mono, though I've heard it
falls considerably behind .NET in terms of efficiency. The thing is,
will multi-language sourcing be encouraged? (Will every contributor be
allowed to write in his language of choice, provided it compiles with
the rest?)




Re: [agi] More Info Please

2008-05-25 Thread Lukasz Stafiniak
On Sun, May 25, 2008 at 10:26 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Certainly there are plenty of folks with equal software engineering experience
 to you, advocating the Linux/C++ route (taken in the current OpenCog version)
 rather than the .Net/C# route that I believe you advocate...

No, I believe he advocates OCaml vs. F#   ;-)
(sorry for leaving out Haskell and others)




[agi] Re: Type inference

2008-05-20 Thread Lukasz Stafiniak
On Tue, May 20, 2008 at 10:05 AM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 On Tue, May 20, 2008 at 5:16 AM, Stephen Reed [EMAIL PROTECTED] wrote:

 Code synthesis, according to my plan, should avoid the need for type
 inference.  The AGI would know in advance the type(s) of the variable.

 Do you see a use for type inference in my work that I have overlooked?

 How would it know in advance what type of variable it needs?
 Perhaps to compute some result, it would need other or more
 specific arguments than initially conceived?

 In my opinion, planning and program synthesis are closely related, and
 type inference is just a way of looking at some issues involved.

(I repeat some of my message below, for the group.)

You can avoid type inference (or something equivalent) only if you use
propositional logic (in a specific sense): a logic where you cannot
specify properties of objects piece by piece. If you can, then you
need to track exactly what properties a variable (a parameter, a
result of some computation/action) has to have, given the contexts it
has already occurred in, to know where it can be applied / what can be
done with it further on. I call this type inference. You can't
guess the program at once and then typecheck it; you need to do type
inference as you go.

Oh, a P.S.: you use incremental parsing, interpreting a sentence
word by word. Type inference is a (usually limited and static) form of
program interpretation. It needs to be done incrementally to accompany
program synthesis. And this is sometimes difficult! I don't do it
incrementally yet (in the system I currently work on).
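To make the picture concrete (a much-simplified OCaml sketch of my
own, not my actual system; the tiny type language and the two
"fragments" are invented):

type ty = TVar of string | TArrow of ty * ty | TInt

let rec apply s = function                     (* apply a substitution *)
  | TVar v -> (match List.assoc_opt v s with Some t -> apply s t | None -> TVar v)
  | TArrow (a, b) -> TArrow (apply s a, apply s b)
  | TInt -> TInt

let rec occurs v = function
  | TVar w -> v = w
  | TArrow (a, b) -> occurs v a || occurs v b
  | TInt -> false

let rec unify s t1 t2 =
  match apply s t1, apply s t2 with
  | TInt, TInt -> Some s
  | TVar v, t | t, TVar v ->
    if t = TVar v then Some s
    else if occurs v t then None               (* occurs check *)
    else Some ((v, t) :: s)
  | TArrow (a1, b1), TArrow (a2, b2) ->
    (match unify s a1 a2 with Some s' -> unify s' b1 b2 | None -> None)
  | _ -> None

(* as synthesis proposes each fragment, we unify incrementally instead
   of typechecking a finished program *)
let () =
  (* fragment 1: f is applied to an int, so f : int -> 'b *)
  match unify [] (TVar "f") (TArrow (TInt, TVar "b")) with
  | None -> print_endline "dead end"
  | Some s ->
    (* fragment 2: the result of f is itself applied to an int *)
    match unify s (TVar "b") (TArrow (TInt, TVar "c")) with
    | Some _ -> print_endline "consistent so far: f : int -> int -> 'c"
    | None -> print_endline "dead end: prune this candidate"

A failing unification prunes the synthesis candidate immediately,
instead of only after the whole program has been guessed.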

The relation between type inference and program synthesis can be seen
from the parallels between algorithm W (the classical type inference
for the core of ML languages) and my algorithm C (for program
synthesis in the same type system)
[http://www.ii.uni.wroc.pl/~lukstafi/pmwiki/uploads/Main/DM_generation.pdf].
Interested people can look at the translated beginning of
[http://www.ii.uni.wroc.pl/~lukstafi/pmwiki/uploads/Main/dyplom_en.pdf].
My ideas have evolved since then.

Currently I work on type inference (yeah) in a somewhat more
expressive context. The page of my system:
http://www.ii.uni.wroc.pl/~lukstafi/pmwiki/index.php?n=Infer.Infer
Using weak constraints would give it a more AGI-ish flavor, but my
work is in a programming-languages-theory context and I needed the
simplest thing possible. Currently (in addition to the "glue"
type-trees) I have linear inequalities. I plan to add Datalog as a
sublogic; then it should at least start looking a bit more like AI...
Here are two examples with linear arithmetic:

> input:
newtype Bar
newtype List : nat

newcons LNil : List 0
newcons LCons : for all (n) : Bar * List(n) --> List(n+1)

let rec split =
 function LNil -> LNil, LNil
   | LCons (x, LNil) as y -> y, LNil
   | LCons (x, LCons (y, z)) ->
   match split z with (l1, l2) ->
 LCons (x, l1), LCons (y, l2)

> output:
split1 : [?gen10 <= ?gen9;
 ?gen9 <= ?gen10 + 1;]
 List (?gen10 + ?gen9) -> List ?gen9, List ?gen10

-- what this means:
split (the first thing defined by this name) is a function that
takes a list of length a + b and returns a pair of lists, one of
length a and the other of length b, where b <= a and a <= b+1;
that is, they are of roughly the same length.

> input:
newtype Binary : nat
newtype Carry : nat

newcons Zero : Binary 0
newcons PZero : for all (n) : Binary(n) --> Binary(n+n)
newcons POne : for all (n) : Binary(n) --> Binary(n+n+1)

newcons CZero : Carry 0
newcons COne : Carry 1

let rec plus =
 function CZero ->
   (function Zero -> (fun b -> b)
 | PZero a1 as a ->
   (function Zero -> a
 | PZero b1 -> PZero (plus CZero a1 b1)
 | POne b1 -> POne (plus CZero a1 b1))
 | POne a1 as a ->
   (function Zero -> a
 | PZero b1 -> POne (plus CZero a1 b1)
 | POne b1 -> PZero (plus COne a1 b1)))
   | COne ->
   (function Zero ->
   (function Zero -> POne(Zero)
 | PZero b1 -> POne b1
 | POne b1 -> PZero (plus COne Zero b1))
 | PZero a1 as a ->
   (function Zero -> POne a1
 | PZero b1 -> POne (plus CZero a1 b1)
 | POne b1 -> PZero (plus COne a1 b1))
 | POne a1 as a ->
   (function Zero -> PZero (plus COne a1 Zero)
 | PZero b1 -> PZero (plus COne a1 b1)
 | POne b1 -> POne (plus COne a1 b1)))

> output:
plus1 : [?vCarry_n_12 <= 1;]
 Carry ?vCarry_n_12 -> Binary ?gen10 -> Binary ?gen11 ->
   Binary (?gen10 + ?gen11 + ?vCarry_n_12)

- what this means:
plus is a function that takes a value of type Carry c and binary
numbers Binary a and Binary b, and returns a binary number Binary
(a+b+c), where the carry token c <= 1 (numbers in types are natural
numbers).



Re: [agi] Uninterpreted RDF terms

2008-05-18 Thread Lukasz Stafiniak
Word Grammar comes to my mind, where when A -R-> B, and A' is-a A,
then you know A' -R-> B' where B' is-a B. Because I want to have
lattices (partial orders) in my system anyway, and because the nodes of
my graph-terms might be objects of any domain (they can even be nested
graph-terms), they could usually be from the partial-orders domain,
and I could add subtyping semantics. So, inheritance relations
would be specified separately from graph-terms, and you would write:

(rdf:type ?object-type ?X)
(is-a ?X owl:Thing)

where is-a is not part of the graph-term; it rather relates
graph-terms. Or you could look at it as a special edge, similarly to
Word Grammar.
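A rough OCaml sketch of that inheritance rule (purely illustrative;
the node names and the fresh-node naming scheme are invented):

type edge = { src : string; label : string; dst : string }

(* if A -R-> B and A' is-a A, derive A' -R-> B' together with B' is-a B *)
let inherit_edges isa edges =
  List.concat_map (fun (a', a) ->
    List.filter_map (fun e ->
      if e.src = a then
        let b' = a' ^ "/" ^ e.dst in        (* a fresh specialization B' of B *)
        Some ({ src = a'; label = e.label; dst = b' }, (b', e.dst))
      else None)
      edges)
    isa

let () =
  let isa = [ ("penguin", "bird") ] in
  let edges = [ { src = "bird"; label = "locomotion"; dst = "flying" } ] in
  match inherit_edges isa edges with
  | [ (e, (b', b)) ] ->
    Printf.printf "%s -%s-> %s, where %s is-a %s\n" e.src e.label e.dst b' b
  | _ -> ()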

On Sun, May 18, 2008 at 4:17 AM, Stephen Reed [EMAIL PROTECTED] wrote:
 Hi Lukasz,

 Here is a typical Capability Description from my first set of bootstrap
 cases:

  (capability
   name: defineInstanceVariable
   description: Defines an instance variable having the given name and
 object type.
   preconditions:
 (rdf:type ?variable-name cyc:NonEmptyCharacterString)
 (rdf:type ?object-type owl:Thing)
 (rdf:type ?variable-comment cyc:NonEmptyCharacterString)
 (rdf:type ?variable-invariant-conditions cyc:Tuple)
 (implies
   (cyc:memberOfTuple ?variable-invariant-condition
 ?variable-invariant-conditions)
   (rdf:type ?variable-invariant-condition cyc:CycLFormula))
   input-roles:
 (texai:blRole ?variable-name "a variable name")
 (texai:blRole ?object-type "a type")
 (texai:blRole ?variable-comment "a comment")
 (texai:blRole ?variable-invariant-conditions "some invariant
 conditions")
   output-roles:
 (texai:blRole ?defined-instance-variable "the defined instance
 variable")
   postconditions:  ;;TODO properties of the output with regard to the inputs
 (rdf:type ?defined-instance-variable
 texai:org.texai.bl.domainEntity.BLInstanceVariable)
 )

 I think that the restriction you propose is not expressive enough to handle
 this case.  If I am wrong please correct me.  The matching is performed on
 the preconditions, postconditions and invariant-conditions.  The latter is
 not illustrated in this example but consist of implications similar in form
 to the one found in the preconditions of this example.

 For the others on this list following my progress, the example is from a set
 of essential capability descriptions that I'll use to bootstrap the skill
 acquisition facility of the Texai dialog system.  The subsumption-based
 capability matcher is done.  I'm writing Java code that implements each of
 these capabilities.  That should be completed in a few more days, and then
 I'll fit that into the already completed dialog system.  At that point I
 should be able to begin exploring what essential utterances will be needed
 to acquire skills by being taught, and generate Java programs to perform
 them.

 -Steve




[agi] (Interpreted!) RDF terms

2008-05-17 Thread Lukasz Stafiniak
Because I use saturation anyway, my algorithms can be parameterized
with a monotonic consequence operator, which could implement an
implicational theory, with rules (at least) of the form

for all (...) [ Phi ==> exist (...) Psi ]

where Phi and Psi are conjunctions of RDF atoms plus Psi could also
contain equalities. I could also add negation for atoms, by
restricting labels and not-labels (in a similar way to how offspring_i
and offspring_j would be restricted), but observe that there would be
no closed-world assumption: lack of an edge means I don't know yet.
This is similar to bottom-up logic programming.
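Such a saturation loop is easy to sketch (an illustrative OCaml
fragment of my own, using as the single rule the offspring example
from the message quoted below):

type triple = string * string * string

(* a monotonic consequence operator implementing one rule:
   x offspring_i y ==> x offspring y *)
let consequences ((s, p, o) : triple) : triple list =
  let prefix = "offspring_" in
  let n = String.length prefix in
  if String.length p > n && String.sub p 0 n = prefix
  then [ (s, "offspring", o) ]
  else []

(* close the fact set under the operator, up to a fixpoint *)
let rec saturate facts =
  let fresh =
    List.concat_map consequences facts
    |> List.sort_uniq compare
    |> List.filter (fun t -> not (List.mem t facts))
  in
  if fresh = [] then facts else saturate (fresh @ facts)

let () =
  let closed = saturate [ ("a", "offspring_1", "b"); ("c", "offspring_2", "b") ] in
  assert (List.mem ("a", "offspring", "b") closed)

Monotonicity is what makes this well-behaved: the operator only ever
adds facts, so the closure exists and the order of rule firing doesn't
matter.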

On Sat, May 17, 2008 at 7:40 PM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 Steve,

 How severe would you consider a restriction on RDF graphs that would
 allow at most one incoming and at most one outgoing edge with a given
 label, for capability descriptions? This would make it possible to do
 unification (and generalization, a.k.a. intersection) on graphs easily
 (not as easily as on terms, but nearly). Outside the system where it
 would be needed (I have automatic programming / program analysis in
 mind), the theory/graphs can be extended, of course. For example, the
 parenting relation would have to be split into "x offspring_i y",
 meaning "x is the i-th offspring of y", and we could also add outgoing
 and incoming restrictions, e.g. that a node cannot have incoming
 offspring_i and offspring_j edges for i <> j. Outside, we would have
 the implication "x offspring_i y ==> x offspring y".

 Best wishes.
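By the way, a rough OCaml sketch of why the restriction helps
(entirely illustrative; a full version would also check node
constraints and build the merged graph):

module M = Map.Make (String)

(* node -> (label -> unique successor): the at-most-one-outgoing-edge
   restriction makes each node's neighborhood a function *)
type graph = string M.t M.t

let successors g n = match M.find_opt n g with Some m -> m | None -> M.empty

(* unify two rooted graphs by pairing up nodes; determinism means there
   is never a choice point, hence no backtracking, as with terms *)
let rec unify_nodes g1 g2 pairs n1 n2 =
  if List.mem (n1, n2) pairs then Some pairs     (* already identified *)
  else
    let s1 = successors g1 n1 and s2 = successors g2 n2 in
    M.fold (fun label m1 acc ->
      match acc with
      | None -> None
      | Some ps ->
        (match M.find_opt label s2 with
         | Some m2 -> unify_nodes g1 g2 ps m1 m2  (* the unique successor *)
         | None -> Some ps))                      (* no constraint from g2 *)
      s1 (Some ((n1, n2) :: pairs))

Generalization (intersection) works the same way, keeping only the
labels present on both sides.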




Re: [agi] AGI-08 videos

2008-05-06 Thread Lukasz Stafiniak
On Tue, May 6, 2008 at 4:07 PM, Mark Waser [EMAIL PROTECTED] wrote:

  Note:  Most of these complaints do *NOT* apply to Texai (except possibly
 the two to five level complaint -- except that Texai is actually starting at
 what I would call one of the middle levels and looks like it has reasonable
 plans for branching out.)

Texai has the added value of freshness, but the challenge Steve is
facing now is perhaps bigger than the ones he has conquered already:
to reflect on the system's state and to represent, learn and reason
about actions.



Re: [agi] Interesting approach to controlling animated characters...

2008-05-01 Thread Lukasz Stafiniak
IMHO, Euphoria shows that pure GA approaches are lame.
More details here:
http://aigamedev.com/editorial/naturalmotion-euphoria

On Thu, May 1, 2008 at 5:39 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Now this looks like a fairly AGI-friendly approach to controlling
  animated characters ... unfortunately it's closed-source and
  proprietary though...

  http://en.wikipedia.org/wiki/Euphoria_%28software%29


  ben





Re: [agi] Interesting approach to controlling animated characters...

2008-05-01 Thread Lukasz Stafiniak
If I were paid to get a good animation, I would cheat: I would use

-- mixed forward/inverse dynamics instead of pure (forward) simulation,

-- motion capture data mining,

-- hand-crafted parameterized models instead of generic NNs

On Thu, May 1, 2008 at 8:19 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Actually, it seems their technique is tailor-made for imitative learning

  If you gathered data about how people move in a certain context, using
  motion capture, then you could use their GA/NN stuff to induce a
  program that would generate data similar to the motion-captured data.

  This would then be more generalizable than using the raw motion-capture data

  -- Ben


  On Thu, May 1, 2008 at 2:11 PM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
   IMHO, Euphoria shows that pure GA approaches are lame.
More details here:
http://aigamedev.com/editorial/naturalmotion-euphoria
  
  
  
On Thu, May 1, 2008 at 5:39 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Now this looks like a fairly AGI-friendly approach to controlling
  animated characters ... unfortunately it's closed-source and
  proprietary though...

  http://en.wikipedia.org/wiki/Euphoria_%28software%29


  ben




Re: [agi] Between logical semantics and linguistic semantics

2008-04-14 Thread Lukasz Stafiniak
On Mon, Apr 14, 2008 at 5:14 PM, Stephen Reed [EMAIL PROTECTED] wrote:

 My first solution to this problem is to postpone it by employing a
 controlled English, in which such constructions will be avoided if possible.
 Secondly, Jerry Ball demonstrated his solution in Double R Grammar at the
 2007 AAAI Fall Symposium, Cognitive Approaches to NLP.  His slide
 presentation is here, which I think fully addresses your issues.  To
 summarize Dr. Ball's ideas, which I will ultimately adopt for Texai:

Thanks, very interesting slides. I think he forgets to mention
Dynamic Syntax (Ruth Kempson, Dov Gabbay).

 Serial processing [word by word parsing] with algorithmic backtracking has
 no hope for on-line processing in real-time in a large coverage NLP system.

I think that the Double R "accommodation" approach can be approximated
by incremental right-to-left parsing. Something along the lines of
http://www.speagram.org/wiki/Grammar/ChartParser but it still needs
much work (the approach was developed when I was in a "computational
semantics" phase; it ignores cognitive linguistics, and is too
fragmented: only categorical semantics (and agreement, by use of
variables in types) are processed, with relational and referential
semantics postponed to later stages). The upside is that it can
handle general Context Free Grammars.

I didn't know that Microsoft uses some kind of right-to-left parsing;
I thought it was my invention :-)

 I regret that some aspects of my implementation are difficult to follow
 because I am using Jerry Ball's Double R Grammar, but not his ACT-R Lisp
 engine, using instead my own incremental, cognitively plausible version of
 Luc Steels's Fluid Construction Grammar engine.  I combined these two systems
 because Jerry Ball's engine is not reversible, Luc Steels's grammar does not
 have good coverage of English, and the otherwise excellent Fluid Construction
 Grammar engine is not incremental.

 -Steve

Perhaps you could get some linguist to capitalize on your work with a
publication?

Best Regards,
Łukasz



Re: [agi] Between logical semantics and linguistic semantics

2008-04-14 Thread Lukasz Stafiniak
2008/4/14 Lukasz Stafiniak [EMAIL PROTECTED]:
 On Mon, Apr 14, 2008 at 5:14 PM, Stephen Reed [EMAIL PROTECTED] wrote:
  
   Serial processing [word by word parsing] with algorithmic backtracking has
   no hope for on-line processing in real-time in a large coverage NLP system.

  I think that Double R accomodation approach can be approximated by
  incremental right-to-left parsing. Something along the lines of
  http://www.speagram.org/wiki/Grammar/ChartParser but still needs much

If you're confused by the equations: increments are left-to-right, and
between increments there's a right-to-left, accommodation-like stage.



Re: [agi] Between logical semantics and linguistic semantics

2008-04-13 Thread Lukasz Stafiniak
On Wed, Apr 9, 2008 at 6:03 AM, Stephen Reed [EMAIL PROTECTED] wrote:

 I would be interested
 in your comments on my adoption of Fluid Construction Grammar as a solution
 to the NL  to semantics mapping problem.

(1) Word Grammar (WG) is a construction-free version of your approach.
It is based solely on spreading activation. It doesn't have a sharp
separation of syntax and semantics: there's only one net. Nodes
representing subgraphs corresponding to constructions can be organized
into inheritance hierarchies (extensibility). But pure WG makes
things very awkward logics-wise; making it work would be a lot of
research (the WG book doesn't discuss utterance generation IIRC, but
reversing parsing-interpretation seems quite direct: select the most
activated word which doesn't have a left landmark, introduce a
word-instance node for it, include spreading its activation through a
right-landmark edge, ignoring the direction of the landmark). Texai is
impure by its very nature; perhaps it could be made more of a WG*FCG
mix (more than just sharing the spreading-activation idea).

(2) FCG is closer to traditional approaches a la computational
linguistics than WG is.

(3) One could give up some FCG features to simplify it, for example by
assuming one-to-one correspondence between constructions and atomic
predicates.

(4) I'm interested in how you handle backtracking: giving up on the
application of a construction when it leads to inconsistency.
Chart-based unification parsing can be optimized to share applications
of constructions which are parallel, and this can be extended to
operators which are (like unification) monotonic, i.e. which cannot
make an unsatisfiable/inconsistent state satisfiable/consistent.
Merging conjoins new facts to old ones, so it is monotonic in
monotonic logics. (Default/defeasible logics are nonmonotonic.)

(4a) Does the fact that your parser is incremental mean that you do
early commitment to constructions? (Double R Grammar seems to
support "early commitment" when there is a choice, but backtracking is
still needed to get an interpretation when the only available ones
are those reached without it.)

I will get to studying your sources when I have some time...



Re: [agi] Between logical semantics and linguistic semantics

2008-04-12 Thread Lukasz Stafiniak
On Thu, Apr 10, 2008 at 2:49 AM, Pei Wang [EMAIL PROTECTED] wrote:
 Lukasz,

  Thanks!

  To me, your logical semantics and linguistic semantics correspond
  to meaning of concepts and meaning of words, respectively, and the
  latter is a subset of the former, as far as an individual is
  concerned.

Now I think that my short description was wrong: I don't intend this
reading, but I'm a bit lost right now.

Words are empirically evident, while some (e.g. see the beginning of
the Double R Grammar paper) deny the existence of concepts (under some
psychological or psycholinguistic theory of what concepts are).
Linguists study languages, which are easier for empirical exploration
and systematization than the mind. (And linguists do also use
introspection; introspection is an empirical method, I think.)

  Some random comments on the Multinet material:

  *. Principal requirements for a KRS
  -- completeness: it is easy to show that a KRS is incomplete, but hard
  (if possible) to show/prove it is complete.

Only some approximation to completeness is required.

  *. CD theory: Though the idea in very intuitive, it is fundamentally
  wrong, for two major reasons:
  (1) Unlike the situation of chemistry, where everything can be
  analyzed as compound formed by 100+ chemical elements, meaning of a
  concept/word cannot be reduced into semantic primitives. Though we
  can use simpler concepts to define complicated ones, such a definition
  never capture the full meaning of the latter, but can only approximate
  it to various degrees.

This critique is agreed upon and accounted for by Multinet.

  (2) The meaning of a concept/word is not a constant, but a variable.
  Of course, any description of it will be constant, but it is just a
  snapshot taken at a moment, and the semantic theory must allow
  meaning to change, rather than attempt to specify the real or true
  meaning, or to converge to such an attractor.

The QAS of Multinet should do better at accounting for change, and
should be _more than a QAS_ to provide the use-theoretic semantics
they are aiming at. The meaning in Multinet can change: a multinet can
change; the restrictive knowledge limits the amount of allowed
change.

  *. KB and the world: still depends on Tarskian semantics, with KR
  aiming at to describe the world as it is.

No, Multinet is very critical of model-theoretic (or
truth-theoretic) semantics. It has the division into an "intensional
level", which doesn't refer to the outside world, and a
"pre-extensional level", which I think can also have an internalized
reading (the world objects as they are meant).

(We need balance of course, it is useful to know the world as it is. I
might add, it is also useful to know the world as it should be...)



Re: [agi] Between logical semantics and linguistic semantics

2008-04-09 Thread Lukasz Stafiniak
On Wed, Apr 9, 2008 at 6:03 AM, Stephen Reed [EMAIL PROTECTED] wrote:

 Thanks for the compliment Lukasz.   I am reading your slides and here are my
 comments:

 (1)  I had seven years experience with the Cyc project.  Would you agree
 that Cyc aspires to be a KRS as you define it?

Well, as Hermann Helbig defines it. Yes, Cyc gets its highlight in the
Multinet book. (I've only written a bit of text on the slides; most of
them, i.e. the diagrams, come from the Multinet book.)

 (2)  Sadly, Cyc lacks procedural methods as first class KB objects.  Also
 known as codelets, these are pieces of procedural code that can be fired as
 the consequent of a rule.  Cyc has a facility to do this but the procedures
 themselves are semantically opaque, being calls into the Cyc runtime engine.
 In my own work I want to fully represent procedures, using Cyc action
 scripts as the starting point, so that the system can do things in
 addition to answering questions.

Multinet doesn't help much here: a QAS is not an agent; Multinet helps
by providing the semantic framework for situations and actions. We
need to add a representation of self, grounded in that semantics.

 (4) Conceptual Dependency Theory (CD) - This is somewhat like Cyc in that
 Doug Lenat is a mathematician and was strongly attracted to symbolic
 representations that are independent of natural language.  The glaring
 problem with this approach is that coverage of commonsense phenomena is
 harder without guidance from natural language sources.  To illustrate my
 point, rather than start with an English encyclopedia and represent it
 entirely, the Cyc project began with some commonsense situations, (e.g. one
 day in the life of Fred) and represented them from first
 philosophical/mathematical principles.   In my own work, I want to extend
 the Cyc ontology to cover all the concepts mentioned in the glosses
 (definitions) of WordNet, and ultimately the propositional content of
 Wikipedia articles.

Well, perhaps Cyc falls short on both fronts: it is too broad to be
CD, in that it represents much more meaning than can be built from
CD-like atoms. But the representation is provided ad hoc by knowledge
engineers; it is not grounded in the general net of meaning, which
in Multinet is provided by the NL semantics. But perhaps these are
just false slogans and Cyc knowledge is dense enough.

 (5) Sorts and Features - To me these are Cyc-like, except that Cyc made the
 decision to represent appropriate features as class membership (e.g. the
 property cyc:mainColorOfObject is a sub-property of  cyc:isa / rdf:type).
 Supposedly, this representation is faster for Cyc deductive inference.

On the red petal slide you see that Multinet is flexible about how
things are represented (e.g. general rules transform between PROP and
ATTR-VAL, and similarly rules relate features and their corresponding
concept nodes; though I don't know enough about features in Multinet).

 (6) Knowledge types - Multinet appears more expressive in this respect than
 the Cyc ontology, although the Cyc KR language CycL allows meta assertions
 so I believe that MultiNet could be encoded in a Cyc KB.

I believe Cyc is well worked-out by the many man-years of development.
(I still need to get my hands dirty.)

 (7) Conceptual capsule - interesting, Cyc has the supporting assertions but
 not the notion of what assertions uniquely define an object.

This seems important to the object/concept-centeredness and
intensionality of Multinet.

 (8) How does Multinet address connectionism or probabilistic inference (e.g.
 Bayes)?  Did I miss where a probability may be associated with an assertion,
 or with an argument place in an assertion?

The book only mentions that the underlying logic should have levels
of trustworthiness. Multinet doesn't represent probabilities because
neither does language: Multinet has modal modifiers, e.g. (very)
probable, (very) unlikely, etc., and intensional generalized
quantifiers, like some and most.

 (8) lexicon - need more examples for me to comment.  I would be interested
 in your comments on my adoption of Fluid Construction Grammar as a solution
 to the NL  to semantics mapping problem.

I'll try to find time for some more in-depth comments here.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Between logical semantics and linguistic semantics

2008-04-09 Thread Lukasz Stafiniak
Steve,

I'm just on the 7th page of the Double R Grammar paper so I'm rushing
ideas here, but it is interesting to see how Multinet, while having its
roots in Conceptual Dependency Theory / Case Grammar, and taking the
concepts it talks about as mental realities, lands quite close to the
philosophy of Double R Grammar (which defines itself in opposition to
the above) by insisting on grounding concepts in the lexicon.


On Wed, Apr 9, 2008 at 4:02 PM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 On Wed, Apr 9, 2008 at 6:03 AM, Stephen Reed [EMAIL PROTECTED] wrote:
  
   (4) Conceptual Dependency Theory (CD) - This is somewhat like Cyc in that
   Doug Lenat is a mathematician and was strongly attracted to symbolic
   representations that are independent of natural language.  The glaring
   problem with this approach is that coverage of commonsense phenomena is
   harder without guidance from natural language sources.  To illustrate my
   point, rather than start with an English encyclopedia and represent it
   entirely, the Cyc project began with some commonsense situations, (e.g. one
   day in the life of Fred) and represented them from first
   philosophical/mathematical principles.   In my own work, I want to extend
   the Cyc ontology to cover all the concepts mentioned in the glosses
   (definitions) of WordNet, and ultimately the propositional content of
   Wikipedia articles.

  Well, perhaps Cyc falls short on both fronts: it is too broad to be
  CD, in that it represents much more meaning than can be built from
  CD-like atoms. But the representation is provided ad hoc by knowledge
  engineers; it is not grounded in the general net of meaning, which
  in Multinet is provided by the NL semantics. But perhaps these are
  just false slogans and Cyc knowledge is dense enough.


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


[agi] Between logical semantics and linguistic semantics

2008-04-08 Thread Lukasz Stafiniak
I have recently polished my copy-and-paste slides on Multinet:

http://www.ii.uni.wroc.pl/~lukstafi/pmwiki/uploads/AGI/Multinet.pdf

Pei Wang also gives an interesting chapter about semantics in AGI-Curriculum.

By "logical semantics" I mean the meaning of the contents of the mind,
and by "linguistic semantics" the meaning of the contents of
communication.
What AGI-importance do you assign to capturing the semantics of
natural language? (And NL-semantics' impact on logical semantics, as
opposed to letting the computer build the representation for itself,
out of some elementary thought mechanics.)

P.S. Thanks to Pei Wang for the interesting curriculum and to Stephen
Reed for the great work on Texai.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Instead of an AGI Textbook

2008-03-28 Thread Lukasz Stafiniak
On Fri, Mar 28, 2008 at 9:29 PM, Robert Wensman
[EMAIL PROTECTED] wrote:
 A few things come to my mind:

 1. To what extent are learning and reasoning a subtopic of cognitive
 architectures? Are learning and reasoning a plugin to a cognitive
 architecture, or is in fact the whole cognitive architecture about learning
 and reasoning?

If a cognitive architectures department of AGI research is to be
usefully delineated, then these are not its subtopics. But neither are
they plug-ins. Rather: "It is in this chapter that I introduce you to
the overall structure of the system; from other chapters you know that..."

 2. I would like a special topic on AGI goal representation. More
 specifically, a topic that discusses how a goal specified by any human
 designer, can be related to the world model and actions that an AGI system
 creates? For example, how can the human specified goal, be related to a
 knowledge representation that is constantly developed by the AGI system?

Yes, more work is needed on lifelong goal structures, Pollock's master
plans, and integration with the motivational system (which in its
primitive form is spreading activation).

 3. Why do AI/AGI researchers always talk about knowledge representation?
 It gives such a strong bias towards static or useless knowledge bases. Why
 not talk more about world modelling? Because of the more active meaning of
 the word modelling as opposed to representation, it implies that things
 such as inference etc. need to be considered. Since the word modelling is
 also used to denote the process of creating a model, it also implies that we
 need mechanisms for learning. I really think we should consider whether
 knowledge representation is not a concept borrowed straight from dumb-narrow
 AI, or if it really is a key concept for AGI. Sure enough, there will always
 be knowledge representation, but the question is whether it is an
 important/relevant/sufficient/misleading concept for AGI.

Agreed. I think that the knowledge representation label should not be
abandoned, but should be grown towards how the system accommodates the
sophisticated semantics of natural language and/or its formative
domain, where a formative domain can be a social environment, a
programming environment, etc.

 4. In fact, I would suggest that AGI researchers start to distinguish
 themselves from narrow AI by replacing the over-ambiguous concepts from AI,
 one by one. For example:

I neither agree nor disagree with your suggestion; I just thank you for
clarifying your ideas here considerably :-)

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-19 Thread Lukasz Stafiniak
On Feb 19, 2008 2:41 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 I think resolution theorem proving provides a way to answer yes/no queries
 in a KB.  I take it as a starting point, and try to think of ways to speed
 it up and to expand its abilities (answering what/where/when/who/how
 queries).

Oh my, resolution answers wh-questions as well as decision questions
in FOL. You just record the answer substitution. (BTW, Prolog is a
restricted form of resolution, namely SLD resolution.) We need to be
more technical here.
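
To be concrete, here is a minimal sketch in Python of the
answer-substitution point (a toy fact base and plain backward matching
standing in for a resolution engine; all data is hypothetical): a
wh-question is just a decision question with a variable in it, and the
binding recorded during the refutation is the answer.

# Toy sketch: answer extraction by recording substitutions.
facts = [("located_in", "eiffel_tower", "paris"),
         ("located_in", "louvre", "paris"),
         ("located_in", "big_ben", "london")]

def is_var(term):
    return term.isupper()  # Prolog-like convention: upper-case = variable

def match(query, fact):
    """Return the substitution unifying query with fact, or None."""
    if query[0] != fact[0] or len(query) != len(fact):
        return None
    subst = {}
    for q, f in zip(query[1:], fact[1:]):
        q = subst.get(q, q)        # dereference an already-bound variable
        if is_var(q):
            subst[q] = f
        elif q != f:
            return None
    return subst

def answers(query):
    return [s for f in facts if (s := match(query, f)) is not None]

# Decision question: is the Louvre in Paris?
print(bool(answers(("located_in", "louvre", "paris"))))          # True
# Wh-question: what is located in Paris? Read off the substitution.
print([s["X"] for s in answers(("located_in", "X", "paris"))])
# -> ['eiffel_tower', 'louvre']

Full resolution adds clauses, recursion and occurs checks, but the
answer-extraction mechanism is exactly this substitution recording.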

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Lukasz Stafiniak
On Feb 17, 2008 2:11 PM, Russell Wallace [EMAIL PROTECTED] wrote:
 On Feb 17, 2008 9:56 AM, YKY (Yan King Yin)
 [EMAIL PROTECTED] wrote:
 
  I'm planning to collect commonsense knowledge into a large KB, in the form
  of first-order logic, probably very close to CycL.

 Before you embark on such a project, it might be worth first looking
 closely at the question of why Cyc hasn't been useful, so that you
 don't end up making the same mistakes.

This is perhaps a good opportunity to poll you on why you think the
Cyc KB hasn't been useful / successful. I'm interested in grounded
opinions (Stephen?), and not about Cyc as an AGI but about the Cyc KB
as what it was supposed to be (e.g. a universal backbone so that expert
systems wouldn't fall off the knowledge cliff).

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Lukasz Stafiniak
On Jan 29, 2008 12:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Vladimir Nesov [EMAIL PROTECTED] wrote:
  Exactly. That's why it can't hack provably correct programs.

 Which is useless because you can't write provably correct programs that aren't
 extremely simple.  *All* nontrivial properties of programs are undecidable.
 http://en.wikipedia.org/wiki/Rice%27s_theorem

This is false. You can write nontrivial programs for which you can
prove nontrivial properties. Rice's theorem says that no algorithm can
decide a nontrivial property for *all* programs in a Turing-complete
language, given unbounded resources, with the programs handed to you
by an adversary; it says nothing about proving properties of a
particular program you wrote yourself.

 And good luck translating human goals expressed in ambiguous and incomplete
 natural language into provably correct formal specifications.

This is true.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90871958-149830


Re: [agi] Logical Satisfiability

2008-01-15 Thread Lukasz Stafiniak
On Jan 15, 2008 10:49 PM, Jim Bromer [EMAIL PROTECTED] wrote:

 At any rate, I should have a better idea if the idea will work or not by the
 end of the year.

Lucky you, last time I proved P=NP it only lasted two days ;-)
Some resources for people caught by this off-topic thread:

- last year's celebrity: PCP Theorem by Gap Amplification
http://www.cs.huji.ac.il/~dinuri/mypapers/combpcp.pdf

- Introduction to Complexity Theory (Lecture Notes):
http://www.wisdom.weizmann.ac.il/~oded/cc-sum.html

- complexities do collapse at times (L=SL, 2004): Undirected
ST-connectivity in Log-Space
http://www.wisdom.weizmann.ac.il/~reingold/publications/sl.ps

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=86291535-6f34ee


Re: [agi] AGI and Deity

2007-12-13 Thread Lukasz Stafiniak
Under this thread, I'd like to bring your attention to Serial
Experiments: Lain, an interesting pre-Matrix (1998) anime.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=75885762-854b15


[agi] Case-Based Reasoning

2007-12-10 Thread Lukasz Stafiniak
Perhaps you could find this interesting:
Case-Based Approximate Reasoning
Hüllermeier, Eyke


On May 14, 2007 2:50 PM, Mark Waser [EMAIL PROTECTED] wrote:


  Also anything you can find on case-based reasoning, tho it is woefully
 rare.

 Having done a lot of case-based reasoning almost 23 years ago . . . .

 Case-based reasoning is effectively analogous to weighted nearest neighbor
 in multi-dimensional space.  If you (or the system) can define the
 dimensions and scale and weight them, it's an awesome method -- this is
 equivalent to the logic-based/expert-system approach to CBR.  The other
 alternative, which most people don't realize is exactly equivalent to CBR,
 is to just use neural networks (since they just effectively map the
 multi-d space -- complete with scaling and weighting).

 Having used both methods, I would say that, until they both scale themselves
 fairly quickly into oblivion, the neural network method is more accurate
 while CBR provides much better explanations.  The unfortunate thing is that
 as you add more and more dimensions, both methods falter pretty quickly.

 - Original Message -
 From: J Storrs Hall, PhD [EMAIL PROTECTED]
 To: agi@v2.listbox.com


 Sent: Monday, May 14, 2007 7:51 AM
 Subject: Re: [agi] Tommy



  On Saturday 12 May 2007 10:24:03 pm Lukasz Stafiniak wrote:
 
  Do you have some interesting links about imitation? I've found these,
  not all of them interesting, I'm just showing what I have:
 
  Thanks -- some of those look interesting. I don't have any good links, but
 I'd
  reccomend Hurley  Chater, eds, Perspectives on Imitation (in 2 vols).
 
  Also anything you can find on case-based reasoning, tho it is woefully
 rare.
 
  Josh
 
  -
  This list is sponsored by AGIRI: http://www.agiri.org/email

  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?;
 

  This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74317004-ffbd21

Re: [agi] Funding AGI research

2007-11-19 Thread Lukasz Stafiniak
it reminds me of that old joke about the three kinds of mathematicians ;-)

On Nov 19, 2007 5:25 AM, Benjamin Goertzel [EMAIL PROTECTED] wrote:



 On Nov 18, 2007 11:24 PM, Benjamin Goertzel [EMAIL PROTECTED] wrote:
 
  There are a lot of worthwhile points in your post, and a number of things
 I don't fully agree with, but I don't have time to argue them all right
 now...
 
  Instead I'll just pick two points:


 er, looks like that was three ;-)



 
 
  1)
  The Babbages and Leibnizes of a given historical period are often visible
 only in HINDSIGHT.  You can't say that there are no Babbages or Leibnizes of
 AGI around right now ... there could be some on this very list, unrecognized
 by you, but who will be recognized by all a few decades from now...
 
  2)
  I don't think it's true that Babbage's or Leibniz's machines were specced
 out so much better than, say, Novamente.  Relative to the technology of
 their time, plenty of details were left unspecified -- it just seems obvious
 to us now, in hindsight, how to fill in those details.  It wasn't obvious to
 all their contemporaries.  And while, in hindsight, the workability of their
 machines seems obvious to us, to their contemporaries it must have seemed
 like the workability of their machines required a huge leap of intuition.
 They had no rigorous mathematical proof of the workability of their
 machines, nor did they have working prototypes.  They had conceptual
 arguments that pushed the boundaries of the science of their times, and
 seemed like nonsense to many of their contemporaries.
 
  3)
  I don't agree that AGI is primarily a computer science problem, any more
 than, say, building a car is primarily a metalworking problem.  AGI requires
 computer science problems to be solved as part of its solution; but IMO the
 essence of AGI-creation is not computer science.  This seems to be a genuine
 difference of scientific intuition btw the two of us.  Plenty of others whom
 I respect appear to share the same opinion as you.
 
 
  -- Ben G
 
 
 
 
 
 
 
  On Nov 18, 2007 11:04 PM, J. Andrew Rogers [EMAIL PROTECTED] 
 wrote:
 
  
  
   On Nov 18, 2007, at 7:06 PM, Benjamin Goertzel wrote:
   
Navigating complex social and business situations requires a quite
different set of capabilities than creating AGI.  Potentially they
could
be combined in the same person, but one certainly can't assume that
would be the case.
  
  
   I completely agree.  But if we are to assume that AGI requires some
   respectable amount of funding, as seems to be posited by many people,
   then it seems that it will require a person with broader skills than
   the stereotypical computer science nerd.  In that case, maybe AGI is
   not accessible to someone who is unwilling or unable to be anything
   but a computer science nerd.  As if the pool of viable AGI
   researchers was not small enough already.
  
  
  
And, I don't think it's fair to say that if you're smart enough to
solve AGI,
you should be able to quickly make a pile of money doing some kind of
more marketable technical-computer-science, and fund the AGI
yourself.
   
This assumes a lot of things, for instance that AGI is the same
sort of
problem as technical-computer-science problems, so that if someone can
do AGI better than others, they must be able to do technical-
computer-science
better than others too.  But I actually don't think this is true; I
think that AGI
demands a different sort of thinking.
  
  
   I'm not so sure about this.  All hard problems seem to receive
   similar sentiments until they are actually solved.  I do think that
   AGI is a relatively hard problem even among the hard problems, but
   there are other computer science problems that had thousands of pages
   of literature devoted to them without much progress that when they
   were solved by someone turned out to be relatively simple.  That
   20/20 hindsight thing.  To the extent that there is any special sauce
   in AGI, I expect it will look like one of these cases.
  
   Solving computer science problems is a pretty general skill, in part
   because it is a pretty shallow field in most important respects.  To
   use AI research as an example, it is composed of only a handful of
   fundamental ideas from which a myriad of derivatives and mashups have
   been created.  Most other problems in computer science have the same
   feature, and when problems get solved it is because someone looked at
   the handful of fundamentals and ignored the vast bodies of derivative
   products which add nothing new.  Vast quantities of research does not
   equate to a significant quantity of ideas.  AI is a little more
   complex than some other topics, but is still far simpler at the level
   of fundamentals than some people make it out to be.
  
  
   People are incapable of solving AGI for the same reason they are
   incapable of solving any of the other interesting computer 

Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Lukasz Stafiniak
I think that there are two basic directions to better the Novamente
architecture:
- the one Mark talks about
- more integration of MOSES with PLN and RL theory

On 11/13/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 Response to Mark Waser  Mon 11/12/2007 2:42 PM post.



 MARK  Remember that the brain is *massively* parallel.  Novamente and
 any other linear (or minorly-parallel) system is *not* going to work in
 the same fashion as the brain.  Novamente can be parallelized to some
 degree but *not* to anywhere near the same degree as the brain.  I love
 your speculation and agree with it -- but it doesn't match near-term
 reality.  We aren't going to have brain-equivalent parallelism anytime in
 the near future.



 ED I think in five to ten years there could be computers capable of
 providing every bit as much parallelism as the brain at prices that will
 allow thousands or hundreds of thousands of them to be sold.



 But it is not going to happen overnight.  Until then the lack of brain
 level hardware is going to limit AGI. But there are still a lot of high
 value system that could be built on say $100K to $10M of hardware.



 You claim we really need experience with computing and controlling
 activation over large atom tables.  I would argue that obtaining such
 experience should be a top priority for government funders.



 MARK  The node/link architecture is very generic and can be used for
 virtually anything.  There is no rational way to attack it.  It is, I
 believe, going to be the foundation for any system since any system can
 easily be translated into it.  Attacking the node/link architecture is
 like attacking assembly language or machine code.  Now -- are you going to
 write your AGI in assembly language?  If you're still at the level of
 arguing node/link, we're not communicating well.



 ED  Nodes and links are what patterns are made of, and each static
 pattern can have an identifying node associated with it as well as the
 nodes and links representing its sub-patterns, elements, the compositions
 of which it is part, its associations, etc.  The system automatically
 organizes patterns into a gen/comp hierarchy.  So, I am not just dealing at
 a node and link level, but they are the basic building blocks.





 MARK ... I *AM* saying that the necessity of using probabilistic
 reasoning for day-to-day decision-making is vastly over-rated and has been
 a horrendous side-road for many/most projects because they are attempting
 to do it in situations where it is NOT appropriate.  The increased,
 almost ubiquitous adoption of probabilistic methods is the herd
 mentality in action (not to mention the fact that it is directly
 orthogonal to work thirty years older).  Most of the time, most projects
 are using probabilistic methods to calculate a tenth place decimal of a
 truth value when their data isn't even sufficient for one.  If you've got
 a heavy-duty discovery system, probabilistic methods are ideal.  If you're
 trying to derive probabilities from a small number of English statements
 (like "this raven is white" and "most ravens are black"), you're seriously
 on the wrong track.  If you go on and on about how humans don't understand
 Bayesian reasoning, you're both correct and clueless in not recognizing
 that your very statement points out how little Bayesian reasoning has to
 do with most general intelligence.  Note, however, that I *do* believe
 that probabilistic methods *are* going to be critically important for
 activation for attention, etc.



 ED  I agree that many approaches accord too much importance to the
 numerical accuracy and Bayesian purity of their approach, and not enough
 importance on the justification for the Bayesian formulations they use.
 I know of one case where I suggested using information that would almost
 certainly have improved a perception process and the suggestion was
 refused because it would not fit within the system's probabilistic
 framework.   At an AAAI conference in 1997 I talked to a programmer for a
 big defense contractor who said he was a fan of fuzzy logic systems; that
 they were so much simpler to get up and running because you didn't have
 to worry about probabilistic purity.  He said his group that used fuzzy
 logic was getting things out the door that worked faster than the more
 probability limited competition.  So obviously there is something to say
 for not letting probabilistic purity get in the way of more reasonable
 approaches.



 But I still think probabilities are darn important. Even your "this raven
 is white" and "most ravens are black" example involves notions of
 probability.  We attribute probabilities to such statements based on
 experience with the source of such statements or similar sources of
 information, and the concept "most" is a probabilistic one.  The reason we
 humans are so good at reasoning from small data is based on our ability to
 estimate rough probabilities from similar or generic patterns.



 MARK  The 

Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Lukasz Stafiniak
On Nov 14, 2007 3:48 PM, Edward W. Porter [EMAIL PROTECTED] wrote:
 Lukasz,

 Which of the multiple issues that Mark listed is one of the two basic
 directions you were referring to.

 Ed Porter

(First of all, I'm sorry for attaching my general remark as a reply: I
was writing from a cell-phone which limited navigation.)

I think that it would be a more fleshed-out knowledge representation
(but without limiting the representation-building flexibility of
Novamente).

 -Original Message-
 From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, November 14, 2007 9:15 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] What best evidence for fast AI?


 I think that there are two basic directions to better the Novamente
 architecture:
 - the one Mark talks about
 - more integration of MOSES with PLN and RL theory


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64970556-f74c23


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Lukasz Stafiniak
On Nov 12, 2007 10:34 PM, Linas Vepstas [EMAIL PROTECTED] wrote:

 I can easily imagine that next-years grand challenge, or the one
 thereafter, will explicitly require ability to deal with cyclists,
 motorcyclists, pedestrians, children and dogs. Exactly how they'd test
 this, however, I don't know ...

DARPA seems to be winding down the car challenges :-(

(does anyone know anything to the contrary?)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64332374-2a763e


Re: AIXItl; Wolfram's hypothesis (was Re: [agi] How valuable is Solmononoff Induction for real world AGI?)

2007-11-10 Thread Lukasz Stafiniak
On Nov 10, 2007 4:47 PM, Tim Freeman [EMAIL PROTECTED] wrote:
 From: Lukasz Stafiniak [EMAIL PROTECTED]
 The programs are generally required to exactly match in AIXI (but not
 in AIXItl I think).

 I'm pretty sure AIXItl wants an exact match too.  There isn't anything
 there that lets the theoretical AI guess probability distributions and
 then get scored based on how probable the actual world is according to
 that distribution -- each hypothesis is either right or wrong, and
 wrong hypotheses are discarded.

I agree that I misinterpreted the meaning of "exact match".
AIXItl uses strategies whose outputs do not need to agree with history.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=63846012-5d1170


[agi] Re: Solomonoff Machines – up close and personal

2007-11-10 Thread Lukasz Stafiniak
On Nov 10, 2007 11:42 PM, Edward W. Porter [EMAIL PROTECTED] wrote:

  You say there is no magic in AIXI.  Is it just make-believe ("Let X be the
  best way to solve problems. Use X."), or does it say something of value to
  those like me who want to see real AGIs built?

Some observations that came to me from reading Marcus Hutter; you can
judge their worth:

(1) he gives rates of convergence of the posterior distribution to the
true distribution, assuming that the true distribution has a non-zero
prior probability

(2) he shows that a weighted (= mean-taking) predictor converges
possibly exponentially faster than a maximum-likelihood predictor

(3) he shows that the expectimax algorithm is optimal given unbounded
resources, so you can view things functionally (best possible policy:
program, strategy) or iteratively (expectimax)

(4) he discusses the issue of choosing the horizon; reinforcement
learning (= RL) work usually uses geometric discounting, and Hutter
shows that it gives a (ln 0.5 / ln d) effective horizon, where d is the
discount rate (it would be theoretically justified when the agent has
probability d of surviving to the next cycle); see the short derivation
after this list

(5) he discusses RL from a general stance, e.g. classes of
environments and application to learning frameworks more specific than
RL (supervised learning, optimization), though only theoretically

(6) he discusses the issue of dividing computational resources between
using the currently best strategy and searching for a new one (in his
time-optimal algorithm for all well-specified problems)

(7) his computational AIXItl model, assuming it's the best out
there since Marcus didn't come up with something better :-), justifies
some practical approaches of letting competing policies estimate
their own expected utility: AIXItl only allows policies for which it
can find a proof that the policy doesn't overestimate its utility;
Eric Baum's market economy uses the policies' claims as a currency
(cheating policies go bankrupt); accuracy-based Michigan-style
learning classifier systems (XCS) apply (evolutionary) selection
pressure on the accuracy of the policies' claims

(8) the Kolmogorov-complexity-inspired distribution over programs is
related to new approaches that improve on genetic programming
(Schmidhuber's OOPS, MOSES), though perhaps only distantly
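
A quick derivation of the effective horizon mentioned in (4), in LaTeX
(standard algebra, not tied to the book's notation): define h_eff as
the time by which half of the total discounted weight has been spent.

\sum_{t=0}^{h-1} d^{t}
  = \frac{1-d^{h}}{1-d}
  \overset{!}{=} \frac{1}{2}\sum_{t=0}^{\infty} d^{t}
  = \frac{1}{2(1-d)}
\;\Longleftrightarrow\;
d^{h} = \frac{1}{2}
\;\Longleftrightarrow\;
h_{\mathrm{eff}} = \frac{\ln(1/2)}{\ln d}.

For d = 0.95 this gives an h_eff of about 13.5 steps, and the survival
reading is immediate: an agent surviving each cycle with probability d
has a half-life of exactly h_eff cycles.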

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=63908199-d1781a


Re: [agi] How valuable is Solmononoff Induction for real world AGI?

2007-11-09 Thread Lukasz Stafiniak
On Nov 9, 2007 5:26 AM, Edward W. Porter [EMAIL PROTECTED] wrote:
 ED ## what is the value or advantage of conditional complexities
 relative to conditional probabilities?

Kolmogorov complexity is universal. For probabilities, you need to
specify the probability space and initial distribution over this
space.
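
For concreteness, the standard definitions behind "universal" (textbook
material, nothing specific to this thread): conditional complexity
needs no agreed-upon probability space, only a reference universal
machine U, and changing the machine shifts the complexity by at most an
additive constant (the invariance theorem):

K(x \mid y) = \min\{\, \ell(p) : U(p, y) = x \,\},
\qquad
K_{U}(x) \le K_{U'}(x) + c_{U,U'} \quad \text{for all } x,

where the constant c_{U,U'} depends on the two machines but not on x.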

 ED ## What's a TM?
(Turing Machine, or a code for a universal Turing Machine = a program...)

 Also are you saying that the system would develop programs for matching
 patterns, and then patterns for modifying those patterns, etc, So that
 similar patterns would be matched by programs that called a routine for a
 common pattern, but then other patterns to modify them to fit different
 perceptions?

Yes, these programs will be a compact description of the data when
enough data gets collected, so their (posterior) probability will grow
with time. But the most probable programs will be very cryptic, without
redundancy to make the structure evident.

 So are the programs just used for computing Kolmogorov complexity or are
 they also used for generating and matching patterns.

It is difficult to say: in AIXI, the direct operation is governed by
the expectimax algorithm, but the algorithm works on the future (it is
derived from the Solomonoff predictor). Hutter mentions an alternative
model, AIXI_alt, which models actions the same way as the
environment...

 Does it require that the programs exactly match a current pattern being
 received, or does it know when a match is good enough that it can be relied
 upon as having some significance?

It is automatic: when you have a program with a good enough match,
you can parameterize it over the difference and apply it twice,
thus saving code. Remember that the programs need to represent the
whole history.
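
A toy illustration in Python of "parameterize it over the difference
and apply it twice" (string lengths as a crude stand-in for code
length; the episodes are made up):

# Two "episodes" of history that match except for one slot.
ep1 = "the cat sat on the mat"
ep2 = "the dog sat on the mat"

# Literal encoding: store both episodes verbatim.
literal_cost = len(ep1) + len(ep2)                # 44

# Parameterized encoding: one shared template applied twice.
template = "the {} sat on the mat"
args = ["cat", "dog"]
param_cost = len(template) + sum(map(len, args))  # 27

print(literal_cost, param_cost)  # the shared structure pays for itself

Under a 2^(-length) prior the shorter, parameterized program gets the
higher posterior weight, which is the sense in which good-enough
matches acquire significance automatically.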

 Can the programs learn that similar but different patterns are different
 views of the same thing?
 Can they learn a generalizational and compositional hierarchy of patterns?

With an exegetic enough interpretation...

I will comment on further questions in a few hours.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=63453551-e3704c


Re: [agi] How valuable is Solmononoff Induction for real world AGI?

2007-11-09 Thread Lukasz Stafiniak
On Nov 9, 2007 5:26 AM, Edward W. Porter [EMAIL PROTECTED] wrote:

 So are the programs just used for computing Kolmogorov complexity or are
 they also used for generating and matching patterns.

The programs do not compute Kolmogorov complexity; their length _is_ (a
variant of) Kolmogorov complexity. The programs compute (predict) the
environment.

 Does it require that the programs exactly match a current pattern being
 received, or does it know when a match is good enough that it can be relied
 upon as having some significance?

The programs are generally required to exactly match in AIXI (but not
in AIXItl, I think). But the significance is provided by the compressed
representation of similar things, which favors the same sort of
similarity in the future.

 Can they run on massively parallel processing.

I think they can... In AIXI, you would build a summation tree for the
posterior probability.

 Hutter's expectimax tree appears to alternate levels of selection and
 evaluation.   Can the Expectimax tree run in reverse and in parallel, with
 information coming up from low sensory levels, and then being selected based
 on their relative probability, and then having the selected lower level
 patterns being fed as inputs into higher level patterns and then repeating
 that process.  That would be a hierarchy that alternates matching and then
 selecting the best scoring match at alternate levels of the hierarchy as is
 shown in the Serre article I have cited so many times before on this list.

To be optimal, the expectimax must be performed chronologically from
the end of the horizon (the dynamic programming principle: close to the
end of the time horizon, you have smaller planning problems -- fewer
opportunities; from solutions to smaller problems you build bigger
solutions backwards in time). But the probabilities are conditional on
all current history, including low sensory levels.
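
A minimal Python sketch of that backwards order (generic
finite-horizon expectimax; env here is a hypothetical model standing in
for the Solomonoff mixture, not Hutter's actual construction):

def expectimax(history, horizon, env, actions):
    """Max over actions, expectation over observations; the smallest
    subproblems (horizon 0) bottom out first, which is exactly the
    dynamic-programming order described above."""
    if horizon == 0:
        return 0.0
    return max(
        sum(p * (r + expectimax(history + [(a, o)], horizon - 1,
                                env, actions))
            for p, o, r in env(history, a))
        for a in actions)

# Tiny example: env(history, action) -> [(probability, observation, reward)]
env = lambda h, a: ([(1.0, "o", 0.5)] if a == "safe"
                    else [(0.5, "o", 0.0), (0.5, "o", 2.0)])
print(expectimax([], 2, env, ["safe", "risky"]))  # 2.0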

(Generally, your comment above doesn't make much sense in the AIXI context.)

 ED## are these short codes sort of like Wolfram little codelettes,
 that can hopefully represent complex patterns out of very little code, or do
 they pretty much represent subsets of visual patterns as small bit maps.

It depends on reality: on whether reality supports Wolfram's hypothesis.

Best Regards.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=63539823-b308a9


Re: [agi] How valuable is Solmononoff Induction for real world AGI?

2007-11-08 Thread Lukasz Stafiniak
On 11/8/07, Edward W. Porter [EMAIL PROTECTED] wrote:



 HOW VALUABLE IS SOLMONONOFF INDUCTION FOR REAL WORLD AGI?

I will use the opportunity to advertise my equation extraction of
Marcus Hutter's UAI book.
And there is a section at the end about Juergen Schmidhuber's ideas,
from the older AGI'06 book. (Sorry, the bibliography is not generated yet.)

http://www.ii.uni.wroc.pl/~lukstafi/pmwiki/uploads/AGI/UAI.pdf

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=63153472-64b600


Re: [agi] The Grounding of Maths

2007-10-12 Thread Lukasz Stafiniak
What you describe is not visualization, but silent inner speech.
http://en.wikipedia.org/wiki/Lev_Vygotsky#Thinking_and_Speaking

On 10/12/07, a [EMAIL PROTECTED] wrote:
 Benjamin Goertzel wrote:
 
  Well, it's hard to put into words what I do in my head when I do
  mathematics... it probably does use visual cortex in some way, but's
  not visually manipulating mathematical expressions nor using visual
  metaphors...
  I can completely describe it. I completely do mathematics by visually
 manipulating and visually replacing symbols with other symbols. I also
 do mathematical reasoning and theorem proving with that. Mathematicians
 commonly have high visuospatial intelligence, that's why they have high IQs.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=53098509-58aae0


Re: [agi] The Grounding of Maths

2007-10-12 Thread Lukasz Stafiniak
For those interested in higher dimensions, I've just grabbed a link
from wikipedia:
* Christopher E. Heil, A basis theory primer, 1997.
http://www.math.gatech.edu/~heil/papers/bases.pdf

Well, a mathematician needs to _understand_ (as opposed to what I
would call a knowledge base / inference engine disconnect), and
visualization is a metaphor for understanding, not the understanding
itself.

What visualization actually often means is an _unsound reduction_
of a general sophisticated notion to a simple model, without ever
realizing that the model contradicts the notion, but instead
supplementing this heuristic model with additional mental
discipline at the rough corners.

On 10/12/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 Benjamin Goertzel wrote:
 
  Well ... going beyond imaginary numbers...  how do *you* do mathematics
  in quaternionic and octonionic algebras?  Via visualization?
  Personally, I can sorta visualize 4D, but I I suck at visualizing
  8-dimensional space, so I tend to reason more abstractly when thinking
  about such things...

 Just visualize it in N-dimensional space, then let N go to 8.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=53080937-a6329b


Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Lukasz Stafiniak
On 10/12/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 some of us are much impressed by it.  Anyone with even a surface grasp
 of the basic concept on a math level will realize that there's no
 difference between self-modifying and writing an outside copy of
 yourself, but *either one* involves the sort of issues I've been
 calling reflective.

Well, this could at least be a definition of self-modifying ;-)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=53163374-01d6ba


[agi] Learning Classifier Systems vs. evolutionary economy systems by Eric Baum

2007-10-09 Thread Lukasz Stafiniak
Hi,

Has anyone done an in-depth (i.e. experimental or theoretical)
comparison of accuracy-based LCSs (XCS) and Eric Baum's economy? Eric
only mentions superiority over ZCS. But XCS is closer to Eric's
systems: the fitness of rules is based on their prediction of reward
(compare to making bids). I wonder if the economics could be explained
by RL theory.
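
To show the structural closeness I mean, a simplified Python sketch of
the two update rules side by side (the XCS side follows the usual
textbook parameters; the Hayek side is my paraphrase of bid-based
credit, so treat it as an assumption):

BETA, EPS0, ALPHA, NU = 0.2, 0.01, 0.1, 5.0   # common XCS defaults

def xcs_update(rule, reward):
    # fitness derives from the accuracy of the rule's reward prediction
    rule["error"] += BETA * (abs(reward - rule["p"]) - rule["error"])
    rule["p"] += BETA * (reward - rule["p"])
    eps = rule["error"]
    rule["accuracy"] = 1.0 if eps < EPS0 else ALPHA * (eps / EPS0) ** -NU

def hayek_update(rule, collected):
    # the rule pays its bid and keeps what it collects; systematic
    # over-bidding (over-prediction) drains wealth toward bankruptcy
    rule["wealth"] += collected - rule["bid"]
    rule["bid"] += BETA * (collected - rule["bid"])

rule = {"p": 0.5, "error": 0.0, "accuracy": 0.0, "bid": 0.5, "wealth": 10.0}
for r in [1.0, 1.0, 0.0, 1.0]:
    xcs_update(rule, r)
    hayek_update(rule, r)

In both cases the quantity under selection pressure is how well a rule
predicts its own reward, which is why an RL-theoretic reading of the
economy seems natural to me.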

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=51379467-c3f951


Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Lukasz Stafiniak
When looking at it through a crisp glass, the relation is a
preorder, not a (partial) order. And priming is essential. For
example, in certain contexts, we think that an animal is a human
(anthropomorphism).
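
In LaTeX, the distinction is just one dropped axiom (standard order
theory):

x \preceq x,
\qquad
(x \preceq y \wedge y \preceq z) \Rightarrow x \preceq z,
\qquad
\text{antisymmetry } (x \preceq y \wedge y \preceq x) \Rightarrow x = y
\text{ is not required.}

So human \preceq animal and, under anthropomorphic priming, animal
\preceq human can hold together without collapsing the two concepts
into one.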

On 10/9/07, Mark Waser [EMAIL PROTECTED] wrote:

 Ack!  Let me rephrase.  Despite the fact that Pei always uses the words of
 inheritance (and is technically correct), what he means is quite different
 from what most people assume that he means.  You are stuck on the common
 meanings of the terms  is an ancestor of and is a descendant of and it's
 impeding your understanding.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=51437008-630e6a


Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Lukasz Stafiniak
The major premise and the minor premise in a syllogism are not
interchangeable. Read the derivation of the truth tables for abduction
and induction from the semantics of NAL to learn that different
orderings of the premises result in different truth values. Thus, while
both orderings are applicable, one will usually give a more confident
result, which will dominate the other.

On 10/6/07, Edward W. Porter [EMAIL PROTECTED] wrote:


  But I don't understand the rules for induction and abduction, which are as
  follows:

  ABDUCTION INFERENCE RULE:
   Given S --> M and P --> M, this implies S --> P to some degree

  INDUCTION INFERENCE RULE:
   Given M --> S and M --> P, this implies S --> P to some degree

 The problem I have is that in both the abduction and induction rule --
 unlike in the deduction rule -- the roles of S and P appear to be
 semantically identical, i.e., they could be switched in the two premises
 with no apparent change in meaning, and yet in the conclusion switching S
 and P would change in meaning.  Thus, it appears that from premises which
 appear to make no distinctions between S and P a conclusion is drawn that
 does make such a distinction.  At least to me, with my current limited
 knowledge of the subject, this seems illogical.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=50749379-2a7926


[agi] Highlights: Ute Schmid, Inductive Synthesis of Functional Programs

2007-09-29 Thread Lukasz Stafiniak
Ute Schmid publications:
http://www.cogsys.wiai.uni-bamberg.de/schmid/publications.html


 About this book

Because of its promise to support human programmers in developing
correct and efficient program code and in reasoning about programs,
automatic program synthesis has attracted the attention of researchers
and professionals since the 1970s.

This book focusses on inductive program synthesis, and especially on
the induction of recursive functions; it is organized into three parts
on planning, inductive program synthesis, and analogical problem
solving and learning. Besides methodological issues in inductive
program synthesis, emphasis is placed on its applications to control
rule learning for planning. Furthermore, relations to problem solving
and learning in cognitive psychology are discussed.


http://www.springer.com/west/home?SGWID=4-102-22-6954766-0changeHeader=truereferer=www.springeronline.comSHORTCUT=www.springer.com/3-540-40174-1

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48074982-038f88


Re: [agi] Re: HTM vs. IHDR

2007-06-29 Thread Lukasz Stafiniak

On 6/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:



I've talked to John Weng many times before, and I found that his AGI has
some problems but he wasn't very eager to talk about them.  For example, it
could only recognize pre-trained objects (eg, a certain doll) but not
general object classes like dolls, cups or cars.


It seems intuitive that a bottom-up approach is better at
generalization. HTM is much more sophisticated; the conditional
probabilities, and the learning in the context of sequences, must
really be helpful. (IHDR can have time-chunking, but this is not that
useful for categorization.) It seems that the advantages of IHDR are
limited to quick learning and one-instance learning (HTM cannot do
one-instance learning, which is simple for IHDR). I'm not sure if HTM
can learn online.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Re: HTM vs. IHDR

2007-06-29 Thread Lukasz Stafiniak

On 6/29/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:


It seems intuitive that bottom-up approach is better at
generalization. HTM is much more sophisticated, conditional
probabilities, and the learning in context of sequences, must really
be helpful. (IHDR can have time-chunking but this is not that useful
at categorization.) It seems that the advantages of IHDR are limited
to quick learning and one-instance learning (HTM cannot do
one-instance learning, which is simple for IHDR). I'm not sure if HTM
could learn online.


And IHDR would still be better at quick and one-instance learning
than HTM naively augmented to do that.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Re: HTM vs. IHDR

2007-06-29 Thread Lukasz Stafiniak

BTW, has HTM been seriously tried at medical image understanding?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Reinforcement Learning: An Introduction Richard S. Sutton and Andrew G. Barto

2007-06-27 Thread Lukasz Stafiniak

On 6/27/07, Vladimir Nesov [EMAIL PROTECTED] wrote:


I think AI books are not particularly helpful, not at first (if you know
enough about algorithms, programming and math, generally).
AI provides technical answers to well-formulated
questions, but with AGI right questions is what's lacking.

So, my current reading is
The Cambridge Handbook of Thinking and Reasoning.
http://www.cambridge.org/uk/catalogue/catalogue.asp?isbn=0521531012


I guess I'll finish reading "Bayesian Approach to Imitation in RL"
before they put that Cambridge book online ;-)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


[agi] HTM vs. IHDR

2007-06-24 Thread Lukasz Stafiniak

I'm starting to learn about Numenta's HTM, but perhaps someone would
like to share in advance:
what are the essential differences between HTM and Juyang Weng's IHDR
augmented with Observation-driven MDPs?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


[agi] Re: HTM vs. IHDR

2007-06-24 Thread Lukasz Stafiniak

Ouch, they differ more than I thought... Good :-)

(HTM is based more on Bayes nets)

On 6/24/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

I'm starting to learn about Numenta's HTM, but perhaps someone would
like to share in advance:
what are the essential differences between HTM and Juyang Weng's IHDR
augmented with Observation-driven MDPs?



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Reinforcement Learning: An Introduction Richard S. Sutton and Andrew G. Barto

2007-06-24 Thread Lukasz Stafiniak

On 6/24/07, Bob Mottram [EMAIL PROTECTED] wrote:

I have one of Richard Sutton's books, and RL methods are useful but I
also have some reservations about them.  Often in this sort of
approach a strict behaviorist position is adopted where the system is
simply trying to find an appropriate function mapping inputs to
outputs.  The internals of the system are usually treated as a black
box with a homogenous structure, and it's this zero architecture or
trivial architecture approach which can make the learning problem
exceptionally hard.


But they don't need to be; there is always room to accommodate
knowledge. You can use structured value function approximators, use
off-policy methods for supervised learning, etc.

BTW, has anyone tried value estimates with an uncertainty dimension,
and with a prior that favors more certain estimates but degrades
smoothly for less certain ones, for both action selection and
value backup, or at least for action selection? This kind of action
selection seems smarter than e-greedy strategies.
(A more certain value estimate is less optimistic, i.e. smaller, but is
based on more experience.)
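
One concrete reading of the question, as a Python sketch (my own
construction, not from any book; the kappa penalty is an assumption):
keep a sample count next to each value estimate and select by a lower
bound that degrades smoothly with uncertainty.

import math

class Estimate:
    """A value estimate with an uncertainty dimension: (mean, count)."""
    def __init__(self, prior_mean=0.0):
        self.mean, self.count = prior_mean, 0

    def backup(self, target):      # incremental value backup
        self.count += 1
        self.mean += (target - self.mean) / self.count

    def score(self, kappa=1.0):
        # pessimistic bound: less experience -> bigger penalty
        return self.mean - kappa / math.sqrt(self.count + 1)

def select(estimates, kappa=1.0):
    return max(range(len(estimates)), key=lambda i: estimates[i].score(kappa))

With kappa > 0 this prefers well-sampled estimates, as asked; flipping
the sign of kappa recovers the optimistic UCB-style exploration bonus,
so the same machinery covers both cautious selection and exploration.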

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Re: HTM vs. IHDR

2007-06-24 Thread Lukasz Stafiniak

The obvious observation is that HTM is bottom-up and IHDR is top-down.
HTM builds its hierarchy by merging fixed, topologically organized,
coordinate-system-based subspaces (tilings), whereas IHDR builds its
hierarchy by splitting the input space with adaptively learned Gaussian
features.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] AGI introduction

2007-06-23 Thread Lukasz Stafiniak

On 6/22/07, Pei Wang [EMAIL PROTECTED] wrote:

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.


I think that the hybrid and integrated descriptions are useful,
especially when seeing AGI in the broader context of agent systems,
but they need to be further elaborated (I posted about
TouringMachines hoping to bring that up). For me, now, they seem
almost co-extensive.
As for the meaning: to me, hybrid means integrated at the level of
engineering, and integrative means integrated (by synthesis rather
than by dominance) at the conceptual level.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] AGI introduction

2007-06-23 Thread Lukasz Stafiniak

On 6/23/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

I think that the hybrid and integrated descriptions are useful,
especially when seeing AGI in the broader context of agent systems,
but they need to be further elaborated (I posted about
TouringMachines hoping to bring that up). For me, now, they seem
almost co-extensive.
As for the meaning: to me, hybrid means integrated at the level of
engineering, and integrative means integrated (by synthesis rather
than by dominance) at the conceptual level.


For example, the RL book shows how to integrate planning and reactive
reinforcement at the conceptual level.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] AGI introduction

2007-06-22 Thread Lukasz Stafiniak

On 6/22/07, Pei Wang [EMAIL PROTECTED] wrote:

Hi,

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.


Thanks! As a first note, SAIL seems to me a better replacement for
Cog, because SAIL has more generality and some theoretical
grounding, where Cog is (AFAIK) hand-crafted engineering.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


[agi] Association for Uncertainty in Artificial Intelligence

2007-06-22 Thread Lukasz Stafiniak

Looking through Wikipedia articles I stumbled upon a probably very
interesting place:
http://www.auai.org/
Association for Uncertainty in Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


[agi] Reinforcement Learning: An Introduction Richard S. Sutton and Andrew G. Barto

2007-06-22 Thread Lukasz Stafiniak

Obligatory reading:
http://www.cs.ualberta.ca/~sutton/book/ebook/the-book.html

Cheers.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Reinforcement Learning: An Introduction Richard S. Sutton and Andrew G. Barto

2007-06-22 Thread Lukasz Stafiniak

On 6/23/07, Bo Morgan [EMAIL PROTECTED] wrote:


Reinforcement learning is a simple theory that only solves problems for
which we can design value functions.


But it is good for AGI newbies like me to start with :-)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


[agi] Autonomous Training

2007-06-17 Thread Lukasz Stafiniak

Hello,

Have you worked on or thought about autonomous training? An AGI,
before engaging in a critical mission, has to prepare herself for
it, so she has to learn and simulate the domain of the mission.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-14 Thread Lukasz Stafiniak

On 6/14/07, Matt Mahoney [EMAIL PROTECTED] wrote:


I don't believe this addresses the issue of machine pain.  Ethics is a complex
function which evolves to increase the reproductive success of a society, for
example, by banning sexual practices that don't lead to reproduction.  Ethics
also evolves to ban harm to other members of the group, but not to non-members
(e.g. war is allowed), and not to other species (hunting is allowed), except
to the extent that such actions would harm the group.

There is no precedent for ethics with regard to machines.  We protect machines
only to the extent that harming them harms the owner.  Nevertheless, I think
your argument about pain being related to complexity relates to the more
general principle of protecting that which resembles a human, even if that
resemblance is superficial or based on emotion.  I was reminded of this when I
was playing Grand Theft Auto III.  Besides carjacking, murder, and assorted
mayhem, the game allows you to pick up prostitutes.  Afterwards, the game
gives you the option of getting your money back by beating her to death, but I
declined.  I felt empathy for a video game character.


http://www.goertzel.org/books/spirit/uni3.htm  -- VIRTUAL ETHICS

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-13 Thread Lukasz Stafiniak

On 6/13/07, Matt Mahoney [EMAIL PROTECTED] wrote:


If yes, then how do you define pain in a machine?


A pain in a machine is the state in the machine that a person
empathizing with the machine would avoid putting the machine into,
other things being equal (that is, when there is no higher goal in
going through the pain).

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-13 Thread Lukasz Stafiniak

On 6/13/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

On 6/13/07, Matt Mahoney [EMAIL PROTECTED] wrote:

 If yes, then how do you define pain in a machine?

A pain in a machine is the state in the machine that a person
empathizing with the machine would avoid putting the machine into,
other things being equal (that is, when there is no higher goal in
going through the pain).


To clarify:
(1) there exists a person empathizing with that machine
(2) this person would avoid putting the machine into the state of pain

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-13 Thread Lukasz Stafiniak

On 6/14/07, Matt Mahoney [EMAIL PROTECTED] wrote:


I would avoid deleting all the files on my hard disk, but it has nothing to do
with pain or empathy.

Let us separate the questions of pain and ethics.  There are two independent
questions.

1. What mental or computational states correspond to pain?
2. When is it ethical to cause a state of pain?


There is a gradation:
- pain as negative reinforcement
- pain as an emotion
- pain as a feeling

When you ask if something feels pain, you don't ask whether pain is
an adequate description of some aspect of that thing or person X, but
whether X can be said to feel at all. And this is related to the
complexity of X, and this complexity is in turn related to ethics.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Symbol Grounding

2007-06-12 Thread Lukasz Stafiniak

On 6/12/07, Mark Waser [EMAIL PROTECTED] wrote:



 a question is whether a software program could tractably learn language
without such associations, by relying solely on statistical associations
within texts.

Isn't there an alternative (or middle ground) of starting the software
program with a seed of initial structure and then letting it grow from there
(rather than relying only on statistical associations -- which I believe
will be intractable for quite some time).


It is at least conceivable. The idea is that you give the system
reasonable means to build models (= simulations). The initial
structure lets the system build approximate models of at least some
minimal, but not isolated, body of texts. The system should then have
some explorative means to build new, more complicated models (the hard
part). The model extension should be guided by the parts of (partially
or approximately) interpretable texts that do not quite fit (e.g. that
contain uninterpreted words). The extensions are then evaluated by
their predictive characteristics (how much new text can be
consistently interpreted in them).
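
A runnable toy of that loop (Python; it treats a "model" as just the
set of words it can interpret and reduces "predictive characteristics"
to a coverage threshold -- both gross simplifications, meant only to
show the control flow, not what a model should be):

def coverage(model, text):
    words = text.split()
    return sum(w in model for w in words) / len(words)

def grow_model(seed_words, corpus, threshold=0.6):
    model = set(seed_words)
    changed = True
    while changed:
        changed = False
        for text in corpus:
            c = coverage(model, text)
            # Partially interpretable text guides extension: adopt its
            # uninterpreted words if most of it already makes sense.
            if threshold <= c < 1.0:
                model |= set(text.split())
                changed = True
    return model

corpus = ["the cat sat", "the cat sat down", "down by the river"]
print(sorted(grow_model(["the", "cat", "sat"], corpus)))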

Also, have a look at Polyscheme, etc.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Symbol Grounding

2007-06-12 Thread Lukasz Stafiniak

On 6/12/07, Derek Zahn [EMAIL PROTECTED] wrote:


 Some people, especially those espousing a modular software-engineering type
of approach seem to think that a perceptual system basically should spit out
a token for chair when it sees a chair, and then a reasoning system can
take over to reason about chairs and what you might do with them -- and
further it is thought that the reasoning about chairs part is really the
essence of intelligence, whereas chair detection is just discardable
pre-processing.  My personal intuition says that by the time you have taken
experience and boiled it down to a token labeled chair you have discarded
almost everything important about the experience and all that is left is
something that can be used by our logical inference systems.


Assume that the inference systems do well. Then not *that* much
information has been discarded: the inference systems must have found
a workaround to collect the information about a particular chair
that is not directly accessible through a single token (e.g. by the
subtle context of a myriad of other tokens).
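
A toy illustration of that workaround (Python; the scenes and tokens
are invented): even if perception emits only the bare token "chair",
the co-occurring tokens can carry back much of the discarded detail
about *which* chair it was:

from collections import Counter

scenes = [
    ["chair", "oak", "carved", "fireplace"],
    ["chair", "plastic", "stackable", "cafeteria"],
    ["chair", "oak", "cushion", "study"],
]

def context_profile(token, scenes):
    # Accumulate the tokens that co-occur with `token` across scenes;
    # this profile distinguishes particular chairs the bare token can't.
    prof = Counter()
    for scene in scenes:
        if token in scene:
            prof.update(t for t in scene if t != token)
    return prof

print(context_profile("chair", scenes).most_common(3))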

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] about AGI designers

2007-06-11 Thread Lukasz Stafiniak

On 6/6/07, Peter Voss [EMAIL PROTECTED] wrote:


'fraid not. Have to look after our investors' interests… (and, like Ben, I'm
not keen for AGI technology to be generally available)


But at least Novamente makes a convincing amount of their ideas
available, IMHO.

P.S. Probabilistic Logic Networks is coming no later than early 2008
I hope? :-)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Books

2007-06-09 Thread Lukasz Stafiniak

I've ended up with the following list. What do you think?

*  Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity
and Its Applications, Springer Verlag 1997
*  Marcus Hutter, Universal Artificial Intelligence: Sequential
Decisions Based On Algorithmic Probability, Springer Verlag 2004
*  Vladimir Vapnik, Statistical Learning Theory, Wiley-Interscience 1998
*  Pedro Larrañaga, José A. Lozano (Editors), Estimation of
Distribution Algorithms: A New Tool for Evolutionary Computation,
Springer 2001
*  Ben Goertzel, Cassio Pennachin (Editors), Artificial General
Intelligence (Cognitive Technologies), Springer 2007
*  Pei Wang, Rigid Flexibility: The Logic of Intelligence, Springer 2006
*  Ben Goertzel, Matt Iklé, Izabela Goertzel, Ari Heljakka,
Probabilistic Logic Networks, in preparation
*  Juyang Weng et al., SAIL and Dav Developmental Robot Projects:
the Developmental Approach to Machine Intelligence, publication list
*  Ralf Herbrich, Learning Kernel Classifiers: Theory and
Algorithms, MIT Press 2001
*  Eric Baum, What is Thought?, MIT Press 2004
*  Marvin Minsky, The Emotion Machine: Commonsense Thinking,
Artificial Intelligence, and the Future of the Human Mind, Simon &
Schuster 2006
*  Ben Goertzel, The Hidden Pattern: A Patternist Philosophy of
Mind, Brown Walker Press 2006
*  Ronald Brachman, Hector Levesque, Knowledge Representation and
Reasoning, Morgan Kaufmann 2004
*  Peter Gärdenfors, Conceptual Spaces: The Geometry of Thought,
MIT Press 2004
*  Wayne D. Gray (Editor), Integrated Models of Cognitive
Systems, Oxford University Press 2007
*  Logica Universalis, Birkhäuser Basel, January 2007

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Books

2007-06-09 Thread Lukasz Stafiniak

On 6/9/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

I've ended up with the following list. What do you think?



I would like to add Locus Solum by Girard to this list, and then it
seems to collapse into a black hole... Don't care?



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Books

2007-06-09 Thread Lukasz Stafiniak

On 6/9/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


I'm not aware of any book on pattern recognition with a view on AGI, except
The Pattern Recognition Basis of Artificial Intelligence by Don Tveter
(1998):
http://www.dontveter.com/basisofai/basisofai.html

You may look at The Cambridge Handbook of Thinking and Reasoning first,
especially the chapters on similarity and analogy.


Thanks, it's interesting.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] AGI Consortium

2007-06-08 Thread Lukasz Stafiniak

On 6/8/07, Mark Waser [EMAIL PROTECTED] wrote:


You are never going to see a painting by committee that is a great
painting.

And he's right. This was Sterling's indictment of Wikipedia – and of
the wisdom of crowds fad sweeping the Web 2.0 pitch sessions of
Silicon Valley – but it's also a fair assessment of what holds most
(not all) open source enterprises back: lack of vision.


Every project has some developer recruitment policy; a smart mind
is an integrated mind. The real ideological divide runs between Open
Knowledge and Source, and Closed Knowledge and Source.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] AGI Consortium

2007-06-08 Thread Lukasz Stafiniak

On 6/8/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

This is basically right. There are plenty of innovative Open Source programs
out there, but they are typically some academic's thesis work. Being Open
Source can allow them to be turned into solid usable applications, but it
can't create them in the first place.


Being Closed Source can't create them either (just a note for the
sake of completeness).

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


[agi] Books

2007-06-07 Thread Lukasz Stafiniak

Which books would you recommend? For which there is a better
replacement? My results of quick amazon.com browsing:

Body Language: Representation in Action (Bradford Books) (Hardcover)
by Mark Rowlands (Author)
http://www.amazon.com/Body-Language-Representation-Action-Bradford/dp/0262182556/ref=pd_bxgy_b_text_b/104-8541523-0043944?ie=UTF8qid=1181214688sr=1-91

Integrated Models of Cognitive Systems (Cognitive Models and
Architectures) (Hardcover)
by Wayne D. Gray (Editor)
http://www.amazon.com/Integrated-Models-Cognitive-Systems-Architectures/dp/0195189191/ref=sr_1_168/104-8541523-0043944?ie=UTF8s=booksqid=1181215075sr=1-168

Reasoning about Uncertainty (Paperback)
by Joseph Y. Halpern
http://www.amazon.com/Reasoning-about-Uncertainty-Joseph-Halpern/dp/0262582597/ref=sr_1_132/104-8541523-0043944?ie=UTF8s=booksqid=1181214901sr=1-132

Pattern Recognition, Third Edition (Hardcover)
by Sergios Theodoridis (Author), Konstantinos Koutroumbas (Author)
http://www.amazon.com/Pattern-Recognition-Third-Sergios-Theodoridis/dp/0123695317/ref=sr_1_110/104-8541523-0043944?ie=UTF8s=booksqid=1181214779sr=1-110

Knowledge Representation and Reasoning (The Morgan Kaufmann Series in
Artificial Intelligence)
by Ronald Brachman (Author), Hector Levesque (Author)
http://www.amazon.com/Knowledge-Representation-Reasoning-Artificial-Intelligence/dp/1558609326/ref=sr_1_47/104-8541523-0043944?ie=UTF8s=booksqid=1181214465sr=1-47

Pattern Recognition and Machine Learning (Information Science and
Statistics) (Hardcover)
by Christopher M. Bishop (Author)
http://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_6/104-8541523-0043944?ie=UTF8s=booksqid=1181214349sr=1-6

Learning Kernel Classifiers: Theory and Algorithms (Adaptive
Computation and Machine Learning) (Hardcover)
by Ralf Herbrich (Author)
http://www.amazon.com/Learning-Kernel-Classifiers-Algorithms-Computation/dp/026208306X/ref=sr_1_184/104-8541523-0043944?ie=UTF8s=booksqid=1181214108sr=1-184

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


[agi] Re: Books

2007-06-07 Thread Lukasz Stafiniak

OK,

(1) Which book on pattern recognition is the most AGIsh? (Vapnik is
included in his own right)

((2) - (3) as before),

(4) When will Probabilistic Logic Networks be out?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Vectorianism and a2i2

2007-06-07 Thread Lukasz Stafiniak

It's a far better answer than I asked for :-)

On 6/6/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:


Norm of our vectors, known of old--
   Lord of our far-flung number line
Beneath whose measured length we hold
   Dominion over quad and spline--
Memory trace, be with us yet,
Lest we forget - lest we forget!

Thy referents to meanings bind;
   The captains and the kings are nil:
Still stand Thine ancient symbols still
   And theory of the pattern mind.
Oh R of N, be with us yet,
Lest we forget - lest we forget!

Far-called, our transforms melt away;
   On sine and cosine sinks the fire:
Lo, all our linearity
   Is one with Nineveh and Tyre!
Space of all Phases, spare us yet,
Lest we forget - lest we forget!

If, drunk with powersets, we loos'd
   Mere discrete symbol sequences--
Such boasting as Eliza used
   Or lesser breeds of expert sys--
Eigenvector, keep us yet,
Lest we forget - lest we forget!

For heathen heart that puts its trust
   In vacuum tube and FET--
All valiant dust that builds on dust,
   And logic which can't count to 3:
For foolish boast and pie in sky,
Have mercy on Thine AGI!

* * *

Oh, wait, you said VECtorian ... I thought you said VICtorian...

Never mind!

Josh



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Vectorianism and a2i2

2007-06-06 Thread Lukasz Stafiniak

On 6/6/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:


I believe:
The practice of representing knowlege using high-dimensional
numerical vectors ;-)


It's my misspelling of vectorialism, see:
Churchland on connectionism
http://www-cse.ucsd.edu/users/gary/pubs/laakso-church-chap.pdf
(vectorialism as opposed to propositionalism)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] about AGI designers

2007-06-06 Thread Lukasz Stafiniak

On 6/6/07, Derek Zahn [EMAIL PROTECTED] wrote:

 D.  There are no consortiums to join.

 I see talk about joining Novamente, but are they hiring?  It might be
possible to volunteer to work on peripheral things like AGISIM, but I sort
of doubt that Ben is eager to train volunteers on the AGI-type code itself.
On average, the cost/benefit of that would probably be quite poor.

 I see that AdaptiveAI has an opening for a programmer.  We don't talk about
them much, probably because they have chosen not to make much information
available about what they're up to, beyond Peter Voss's vague overview paper.


I think that, with an understanding of what the major projects are up
to, a new startup should aim at complementary space (and at
interoperating at some stage). Otherwise, I would insist on joining an
existing project, unless it really is over the manpower threshold.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] about AGI designers

2007-06-06 Thread Lukasz Stafiniak

On 6/7/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:


Sure, but the nature of AGI is that wizzy demos are likely to come fairly
late in the development process.  All of us actively working in the field
understand this


What about LIDA? Even if she is not very general, she is more
cognitive than Numenta, and has some nice HALish activity in the
wild. :-)
And I guess LIDA is already quite mature developmentally (phylogeny).

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] analogy, blending, and creativity

2007-06-05 Thread Lukasz Stafiniak

On 6/2/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

And many scientists refer to potential energy surfaces and the like. There's a
core of enormous representational capability with quite a few well-developed
intellectual tools.


Another Grand Unification theory: Estimation of Distribution
Algorithms behind Bayesian Nets, Genetic Programming and unsupervised
Neural Networks.
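
For concreteness, a minimal univariate EDA (UMDA-style, Python; the
OneMax objective and all constants are arbitrary illustrative
choices). The estimate-then-sample loop is the common core; replacing
the univariate model with a Bayesian net is what yields the
multivariate branch of the family:

import random

# Minimal univariate EDA maximizing OneMax on bitstrings.
N, POP, ELITE, GENS = 20, 60, 15, 40

def fitness(x):                 # OneMax: number of 1-bits (toy objective)
    return sum(x)

p = [0.5] * N                   # the "model": one Bernoulli per bit
for _ in range(GENS):
    pop = [[int(random.random() < p[i]) for i in range(N)]
           for _ in range(POP)]
    pop.sort(key=fitness, reverse=True)
    elite = pop[:ELITE]
    # Re-estimate the distribution from the selected individuals;
    # a richer model here (e.g. a Bayesian net) gives the
    # multivariate EDAs.
    p = [sum(x[i] for x in elite) / ELITE for i in range(N)]

print(fitness(pop[0]))          # should approach N as p converges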

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


[agi] Re: PolyContextural Logics

2007-06-04 Thread Lukasz Stafiniak

One more bite:
Locus Solum: From the rules of logic to the logic of rules by
Jean-Yves Girard, 2000.
http://lambda-the-ultimate.org/node/1994

On 6/5/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

Speaking of logical approaches to AGI... :-)

http://www.thinkartlab.com/pkl/



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] analogy, blending, and creativity

2007-06-02 Thread Lukasz Stafiniak

On 5/17/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

On Wednesday 16 May 2007 04:47:53 pm Mike Tintner wrote:
 Josh
  . If you'd read the archives,
  you'd see that I've advocated constructive solid geometry in Hilbert
  spaces
  as the basic representational primitive.

 Would you like to say more re your representational primitives? Sounds
 interesting. The archives have no reference to constructive solid geometry
 in Hilbert spaces in any form. Personally, I think it's a plot.

MOOO ha ha ha! It's all in your mind :-)

Actually, I can't find it either but (and this is apropos to the subject) we
rarely remember the exact words we said or heard; we remember more abstract
representations. Chances are I used CSG and/or vector spaces. Hilbert space
is a rhetorical flourish anyway -- they may need it to describe quantum
mechanics precisely but we'll never implement it...


Many engineering departments make the mistake of never mentioning the
term Hilbert space and calling it all signal analysis.
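
The point in one toy computation (Python, standard library only; the
signal is an invented example): a Fourier coefficient is just an inner
product, i.e. an orthogonal projection in L2[0, 2*pi]:

import math

# Project a signal onto the (mutually orthogonal) basis sin(k*x).
M = 1000
xs = [2 * math.pi * i / M for i in range(M)]

def inner(f, g):                     # discretized <f, g> on [0, 2*pi]
    return sum(f(x) * g(x) for x in xs) * (2 * math.pi / M)

signal = lambda x: 3 * math.sin(x) + 0.5 * math.cos(2 * x)

for k in (1, 2):
    basis = lambda x, k=k: math.sin(k * x)
    # <signal, basis> / <basis, basis> is the projection coefficient:
    # about 3.0 for k=1 and about 0.0 for k=2.
    print(k, inner(signal, basis) / inner(basis, basis))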

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Is the Digital Processor Really Analog?

2007-06-02 Thread Lukasz Stafiniak

In the name of Church-Turing-von Neumann, don't follow that heresy.

Quantum computers are a kind of Hegelian synthesis of analog and digital.

There are quirks going on inside computers, like error correction on
memory retrieval, so as not to mess up your (i.e. the computer user's)
symbols.

If you read analog as generating its own symbols, then computers
are not meant to do so.

[ just a bit of loose crap from me ;-) ]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: Slavery (was Re: [agi] Opensource Business Model)

2007-06-02 Thread Lukasz Stafiniak

On 6/2/07, Mark Waser [EMAIL PROTECTED] wrote:


 By some measures Google is more intelligent than any human.  Should it
 have
 human rights?  If not, then what measure should be used as criteria?

Google is not conscious.  It does not need rights.  Sufficiently complex
consciousness (or even just the appearance of such) is the criterion that
should be used.


Google has its rights. No crazy totalitarian government tells Google what to do.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: Slavery (was Re: [agi] Opensource Business Model)

2007-06-02 Thread Lukasz Stafiniak

On 6/2/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:


Google has its rights. No crazy totalitarian government tells Google what to do.


(perhaps it should go: Google struggles for its rights, sometimes
making moral compromises)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Open AGI Consortium

2007-06-02 Thread Lukasz Stafiniak

On 6/2/07, Derek Zahn [EMAIL PROTECTED] wrote:


 For a for-profit AGI project I suggest the following definition of
intelligence:

 The ability to create information-based objects of economic value.


What about:

The ability to create information-based objects that generate income.

This is less ambiguous and more demanding.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Donations to support open-source AGI research (SIAI Research Program)

2007-06-02 Thread Lukasz Stafiniak

On 6/2/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:


http://www.singinst.org/research/summary

[see menu to the left] embodying some research I think will be extremely
valuable for pushing forward toward AGI, and that I think is well-
pursued in an open, nonprofit context.



* Research Area 5: Design and Creation of Safe Software
Infrastructure has enough support in the mainstream (industry and
academia), IMHO. Microsoft Research works on it; several little
companies are in this business; every CompSci university department
has someone working on it. Feeling lucky with Google gives:
http://www.cs.utexas.edu/users/moore/publications/zurich-05/talk.html

Citing:



However, currently, there is no programming language that both
supports proof-based program correctness checking, and is sufficiently
efficient in terms of execution to be usable for pragmatic AGI
purposes.


This is not how I would state things.  The burden is of course the
efficiency of program development (proving program properties or
deriving programs from specifications), which is not much correlated
(if at all) with the efficiency of the extracted/verified program. For
example, you can prove properties of assembly code (as in dependently
typed assembly languages).
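
A toy Lean 4 illustration of that separation (my own example, assuming
the omega tactic is available): the proof effort is paid entirely at
development time, while the compiled function remains a single
addition at run time.

-- The proof below adds zero runtime cost to `double`.
def double (n : Nat) : Nat := n + n

theorem double_eq (n : Nat) : double n = 2 * n := by
  unfold double
  omega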

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

