Re: [agi] Free AI Courses at Stanford

2008-09-18 Thread Kingma, D.P.
Hi List, Also of interest to some of you may be VideoLectures.net, which offers a large collection of lectures. Although not all are of Stanford quality, I still found many interesting ones by respected lecturers. And there are LOTS (625 at the moment) of lectures on Machine Learning... :)

Re: [agi] uncertain logic criteria

2008-09-17 Thread Kingma, D.P.
On Wed, Sep 17, 2008 at 9:00 PM, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: --a. Makes unwarranted independence assumptions Yes, I think independence should always be assumed unless otherwise stated -- which means there exists a Bayesian network link between X and Y. Small question...
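To make the quoted convention concrete, here is a minimal sketch (my own illustration, not YKY's system; all numbers are made up): absent an explicit Bayesian-network link between X and Y, the joint distribution factorizes into marginals; with a link, a conditional table P(Y|X) must be supplied instead.

```python
# Illustrative only: two binary variables X and Y.
p_x = {0: 0.7, 1: 0.3}
p_y = {0: 0.4, 1: 0.6}

# Default assumption (no link): independence, so the joint factorizes.
joint_indep = {(x, y): p_x[x] * p_y[y] for x in p_x for y in p_y}

# With an explicit link X -> Y, we must specify P(Y | X) instead.
p_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
joint_linked = {(x, y): p_x[x] * p_y_given_x[x][y] for x in p_x for y in p_y}

print(round(joint_indep[(1, 1)], 2))   # 0.3 * 0.6
print(round(joint_linked[(1, 1)], 2))  # 0.3 * 0.8
```

Both joints sum to 1; they differ exactly where the dependence encoded by the link matters.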

Re: [agi] Is clustering fundamental?

2008-07-06 Thread Kingma, D.P.
On Sun, Jul 6, 2008 at 4:22 AM, Abram Demski [EMAIL PROTECTED] wrote: ... So the question is: is clustering in general powerful enough for AGI? Is it fundamental to how minds can and should work? You seem to be referring to *k-means clustering*, which assumes a special form of *mixture
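For readers unfamiliar with the connection: plain k-means is equivalent to hard-EM on a mixture of equal-weight isotropic Gaussians, i.e. the "special form of mixture" referred to above. A minimal self-contained sketch (my own illustration, not from the thread):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: alternate nearest-center assignment and mean update.
    This is hard-EM on an equal-weight isotropic Gaussian mixture."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update each center to the mean of its cluster (keep old if empty).
        centers = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers

pts = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
print(sorted(kmeans(pts, 2)))
```

On this toy data the two recovered centers land at the means of the two obvious groups, near (0.1, 0.05) and (5.1, 5.0).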

Re: [agi] Paper rec: Complex Systems: Network Thinking

2008-07-01 Thread Kingma, D.P.
Same here. On Mon, Jun 30, 2008 at 10:33 PM, Daniel Allen [EMAIL PROTECTED] wrote: Thanks. I have downloaded the paper and pre-ordered the book.

Re: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread Kingma, D.P.
As far as I know, GPUs are not very well suited to neural net calculations. For some applications, speedup factors are in the 1000x range, but for NNs I have only seen speedups of one order of magnitude (10x). For example, see the attached paper. On Thu, Jun 12, 2008 at 4:59 PM, Matt Mahoney [EMAIL

Re: [agi] upcoming oral at Princeton

2008-05-04 Thread Kingma, D.P.
Josh, thanks for this very, very interesting project, primarily because of the great shortage of quality, large data sets for image annotation! 1000 * 30,000 = 30 million images: truly immense. Very valuable to the computer vision and machine learning community! Now the datasets, computer power are

Re: [agi] Big Dog

2008-04-11 Thread Kingma, D.P.
On Fri, Apr 11, 2008 at 12:22 PM, Mike Tintner [EMAIL PROTECTED] wrote: So natural, I wondered whether it wasn't a hoax with real people in there. They put a new BigDog video up, you gotta see it! ;) http://www.youtube.com/watch?v=VXJZVZFRFJc

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Although I sympathize with some of Hawkins' general ideas about unsupervised learning, his current HTM framework is unimpressive in comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's convolutional nets and the promising low-entropy coding variants. But it should be quite

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Mike, you seem to have misinterpreted my statement. Perception is certainly not 'passive', as it can be described as active inference using a (mostly actively) learned world model. Inference is done on many levels, and could integrate information from various abstraction levels, so I don't see it

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
On Sun, Mar 30, 2008 at 6:48 PM, William Pearson [EMAIL PROTECTED] wrote: On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote: An audiovisual perception layer generates semantic interpretation on the (sub)symbolic level. How could a symbolic engine ever reason about the real world without

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Vladimir, I agree with you on many issues, but... On Sun, Mar 30, 2008 at 9:03 PM, Vladimir Nesov [EMAIL PROTECTED] wrote: This way, for example, it should be possible to teach a 'modality' for understanding simple graphs encoded as text, so that on one hand text-based input is sufficient,

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
... Is this indeed the direction you're going? Greets, Durk On Sun, Mar 30, 2008 at 10:00 PM, Vladimir Nesov [EMAIL PROTECTED] wrote: On Sun, Mar 30, 2008 at 11:33 PM, Kingma, D.P. [EMAIL PROTECTED] wrote: Vector graphics can indeed be communicated to an AGI by relatively low-bandwidth textual input

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
to learn a sufficient model about entities embedded in a complex physical world, such as humans. On Sun, Mar 30, 2008 at 10:50 PM, Mark Waser [EMAIL PROTECTED] wrote: From: Kingma, D.P. [EMAIL PROTECTED] Sure, you could argue that an intelligence purely based on text, disconnected from

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
On Sun, Mar 30, 2008 at 11:00 PM, Mark Waser [EMAIL PROTECTED] wrote: From: Kingma, D.P. [EMAIL PROTECTED] Vector graphics can indeed be communicated to an AGI by relatively low-bandwidth textual input. But, unfortunately, the physical world is not made of vector graphics, so reducing

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
(Sorry for triple posting...) On Sun, Mar 30, 2008 at 11:34 PM, William Pearson [EMAIL PROTECTED] wrote: On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote: Intelligence is not *only* about the modalities of the data you get, but modalities are certainly important. A deafblind person

Re: [agi] A HIGHLY RELEVANT AND interesting Google Tech Talk about Neural Nets

2008-03-19 Thread Kingma, D.P.
an AGI seem considerably less complex. Ed Porter -Original Message- From: Kingma, D.P. [mailto:[EMAIL PROTECTED] Sent: Sunday, March 02, 2008 12:08 PM To: agi@v2.listbox.com Subject: [agi] interesting Google

Re: [agi] A HIGHLY RELEVANT AND interesting Google Tech Talk about Neural Nets

2008-03-19 Thread Kingma, D.P.
/index.html Look under 2007-12-06: Learning a Deep Hierarchy of Sparse Invariant Features Durk On Wed, Mar 19, 2008 at 11:41 PM, Kingma, D.P. [EMAIL PROTECTED] wrote: No problem ;) One other autoencoder architecture you might find interesting is Yann Lecun's deep belief network: http

Re: [agi] if yu cn rd tihs, u slhud tke a look

2008-03-13 Thread Kingma, D.P.
I reckon that the shuffled words (meaningless and low probability) trigger an internal representation that is close enough to the meaning_full_ representation to be correctly classified. One part of this triggered internal representation is about WHAT is present, the other part about WHERE these

Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Kingma, D.P.
On Mon, Mar 3, 2008 at 6:33 AM, [EMAIL PROTECTED] wrote: Thanks for that. Don't you see the way to go on neural nets is hybrid with genetic algorithms in mass amounts? No, I don't agree with your buzzword-laden statement :) I experimented with EA + NNs and it's still intractable when scaled up to

Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Kingma, D.P.
On Mon, Mar 3, 2008 at 6:39 PM, Richard Loosemore [EMAIL PROTECTED] wrote: The problems with bolting together NN and GA are so numerous it is hard to know where to begin. For one thing, you cannot represent structured information with NNs unless you go to some trouble to add extra

Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Kingma, D.P.
, Richard Loosemore [EMAIL PROTECTED] wrote: Kingma, D.P. wrote: On Mon, Mar 3, 2008 at 6:39 PM, Richard Loosemore [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote: The problems with bolting together NN and GA are so numerous it is hard to know where to begin. For one thing, you

[agi] interesting Google Tech Talk about Neural Nets

2008-03-02 Thread Kingma, D.P.
Gentlemen, For those interested in vision, neural nets and the like, there's a very interesting talk by Geoffrey Hinton about unsupervised learning of low-dimensional codes. It's been on YouTube since December, but somehow it escaped my attention for some months.

Re: [agi] Visual Reasoning Part 1 The Scene

2008-02-17 Thread Kingma, D.P.
Yes, great presentation and summary of the work done by these guys at CMU. It reinforces my belief that sensory processing should be explained in terms of finding efficient representations, in other words: dimensionality reduction. One nice aspect is that there are machine learning techniques that
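As a concrete toy illustration of what "finding efficient representations" via dimensionality reduction can mean (my own example, not from the CMU work): the sketch below extracts the first principal component of a 2-D point cloud by power iteration on its covariance matrix.

```python
def pca_first_component(data, iters=100):
    """Power iteration for the leading principal component:
    the single direction that captures the most variance."""
    n, d = len(data), len(data[0])
    means = [sum(x[j] for x in data) / n for j in range(d)]
    centered = [[x[j] - means[j] for j in range(d)] for x in data]
    # Sample covariance matrix (biased, /n, fine for illustration).
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Points lying almost along the line y = x: the first PC is ~(0.707, 0.707).
data = [(0.0, 0.0), (1.0, 1.1), (2.0, 1.9), (3.0, 3.0)]
print(pca_first_component(data))
```

Projecting each point onto this direction compresses 2-D data to one coordinate while keeping most of the variance, which is the same principle at work in the higher-dimensional sensory-coding setting.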

Re: [agi] List of Java AI tools librarie

2007-12-17 Thread Kingma, D.P.
Nice list, although I'm missing WEKA, a widely used Machine Learning library / tool: http://en.wikipedia.org/wiki/WEKA Durk On Dec 17, 2007 8:03 PM, Stephen Reed [EMAIL PROTECTED] wrote: I've published a roughly categorized link list of Java AI tools and libraries, that may be helpful to Java

Re: [agi] Do the inference rules.. P.S.

2007-10-12 Thread Kingma, D.P.
Dear Edward, may I ask why you regularly choose to type in all-caps? Do you have a broken keyboard? Otherwise, please refrain from doing so, since (1) many people associate it with shouting and (2) lowercase text is easier to read... Kind regards, Durk Kingma On 10/12/07, Edward W. Porter [EMAIL

Re: [agi] Re: HTM vs. IHDR

2007-06-29 Thread Kingma, D.P.
In theory, HTMs are not restricted to off-line learning. For some reason the NuPIC software doesn't allow it yet, primarily because of implementation issues. One reason is that an HTM module's learning mechanism presumes a predetermined input alphabet. They're working on improvements though, iirc.

Re: [agi] Re: HTM vs. IHDR

2007-06-29 Thread Kingma, D.P.
I have followed HTM progress to some extent but have not seen any medical applications of NuPIC. Or any serious applications for that matter, unless groups besides Numenta have created an advanced HTM implementation... To get an idea of current applications you could check out the (quite shallow)

Re: [agi] about AGI designers

2007-06-06 Thread Kingma, D.P.
On 6/6/07, Peter Voss [EMAIL PROTECTED] wrote: ... Our goal is to create full AGI, but our business plan is to commercialize an intermediate-level AGI engine via some highly lucrative applications. Our target date to commence commercialization is the end of next year. Peter Voss a2i2 The

Re: [agi] analogy, blending, and creativity

2007-05-16 Thread Kingma, D.P.
John, Thanks for your reply to my questions about your project Tommy in your previous post. I'm very interested in the details, but please forgive my relative freshness in this field (CS graduate heading into an AI master's :) I'm particularly interested in the types of pattern mining you're

Re: [agi] Tommy

2007-05-13 Thread Kingma, D.P.
John, as I wrote earlier, I'm very interested in learning more about your particular approach to: - Concept and pattern representation. (i.e. types of concepts, patterns, relations?) - Concept creation. (searching for statistically significant spatiotemporal correlations, genetic programming,

Re: [agi] Tommy

2007-05-11 Thread Kingma, D.P.
Yes, thank you, a meaningful and very interesting project. I discussed this kind of system with a friend of mine half an hour ago. On 5/11/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: 2. The hard part is learning: the AI has to build its own world model. My instinct and experience

Re: Goals of AGI (was Re: [agi] AGI interests)

2007-04-18 Thread Kingma, D.P.
On 4/18/07, Matt Mahoney [EMAIL PROTECTED] wrote: ... I would go further and include lossy compression tests. In theory, you could compress speech to 10 bits per second by converting it to text and using text compression. The rate at which the human brain can remember video is not much
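The quoted 10 bits/second figure can be sanity-checked with back-of-the-envelope numbers (my assumed values, not Matt's: ~150 words/minute conversational speech, ~5 characters per word including the space, and Shannon's classic estimate of roughly 1 bit per character for compressed English text):

```python
words_per_min = 150.0   # assumed conversational speech rate
chars_per_word = 5.0    # assumed avg English word length incl. trailing space
bits_per_char = 1.0     # Shannon's estimate for the entropy of English text

bits_per_second = words_per_min / 60.0 * chars_per_word * bits_per_char
print(bits_per_second)  # 12.5 bits/s, the same order as the quoted 10 bits/s
```

Under these rough assumptions the estimate lands within a small factor of the 10 bits/s claim, which is all such an order-of-magnitude argument needs.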

Re: Goals of AGI (was Re: [agi] AGI interests)

2007-04-18 Thread Kingma, D.P.
[Spelling corrected and reworded...] I'm not convinced by this reasoning. First, the way individuals store audiovisual information differs, simply because of slight differences in brain development (nurture). Also, memory is condensed information about the actual high-level sensory/experience

Re: [agi] A Course on Foundations of Theoretical Psychology...

2007-04-16 Thread Kingma, D.P.
I like the idea very much. Some weeks of education in AI/CogPsy/Neuroscience would be very cool, no doubt. If I lived in the U.S., I would sign up and go for sure. The only problem for me, as a Dutch student, would be budgetary. Unless someone can get my Amsterdam-NY flight sponsored ;) Durk

Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Kingma, D.P.
FYI... After reading Hawkins's book I actually believe that his ideas may indeed underlie a future AGI system... but they need to be fleshed out in much greater detail... Cheers, K Their current concept implementation has not changed substantially since their first proof-of-concept implementation.

Re: [agi] Has anyone read On Intelligence

2007-02-22 Thread Kingma, D.P.
Dear Aki Iskandar, My 2 cents. I haven't read the book On Intelligence, but from the book flowed a proof-of-concept program which I analyzed thoroughly. I read the source of the program, analyzed the Dileep George and Hawkins papers, etc. The idea of the proof-of-concept is basically a

[agi] Larry Page, Google: We have some people at Google (who) are really trying to build artificial intelligence...

2007-02-19 Thread Kingma, D.P.
Larry Page, Google co-founder: We have some people at Google (who) are really trying to build artificial intelligence and to do it on a large scale, Page said to a packed Hilton ballroom of scientists. It's not as far off as people think. link: http://news.com.com/2100-11395_3-6160372.html

[agi] Re: (video)The Future of Cognitive Computing

2007-01-20 Thread Kingma, D.P.
Forgot to say: If anyone has found similarly informative videos regarding cognitive computing or AGI in general, I'm very interested. On 1/20/07, Kingma, D.P. [EMAIL PROTECTED] wrote: (Almaden Institute Conference on Cognitive Computing) http://video.google.com/videoplay?docid

Re: [agi] Sophisticated models of spiking neural networks.

2006-12-26 Thread Kingma, D.P.
Dear Nathan, Your description of your kind of neural-net scheme needs more detail before I can give any more particular direction. It leaves a huge number of possibilities. For example, is your neural net similar to a hierarchical network of abstract facts, being built by agents, whereas the agent's

Re: [agi] Geoffrey Hinton's ANNs

2006-12-12 Thread Kingma, D.P.
be to not make use of it? This is what the (mainly non-GOFAI) AI community discovered in the '80s. I'm afraid many people confuse mathematics and statistics with rigidity and non-creativity. Durk John Kingma, D.P. wrote: The ability to generate meaningful images is not a goal

Re: [agi] Geoffrey Hinton's ANNs

2006-12-11 Thread Kingma, D.P.
On 12/8/06, Bob Mottram [EMAIL PROTECTED] wrote: However, as the years went by I became increasingly dissatisfied with this kind of approach. I could get NN systems to work quite well on small toy problems, but when trying to build larger more practical systems (for example robots handling

[agi] Geoffrey Hinton's ANNs

2006-12-08 Thread Kingma, D.P.
My opinion is that this list should occasionally be used to just point to interesting papers. If you disagree, please let me know. Some very recent papers by Geoffrey Hinton have raised my hopes for academic progress in neural network research. The guy has always been an ANN guru, but I find his

Re: [agi] HTM Theory

2006-10-28 Thread Kingma, D.P.
http://www.phillylac.org/prediction/ Pei On 10/26/06, Kingma, D.P. [EMAIL PROTECTED] wrote: I'm a Dutch student currently situated in Rome for six months. Due to my recent interest in AGI I have initiated a small research project into HTM theory (J. Hawkins / D. George). HTM learning is (in my eyes

Re: [agi] HTM Theory

2006-10-28 Thread Kingma, D.P.
looking forward to reading your paper. Yes, people sometimes take the HTM model to be similar to a neural net, though it is actually much closer to a Bayesian net. Pei On 10/28/06, Kingma, D.P. [EMAIL PROTECTED] wrote: Thank you. I've studied the paper and the tested 'improvements'. The experiments

Re: [agi] Vision

2006-09-05 Thread Kingma, D.P.
YKY, I agree with your views, predicate logic is much more straightforward to work with, and I absolutely respect all the work and thoughts put into it. A problem within the AI domain is that Vision has not been solved yet. The existing and functioning algorithms are mainly specialised into sub

Re: [agi] Vision

2006-09-05 Thread Kingma, D.P.
;) Greetings, Durk On 9/5/06, Bob Mottram [EMAIL PROTECTED] wrote: On 05/09/06, Kingma, D.P. [EMAIL PROTECTED] wrote: A problem within the AI domain is that Vision has not been solved yet. The existing and functioning algorithms are mainly specialised into sub domains like face recognition etc

Re: [agi] G0: new AGI architecture

2006-09-04 Thread Kingma, D.P.
Hi, I'm Durk Kingma from the Netherlands, undergraduate in Computer Science at University of Utrecht (Netherlands). I'm just a novice in terms of AGI theory and your pages form a gentle introduction to the theme. Thank you for your work and I'm looking forward to the additions, especially