Hi List,
Also interesting to some of you may be VideoLectures.net, which offers
lots of interesting lectures. Although not all are of Stanford
quality, I still found many interesting lectures by respected
lecturers. And there are LOTS (625 at the moment) of lectures about
Machine Learning... :)
On Wed, Sep 17, 2008 at 9:00 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
--a. Makes unwarranted independence assumptions
Yes, I think independence should always be assumed unless dependence is
stated otherwise -- that is, unless there exists a Bayesian network link
(or other active path) between X and Y.
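To make this concrete, here is a minimal sketch (my own illustration, not from the thread) of what pairwise independence means for two discrete variables: X and Y are independent iff P(x, y) = P(x) * P(y) for every value pair, and a Bayesian network link between them is precisely how a dependence gets declared.

```python
# Minimal sketch (my own illustration): checking whether two discrete
# variables X and Y are independent, i.e. P(x, y) == P(x) * P(y) for all
# value pairs. In a Bayesian network, the absence of a link (and of any
# active path) between X and Y corresponds to such an independence.

def is_independent(joint, tol=1e-9):
    """joint: dict mapping (x, y) -> P(x, y). True iff X and Y are independent."""
    px, py = {}, {}
    for (x, y), p in joint.items():       # marginalize out each variable
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return all(abs(p - px[x] * py[y]) <= tol for (x, y), p in joint.items())

# Independent: the joint factorizes as P(x) * P(y)
joint_indep = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
# Dependent: X and Y are always equal
joint_dep = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}
```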
Small question...
On Sun, Jul 6, 2008 at 4:22 AM, Abram Demski [EMAIL PROTECTED] wrote:
...
So the
question is: is clustering in general powerful enough for AGI? Is it
fundamental to how minds can and should work?
You seem to be referring to *k-means clustering*, which assumes a special
form of *mixture
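For reference, a bare-bones k-means sketch (my own illustration, with a naive first-k initialization): the hard-assignment / mean-update loop below is what implicitly assumes that special mixture form, namely equal-variance isotropic Gaussian components with hard assignments.

```python
# Minimal k-means sketch (my illustration): hard-assign each point to its
# nearest centroid, then move each centroid to the mean of its cluster.
# This is hard EM on a mixture of equal-variance isotropic Gaussians.

def kmeans(points, k, iters=20):
    centroids = list(points[:k])              # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        for i, c in enumerate(clusters):
            if c:                             # keep old centroid if cluster empty
                centroids[i] = tuple(sum(vals) / len(c) for vals in zip(*c))
    return centroids

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centroids = kmeans(pts, 2)                    # two well-separated clusters
```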
Ditto.
On Mon, Jun 30, 2008 at 10:33 PM, Daniel Allen [EMAIL PROTECTED] wrote:
Thanks. I have downloaded the paper and pre-ordered the book.
As far as I know, GPUs are not very well suited to neural net calculation. For
some applications, speedup factors are in the 1000x range, but for NNs I
have only seen speedups of about one order of magnitude (10x).
For example, see the attached paper.
On Thu, Jun 12, 2008 at 4:59 PM, Matt Mahoney [EMAIL
Josh, thanks for this very, very interesting project, primarily because of
the great shortage of quality, large data sets for image annotation!
1000 x 30,000 = 30 million images: truly immense. Very valuable to the
computer vision and machine learning community! Now the datasets, computer
power are
On Fri, Apr 11, 2008 at 12:22 PM, Mike Tintner [EMAIL PROTECTED]
wrote:
So natural that I wondered whether it wasn't a hoax with real people inside.
They put a new BigDog video up, you gotta see it! ;)
http://www.youtube.com/watch?v=VXJZVZFRFJc
---
Although I sympathize with some of Hawkins's general ideas about
unsupervised learning, his current HTM framework is unimpressive in
comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's
convolutional nets, and the promising low-entropy coding variants.
But it should be quite
Mike, you seem to have misinterpreted my statement. Perception is certainly
not 'passive', as it can be described as active inference using a (mostly
actively) learned world model. Inference is done on many levels, and could
integrate information from various abstraction levels, so I don't see it
On Sun, Mar 30, 2008 at 6:48 PM, William Pearson [EMAIL PROTECTED] wrote:
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
An audiovisual perception layer generates semantic interpretation on the
(sub)symbolic level. How could a symbolic engine ever reason about the real
world without
Vladimir, I agree with you on many issues, but...
On Sun, Mar 30, 2008 at 9:03 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
This way, for example, it should be possible to teach a 'modality' for
understanding simple graphs encoded as text, so that on one hand
text-based input is sufficient,
... Is this indeed the direction you're going?
Greets,
Durk
On Sun, Mar 30, 2008 at 10:00 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Sun, Mar 30, 2008 at 11:33 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:
Vector graphics can indeed be communicated to an AGI by relatively
low-bandwidth textual input
to learn a sufficient model
about entities embedded in a complex physical world, such as humans.
On Sun, Mar 30, 2008 at 10:50 PM, Mark Waser [EMAIL PROTECTED] wrote:
From: Kingma, D.P. [EMAIL PROTECTED]
Sure, you could argue that an intelligence purely based on text,
disconnected from
On Sun, Mar 30, 2008 at 11:00 PM, Mark Waser [EMAIL PROTECTED] wrote:
From: Kingma, D.P. [EMAIL PROTECTED]
Vector graphics can indeed be communicated to an AGI by relatively
low-bandwidth textual input. But, unfortunately,
the physical world is not made of vector graphics, so reducing
(Sorry for triple posting...)
On Sun, Mar 30, 2008 at 11:34 PM, William Pearson [EMAIL PROTECTED] wrote:
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
Intelligence is not *only* about the modalities of the data you get,
but modalities are certainly important. A deafblind person
an AGI seem considerably less complex.
Ed Porter
-Original Message-
From: Kingma, D.P. [mailto:[EMAIL PROTECTED]
Sent: Sunday, March 02, 2008 12:08 PM
To: agi@v2.listbox.com
Subject: [agi] interesting Google
/index.html
Look under 2007-12-06: Learning a Deep Hierarchy of Sparse Invariant
Features
Durk
On Wed, Mar 19, 2008 at 11:41 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:
No problem ;)
One other autoencoder architecture you might find interesting is Yann
LeCun's deep belief network:
http
I reckon that the shuffled words (meaningless and low-probability) trigger
an internal representation that is close enough to the meaning*ful*
representation to be correctly classified.
One part of this triggered internal representation is about WHAT is present,
the other part about WHERE these
On Mon, Mar 3, 2008 at 6:33 AM, [EMAIL PROTECTED] wrote:
Thanks for that.
Don't you see the way to go on neural nets is hybrid with genetic
algorithms in mass amounts?
No, I don't agree with your buzzword-laden statement :) I have experimented
with EA + NNs and it's still intractable when scaled up to
On Mon, Mar 3, 2008 at 6:39 PM, Richard Loosemore [EMAIL PROTECTED]
wrote:
The problems with bolting together NN and GA are so numerous it is hard
to know where to begin. For one thing, you cannot represent structured
information with NNs unless you go to some trouble to add extra
Gentlemen,
For guys interested in vision, neural nets and the like, there's a very
interesting talk by Geoffrey Hinton about unsupervised learning of
low-dimensional codes:
It's been on YouTube since December, but somehow it escaped my attention for
some months.
Yes, great presentation and summary of the work done by these guys at
CMU. It reinforces my belief that sensory processing should be
explained in terms of finding efficient representations, in other
words: dimensionality reduction. One nice aspect is that there are
machine learning techniques that
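As one concrete example of such a technique, here is a PCA sketch (my own illustration, using NumPy): PCA is the simplest of the dimensionality-reduction methods in this family, linear where Hinton's autoencoders are nonlinear.

```python
# Minimal PCA sketch (my illustration): find an efficient low-dimensional
# representation by projecting the data onto its top principal components.
import numpy as np

def pca_reduce(X, k):
    """Return the coordinates of X's rows in the top-k principal subspace."""
    Xc = X - X.mean(axis=0)                    # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # project onto k components

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))             # 2 true degrees of freedom
X = latent @ rng.normal(size=(2, 10))          # embedded in 10 dimensions
Z = pca_reduce(X, 2)                           # recovered 2-D representation
```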
Nice list, although I'm missing WEKA, a widely used Machine Learning library
/ tool:
http://en.wikipedia.org/wiki/WEKA
Durk
On Dec 17, 2007 8:03 PM, Stephen Reed [EMAIL PROTECTED] wrote:
I've published a roughly categorized link list of Java AI tools and
libraries, that may be helpful to Java
Dear Edward, may I ask why you regularly choose to type in all caps? Do you
have a broken keyboard? Otherwise, please refrain from doing so, since (1)
many people associate it with shouting and (2) lowercase text is easier to
read...
Kind regards,
Durk Kingma
On 10/12/07, Edward W. Porter [EMAIL
In theory, HTMs are not restricted to off-line learning. For some
reason the NuPIC software doesn't allow it yet, primarily because of
implementation issues. One reason is that an HTM module's learning
mechanism presumes a predetermined input alphabet. They're working on
improvements though, IIRC.
I have followed HTM progress to some extent but have not seen any
medical applications of NuPIC. Or any serious applications, for that
matter, unless groups besides Numenta have created an advanced HTM
implementation... To get an idea of current applications you could
check out the (quite shallow)
On 6/6/07, Peter Voss [EMAIL PROTECTED] wrote:
...
Our goal is to create full AGI, but our business plan is to commercialize
an
intermediate-level AGI engine via some highly lucrative applications. Our
target date to commence commercialization is the end of next year.
Peter Voss
a2i2
The
John,
Thanks for your reply to my questions about your project Tommy in your
previous post. I'm very interested in the details, but please forgive my
relative freshness in this field (CS graduate heading into an AI master's :)
I'm particularly interested in the types of pattern mining you're
John, as I wrote earlier, I'm very interested in learning more about your
particular approach to:
- Concept and pattern representation. (i.e. types of concept, patterns,
relations?)
- Concept creation. (searching for statistically significant spatiotemporal
correlations, genetic programming,
Yes, thank you, a meaningful and very interesting project. I discussed this
kind of system with a friend of mine half an hour ago.
On 5/11/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
2. The hard part is learning: the AI has to build its own world
model. My instinct and experience
On 4/18/07, Matt Mahoney [EMAIL PROTECTED] wrote:
...
I would go further and include lossy compression tests. In theory, you
could compress speech to 10 bits per second by converting it to text and
using text compression. The rate at which the human brain can remember
video is not much
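A rough back-of-the-envelope check of that bit rate (the speaking rate and compression figures below are my own assumed numbers, not Matt's):

```python
# Back-of-the-envelope check (assumed figures, my own illustration):
words_per_minute = 150      # typical conversational speaking rate
chars_per_word = 6          # ~5 letters plus a space
bits_per_char = 1.0         # strong text compressors approach ~1 bit/char

chars_per_second = words_per_minute * chars_per_word / 60.0   # 15 chars/s
bits_per_second = chars_per_second * bits_per_char            # 15 bits/s
# ... on the order of the quoted 10 bits per second
```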
[Spelling corrected and reworded...]
I'm not convinced by this reasoning. First, the way individuals store
audiovisual information differs, simply because of slight differences in
brain development (nurture). Also, memory is condensed information about the
actual high-level sensory/experience
I like the idea very much. Some weeks of education in AI/CogPsy/Neuroscience
would be very cool, no doubt.
If I lived in the U.S., I would sign up and go for sure. The only problem
for me, as a Dutch student, would be budgetary. Unless someone can get my
Amsterdam-NY flight sponsored ;)
Durk
FYI... After reading Hawkins's book I actually believe that his ideas may
indeed underlie a future AGI system... but they need to be fleshed out in
much greater detail...
Cheers,
K
Their current implementation has not changed substantially since
their first proof-of-concept implementation.
Dear Aki Iskandar,
My 2 cents.
I haven't read the book On Intelligence, but from the book flowed a
proof-of-concept program which I analyzed thoroughly. I read the source of
the program, analyzed the Dileep George and Hawkins papers, etc.
The idea of the proof-of-concept is basically a
Larry Page, Google co-founder: "We have some people at Google (who) are
really trying to build artificial intelligence and to do it on a large
scale," Page said to a packed Hilton ballroom of scientists. "It's not as
far off as people think."
link:
http://news.com.com/2100-11395_3-6160372.html
Forgot to say: if anyone has found similarly informative videos regarding
cognitive computing or AGI in general, I'm very interested.
On 1/20/07, Kingma, D.P. [EMAIL PROTECTED] wrote:
(Almaden Institute Conference on Cognitive Computing)
http://video.google.com/videoplay?docid
Dear Nathan,
Your description of your neural-net scheme needs more detail before I
can give any more particular direction. It leaves a huge number of
possibilities.
For example, is your neural net similar to a hierarchical network of abstract
facts, being built by agents, whereas the agent's
be to not make use of it? This is what the (mainly non-GOFAI) AI
community discovered in the '80s.
I'm afraid many people confuse mathematics and statistics with rigidity
and lack of creativity.
Durk
John
Kingma, D.P. wrote:
The ability to generate meaningful images is not a goal
On 12/8/06, Bob Mottram [EMAIL PROTECTED] wrote:
However, as the years went by I became increasingly dissatisfied with this
kind of approach. I could get NN systems to work quite well on small toy
problems, but when trying to build larger more practical systems (for
example robots handling
My opinion is that this list may occasionally be used just to point to
interesting papers. If you disagree, please let me know.
Some very recent papers by Geoffrey Hinton have raised my hopes for
academic progress in neural network research. The guy has always been
an ANN guru, but I find his
://www.phillylac.org/prediction/
Pei
On 10/26/06, Kingma, D.P. [EMAIL PROTECTED] wrote:
I'm a Dutch student currently situated in Rome for six months. Due to my
recent interest in AGI I have initiated a small research project into HTM
theory (J. Hawkins / D. George). HTM learning is (in my eyes
looking forward to reading your paper.
Yes, people sometimes take the HTM model to be similar to a neural
net, though it is actually much closer to a Bayesian net.
Pei
On 10/28/06, Kingma, D.P. [EMAIL PROTECTED] wrote:
Thank you. I've studied the paper and the tested 'improvements'. The
experiments
YKY, I agree with your views, predicate logic is much more
straightforward to work with, and I absolutely respect all the work
and thoughts put into it.
A problem within the AI domain is that Vision has not been solved yet.
The existing and functioning algorithms are mainly specialised into
sub
;)
Greetings, Durk
On 9/5/06, Bob Mottram [EMAIL PROTECTED] wrote:
On 05/09/06, Kingma, D.P. [EMAIL PROTECTED] wrote:
A problem within the AI domain is that Vision has not been solved yet.
The existing and functioning algorithms are mainly specialised into
sub domains like face recognition etc
Hi, I'm Durk Kingma from the Netherlands, undergraduate in Computer
Science at University of Utrecht (Netherlands). I'm just a novice in
terms of AGI theory and your pages form a gentle introduction to the
theme. Thank you for your work and I'm looking forward to the
additions, especially