Via Slashdot:
*According to a new article published in Scientific American, the nature of
and evolutionary development of animal intelligence is significantly more
complicated than many have assumed.
http://www.sciam.com/article.cfm?id=one-world-many-minds
In opposition to the widely held view
On Sat, Dec 27, 2008 at 5:25 PM, Ben Goertzel b...@goertzel.org wrote:
I wrote down my thoughts on this in a little more detail here (with some
pastings from these emails plus some new info):
http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html
I
On Sun, Dec 28, 2008 at 1:02 AM, Ben Goertzel b...@goertzel.org wrote:
See the mildly revised version, where I replaced "real world" with "everyday
world" (and defined the latter term explicitly), and added a final section
relevant to the distinctions between the everyday world, simulated everyday
Matthias,
You've presented a straw man argument to criticize embodiment. As a
counter-example, in the OCP AGI-development plan, embodiment is not
primarily used to provide domains (via artificial environments) in which an
AGI might work out abstract problems, directly or comparatively (not to
On Tue, Oct 21, 2008 at 12:56 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
Any argument of the kind "you had better first read xxx + yyy + …" is
very weak. It is a pseudo killer argument against everything, with no content
at all.
If xxx , yyy … contains really relevant information for
An excellent post, thanks!
IMO, it raises the bar for discussion of language and AGI, and should be
carefully considered by the authors of future posts on the topic of language
and AGI. If the AGI list were a forum, Matthias's post would be pinned!
-dave
On Sun, Oct 19, 2008 at 6:58 PM, Dr.
On Sat, Oct 18, 2008 at 9:48 PM, Mike Tintner [EMAIL PROTECTED] wrote:
[snip] We understand and think with our whole bodies.
Mike, these statements are an *enormous* leap from the actual study of
mirror neurons. It's my hunch that the hypothesis paraphrased above is
generally true, but it is
Mike, I think you won't get a disagreement in principle about the benefits
of melding creativity and rationality, and of grokking/manipulating concepts
in metaphorical wholes. But really, a thoughtful conversation about *how*
the OCP design addresses these issues can't proceed until you've RTFBs.
On Wed, Oct 15, 2008 at 5:52 PM, Colin Hales
[EMAIL PROTECTED] wrote:
So you'll just have to wait. Sorry. I also have patent/IP issues.
Exactly what qualia am I expected to feel when you say the words
'Intellectual Property'? (that's a rhetorical question, just in case there
was any doubt!)
CogDev is a free 1-day workshop where you can learn about OpenCog and
OpenCogPrime and meet some of the team.
More info at http://opencog.org/wiki/CogDev2008
Signup / Registration Form at
http://spreadsheets.google.com/viewform?key=pT15xTF3ys-1Aola-Yb_UFw
When? Sunday, October 26, 2008 - 10am -
Hi Brad,
An interesting point of conceptual agreement between OCP and Texai designs
is that very specifically engineered bootstrapping processes are necessary
to push into AGI territory. Attempting to summarize using my limited
knowledge, Texai hopes to achieve that bootstrapping via reasoning
On Sun, Oct 12, 2008 at 3:37 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
There are other differences with OCP, as you know I plan to use PZB
logic, and I've written part of a Lisp prototype. I'm not sure what's
the best way to opensource it -- integrating with OCP, or as a
separate
On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
So, it has, in fact, been tried before. It has, in fact, always failed.
Your comments about the quality of Ben's approach are noted. Maybe you're
right. But, it's not germane to my argument which is that those parts of
Brad,
Your post describes your position *very* well, thanks.
But, it does not describe *how* or *why* your AI system might achieve domain
expertise any faster/better/cheaper than other narrow-AI systems (NLU
capable, embodied, or otherwise) on its way to achieving networked-AGI. The
list would
On Tue, Oct 7, 2008 at 10:43 AM, Charles Hixson
[EMAIL PROTECTED] wrote:
I feel that an AI with quantum level biases would be less general. It would
be drastically handicapped when dealing with the middle level, which is
where most of living is centered. Certainly an AGI should have modules
On Sun, Oct 5, 2008 at 3:55 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
More generally, as long as AGI designers and developers insist on
simulating human intelligence, they will have to deal with the AI-complete
problem of natural language understanding. Looking for new approaches to
this
On Sun, Oct 5, 2008 at 7:29 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
[snip] Unfortunately, as long as the mainstream AGI community continues to
hang on to what should, by now, be a thoroughly-discredited strategy, we
will never (or too late) achieve human-beneficial AGI.
What a strange
On Mon, Oct 6, 2008 at 10:03 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Arguably, for instance, camera+lidar gives enough data for reconstruction
of the visual scene ... note that lidar gives more accurate 3D depth
data than stereopsis...
Also, for that matter, 'visual' input to an AGI
On Tue, Sep 30, 2008 at 5:23 AM, Mike Tintner [EMAIL PROTECTED] wrote:
How does Stephen or YKY or anyone else propose to read between the lines?
And what are the basic world models, scripts, frames etc etc. that you
think sufficient to apply in understanding any set of texts, even a
Hi YKY,
Can you explain what is meant by collect commonsense knowledge?
Playing the friendly devil's advocate, I'd like to point out that Cyc seems
to have been spinning its wheels for 20 years, building a nice big database
of 'commonsense knowledge' but accomplishing no great leaps in AI. Cyc's
On Mon, Sep 22, 2008 at 10:08 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Training will be the overwhelming cost of AGI. Any language model
improvement will help reduce this cost.
How do you figure that training will cost more than designing, building and
operating AGIs? Unlike training a
On Fri, Sep 19, 2008 at 8:40 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
On Fri, Sep 19, 2008 at 7:30 AM, David Hart [EMAIL PROTECTED] wrote:
Take the hypothetical case of R. Marketroid, whose hardware is on the
books as an asset at ACME Marketing LLC and whose programming has been
On Thu, Sep 18, 2008 at 3:26 PM, Linas Vepstas [EMAIL PROTECTED] wrote:
I agree that the topic is worth careful consideration. Sacrificing the
'free as in freedom' aspect of AGPL-licensed OpenCog for reasons of
AGI safety and/or the prevention of abuse may indeed be necessary one
day.
On Fri, Sep 19, 2008 at 3:53 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
Exactly. If opencog were ever to reach the point of
popularity where one might consider a change of
licensing, it would also be the case that most of the
interested parties would *not* be under SIAI control,
and thus
On Thu, Sep 18, 2008 at 9:44 PM, Trent Waddington
[EMAIL PROTECTED] wrote:
Claiming a copyright and successfully defending that claim are different
things.
What ways do you envision someone challenging the copyright?
Take the hypothetical case of R. Marketroid, whose hardware is on the
From http://machineslikeus.com/news/time-teaches-brain-how-recognize-objects
In work that could aid efforts to develop more brain-like computer vision
systems, MIT neuroscientists have tricked the visual brain into confusing
one object with another, thereby demonstrating that time teaches us how
I suspect that there's minimal value in thinking about mundane 'self
improvement' (e.g. among humans or human institutions) in an attempt to
understand AGI-RSI, and that thinking about 'weak RSI' (e.g. in a GA system
or some other non-self-aware system) has value, but only insofar as it can
On 8/29/08, David Hart [EMAIL PROTECTED] wrote:
The best we can hope for is that we participate in the construction and
guidance of future AGIs such that they are able to, eventually, invent,
perform and carefully guide RSI (and, of course, do so safely every single
step of the way without
On 8/29/08, Mike Tintner [EMAIL PROTECTED] wrote:
Sorry, I forgot to ask for what I most wanted to know - what form of RSI in
any specific areas has been considered?
To quote Charles Babbage, "I am not able rightly to apprehend the kind of
confusion of ideas that could provoke such a question."
On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:
Is anyone trying to design a self-exploring robot or computer? Does this
principle have a name?
Interestingly, some views on AI advocate specifically prohibiting
self-awareness and self-exploration as a precaution against the development
of
Where is the hard dividing line between designed cognition and designed
simulation (where intelligent behavior is intended to be emergent in both
cases)? Even if an approach is taken where everything possible is done to allow
a 'natural' type evolution of behavior, the simulation design and
Of course the brain also manifests complex self-organizing adaptive system
characteristics (particularly in patterns of activity), although these
characteristics are not apparent from static images.
-dave
On 8/7/08, Jim Bromer [EMAIL PROTECTED] wrote:
Yeah, they were amazing and they explain a
Jim,
I believe that terminology continues to thwart us. It appears that the term
'complexity' as you're using it means 'mechanistically intricate' and not
'Santa Fe Institute style complexity'.
The term 'complexity' never should have been overloaded in the first place
(ugh), but since we must
On 8/4/08, Jim Bromer [EMAIL PROTECTED] wrote:
Sorry if I seem a little petty about this, but my use of the concept
of complexity -in the more general sense- could also involve some kind
of manifestation of a complex adaptive system, although that is not a
definite aspect of it.
I agree
I favor voluntary adoption of Crocker's Rules (explained at
http://www.sl4.org/crocker.html more at
http://www.google.com/search?q=crocker's+rules).
-dave
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed:
On 8/2/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Thus: in my paper there is a quote from a book in which Conway's efforts
were described, and it is transparently clear from this quote that the
method Conway used was random search:
I believe this statement misinterprets the quote and
Hi All,
An excellent 20-minute TED talk from Susan Blackmore (she's a brilliant
speaker!)
http://www.ted.com/talks/view/id/269
I considered posting to the singularity list instead, but Blackmore's
theoretical talk is much more germane to AGI than any other
singularity-related technology.
-dave
Derek, you make an excellent point about the OpenCog project appearing too
open-ended and unfocused. Ben is writing documentation for a specific
cognitive architecture, OpenCog Prime, that is intended to address these
concerns. The first iteration of OpenCog Prime is targeted for July and will
be
Hi,
Some news with interesting implications for future AGI development, from
http://www.theregister.co.uk/2007/12/10/amd_violin_memory/ - more at
http://www.violin-memory.com/
10TB of DRAM? Why not? By Ashlee Vance in Mountain View
On 12/5/07, Matt Mahoney [EMAIL PROTECTED] wrote:
[snip] Centralized search is limited to a few big players that
can keep a copy of the Internet on their servers. Google is certainly
useful,
but imagine if it searched a space 1000 times larger and if posts were
instantly added to its
On 6/5/07, Bob Mottram [EMAIL PROTECTED] wrote:
I think this is the view put forward by Hugo De Garis. I used to
regard his views as little more than an amusing sci-fi plot, but more
recently I am slowly coming around to the view that there could emerge
a rift between those who want to build
Hi YKY,
On 1/28/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Thanks, but I favor a license that supports some commercial rights, or
I'll need to create one. Google Code only supports free /
copyleft licenses.
Licensing is typically more intricate than it first appears. KB content and
On 1/27/07, Charles D Hixson [EMAIL PROTECTED] wrote:
Philip Goetz wrote:
On 1/17/07, Charles D Hixson [EMAIL PROTECTED] wrote:
It's fine to talk about making the data public domain, but that's not
a good idea.
Why not?
Because public domain offers NO protection. If you want something
On 11/30/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Richard,
This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...
Ben,
Could you elaborate for the list some of the nuances between [explicit]
cognitive control and [implicit]
curricula.
--
David Hart
---
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
Hi,
Google has announced the release of a trillion-word training corpus
including one billion five-word sequences that appear at least 40 times
in their database of web pages.
More at
http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html
The 6 DVD set will be
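The corpus described above keeps five-word sequences that occur at least 40 times. Purely as an illustration (this is not Google's extraction pipeline), a toy n-gram counter with the same shape of frequency cutoff can be sketched like this, with the threshold lowered so a tiny example yields results:

```python
# Toy sketch of frequency-thresholded n-gram extraction.
# Hypothetical helper; the real corpus used n=5 and min_count=40
# over a trillion-word web crawl.
from collections import Counter

def frequent_ngrams(tokens, n=5, min_count=2):
    """Count all n-grams in tokens; keep those meeting the threshold."""
    counts = Counter(tuple(tokens[i:i + n])
                     for i in range(len(tokens) - n + 1))
    return {gram: c for gram, c in counts.items() if c >= min_count}

# Repeat a short sentence so some 5-grams recur.
tokens = ("the quick brown fox jumps over the lazy dog " * 3).split()
print(frequent_ngrams(tokens, n=5, min_count=2))
```

At web scale the same idea obviously requires streaming and sharded counting rather than an in-memory `Counter`, which is why the released data fills a multi-DVD set.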
http://economist.com/science/displayStory.cfm?story_id=5354696
Snarfed from Slashdot.
David
Economist.com Mathematics Proof and beauty - Controversial
computer-generated proofs
http://www.economist.com/science/displayStory.cfm?story_id=3809661
-dave
Hi Ben,
The NM_human paper is excellent! I found it very polished. It should be
a great tool to help the average science-literate person begin to grok
Novamente -- I'll be passing it on a great deal! :-)
-dave
Ben Goertzel wrote:
Hi,
As part of the process of finalizing my long-in-progress
Hi All,
TurboExcel isn't
directly AGI related, but I find it fascinating that someone figured
out how to compile a spreadsheet into portable C++. That said, this
technology could have an impact on using the spreadsheet metaphor to
prototype or even write AGI subsystems; it has the additional
Hi,
I've thought this type of representation might be most efficiently
achieved with a vector-driven internal representation. That is,
Novamente's internal construction and representation of such a
demonstration model might be done with vectors (animated by schema
procedures), using pixels