On Sun, Oct 5, 2008 at 7:41 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Ben,
>
> I have heard the argument for point 2 before, in the book by Pinker,
> "How the Mind Works". It is the inverse-optics problem: physics can
> predict what image will be formed on the retina from material
> arrangements ...
On Sun, Oct 5, 2008 at 7:59 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Agreed. Colin would need to show the inadequacy of both inborn and
> learned bias to show the need for extra input. But I think the more
> essential objection is that extra input is still consistent with
> computationalism.
cool ... if so, I'd be curious about the references... I'm not totally up on
that area...
ben
On Sun, Oct 5, 2008 at 8:20 PM, Trent Waddington <[EMAIL PROTECTED]> wrote:
> On Mon, Oct 6, 2008 at 10:03 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > Arguably, f
>> ...current issue.
>>
>> --Abram
On Sun, Oct 5, 2008 at 11:16 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Ben,
>
> I think the entanglement possibility is precisely what Colin believes.
> That is speculation on my part of course. But it is something like
> that. Also, it is possible that quantum computers can do more than
> normal computers ...
...save the AGI list
> from going down in flames! ;-)
>
> -dave
> Now where have I heard that before, I wonder?
>
>
>
> Richard Loosemore
>
> "I think we're at the stage where a team of a couple dozen could do it in
> 5-10 years"
>
> I repeat - this is outrageous. You don't have the slightest evidence of
> progress - you [the collective you] haven't solved a single problem of
> general intelligence - a single mode of generalising - s
> And you
> can't escape flaws in your reasoning by wearing a lab coat.
>
Maybe not a lab coat... but how about my trusty wizard's hat??? ;-)
http://i34.tinypic.com/14lmqg0.jpg
>> END QUOTE.
>>
>>
>> I particularly liked his choice of words when he said: "We were able to
>> find a number of properties that were simply decoupled from the fundamental
>> interactions..."
>>
>> Now where have I heard that before, I wonder?
On Mon, Oct 6, 2008 at 7:36 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Matthias (cont),
>
> Alternatively, if you'd like *the* creative (& somewhat mathematical)
> problem of our day - how about designing a "bail-out" fund/mechanism for
> either the US or the world, that will actually work?
Mike,
> by definition a creative/emergent problem is one where you have to bring
> about a given effect by finding radically new kinds of objects that move or
> relate in radically new kinds of ways - to produce that effect. By
> definition, you *do not know which domain is appropriate to solving it* ...
>
>> On the contrary, it is *you* who repeatedly resort to essentially
>> *reference to authority* arguments - saying "read my book, my paper etc
>> etc" - and what basically amounts to the tired line "I have the proof, I
>> just don't have the time to write it in the margin"
>>
>
> No. I do not
Hi all,
In preparation for an upcoming (invitation-only, not-organized-by-me)
workshop on Evaluation and Metrics for Human-Level AI systems, I
concatenated a number of papers on the evaluation of AGI systems into a
single PDF file (in which the readings are listed alphabetically in order of
file name) ...
>
> Maybe all we need is just a simple interface for entering facts...
>
> YKY
>
I still don't understand why you think a simple interface for entering facts
is so important... Cyc has a great UI for entering facts, and used it to
enter millions of them already ... how far did it get them toward AGI?
>
> So the key question is whether there will be enough opensource
> contributors with innovative ideas and expertise in AGI...
>
> YKY
It's a gamble ... and I don't yet know if my gamble with OpenCog will pay
off!!
A problem is that to recruit a lot of quality volunteers, you'll first need
to ...
AGI is 5-10 years. Not very
> long-term at all.
>
> > Incidentally, once a true AGI is created the current software
> > development paradigm becomes obsolete anyway.
>
> This doesn't sound very logical. Food will turn into excretion anyway,
> so...?
>
> YKY
...wonder if you've read Bohm's Thought as a System, or if you've been
> influenced by Niklas Luhmann on any level.
>
> Terren
>
> --- On *Fri, 10/10/08, Ben Goertzel <[EMAIL PROTECTED]>* wrote:
>
> There is a sense in which social groups are "mindplexes":
...I'm aware of (in the US anyway) that has talked about
> autopoiesis. I wonder what your thoughts are about it? To what extent has
> that influenced your philosophy? Not looking for an essay here, but I'd be
> interested in your brief reflections on it.
>
> Terren
>
> --- On
If my impression of these discussions
> is accurate, if the partisan arguments for logic, probability or
> neural networks and the like are really arguments for choosing one or
> the other as a preponderant decision process, then it is my opinion
> that the discussants are missing the major problem ...
Abram,
I finally read your long post...
> The basic idea is to treat NARS truth values as representations of a
> statement's likelihood rather than its probability. The likelihood of
> a statement given evidence is the probability of the evidence given
> the statement. Unlike probabilities, calc
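To make the snippet's distinction concrete, here is a minimal sketch in
Python (toy numbers and function names of my own, not Abram's actual
proposal or any NARS code): a likelihood treats the truth value as
P(evidence | statement) rather than P(statement | evidence).

from math import comb

def likelihood(k_success, n_trials, p_hypothesis):
    # Binomial likelihood of the evidence (k successes in n trials)
    # under a hypothesized success rate p_hypothesis.
    return (comb(n_trials, k_success)
            * p_hypothesis ** k_success
            * (1 - p_hypothesis) ** (n_trials - k_success))

evidence = (8, 10)  # 8 positive cases out of 10 observations
for p in (0.5, 0.8, 0.9):
    print(p, likelihood(*evidence, p))

The three outputs need not sum to 1: likelihoods compare hypotheses on fixed
evidence, whereas probabilities over hypotheses would also require a prior
and normalization.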
On Fri, Oct 10, 2008 at 4:48 PM, Pei Wang <[EMAIL PROTECTED]> wrote:
> On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > In particular, the result that NARS induction and abduction each
> > depend on **only one** of their premise truth values ...
..."Swimmers are birds".
>
> Now I wonder if PLN shows a similar asymmetry in induction/abduction
> on negative evidence. If it does, then how can that effect come out of
> a symmetric truth-function? If it doesn't, how can you justify the
> conclusion, which looks counter-intuitive?
I meant frequency, sorry
"Strength" is a term Pei used for frequency in some old sicsussions...
> If I were taking the approach Ben suggests, that is, making
> reasonable-sounding assumptions and then working forward rather than
> assuming NARS and working backward, I would have kept the fo...
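For readers without the NARS background, the standard published mapping from
evidence counts to a NARS truth value is easy to sketch (this follows Wang's
definitions, with k the evidential horizon; it is background for the
discussion, not the induction rule being debated):

def nars_truth(w_plus, w_minus, k=1):
    # frequency f: fraction of positive evidence
    # confidence c: approaches 1 as total evidence accumulates
    w = w_plus + w_minus
    return w_plus / w, w / (w + k)

print(nars_truth(8, 2))  # ample, mostly positive evidence -> f = 0.8, c ~ 0.91
print(nars_truth(1, 0))  # a single positive case -> f = 1.0 but only c = 0.5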
On Fri, Oct 10, 2008 at 6:01 PM, Pei Wang <[EMAIL PROTECTED]> wrote:
> On Fri, Oct 10, 2008 at 5:52 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > I meant frequency, sorry
> >
> > "Strength" is a term Pei used for frequency in some old sic
Pei,
I finally took a moment to actually read your email...
>
> However, the negative evidence of one conclusion is no evidence of the
> other conclusion. For example, "Swallows are birds" and "Swallows are
> NOT swimmers" suggest "Birds are NOT swimmers", but say nothing
> about whether "Swimmers are birds".
...your position.
>
> Let's go back to the example. If the only relevant domain knowledge
> PLN has is "Swallows are birds" and "Swallows are
> NOT swimmers", will the system assign the same lower-than-default
> probability to "Birds are swimmers" and "Swimmers are birds"?
On Fri, Oct 10, 2008 at 8:29 PM, Pei Wang <[EMAIL PROTECTED]> wrote:
> On Fri, Oct 10, 2008 at 8:03 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this
> would
> > be the case...
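The Bayes-rule point is easy to check with toy numbers (mine, purely
illustrative): if P(bird) = P(swimmer), the two conditionals coincide.

p_bird = p_swimmer = 0.1
p_swimmer_given_bird = 0.3

# Bayes rule: P(bird|swimmer) = P(swimmer|bird) * P(bird) / P(swimmer)
p_bird_given_swimmer = p_swimmer_given_bird * p_bird / p_swimmer
print(p_bird_given_swimmer)  # 0.3, equal to P(swimmer|bird) as claimed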
This seems loosely related to the ideas in 5.10.6 of the PLN book, "Truth
Value Arithmetic" ...
ben
On Fri, Oct 10, 2008 at 9:04 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> >> Gi
Abram,
>
> Anyway, perhaps I can try to shed some light on the broader exchange?
> My route has been to understand "A is B" as not P(A|B), but instead
> P("A is X" | "B is X") plus the extensional equivalent... under this
> light, the negative evidence presented by two statements "B is C" and ...
Pei etc.,
First high level comment here, mostly to the non-Pei audience ... then I'll
respond to some of the details:
This dialogue -- so far -- feels odd to me because I have not been
defending anything special, peculiar or inventive about PLN here.
There are some things about PLN that would be
Brad,
>
> But, human intelligence is not the only general intelligence we can imagine
> or create. IMHO, we can get to human-beneficial, non-human-like (but,
> still, human-inspired) general intelligence much quicker if, at least for
> AGI 1.0, we avoid the twin productivity sinks of NLU and embodiment ...
oops, i meant 1895 ... damn that dyslexia ;-) ... though the other way was
funnier, it was less accurate!!
On Sat, Oct 11, 2008 at 8:55 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> I'm only pointing out something everybody here knows full well:
>> embodiment in va
> I'm only pointing out something everybody here knows full well:
> embodiment in various forms has, so far, failed to provide any real help in
> cracking the NLU problem. Might it in the future? Sure. But the key word
> there is "might."
To me, you sound like a guy in 1985 saying "So far,
On Sat, Oct 11, 2008 at 7:38 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> As I understand the way you guys and AI generally work, you create
> well-organized spaces which your programs can systematically search for
> options. Let's call them "nets" - which have systematic, well-defined and
> order
On Sat, Oct 11, 2008 at 9:46 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> I guess the obvious follow-up question is: when your systems search among
> options for a response to a situation, they don't search in a systematic way
> through spaces of options? They can just start anywhere and end up anywhere...
ts in
> large networks.
>
> -- Ben G
ide
> like that from the "top dog."
>
> It's too bad. I was just starting to feel "at home" here. Sigh.
>
> Cheers (and goodbye),
> Brad
>
> Ben Goertzel wrote:
>
>>
>> A few points...
>>
>> 1) Closely associating e
Thanks Pei!
This is an interesting dialogue, but indeed, I have some reservations about
putting so much energy into email dialogues -- for a couple reasons:
1) because, once they're done, the text generated basically just vanishes
into messy, barely-searchable archives.
2) because I tend to answe...
On Sat, Oct 11, 2008 at 12:27 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Ben,
>
> Thanks. But you didn't reply to the surely central-to-AGI question of
> whether this free-form knowledge base is or can be multi-domain - and
> particularly involve radically conflicting sets of rules about how gi
On Sat, Oct 11, 2008 at 11:37 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> Brad,
>
> Sorry if my response was somehow harsh or inappropriate, it really wasn't
> intended as such. Your contributions to the list are valued. These last
> few weeks have been rather tough for me ...
>
>
> Can you provide me with a link to how you deal with explanations and
> reasons in OCP?
> Jim Bromer
>
That topic is so broad I wouldn't know what to do except point you to PLN
generally...
http://www.amazon.com/Probabilistic-Logic-Networks-Comprehensive-Framework/dp/0387768718
(alas the
own opinions", I try to be very clear
> that that is the role I'm adopting..
>
> ben g
>
>
>
>
> I guess I'll try #3 and see what happens. Recently, I've decided to
> use Lisp as the procedural language, so that makes my approach even
> more similar to OCP's. One remaining big difference is that my KB is
> sentential but OCP's is graphical. Maybe we should spend some time
> discussing
Hi,
> > What this highlights for me is the idea that NARS truth values attempt
> > to reflect the evidence so far, while probabilities attempt to reflect
> > the world
>
I agree that probabilities attempt to reflect the world.
>
> Well said. This is exactly the difference between an
> exper
...How come my posts aren't getting through? (Going out
> to the list) What do you call that?
>
> ATM/Mentifex
> --
> http://code.google.com/p/mindforth/
>
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
"Nothing will ev
YKY wrote:
>
> In my approach (which is not even implemented yet) the KB contains
> rules that are used to construct propositional Bayesian networks. The
> rules contain variables in the sense of FOL. It's not clear how this
> is done in OCP.
>
OpenCog has VariableNodes in the AtomTable, which are used to represent
variables in the sense of FOL ...
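Since the question is how rules with FOL-style variables become
propositional networks, here is a minimal sketch of the grounding step
described above (my own illustration with invented predicates and weights,
not YKY's system or OpenCog's actual API):

rule = ("smokes(X)", "cancer(X)", 0.3)  # "if smokes(X) then cancer(X)", weight 0.3
individuals = ["alice", "bob"]

def ground(rule, individuals):
    # substitute each individual for the variable X, yielding
    # propositional (parent, child, weight) edges
    head, tail, w = rule
    return [(head.replace("X", i), tail.replace("X", i), w) for i in individuals]

for edge in ground(rule, individuals):
    print(edge)  # ('smokes(alice)', 'cancer(alice)', 0.3), etc.

Each grounded edge would become a link between concrete nodes in the
propositional Bayesian network; on my reading, VariableNodes play the role
of X before grounding.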
underlying processes but this then becomes technical and lengthy!!
-- Ben
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
"Nothing will ever be attempted if all possible objections must be first
overco
igure.pdf . A
> manifesto of EGS is at
> http://nars.wang.googlepages.com/wang.semantics.pdf
>
> Since the debate on the nature of "truth" and "meaning" has existed
> for thousands of years, I don't think we can settle it here by
> some email exchanges.
On Sun, Oct 12, 2008 at 1:32 AM, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
> On Sun, Oct 12, 2008 at 12:56 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> > OpenCog has VariableNodes in the AtomTable, which are used to represent
> > variables in the sense of FOL ...
> Aspects of the mind that are mainly unconscious and have to do mainly with
> the coordinated activity of a large number of different processes, are
> harder to describe in detail in specific instances. One can describe the
> underlying processes but this then becomes technical and lengthy!!
ing to depend on.
> >>
> >> As usual, each theory has its strength and limitation. The issue is
> >> which one is more proper for AGI. MTS has been dominant in math,
> >> logic, and computer science, and therefore is accepted by the majority
> >> of people. E...
> As before, comments are welcome.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
...work was written during the war, in the trenches I
> think. (I may be mistaken.)
> Jim Bromer
>
> On Mon, Oct 13, 2008 at 12:57 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > I agree it is far nicer when advocates of theories are willing to
> gracefully
> But when you see someone, theorist or critic, who almost never
> demonstrates any genuine capacity for reexamining his own theories or
> criticisms from any critical vantage point what so ever, then it's a
> strong negative indicator.
>
> Jim Bromer
>
I would be hesitant to draw strong conclusions ...
...source of the problem was. Anyway I read the HTML file just
fine, thanks!
On Mon, Oct 13, 2008 at 1:29 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Mon, 10/13/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > I was eager to debunk your supposed debunking of recursive
> > self-improvement ...
On Mon, Oct 13, 2008 at 11:30 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Ben,
> Thanks for the comments on my RSI paper. To address your comments,
You seem to be addressing minor lacunae in my wording, while ignoring my
main conceptual and mathematical point!!!
>
>
> 1. I defined "improvement" ...
Colin wrote:
> The only working, known model of general intelligence is the human. If we
> base AGI on anything that fails to account scientifically and completely for
> *all* aspects of human cognition, including consciousness, then we open
> ourselves to critical inferiority... and the rest of s
...and computers.
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> --- On Mon, 10/13/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> From: Ben Goertzel <[EMAIL PROTECTED]>
> Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
> To: agi@v2.listbox.com
> Date: Monday, October 13, 2008
Hi,
> My main impression of the AGI-08 forum was one of over-dominance by
> singularity-obsessed and COMP thinking, which must have freaked me out a
> bit.
>
This again is completely off-base ;-)
COMP, yes ... Singularity, no. The Singularity was not a theme of AGI-08
and the vast majority of papers ...
> neuroscientific attempt to explain this (or perhaps explain it away). Know
> any more about this?
On Tue, Oct 14, 2008 at 5:27 PM, Colin Hales <[EMAIL PROTECTED]> wrote:
>
>
> Ben Goertzel wrote:
>
>
> Hi,
>
> My main impression of the AGI-08 forum was one of over-dominance by
>> singularity-obsessed and COMP thinking, which must have freaked me out a
>> bit.
Matt,
> But no matter. Whichever definition you accept, RSI is not a viable path to
> AGI. An AI that is twice as smart as a human can make no more progress than
> 2 humans. You don't have automatic self improvement until you have AI that
> is billions of times smarter. A team of a few people isn't ...
...stage
> there's nothing much more for me to add. One day.
>
> - - - - - - - -- - - - -
> Not terribly satisfying. I know. There's no quick route through the
> information.
>
> The only guide I can give is that there is a 'trump card' approach that
> clears nomothetic ...
...Fodor and Pylyshyn's approaches, for instance, were too focused
on abstract reasoning and not enough on experiential learning and
grounding. But I don't think this makes their approaches **more
computational** than a CA model of QED ... it just makes them **bad
computational models of cognition** ...
...8 at 1:16 AM, Colin Hales <[EMAIL PROTECTED]> wrote:
> Ben Goertzel wrote:
>
>
> Sure, I know Pylyshyn's work ... and I know very few contemporary AI
> scientists who adopt a strong symbol-manipulation-focused view of cognition
> like Fodor, Pylyshyn and so forth. That perspective ...
...and AGI also on the verge of collapse, should not
> escape you).
...Chinese Rooms will sign up for the new COMP=false list...
>
> -dave
e" ... etc.)
What are your thoughts on this?
-- Ben
On Wed, Oct 15, 2008 at 10:49 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 15, 2008 at 10:14 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > Actually, I think COMP=false is a perfectly valid subject ...
Richard,
One of the mental practices I learned while trying to save my first marriage
(an effort that ultimately failed) was: when criticized, rather than
reacting emotionally, to analytically reflect on whether the criticism is
valid. If it's valid, then I accept it and evaluate if I should make
...anyone?), so presumably there's a good chance it would show up
> here, and that is good for you and others actively involved in AGI research.
>
> Best,
> Terren
>
>
> --- On *Wed, 10/15/08, Ben Goertzel <[EMAIL PROTECTED]>* wrote:
>
> From: Ben Goertzel <
By the way, I'm avoiding responding to this thread till a little time has
passed and a larger number of lurkers have had time to pipe up if they wish
to...
ben
On Wed, Oct 15, 2008 at 3:07 PM, Bob Mottram <[EMAIL PROTECTED]> wrote:
> 2008/10/15 Ben Goertzel <[EMAIL PROTECTED]>
So, just
> setting up a forum site is not the answer...
>
> ben g
>
>
> What I am trying to debunk is the perceived risk of a fast takeoff
> singularity launched by the first AI to achieve superhuman intelligence. In
> this scenario, a scientist with an IQ of 180 produces an artificial
> scientist with an IQ of 200, which produces an artificial scientist with an
>
>
> I don't really understand why moving to the forum presents any sort of
> technical or logistical issues... just personal ones from some of the
> participants here.
>
It's a psychological issue. I rarely allocate time to participate in
forums, but if I decide to pipe a mailing list to my inbox ...
for anyone else on the list who would look for funding... I'd want to
> see you defend your ideas, especially in the absence of peer-reviewed
> journals (something the JAGI hopes to remedy obv).
>
> Terren
>
> --- On *Wed, 10/15/08, Ben Goertzel <[EMAIL PROTECTED]>*
n their subject lines and allow things to otherwise continue as they are.
> Then, when you fail, it won't poison other AGI efforts. Perhaps Matt or
> someone would like to separately monitor those postings.
>
> Steve Richfield
> ===
> On 10/15/08, Ben Goertzel <[
ities.
>
> =
> Rafael C.P.
> =
This widget seems to integrate mailing lists and forums
in a desirable way...
http://mail2forum.com/forums/
http://mail2forum.com/v12-stable-release/
I haven't tried it out though, just browsed the docs...
-- Ben
any thoughts on this topic. I would just like to
> be able to get a rough idea to what extent the use of cell assemblies
> increases or decreases the number of semantic nodes a set of neural net nodes
> can represent.
>
> Ed Porter
Hi,
> Also, you are right that it does not apply to many real world problems.
> Here my objection (as stated in my AGI proposal, but perhaps not clearly) is
> that creating an artificial scientist with slightly above human intelligence
> won't launch a singularity either, but for a different reason ...
Matt wrote, in reply to me:
> > An AI twice as smart as any human could figure
> > out how to use the resources at his disposal to
> > help him create an AI 3 times as smart as any
> > human. These AI's will not be brains in vats.
> > They will have resources at their disposal.
>
> It depends on
. That is good to
see!
ben g
On Thu, Oct 16, 2008 at 11:22 AM, Abram Demski <[EMAIL PROTECTED]> wrote:
> I'll vote for the split, but I'm concerned about exactly where the
> line is drawn.
>
> --Abram
>
> On Wed, Oct 15, 2008 at 11:01 AM, Ben Goertzel <[EMA
...the Wright Brothers spent their time building
planes rather than laboriously poking holes in the
intuitively-obviously-wrong
supposed-impossibility-proofs of what they were doing...
ben g
On Thu, Oct 16, 2008 at 11:38 AM, Tim Freeman <[EMAIL PROTECTED]> wrote:
> From: "Ben Goertzel" <
> to extract from it guidance as to how to solve the problem I posed.
>
>
>
> Ed Porter
>
>
>
> -----Original Message-----
> *From:* Ben Goertzel [mailto:[EMAIL PROTECTED]
> *Sent:* Thursday, October 16, 2008 11:32 AM
> *To:* agi@v2.listbox.com
> *Subject:* Re: [
...upper bounds (such as just the number of possible combinations), and I was more
> interested in lower bounds.
>
>
>
> -----Original Message-----
> *From:* Ben Goertzel [mailto:[EMAIL PROTECTED]
> *Sent:* Thursday, October 16, 2008 2:45 PM
> *To:* agi@v2.listbox.com
> *Subject:* Re: [agi]
On Thu, Oct 16, 2008 at 6:40 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> One more addition...
>
> Actually the Hamming-code problem is not exactly the same as your problem
> because it does not place an arbitrary limit on the size of the cell
> assembly... oops
>
> But I'm not sure w
On Thu, Oct 16, 2008 at 6:43 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> They also note that according to their experiments, bounded-weight codes
> don't offer much improvement over constant-weight codes, for which
> analytical results *are* available... and for which lower bounds ...
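For anyone trying to reconstruct the numbers behind this subthread, a toy
calculation of my own: with assemblies of exactly k cells out of n neurons,
the number of possible assemblies C(n, k) is the naive upper bound Ed
mentioned, while sizes of constant-weight codes give lower bounds of the
kind he asked about, since they count assemblies guaranteed to be pairwise
distinguishable.

from math import comb

n, k = 1000, 10
print(comb(n, k))  # ~2.6e23 possible size-10 assemblies: the naive upper bound
print(n)           # versus only n = 1000 one-neuron ("grandmother cell") codes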