We seem to be inadvertently empowering some VERY wrong people.
If you haven't noticed, a significant fraction of the population now
believes AGI is already here in a BIG way - not the way people here are
working toward, but in ways depicted in movies, etc. This appears to be
leading in some BAD
This whole thread is fucked up - and notice that in the last decade, this
is the ONLY time I have used this F-word.
Capitalism rewards only those who would trade EVERYTHING for money.
Our present society presents everyone with a simple choice - children or
retirement, take your
MP,
I see a simple reason the AGI nut hasn't been cracked - that everyone
working on the "problem" has rejected any consideration of its genesis:
Our brains started out as a process control system, e.g. as in a hydra.
With more processing ability we advanced to complex behavior, e.g. to fruit
Jim,
There are several potential interpretations of this, with Rob's being only
one (or a few) of them.
Continuing...
On 6:07PM, Tue, Sep 18, 2018 Jim Bromer via AGI
wrote:
>
> I already regret asking these questions, but do you truly (really -
> honestly) believe that:
> Conscious Experience or
John,
John von Neumann once noted that the difference between mechanical,
electrical, and chemical processes disappears when the scale becomes small
enough. So, OF COURSE there are electrical phenomena to observe.
Steve
On 2:42PM, Wed, Sep 12, 2018 John Rose wrote:
> I’m tellin’ ya, nobody
try problems?
>
> Rob
>
> --
> *From:* Matt Mahoney via AGI
> *Sent:* Thursday, 02 August 2018 7:34 PM
> *To:* AGI
> *Subject:* Re: [agi] Reality
>
> I disagree with most of this.
>
> On Wed, Aug 1, 2018 at 7:31 PM Steve Richfield via AGI
> wrote:
If AGI were "alive" and working today, it would take too much time and too
many resources to learn how to overcome a quadrillion dollars being misdirected
by people who fall FAR short of genius.
OTOH, ordinary geniuses ARE smart enough to seek, find, and exploit
weaknesses in the coming enslavement of the
John,
The big thing you are missing (besides the absence of trans-dimensional
technology) is that each layer defends itself against inner layers as
though its life depends on it - which it does. This is what creates and
maintains the onion architecture.
Some future AGI would have no access to
A key feature of the onion I described was to delineate what various
people/entities control, and what controls them.
Then, applying high intelligence, there is some limited ability to reach
into the next outer layer of the onion.
To illustrate, two of my three kids had grabs for exclusive
It seems that reality is organized somewhat like an onion...
In the center are children.
The next layer is parents and family.
Then comes business.
Around this are political leaders in several layers.
Above politics are the super-rich who operate our bribocracy.
At the very top are ET/UFO who
cult?
> The social intelligence augmentation singularity will bloom up many weird
cults I suspect.
So, here we are, discussing something that neither one of us wants to be a
part of. Why bother?
Steve
> It's also far more risky, but that's a feature, not a bug.
>
> On Mon, Jul 9, 2018, 10:51 AM
I am getting my act together to advance a plan to simultaneously maximize
lifespan and the Flynn effect through organized personal preferences - sort
of a cross between a new sexual orientation and a new religion. I suspect
that an AGI electronic singularity would have little of value to offer in
Some solutions, especially in game theory, REQUIRE the use of random number
generators. If those generators can be simulated by an opponent, then they
are definitely NOT random.
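To make this concrete, here is a minimal sketch in Python of matching
pennies (purely illustrative; the player and function names are my own, not
from any library). A "matcher" who can run the opponent's strategy wins
every round against a deterministic player, but only about half the rounds
against one that draws fresh randomness:

import random

def deterministic_player(round_number):
    # "Random-looking" but fully simulatable: it just alternates H and T.
    return "H" if round_number % 2 == 0 else "T"

def rng_player(_round_number):
    # Draws fresh randomness on every call; re-running it does NOT
    # reproduce the same move, so an opponent cannot simulate it.
    return random.choice(["H", "T"])

def matcher_win_rate(opponent, rounds=10000):
    # The matcher "simulates" the opponent by calling their strategy,
    # then plays the same symbol (the matcher wins on a match).
    wins = 0
    for r in range(rounds):
        prediction = opponent(r)   # the matcher's attempted simulation
        actual = opponent(r)       # the opponent's real move this round
        wins += (prediction == actual)
    return wins / rounds

print("vs deterministic player:", matcher_win_rate(deterministic_player))  # 1.0
print("vs RNG player:          ", matcher_win_rate(rng_player))            # ~0.5

Any strategy the matcher can reproduce gets fully exploited; only the
unsimulatable one holds the matcher to the 50/50 mixed-strategy value.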
Steve
On Jun 26, 2018 1:47 AM, "Giacomo Spigler via AGI"
wrote:
>
> That's an interesting point, however:
>
> 1) it wouldn't be a closed
>
>
>
> On 06/22/18 19:02, Steve Richfield via AGI wrote:
> > Logan,
> >
> > What do I use to open a .tex file?
>
> any text file editor should do it.
>
> >
> > It is (nearly) impossible to optimize anything without criteria
>
, 2018 Logan Streondj via AGI
wrote:
> oh i thought you meant a reference culture of independent AGI civilization.
>
> i have a story about some robots on Venus for instance. see attached.
>
>
> On June 22, 2018 4:00:41 PM EDT, Steve Richfield via AGI <
> agi@agi.topicbox.com> wrote:
> me me me *raises hand*
>
>
> On June 13, 2018 8:14:07 PM EDT, Steve Richfield via AGI <
> agi@agi.topicbox.com> wrote:
>>
>> Until now, ALL sci fi, ALL philosophy, etc., have been "relative" - as
>> seen from another culture. I have been working
intelligence
>> over time) goes to infinity. That can't happen in a universe with finite
>> computing power and finite memory. Or by singularity do you mean when AI
>> makes humans irrelevant or extinct?
>>
>> On Thu, Jun 14, 2018, 5:56 PM Steve Richfield via
Matt,
My own view is that a human-based singularity is MUCH closer. The problem
is NOT a shortage of GFLOPS or suitable software, but rather, a repairable
problem in our wetware. Sure, a silicon solution might eventually be
faster, but why simply wait until then?
Apparently, I failed to
Until now, ALL sci fi, ALL philosophy, etc., have been "relative" - as seen
from another culture. I have been working on describing a "reference"
culture, to be used to understand and improve all past, present, and future
real world cultures, guide legislation to at least not make optimal
behavior
I tried posting on the Singularity forum, but it bounced. What is the story
here?
Steve
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T5ada390c367596a4-Mcb29e0ed3d1c9db66a1af505
Art and MP,
My perception of the break from everyone else's shared reality is that Art
is treating his project as an experimental toy, yet is talking about it as
though it were a tested and working product. If Art is to emerge from his
present troll image, he needs to make a choice:
1. Talk