In the space of real-world "problems", I suspect the distribution of
difficulty follows the Zipf function, like pretty much everything else does.
The curious thing about the Zipf function is the structure of its extreme
tail: it is finite, it drops off fast, and it doesn't encompass much of the
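The tail claim is easy to check numerically. A minimal sketch, assuming a simple power-law form P(rank) ∝ 1/rank^s; the exponent s and the cutoff N here are illustrative choices, not values from the message:

```python
# Numeric sketch of the "extreme tail" claim: with a fast-dropping
# Zipf-style power law, almost all probability mass sits in the head.
# The exponent s and cutoff N are illustrative assumptions.
N = 10_000          # number of ranked "problems"
s = 2.0             # power-law exponent (s > 1 gives a thin tail)
weights = [1.0 / rank**s for rank in range(1, N + 1)]
total = sum(weights)

def tail_mass(start_rank):
    """Fraction of total probability at or beyond start_rank."""
    return sum(weights[start_rank - 1:]) / total

print(f"mass in the top 100 ranks: {1 - tail_mass(101):.2%}")
print(f"mass beyond rank 1000:     {tail_mass(1001):.4%}")
```

With s = 2, over 99% of the mass sits in the first hundred ranks; with s closer to 1 the tail fattens considerably, so how little the tail "encompasses" depends on the exponent.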
The Singularity analogy was never intended to imply infinite power. Rather,
it represents a point at which understanding and predictability break down
and become impossible.
On Jun 14, 2018 3:59 PM, "Matt Mahoney via AGI"
wrote:
Vinge: when humans produce superhuman AI then so can it, only faster. A
singularity in mathematics is a point where a function (like intelligence
over time) goes to infinity. That can't happen in a universe with finite
computing power and finite memory. Or by singularity do you mean when AI
makes
Matt,
My own view is that a human-based singularity is MUCH closer. The problem
is NOT a shortage of GFLOPS or suitable software, but rather, a repairable
problem in our wetware. Sure, a silicon solution might eventually be
faster, but why simply wait until then?
Apparently, I failed to
In his book "How to Create a Mind", Kurzweil describes the so-called pattern
recognizer as the central component of his pattern recognition theory of the
mind. I would like to test his theory, because of my background in (try not to
laugh) Hubbard's Dianetics and his theory of mind.
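For anyone wanting to experiment, the core idea is easy to prototype: each recognizer watches for a fixed sequence of lower-level pattern IDs and, on a match, emits its own ID to the level above. A toy sketch (all names and the matching rule are illustrative assumptions, not Kurzweil's actual algorithm):

```python
# Toy sketch of a hierarchy of pattern recognizers: each unit matches a
# sequence of lower-level pattern IDs and fires its own ID upward.
class PatternRecognizer:
    def __init__(self, name, pattern):
        self.name = name        # ID this unit emits when it fires
        self.pattern = pattern  # sequence of lower-level IDs it expects

    def fires(self, inputs):
        """True if the expected pattern occurs contiguously in the inputs."""
        n = len(self.pattern)
        return any(inputs[i:i + n] == self.pattern
                   for i in range(len(inputs) - n + 1))

# Two levels: letter-level units feed a word-level unit.
level1 = [PatternRecognizer("A", ["a"]), PatternRecognizer("P", ["p"]),
          PatternRecognizer("L", ["l"]), PatternRecognizer("E", ["e"])]
level2 = PatternRecognizer("APPLE", ["A", "P", "P", "L", "E"])

fired = [u.name for ch in "apple" for u in level1 if u.fires([ch])]
print(level2.fires(fired))  # prints True
```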
The singularity list (and SL4) died years ago. The singularity has been 30
years away for decades now. I guess we got tired of talking about it.
They've done demos for Intel in the past, IIRC. But the secrecy (and yes, I'm
aware how irritating it is) stems from the fact that they haven't patented it
yet and are afraid of their secrets being stolen. BUT I can show you the
high-level architecture and tell you guys now the core system is basically a
An Open Letter to the AGI list
Matt- firstly, well said.
Thanks for that perspective. To add, I would like to light a candle for
theoretical researchers who are designers, as opposed to researchers who
jump straight into coding modular tests, and/or designers who do
purely-academic
Kimera ... I mean they seem like smart people but the rhetoric
associated with the project is sufficiently overblown to make me not
want to pay attention...
On Thu, Jun 14, 2018 at 3:10 PM, MP via AGI wrote:
Speaking of which, anyone here heard of Kimera Systems and their so-called AGI
Nigel? It seems they're touting a blockchain-powered causal inference engine
as this all-encompassing intelligence system.
I’m still on the fence. I’m pretty "in" with the company as it is, but even the
CEO Mounir