On Thu, Sep 25, 2014 at 7:31 AM, John Rose via AGI <[email protected]> wrote:
> Realistically they would not have identical behavior (as in "philosophical" 
> zombie). IMO much of human behavior originates from the qualia aspect. The 
> behavior in a p-zombie would need to be engineered and constructed 
> differently from a real person.

How would a p-conscious AGI behave differently from a p-zombie AGI?
What test are you using to distinguish them?

> I'm estimating that it might be much more engineering resources and runtime 
> resources to omit p-consciousness.

Experimentally we have a lot of p-zombie AI but not a single instance
of p-conscious AI (or at least none that we would attribute
p-consciousness to because it behaves like a human). So it seems like
a zombie would be easier to build, not harder.

>> BTW, do you agree with my cost estimate?
>> https://docs.google.com/document/d/1Z0kr3XDoM6cr5TgHH0GXQTjyikr7WpCkpWFn9IglW3o
>>
>
> It's an interesting approach. There are many issues, here's a few:
>
> 1) The $70 trillion figure is not cited.

I didn't cite it because it is easy to look up and not controversial.
http://en.wikipedia.org/wiki/Gross_world_product

> 2) DNA upper bound should be more compressible.

It might be. I am simply applying the best known algorithms. Source
code might also be more compressible, which would move the estimate in
the opposite direction.

Another issue is whether to include non-coding DNA, which makes up
97.5% of the human genome. We don't fully understand its purpose.
Some of it provides binding sites for gene regulation. Some of it
might contain genes that could be switched on through evolution,
allowing us to evolve faster. The roundworm C. elegans has 100M base
pairs, 3% of the human genome, but about the same number of genes
(20,000). The smaller genome reduces the cost of cell division, but it
also means that any mutations are much more likely to be non-viable.
Thus, C. elegans is less evolved in spite of its higher replication rate.
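
These figures can be sanity-checked with back-of-envelope arithmetic.
The 3.2e9 base-pair human genome size and the 2 bits per base are my
own assumptions (standard figures, not numbers stated in this thread):

```python
# Back-of-envelope genome figures from the paragraph above.
# Assumptions (mine): human genome ~3.2e9 base pairs, 2 bits per base (A/C/G/T).

HUMAN_BP = 3.2e9        # base pairs in the human genome (approx.)
CELEGANS_BP = 1.0e8     # C. elegans genome, per the text

# C. elegans genome as a fraction of the human genome
fraction = CELEGANS_BP / HUMAN_BP
print(f"C. elegans / human: {fraction:.1%}")   # ~3%, matching the text

# Raw (uncompressed) information content at 2 bits per base pair
raw_bits = HUMAN_BP * 2
print(f"raw human genome: {raw_bits / 8 / 1e6:.0f} MB")  # ~800 MB

# Coding fraction: if 97.5% is non-coding, only 2.5% codes for proteins
coding_bits = raw_bits * 0.025
print(f"coding portion: {coding_bits / 8 / 1e6:.0f} MB")  # ~20 MB
```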

> 3) Equating lines of genome code to software code and pricing it is 
> ridiculous.

Not if you understand information theory. However, one caution in my
$100 per line estimate is that DNA was not written using good software
engineering practices. It is not commented or documented, obviously.
We have to figure it out by doing experiments. It is sort of like
reverse engineering Windows when all you have is object code.

But it doesn't really matter because $30 billion is such an
insignificant fraction of the total cost that we can just ignore it.
We will likely spend much more than that to trade off hardware and
data collection costs.
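
For scale, a rough sketch of what these figures imply (the
line-equivalent count is my own derived number, not one taken from the
document):

```python
# Sanity check on the figures in this thread. The implied line count is
# my arithmetic (an assumption), not a number stated in the document.

COST_PER_LINE = 100          # dollars, the $100/line estimate defended above
GENOME_COST = 30e9           # dollars, the $30 billion figure

lines = GENOME_COST / COST_PER_LINE
print(f"implied line-equivalents: {lines:.0e}")   # 3e+08

# Against the $70 trillion gross world product cited earlier,
# $30 billion is a rounding error:
GWP = 70e12
print(f"fraction of one year's GWP: {GENOME_COST / GWP:.2%}")  # 0.04%
```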

> 4) The Landauer estimate needs to be updated, somehow.

Landauer was only able to measure long-term memory capacity for words,
pictures, and music, but these are all pretty consistent at a few bits
per second. We still need experiments to measure the other kinds of
memory involved in perception and motor skills. His estimate of 10^9
bits is at odds with the number of synapses (10^14), which according
to the Hopfield model should store 0.15 bits per synapse. But massive
duplication is not unusual in massively parallel systems. Google and
Facebook have server farms with millions of processors, each with an
identical copy of the Linux kernel loaded into memory. Your body has
10^13 identical copies of your DNA.
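
Working out the implied duplication factor from the numbers above (the
15,000x result is my arithmetic, not a figure claimed in the thread):

```python
# Reconciling Landauer's ~1e9-bit estimate with synapse counts.
# If 1e14 synapses store 0.15 bits each (Hopfield model), the raw capacity
# far exceeds the measured unique content -- implying heavy redundancy.

SYNAPSES = 1e14
BITS_PER_SYNAPSE = 0.15       # Hopfield associative-memory capacity
LANDAUER_BITS = 1e9           # Landauer's long-term memory estimate

raw_capacity = SYNAPSES * BITS_PER_SYNAPSE
redundancy = raw_capacity / LANDAUER_BITS
print(f"raw capacity: {raw_capacity:.1e} bits")          # 1.5e+13
print(f"implied duplication factor: {redundancy:.0f}x")  # 15000x
```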

> 5) Knowledge of every employee needn't be recreated there is huge redundancy.

I am assuming the redundancy is 99%, i.e. 10^10 people x 10^9 bits x
0.01 unique knowledge = 10^17 bits that we need to collect from human
minds through speech and writing. This will be the dominant cost once
Moore's Law makes the hardware cheap enough in the 2040s. How do we
know 99%? The U.S. Labor Department estimates it costs $15K on average
to replace an employee. That is 1% of lifetime earnings. I suspect the
fraction of what you know that isn't written down is actually higher
than 1%, but it is hard to measure. We probably don't need all of it
either, at least not to automate labor at the current level of
productivity.
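
The arithmetic above, spelled out. The $1.5M lifetime-earnings figure
is what the 1% claim implies, an assumption on my part rather than a
number stated in the thread:

```python
# The 1e17-bit knowledge-collection estimate, step by step.

PEOPLE = 1e10                 # rough future world population
BITS_PER_PERSON = 1e9         # Landauer's long-term memory estimate
UNIQUE_FRACTION = 0.01        # 99% redundancy assumed

unique_bits = PEOPLE * BITS_PER_PERSON * UNIQUE_FRACTION
print(f"bits to collect: {unique_bits:.0e}")        # 1e+17

# The 1% figure comes from replacement cost vs. lifetime earnings.
# Lifetime earnings of $1.5M is assumed here (it is what the 1% claim implies).
REPLACEMENT_COST = 15e3       # U.S. Labor Dept. estimate per employee
LIFETIME_EARNINGS = 1.5e6
print(f"replacement / lifetime: {REPLACEMENT_COST / LIFETIME_EARNINGS:.0%}")  # 1%
```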

> 6)....
>
> This financial estimate is highly skewed to the upside; I can't even estimate 
> how much it's off...

That's exactly the problem. Nobody can and nobody does. We think that
we can just sit down at our computer for a few months and write an
AGI. Never mind the thousands of past failures. We are implicitly
underestimating the cost by 10 orders of magnitude without realizing
we are doing it. We have a really bad track record of estimating
unknown costs. We originally thought the space station would cost $1
billion.

What do you think AGI will cost? How do you know? When I ask the
question I always get "I guess it will cost X, because I can only
afford X".

> Don't make the mistake that just because something is dark and mysterious, 
> like p-consciousness is, that you can't feasibly build it.

I don't think it is mysterious. If something acts like a human, we
will assume it is p-conscious. That's all we need to worry about. I
already explained why we evolved to believe in p-consciousness. It's
not hard to program the same belief into an AI once you understand
where the belief comes from. We only need to do this because it
wouldn't behave like a human if it didn't claim to believe it was
p-conscious.

> And don't assume that building a functional replica is more difficult to 
> build than the original, as in p-consciousness. A realistic p-zombie-like 
> estimation of behavior would not be a functional replica, it would be a 
> simulation.

My approach is described in http://mattmahoney.net/agi2.html
We need an economic model that rewards collection and publication of
human knowledge on a global scale. There is no feasible way to do this
without decades of global effort. The work and the ownership needs to
be distributed. The approach is not to build human minds, because
humans make lousy workers. The approach is to build billions of
specialists and a network that routes messages to the right experts
competing for attention and reputation in a hostile environment. I
wrote this proposal in 2008, before social networking became popular,
and we are already seeing this type of interaction between websites,
except using more complex and ad hoc protocols.

-- 
-- Matt Mahoney, [email protected]


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now