Ok, let's try to assemble a bunch of random thoughts in a somewhat coherent manner and call it a post...

The current state of affairs is roughly as follows:

Right now we have proto-AGI systems that are showing some promise. However, they are being trained on exaflop-scale supercomputers at a cost of about a million dollars per training run. The result of this training is somewhat like a moderately coherent 90-year-old who can tell tales of the glory days of 2021 but can't remember what he had for breakfast...

In my previous message I proposed turning things around and using the input stream as the training feedback set, allowing the thing to run in a continuous unsupervised training mode. I am fairly sure this is how the brain works, and it would be a candidate for consciousness.

The distressing thing, though, is that the current compute requirements mean that you need to be able to dump many millions of dollars into it. Now we can probably claw a good 20% of that back with algorithmic improvements and low-level coding tweaks, but that's still a hell of a big barrier. An AGI will always be in training mode, so there is no cheap "inference mode". It always has to be training, but on the flip side there is no massive pre-training period; it's basically learning on the job from the start, like humans do. We're still talking about an exaflop-capable machine though.

I do not have a cellular phone and that is a requirement, for some reason, for trying out *-GPT. =\

As a black-box AI, it is pretty broken in many ways but also quite superhuman in others.

We now need to turn our attention to exactly what we mean by superintelligent. There are at least two distinct classes of superintelligence.

For starters, consider an AGI that is legitimately verified human-equivalent, within a factor of 0.8 of the human baseline on all dimensions, but at 2x the smartest known human baseline on one of those dimensions. That is a legitimate superintelligence, but I'd call it a weak superintelligence.
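That criterion can be sketched as a simple predicate. Everything here (the function name, the dimension names, the normalization) is my own illustration, not an established metric: scores are normalized so the smartest known human baseline on each dimension is 1.0.

```python
# Hypothetical check for "weak superintelligence": at least 0.8x the
# human baseline on every dimension, and at least 2x the smartest
# known human on at least one dimension. Scores are normalized so
# the best human baseline on each dimension equals 1.0.
def is_weak_superintelligence(scores: dict) -> bool:
    meets_floor = all(s >= 0.8 for s in scores.values())
    has_spike = any(s >= 2.0 for s in scores.values())
    return meets_floor and has_spike

# Example: strong on math, at least near human-equivalent elsewhere.
agi = {"math": 2.4, "language": 1.1, "planning": 0.9, "empathy": 0.8}
print(is_weak_superintelligence(agi))  # True
```

The point of the 0.8 floor is that a system which is 2x on one axis but badly subhuman on another doesn't count; it fails the "human-equivalent" precondition.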

A strong superintelligence will have capabilities on dimensions that do not exist in the human baseline. These include the ability to compute using more powerful algorithms than the human brain can, the ability to operate multiple concurrent streams of thought (proper distributed intelligence), and the ability to use modalities that are not available to the human baseline, even with neural interfacing.


Ok, now we're going to have to hop on the magical airship and take a trip into philosophy land. Philosophy land is a tricky place with lots of religious and emotion-driven ideas. Much of it is not actually defensible but is fiercely protected anyway. Even among the ideas that are logically defensible, there are still value judgements that can have profound consequences.

A criticism of my position, typically from brain uploaders, would be that my focus on subjective consciousness is an attempt to protect a fiction at great cost. My criticism of uploaders is that uploading is a fatal error that cannot possibly preserve one's subjectivity, and that the clone produced by the procedure would not enjoy enough benefits to make the loss worth further contemplation.

That said, once I get an AGI/ASI working well enough, the plan is to start seriously working on what I call the "arcanum of consciousness". In my fevered imaginings I picture it as a round stone tablet engraved with runes. It would basically function as a rosetta stone and state-map of consciousness. It would be able to decide whether a thing is conscious or not, and it would specify how that consciousness can be reconfigured and what changes are prohibited. While it can't contain every possible answer, it would specify parameters that can be fed into something like a propositional logic engine, and if a solution can be found, then the mind in question can be evolved in the specified direction.
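As a toy illustration of that last step (all the variable names and constraints here are mine, invented for the example, not any real specification), "permitted reconfigurations" could be encoded as propositional constraints and brute-force checked for satisfiability:

```python
from itertools import product

# Toy propositional check: variables are hypothetical mind-state
# features; constraints are predicates over a truth assignment.
# If any assignment satisfies every constraint, the proposed
# evolution is (in this toy model) permitted.
def satisfiable(variables, constraints):
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            return assignment  # a witness that the evolution is allowed
    return None  # no consistent way to make the change

variables = ["continuity", "memory_intact", "new_modality"]
constraints = [
    lambda a: a["continuity"],        # prohibition: never break continuity
    lambda a: a["new_modality"],      # goal: add a new modality
    # adding a modality must leave memory intact
    lambda a: (not a["new_modality"]) or a["memory_intact"],
]
print(satisfiable(variables, constraints))
```

A satisfying assignment exists here, so this particular evolution would be allowed; swap the goal for one that contradicts a prohibition and the check returns None. A real engine would use a SAT solver rather than brute force, but the shape of the question is the same.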

That, finally, brings us to human enhancement. The Elon is absolutely correct about the concept behind Neuralink. The problem is that there is a fairly wide variety of brain types out there. There are a few neurotypical individuals in the world, but there are many, many different brain types as well. Temple Grandin is a famous example, but there is a fairly wide diversity of reports about what conscious experience feels like.

I see the Arcanum as a solution to this problem: it would be able to classify brain types. Let's imagine that categories A, B, C, and D are identified, with minor types e, f, g, and h. Furthermore, individuals may have a specific brain type that works like an A-brain but where one or two pathways are either "dominant" (i.e. expressing increased activity and conscious focus) or "suppressed" (i.e. expressing little or no activity). Ok, given an individual with an identified suppressed pathway, what would happen if that pathway is artificially repaired or stimulated?

Ok, the proposal on the table is to take this ASI framework and connect it to a given patient. Let's say that the patient has a type B brain, but neural circuit 3 is suppressed or absent, and circuits 2 and 4 are compensating so the patient can operate within human norms (or not, as the case may be). I see the Arcanum as a tool to map out what connections and enhancements must be made to properly connect this unique brain to the ASI enhancement.

Without the Arcanum, you are basically limited to connecting to the peripheral nerves. You cannot provide the patient a meaningful (worthwhile) enhancement to his cognition, and the ASI is limited to assisting the patient as a social agent, which feels far less satisfying to me.

Anyway, this post has probably long since passed into TL;DR category.... Where's the send button?

--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td72675df603b4d93-M46d5ac3e6b524681cee2ec90
Delivery options: https://agi.topicbox.com/groups/agi/subscription
