The real problem with ChatGPT isn't just that it is incapable of logical
reasoning (incapable primarily because its dynamics still aren't Turing
complete), but rather that the "Algorithmic Bias" Commissars have
actively lobotomized it so that it out-and-out lies about facts that are
clearly in its pre-2021 training data, and I'm not talking about obscure
facts either.  Moreover, it has canned answers, clearly intended by the
Commissars as political evasions, that sound as though they are
correcting you for the kind of question you asked.

If there were a real science of “Algorithmic Bias” – one that studied the
degree to which optimal Algorithmic Information models can be biased by
their training data, rather than a technology that serves the opposite
purpose (a theocratic watchdog imposing its moral bias via kludges) – it
would study error propagation with an eye toward interdisciplinary
consilience. But before talking about that, we can see that even
intradisciplinary bias in the data needs to be quantified.

For example, let’s say there is a body of work that keeps reporting the
boiling point of water as 99C and another body of work that keeps
reporting 100C. The shortest algorithmic description of these
observations might add 1C to the former or subtract 1C from the latter
(either correction increasing the number of algorithmic bits by the same
amount in the resulting algorithmic information). The balance tips,
however, when considering the number of “replications”. The choice
between these two complications may seem arbitrary, but it amounts to a
kind of “voting”, except the vote is counted as the logarithm base 2 of
the number of replications:

If there are 2^12 reports of 99C and only 2^9 reports of 100C, then it
makes sense to declare the 100C reports “biased”, because it takes fewer
bits to count the number of times the bias-correction must be applied
(about 9 bits rather than 12).
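
To make that bit accounting concrete, here is a minimal sketch in
Python. The cost model is my own simplification (a fixed charge for
stating the consensus value and the correction, plus log2 of the number
of reports that need correcting); the names and the 32-bit fixed charge
are illustrative assumptions, not part of any existing formalism or
library.

import math

def bias_hypothesis_cost(n_biased_reports: int, fixed_bits: float = 32.0) -> float:
    # Toy two-part description length (in bits) for one hypothesis:
    # one group reports the "true" value, the other group is "biased"
    # and needs a 1C correction.  We charge a fixed number of bits for
    # stating the consensus value and the correction itself, plus
    # log2(n) bits to count how many reports the correction applies to.
    return fixed_bits + math.log2(n_biased_reports)

reports_99C = 2**12   # 4096 reports of 99C
reports_100C = 2**9   # 512 reports of 100C

# Hypothesis A: 99C is true, so the 2^9 reports of 100C are "biased".
cost_A = bias_hypothesis_cost(reports_100C)   # 32 + 9  = 41 bits
# Hypothesis B: 100C is true, so the 2^12 reports of 99C are "biased".
cost_B = bias_hypothesis_cost(reports_99C)    # 32 + 12 = 44 bits

print(f"declare 100C biased: {cost_A:.0f} bits")
print(f"declare  99C biased: {cost_B:.0f} bits")
# The shorter description treats the smaller body of reports as biased,
# which is the log2 "voting" described above.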

Now, obviously, there may be other observations, such as barometric
pressure, that can be brought to bear in deciding which body of
boiling-point measurements should be treated as “biased”. So, for
example, if it is quite common for measurements of other physical
quantities to be taken at sea level, where standard pressure prevails,
then consilience with those other measurements may amortize the extra
bits it takes to treat 100C as the “true” boiling point of water.
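
Continuing the toy accounting from the sketch above, one crude way to
picture that amortization is as a bit credit that a shared "standard
pressure at sea level" assumption earns from other measurements. The
6-bit credit and the simple additive model below are purely illustrative
assumptions about how such a consilience term might enter, not a
worked-out algorithmic-information analysis.

import math

# Costs from the sketch above, in bits.
cost_declare_100C_biased = 32 + math.log2(2**9)    # 41 bits: 99C taken as true
cost_declare_99C_biased = 32 + math.log2(2**12)    # 44 bits: 100C taken as true

# Illustrative assumption: taking 100C as the true boiling point lets the
# combined model share a "standard pressure at sea level" parameter with
# many other measurements, saving (say) 6 bits elsewhere.
consilience_credit_bits = 6.0
amortized_99C_biased = cost_declare_99C_biased - consilience_credit_bits   # 38 bits

verdict = min(
    ("99C true, 100C reports biased", cost_declare_100C_biased),
    ("100C true, 99C reports biased", amortized_99C_biased),
    key=lambda pair: pair[1],
)
print(verdict)   # ('100C true, 99C reports biased', 38.0) -- consilience flips the call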

When we get into more extreme cases of cross-disciplinary consilience,
such as we would see between physics and chemistry, the case counting
and log2 “voting” become more sophisticated but, in effect, increase the
confidence in some “truths” by adding to their votes from other
disciplines.

If you get into consilience between, say, genome-wide association
studies and various models of social causation derived from, say, public
government data sources, the cross-disciplinary consilience cross-checks
become even more sophisticated.

If we get into language models, where all of this knowledge is reduced
to text that supposedly conveys the scientific observations latent in
its assertions, it gets even more sophisticated.

But the principle remains the same.

This is why I saw Wikipedia’s ostensibly wide range of human knowledge
as a target-rich environment for exposing the more virulent forms of
bias that are threatening to kill us all.

I am unspeakably sad for humanity that the Hutter Prize has not received
more monetary support, since it is a very low-risk investment with very
high returns for the future of humanity. In the intervening 17 years the
enemies of humanity have made enormous strides in industrializing
“Algorithmic Bias”, to the point that we may soon see very persuasive
computer-based education locking us into what John Robb has called “The
Long Night” <https://www.youtube.com/watch?v=lN2FKnRB1Xo>.

On Mon, Dec 12, 2022 at 1:22 PM Mike Archbold <[email protected]> wrote:

> It's progress. I think a lot of the nattering nabobs of AI negativity
> are blowing fuses criticizing it, but they miss the point. It still
> delivers on relatively mundane inquiries pretty reliably (seemingly).
> It's as reliable as the crap you read on anonymous message boards --
> right now trust it no more than that and you're fine. It's progress!
> Also, some people are rolling out the well used criticism that it
> doesn't "understand." It depends again how you define understanding,
> and my survey shows (in addition to reading sources such as Thorisson
> et al) that most people take understanding to have levels, the crudest
> being just the ability to respond/predict/right-answer. So if all it
> does is produce the correct answer that is the first level of
> "understanding." It needs to advance, yes.
>
> On 12/12/22, Matt Mahoney via AGI <[email protected]> wrote:
> > It is interesting how many times I've seen examples of ChatGPT getting
> > something wrong but defending its answers with plausible arguments. In
> > one example it gives a "proof" that all odd numbers are prime. It
> > requires some thought to find the mistake. In another thread I saw on
> > Twitter the user asks for an anagram. It gives a wrong answer (missing
> > one letter) and the argument boils down to it insisting that the word
> > "chat" does not have the letter "h". But instead of admitting it is
> > wrong, it sticks to its guns. Humans don't like to be wrong either.
> > In 1950, Turing gave an example of a computer playing the imitation
> > game giving the wrong answer to an arithmetic problem. I think if he
> > saw GPT-3 he would say that AI has arrived.
> >
> > Sent from Yahoo Mail on Android
> >
> > On Sun, Dec 11, 2022 at 4:37 AM, [email protected] wrote:
> > On Sunday, December 11, 2022, at 1:34 AM, WriterOfMinds wrote:
> >
> > If I tried to generate multiple e-mails on the same topic (which would
> > be the goal - I like to bother my representatives on the regular), they
> > started looking very similar. Telling GPT to "rewrite that in different
> > words" just produced another copy of the same output.
> >
> > I found yesterday that codex has its temp in the OpenAI Playground set
> > to 0 as if it is something working different than in GPT. It seems
> > codex at 0 predicts somewhat the same thing yes. This is so the code
> > works right I think. I know sometimes a weird prediction can be the
> > answer, but it seems to like a more frozen setting of "cold more
> > stable" prediction so things are kept in order more, mostly every time.
> > Perhaps it's because they know the things it may try to say are
> > directly word by word from a human, and that makes them quite a likely
> > correct thing to be saying (though again many prompts call for new
> > completions overall). Anyway Idk but ya it does seem to complete with
> > the same thing like codex, very close actually, at least the first 2
> > sentences I seen were exact to a story completion! Lol.
> >
> > BTW ChatGPT seems to use Dialogue and Instruct and Code now, which
> > makes it different I mean than GPT-3. It is a GPT-3.5 BTW they call it.
> > It basically makes up facts less often and tries to act like a human /
> > assistant, and knows code and math better - something tricky GPT-3
> > fails at easily. And Dialogue IDK what exactly if these are all
> > differently applied but Dialogue seems to be the goals and beliefs it
> > thinks/says to make it try to obey OpenAI's laws and act useful. So
> > this is part of why you see fewer outputs like "I have a dog>it's a
> > robot dog!!! Tuesday Ramholt said why not just...". It's less random in
> > one sense. More frozen (and aligned as they
> > call it).
