Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Boom
Indeed, it can. It comes up with fake information, but it is now heavily
moderated to prevent that.

On Mon., Apr. 10, 2023 at 4:33 PM, H L V wrote:

> Can it dream?
> Harry
>
> On Mon, Apr 10, 2023 at 11:49 AM Alain Sepeda 
> wrote:
>
>> There is work on letting LLMs discuss with each other in order to
>> achieve reflection... I've seen a reference to an architecture where two
>> GPT instances talk to each other with different roles, one as a searcher,
>> the other as a critic...
>> Look at this article.
>> LLMs may just be the building blocks of something bigger...
>>
>> https://www.nextbigfuture.com/2023/04/gpt4-with-reflexion-has-a-superior-coding-score.html
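
A minimal Python sketch of the searcher/critic loop described above. The
complete() function is a hypothetical stand-in for any LLM completion API,
and the role prompts are illustrative, not the Reflexion implementation:

def complete(prompt: str) -> str:
    # Hypothetical stand-in for a call to an LLM completion API.
    raise NotImplementedError("wire this to a real LLM API")

def reflective_answer(task: str, rounds: int = 3) -> str:
    # One instance drafts (the "searcher"), the other critiques
    # (the "critic"), and the draft is revised after each critique.
    draft = complete("You are a searcher. Solve this task:\n" + task)
    for _ in range(rounds):
        critique = complete(
            "You are a critic. List the flaws in this answer:\n" + draft)
        draft = complete(
            "Revise the answer to fix the critique.\nTask: " + task
            + "\nAnswer: " + draft + "\nCritique: " + critique)
    return draft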
>>
>> Add to that, they can use external applications (plugins) and talk to
>> generative AIs like DALL-E...
>>
>> Many people say it is not intelligent, but are we? I see AI making
>> mistakes very similar to the ones I make when I'm tired, or when I'm a
>> beginner...
>>
>> The real difference is that today's AIs are not the fruit of Darwinian
>> evolution, with its struggle to survive, dominate, eat or be eaten, so
>> they are less frightening than people or animals.
>> The only serious fear I've heard is that we become so satisfied with
>> those AIs that we delegate our genetic evolution to them and lose our
>> individualistic Darwinian drive to survive, innovate, and seduce a
>> partner, settling into a beehive mentality at the service of the AI
>> system, like worker bees serving a queen... The promoter of that theory
>> estimates it will take a millennium.
>> Anyway, there is no stopping it: if a majority decides to stop
>> developing AI, a minority will develop it in their own service, and China
>> is ready, with great experts and great belief in the future. Only the
>> West is afraid. (There is a paper circulating on that, where fear of AI
>> is linked to GDP per head.)
>>
>>
>> On Mon., Apr. 10, 2023 at 4:47 PM, Jed Rothwell wrote:
>>
>>> I wrote:
>>>
>>>
 Food is contaminated despite our best efforts to prevent that.
 Contamination is a complex process that we do not fully understand or
 control, although of course we know a lot about it. It seems to me that as
 AI becomes more capable it may become easier to understand, and more
 transparent.

>>>
>>> My unfinished thought here is that knowing more about contamination and
>>> seeing more complexity in it has improved our ability to control it.
>>>
>>>
>>> Sean True  wrote:
>>>
>>> I think it’s fair to say no AGI until those are designed in,
 particularly the ability to actually learn from experience.

>>>
>>> Definitely! ChatGPT agrees with you!
>>>
>>

-- 
Daniel Rocha - RJ
danieldi...@gmail.com


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Boom
The most recent versions of Stockfish, the best chess engine, combine
"brute force" (the usual branching search) with a neural network. ChatGPT
4.0 (which is actually quite similar to 3.5) uses plugins to be smarter.
For example, it can invoke Wolfram Alpha if it needs to make calculations.
This modular approach is quickly becoming more common.
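
A crude Python sketch of what such plugin dispatch amounts to. The function
names and the routing prompt are hypothetical illustrations, not OpenAI's
actual plugin protocol:

def call_wolfram_alpha(query: str) -> str:
    # Stand-in for a call to the Wolfram Alpha API.
    raise NotImplementedError("wire this to the real service")

def answer(question: str, llm) -> str:
    # Ask the model itself whether the question needs a calculation tool.
    route = llm("Answer CALC or CHAT only. Does this need math? " + question)
    if route.strip() == "CALC":
        result = call_wolfram_alpha(question)  # delegate the hard math
        return llm("Explain this result to the user: " + result)
    return llm(question)  # plain conversational path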


On Sat., Apr. 8, 2023 at 9:05 PM, Jed Rothwell wrote:

> I wrote:
>
>
>> The methods used to program ChatGPT are light years away from anything
>> like human cognition. As different as what bees do with their brains
>> compared to what we do.
>>
>
> To take another example, the human brain can add 2 + 2 = 4. A computer ALU
> can also do this, in binary arithmetic. The brain and the ALU get the same
> answer, but the methods are COMPLETELY different. Some people claim that
> ChatGPT is somewhat intelligent. Artificially intelligent. For the sake of
> argument, let us say this is a form of intelligence. In that case, it is an
> alien form as different from human intelligence as an ALU. A bee brain is
> probably closer to ours than ChatGPT. It may be that a future AGI, even a
> sentient one, has totally different mechanisms than the human brain. As
> alien as an ALU. In that case, I do not think it will be possible for the
> AGI to actually emulate a human, although it might be able to imitate one,
> the way ChatGPT does. I doubt it will ever be able to feel what it is like
> to be a human. We humans cannot imagine what it feels like to be a bee, or
> even a more intelligent creature such as a bat, because bats have such a
> different way of living, and sensing (echolocation). We do know what it is
> like being a chimpanzee, because we share so much DNA and we have many
> behaviors in common, such as anger, politics, and grieving over dead
> children.
>
>
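
To make the contrast concrete, here is 2 + 2 done the ALU's way, as
explicit gate-level carry logic (a toy Python sketch of a ripple-carry
adder; real hardware does this in silicon, nothing like neurons):

def ripple_add(a: int, b: int, width: int = 4) -> int:
    # Add two integers bit by bit with an explicit carry,
    # the way a ripple-carry adder circuit does.
    result, carry = 0, 0
    for i in range(width):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                    # sum bit (two XOR gates)
        carry = (x & y) | (carry & (x ^ y))  # carry-out logic
        result |= s << i
    return result

ripple_add(2, 2)  # == 4, i.e. 0b010 + 0b010 = 0b100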

-- 
Daniel Rocha - RJ
danieldi...@gmail.com


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Boom
For example, I used ChatGPT to come up with a theory explaining the origin
of eukaryotes. The parts I emphasized are something that ChatGPT came up
with.

In the theory of the origin of eukaryotes, we have discussed how colonies
of prokaryotic cells started transporting vesicles by kinesin, which
crossed cell membranes. This led to the formation of a pseudo-multicellular
organism, where each prokaryotic cell represented an organelle, linked
together by sharing nutrients and other materials through kinesins. This
pseudo-multicellular organism eventually fused, with each cell representing
an organelle, into the first eukaryote.

But how did the nucleus come into existence? In this revised theory, we
will explore the idea that the segregation of genetic information in the
nucleus was a critical factor in the evolution of eukaryotes.

As the pseudo-multicellular organism developed, the exchange of genetic
material became more complex. Kinesins were used to transport vesicles
containing proteins and other cellular components between the different
organelles. **However, this system had limitations in terms of transporting
larger and more complex molecules, such as DNA.**

To overcome this limitation, a new set of proteins evolved known as the
SNAREs and Rabs GTPase. These proteins played a crucial role in the
formation of membrane-bound vesicles, which could transport larger
molecules such as DNA. The SNAREs acted as a ziplock mechanism, bringing
the vesicle and the target organelle membrane close enough for fusion to
occur. On the other hand, the Rabs GTPase acted as a molecular switch,
regulating vesicle trafficking by controlling vesicle formation, transport,
and fusion.

As the complexity of the genome increased, the need for efficient storage
and segregation of genetic material became more critical. **The vesicles
containing DNA eventually fused together, forming a compartment that would
become the nucleus. The SNAREs and Rabs GTPase played a crucial role in the
formation of the nuclear envelope, which encloses the genetic material
within the nucleus.**

The formation of the nucleus allowed for more efficient regulation of gene
expression, leading to increased complexity in cellular processes. The
segregation of genetic material allowed for a higher level of control over
gene expression, as specific genes could be activated or repressed
depending on the cellular requirements.

In conclusion, the evolution of eukaryotes was a complex process that
involved the cooperation of multiple cellular components. The use of
kinesin to transport vesicles and the evolution of SNAREs and Rabs GTPase
played a crucial role in the formation of the first eukaryotes. The
formation of the nucleus was a critical factor in the evolution of
eukaryotes, as it allowed for the segregation and efficient regulation of
genetic material. The evolution of the nucleus was a necessary step in the
development of the complex eukaryotic cell, which has allowed for the
incredible diversity of life on earth.

On Sat., Apr. 8, 2023 at 8:32 PM, Jed Rothwell wrote:

> Robin  wrote:
>
>
>> For example, if asked "Can you pour water into
>> >   a glass made of sugar?", ChatGPT might provide a grammatically
>> correct but
>> >   nonsensical response, whereas a human with common sense would
>> recognize
>> >   that a sugar glass would dissolve in water.
>>
>> so where did it get this example from, or all of these points for
>> that matter?
>>
>
> Probably from a human author writing about AI. I asked it in a follow-up
> question:
>
> ME: So, how would you respond to the question: "Can you pour water into a
> glass made of sugar?"
>
> ChatGPT: As an AI language model, I can provide a grammatically correct
> response based on my training data. However, my response may not reflect
> common sense or practicality in the real world.
>
> If I were to respond based on my training data, I might say something
> like: "Yes, you can pour water into a glass made of sugar, but the glass
> will dissolve." However, it's important to note that this is not a
> practical or sensible thing to do in reality because the sugar glass would
> quickly dissolve and the water would spill out.
>
> It's worth mentioning that humans have common sense and contextual
> knowledge that allow us to understand the impracticality of pouring water
> into a glass made of sugar. This is a good example of a situation where AI
> language models like myself may struggle to understand and respond in a way
> that reflects human common sense and practicality.
>
> This response may reflect the earlier part of the conversation.
>
>
>>
>>
> >   It cannot draw on
>> >   personal experiences, general knowledge, or reasoning to inform its
>> >   responses in the way a human would.
>>
>> General knowledge can be obtained from the Inet. "reasoning" is clearly
>> not true. Without it, one cannot reply to a
>> sentence.
>>
>
> I assume you mean "the Internet." ChatGPT can scrape content from human
> 

Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Boom
It has a very short memory, something like 30 kB. If the conversation
gets a little longer, it starts forgetting stuff, though it more or less
keeps track of the sense of the topic.
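
A quick sanity check of those two figures (the ~4,000-token cap mentioned
elsewhere in this thread and the ~30 kB estimate), just deriving the ratio
they imply:

# Back-of-envelope: bytes per token implied by the figures above.
context_tokens = 4_000                 # free-tier cap cited in this thread
context_bytes = 30 * 1024              # "something like 30 kB"
print(context_bytes / context_tokens)  # ~7.7 bytes per token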

On Sat., Apr. 8, 2023 at 7:50 PM, Robin wrote:

> Hi,
>
> The point I have been trying to make is that if we program something to
> behave like a human, it may end up doing exactly
> that.
>
> Cloud storage:-
>
> Unsafe, Slow, Expensive
>
> ...pick any three.
>
>

-- 
Daniel Rocha - RJ
danieldi...@gmail.com


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Boom
Yes, but have you tried to jailbreak it? That was a condition I told you
about. This type of answer is produced by a moderation bot.

On Sat., Apr. 8, 2023 at 3:40 PM, Jed Rothwell wrote:

> Boom  wrote:
>
>
>> For those who used it in the first few days, when bot moderation was not
>> installed properly, or right now, if it is jailbroken, GPT works just as
>> well as a very smart human. With a few tweaks (like making it use a math
>> AI such as Wolfram Alpha, which surpassed humans decades ago, or a neural
>> network, or OCR scanning), it easily surpasses the vast majority of
>> people. So, why isn't it considered AGI right now?
>>
>
> Because it lacks several important qualities of AGI, especially any
> knowledge of the real world. Let's ask ChatGPT for additional details:
>
> ME: ChatGPT is not considered artificial general intelligence (AGI). What
> qualities of AGI are lacking in ChatGPT?
>
> ChatGPT: ChatGPT, as a language model, has a narrow focus on generating
> human-like text based on input it receives. While it can produce highly
> coherent and contextually relevant responses, it lacks several key
> qualities that are typically associated with artificial general
> intelligence:
>
>1. Lack of common sense: ChatGPT relies on statistical patterns
>learned from large amounts of text data and does not have a common-sense
>understanding of the world. For example, if asked "Can you pour water into
>a glass made of sugar?", ChatGPT might provide a grammatically correct but
>nonsensical response, whereas a human with common sense would recognize
>that a sugar glass would dissolve in water.
>2. Limited understanding of context: Although ChatGPT has the ability
>to generate responses based on context, its understanding of context is
>limited to what it can learn from its training data. It cannot draw on
>personal experiences, general knowledge, or reasoning to inform its
>responses in the way a human would.
>3. Inability to perform physical tasks: ChatGPT is purely a
>software-based system and lacks the ability to interact with the physical
>world. It cannot perform physical tasks like walking, manipulating objects,
>or performing surgery, which are essential for many real-world 
> applications.
>4. Limited creativity: While ChatGPT can generate novel and
>contextually appropriate responses, its ability to create genuinely new and
>original ideas or concepts is limited. It can only draw on patterns and
>associations that it has learned from the training data and cannot think
>beyond that.
>5. Lack of self-awareness: ChatGPT does not have the ability to
>reflect on its own thoughts, actions, or limitations in the way that a
>self-aware human being can. It cannot introspect, learn from its mistakes,
>or engage in critical self-reflection.
>
> Overall, while ChatGPT is an impressive language model, it lacks several
> key qualities that are associated with true artificial general intelligence.
>


-- 
Daniel Rocha - RJ
danieldi...@gmail.com


[Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Boom
For those who used it in the first few days, when bot moderation was not
installed properly, or right now, if it is jailbroken, GPT works just as
well as a very smart human. With a few tweaks (like making it use a math AI
such as Wolfram Alpha, which surpassed humans decades ago, or a neural
network, or OCR scanning), it easily surpasses the vast majority of people.
So, why isn't it considered AGI right now? If you let it hold memory of
conversations with a single person (there is a very small cap of around
4,000 free tokens, which is about 30 kB), it is already supersmart.


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Boom
We must not forget that it is not human intelligence. It requires an
absurdly large amount of data to match what humans achieve with relatively
little input, as with learning languages. On the other hand, it can learn
an arbitrarily large number of languages, given enough memory. It doesn't
have a will of its own; it lives in an eternal slumber unless it is told to
do something, in which case it can keep doing it forever. Also, it has no
idea how to engage in politics of any kind, even at the interpersonal
level. It doesn't have a body. To do something malicious, it must be
provided with examples from someone malicious, and it will act only within
the parameters provided. I don't think it will do anything like the
Colossus computer of the film. Probably, it will at most be like a
supergenius that obeys orders, not unlike the people who took part in the
Manhattan Project.

On Mon., Apr. 3, 2023 at 3:47 PM, Jed Rothwell wrote:

> I wrote:
>
>
>> The human brain has 86 billion neurons, all operating simultaneously. In
>> other words, complete parallel processing with 86 billion "processors"
>> operating simultaneously. ChatGPT tells us she has 175 billion
>> parameters in Version 3. I assume each parameter is roughly equivalent to a
>> neuron.
>>
>
> Wrong! I got that wrong. Each parameter is roughly equivalent to a neuron,
> but human brain neurons have roughly 7,000 synapses:
>
> Each neuron has, on average, about 7,000 synaptic connections with other
> neurons. That puts the synapse count in the neighborhood of 600 trillion.
> In young children, before synaptic pruning begins in earnest, the estimated
> number reaches as high as 1 quadrillion.
>
>
> https://www.jax.org/news-and-insights/jax-blog/2018/december/600-trillion-synapses-and-alzheimers-disease
>
> ChatGPT tells me that each parameter is stored as a 16 or 32-bit
> floating point number. That is roughly equivalent to having 32 synapses per
> neuron. The human neuron can have a much wider range of values. I don't
> know how that affects the model, but to me the human neuron looks more like
> a quantum computer Qbit than a 32-bit floating point number. Presumably,
> brains evolved to the optimum number of neurons and synapses.
>
> So, ChatGPT Ver. 3 was smaller than the human brain in terms of the range
> of values the ANN can hold. Granted that comparing it to a brain is
> comparing apples to oranges, there is probably some advantage to a much
> larger range of values per neuron. A super-brain would need a lot more
> synapse equivalents compared to a human brain, so we have a long way to go
> before reaching something like a million times 1 quadrillion artificial
> synapses. Plus of course it would need programming methods not yet invented.
>
> ChatGPT tells me that with the specialized hardware provided by OpenAI,
> all 175 billion parameters are loaded into RAM at the same time. With
> other configurations they are overlaid. The speed of MPP operation depends
> on whether the parameters are loaded or overlaid, and I think more
> importantly on whether the operation is done with specialized hardware (GPU
> and Google's TPU), or emulated with ordinary computer hardware.
>
> I asked ChatGPT for some of these details. Here is our exchange:
>
>
> ME: You told me that the largest version of GPT-3 had 175 billion
> parameters. Is each parameter stored as 1 byte? Or some other number of
> bits?
>
> ChatGPT: No, each parameter in GPT-3 is not stored as 1 byte. Instead,
> each parameter is typically stored as a 16-bit or 32-bit floating-point
> number, depending on the implementation. This is because floating-point
> numbers can represent a wider range of values and provide greater precision
> than integers.
>
> For example, GPT-3's 175 billion parameters would require around 700 GB of
> memory if each parameter was stored as a 32-bit floating-point number. If
> each parameter was stored as a 16-bit floating-point number, it would
> require around 350 GB of memory.
>
> It's worth noting that the actual memory usage of GPT-3 may be different
> due to various optimizations and compression techniques used by the
> implementation.
>
> ME: With the 32-bit floating point number configuration, I assume all 700
> GB are online in RAM. Not overlaid. Correct?
>
> ChatGPT: No, it's not necessary for all 700 GB of GPT-3's parameters to be
> stored in RAM at the same time. In fact, it's not practical to store such a
> large model entirely in memory on most machines.
>
> Most deep learning frameworks use a technique called "memory-mapping" to
> allow the model parameters to be loaded and unloaded from disk as needed
> during training or inference. Memory-mapping allows the model parameters to
> be accessed as if they were in memory, even though they are stored on disk.
>
> In practice, the memory usage of GPT-3 will depend on the specific
> implementation and hardware configuration. Some implementations may use
> techniques like model 
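
A back-of-envelope check of the figures traded in this exchange, as a
Python sketch (all counts are the ones quoted above, not independent
measurements):

# Figures as quoted in the messages above.
params = 175e9                        # GPT-3 parameter count
print(params * 4 / 1e9, "GB")         # 32-bit floats -> ~700 GB
print(params * 2 / 1e9, "GB")         # 16-bit floats -> ~350 GB

neurons = 86e9                        # human brain neurons
synapses_per_neuron = 7_000           # average, per the JAX blog post
print(neurons * synapses_per_neuron)  # ~6.0e14, i.e. ~600 trillion synapses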

Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Boom
The worst case possible would be like the film Colossus: The Forbin Project
(1970). The AIs would become like gods and we would be their servants. In
exchange, they'd impose something like a Pax Romana by brute force. We'd
have some type of paradise on Earth, with a huge caveat.

On Fri., Mar. 31, 2023 at 2:59 PM, Jed Rothwell wrote:

> Here is another article about this, written by someone who says he is an
> AI expert.
>
> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
>
> QUOTE:
>
> Pausing AI Developments Isn't Enough. We Need to Shut it All Down
>
> An open letter published today calls for “all AI labs to immediately pause
> for at least 6 months the training of AI systems more powerful than GPT-4.”
>
> This 6-month moratorium would be better than no moratorium. I have respect
> for everyone who stepped up and signed it. It’s an improvement on the
> margin. . . .
>
> The key issue is not “human-competitive” intelligence (as the open letter
> puts it); it’s what happens after AI gets to smarter-than-human
> intelligence. Key thresholds there may not be obvious, we definitely can’t
> calculate in advance what happens when, and it currently seems imaginable
> that a research lab would cross critical lines without noticing.
>
> Many researchers steeped in these issues, including myself, expect that
> the most likely result of building a superhumanly smart AI, under anything
> remotely like the current circumstances, is that literally everyone on
> Earth will die. Not as in “maybe possibly some remote chance,” but as in
> “that is the obvious thing that would happen.”
>


-- 
Daniel Rocha - RJ
danieldi...@gmail.com


[Vo]:Breakthrough of the century

2019-11-26 Thread Boom
November 23, 2019 at 9:43 PM


Dear Readers of the JoNP:
We did it.
Obtained permanent self sustaining mode with production of strong excess of
electricity, generating more excess of electricity than of heat.
It is a revolution.
We did not violate unity, we just discovered an energy that had not been
exploited before.
I am very tired.
Independent parties tests will follow, eventually we will make a
presentation.
I think we made something that will make a revolution.
My team colleagues are saying to me ” Andrea, stay calm, be humble”. They
are right. Now I am tired, must reorganize the ideas. The work in these
last 2 weeks has been very hard, but we did it. This morning, late, we got
more electric energy that the electric energy necessary to make the Cat
work. The increase is strong.
Too big to be true, but it is true.
If you are reading this message, means I am not dreaming: our Readers are
independent parties that can convince me I am not sleeping and I am really
writing this.
The merit is of my fantastic Team, without them this could not have been
done.
Warm Regards,
A.R.

-- 
Daniel Rocha - RJ
danieldi...@gmail.com