Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-03 Thread John Rose
On Thursday, May 02, 2024, at 6:03 AM, YKY (Yan King Yin, 甄景贤) wrote:
> It's not easy to prove new theorems in category theory or categorical 
> logic... though one open problem may be the formulation of fuzzy toposes.

Or perhaps a neutrosophic topos; Florentin Smarandache has written much
interesting work in this area.
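
For readers who haven't met the term: in Smarandache's neutrosophic logic,
as I understand the formulation, each proposition is graded by an
independent triple of degrees rather than a single truth value,

    t(\varphi) = (T, I, F), \qquad T, I, F \in [0,1], \qquad 0 \le T + I + F \le 3,

where T, I and F are the degrees of truth, indeterminacy and falsity; a
neutrosophic topos would presumably need a truth-value object of this
shape.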



Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-03 Thread James Bowery
On Thu, May 2, 2024 at 9:56 AM Matt Mahoney  wrote:

> ...
> Prediction measures intelligence. Compression measures prediction.
>

Beautiful Aphorism!

The aphorism captures both of AIXI's components: AIT (algorithmic
information theory: compression) and SDT (sequential decision theory:
prediction).

The only specious quibble left for the anti-intelligence sophists to
exploit about the word "intelligence" (other than the standard go-to cope
of an "arbitrary" UTM choice -- which has now been nuked by NiNOR
complexity) is the unspecified utility function in the Sequential Decision
Theory aspect of AIXI.  Otherwise it is a poetic "compression" of AIXI.
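
For readers who haven't seen it spelled out, AIXI's defining expression
(written roughly, from memory of Hutter's formulation) makes both halves,
and the quibble, visible:

    a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
          \bigl[ r_k + \cdots + r_m \bigr]
          \sum_{q \,:\, U(q,\, a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The 2^{-\ell(q)} mixture over programs q is the AIT/compression half, the
expectimax over actions and percepts is the SDT half, and the rewards r and
horizon m are exactly the utility ingredients that the definition leaves to
be supplied from outside.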



Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-02 Thread Matt Mahoney
Could your ideas be used to improve text compression? Current LLMs are just
predicting text tokens on huge neural networks, but I think any new
theories could be tested on a smaller scale, something like the Hutter
Prize or the Large Text Compression Benchmark. The current leaders are
based on context mixing: combining many different independent predictions
of the next bit or token. Your predictor could be tested either
independently or mixed with existing models to show an incremental
improvement. You don't need to win the prize to show a positive result.
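
For concreteness, here is a minimal sketch of the logistic mixing step that
context-mixing compressors use (my own toy illustration, not code from any
actual PAQ or Hutter Prize entry): each model outputs a probability for the
next bit, and a weighted combination in the logistic domain is updated
online toward whichever models predicted well.

    import math

    def stretch(p):   # logit
        return math.log(p / (1.0 - p))

    def squash(x):    # logistic
        return 1.0 / (1.0 + math.exp(-x))

    class Mixer:
        def __init__(self, n_models, lr=0.02):
            self.w = [0.0] * n_models   # one weight per model
            self.lr = lr
            self.x = [0.0] * n_models

        def mix(self, probs):
            # probs[i] = model i's P(next bit = 1)
            self.x = [stretch(min(max(p, 1e-6), 1 - 1e-6)) for p in probs]
            return squash(sum(w * x for w, x in zip(self.w, self.x)))

        def update(self, p_mix, bit):
            # online gradient step on the coding loss -log p(bit)
            err = bit - p_mix
            self.w = [w + self.lr * err * x for w, x in zip(self.w, self.x)]

A new predictor would just be one more entry in probs, which is how an
incremental improvement can be shown without replacing the existing models.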

The problem with current LLMs is that they require far more training text
than a human, and they require separate training and prediction steps. We
know they are on the right track because they make the same kinds of math
and coding errors as humans, and of course they pass the Turing test and
equivalent academic tests. Can we do this with 1 GB of text and a
corresponding reduction in computation? Any new prediction algorithm would
be a step in this direction.

Yes, it's work. But experimental research always is. The current Hutter
Prize entries are based on decades of research starting with my PAQ-based
compressors.

Prediction measures intelligence. Compression measures prediction.
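
Taken literally: with arithmetic coding, the compressed size of a file
under a model is (to within a few bytes) the model's cumulative log-loss on
that file, so measuring compression is measuring prediction. A tiny sketch,
with a made-up predict() interface of my own:

    import math

    def ideal_compressed_bits(symbols, predict):
        # predict(history) -> dict mapping each possible next symbol to its probability
        total = 0.0
        for i, s in enumerate(symbols):
            p = predict(symbols[:i]).get(s, 1e-12)
            total += -math.log2(p)   # bits an arithmetic coder would spend on s
        return total

    # A uniform model over 256 symbols spends 8 bits per character: no compression.
    uniform = lambda history: {chr(b): 1 / 256 for b in range(256)}
    print(ideal_compressed_bits("abracadabra", uniform) / 8)   # 11.0 bytes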




Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-02 Thread Yan King Yin, 甄景贤
On Thu, May 2, 2024 at 6:02 PM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:

> The basic idea that runs through all this (i.e., the neural-symbolic
> approach) is "inductive bias". It is an important foundational concept
> and may be demonstrable through experiments, some of which have already
> been done (e.g., invariant neural networks).  If you believe it in
> principle, then the approach can accelerate LLMs, which is a
> multi-billion-dollar business now.
>

PS: this is a scientific hypothesis: it is falsifiable and can be proven or
disproven, but it is very costly to prove directly given current resources.
Nevertheless, it can be *indirectly* supported by experiments.



Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-02 Thread Yan King Yin, 甄景贤
On Wed, May 1, 2024 at 10:29 PM Matt Mahoney wrote:

> Where are you submitting the paper? Usually they want an experimental
> results section. A math journal would want a new proof and some motivation
> on why the theorem is important.
>
> You have a lot of ideas on how to apply math to AGI but what empirical
> results do you have that show the ideas would work? Symbolic approaches
> have been a failure for 70 years so I doubt that anything short of a
> demonstration matching LLMs on established benchmarks would be sufficient.
>

It's for AGI-2024 in Seattle.

It's not easy to prove new theorems in category theory or categorical
logic... though one open problem may be the formulation of fuzzy toposes.
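
For readers who haven't met it, the gadget a "fuzzy topos" would have to
generalize is the subobject classifier (this is textbook background plus my
own illustrative gloss, not a result): a topos has a truth-value object
\Omega and a map \mathrm{true} : 1 \to \Omega such that every monomorphism
is classified by a unique characteristic map,

    \forall\, m : A \rightarrowtail X \quad \exists!\, \chi_m : X \to \Omega
    \ \text{ such that } A \text{ is the pullback of } \mathrm{true}
    \text{ along } \chi_m ;
    \qquad \text{in } \mathbf{Set}: \ \Omega = \{0,1\}, \ \chi_m = \mathbf{1}_A .

A fuzzy version would presumably ask for \Omega to behave like the interval
[0,1], so that \chi_m returns a degree of membership instead of a yes/no
answer.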

The "novelty" of my paper, if any, is just to show some connection between
category theory and AGI, which may be obscure to other researchers
unfamiliar with the subject.

The basic idea that runs through all this (i.e., the neural-symbolic
approach) is "inductive bias". It is an important foundational concept and
may be demonstrable through experiments, some of which have already been
done (e.g., invariant neural networks).  If you believe it in principle,
then the approach can accelerate LLMs, which is a multi-billion-dollar
business now.
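
As a concrete illustration of what "inductive bias via invariance" buys (my
own toy example, not taken from the paper): a Deep Sets-style network
f(X) = rho(sum_i phi(x_i)) is permutation-invariant by construction, so
that symmetry never has to be learned from data.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 16))   # phi: per-element encoder
    W2 = rng.normal(size=(16, 1))   # rho: readout after pooling

    def phi(x):                     # x: (n_elements, 4)
        return np.tanh(x @ W1)

    def f(x_set):
        pooled = phi(x_set).sum(axis=0)   # sum pooling ignores element order
        return (pooled @ W2).item()

    x = rng.normal(size=(5, 4))
    assert np.isclose(f(x), f(x[::-1]))   # same output for any ordering of the set

The same output is produced no matter how the elements are ordered, which
is the kind of built-in bias the invariant-neural-network experiments
demonstrate.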



Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-01 Thread Matt Mahoney
Where are you submitting the paper? Usually they want an experimental
results section. A math journal would want a new proof and some motivation
on why the theorem is important.

You have a lot of ideas on how to apply math to AGI, but what empirical
results do you have that show the ideas would work? Symbolic approaches
have been a failure for 70 years, so I doubt that anything short of a
demonstration matching LLMs on established benchmarks would be sufficient.




Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-01 Thread James Bowery
This reminds me of:

   - an argument I had with the authors of the Mercury logic programming
   language about "types" (that they were unnecessary kludges atop
   first-order logic);
   - the claim that Tarski's "model theory" obviates the attempt by Russell
   and Whitehead to develop "relation arithmetic" as a theory of empirical
   structure;
   - Quine dispensing with "names" as mere syntactic sugar within
   first-order logic;
   - Tom Etter's use of relative (Quine) identities to obviate set theory
   within first-order logic;
   - the claim that "category theory develops its own take on first-order
   logic — it would be a wasted effort (and somewhat counter-philosophical)
   to study the subject in the traditional set-oriented version of logic".

Look, I've been looking for the proper foundation for programming languages
all of my professional life, and throughout those decades there has been
this claim that category theory is it -- but it really reminds me of the
way Witten did violence to physics with string theory.




Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-04-30 Thread Yan King Yin, 甄景贤
On Tue, Apr 30, 2024 at 3:35 AM Mike Archbold  wrote:

> It looks tantalizingly interesting, but somewhat more of an intuitive
> narrative would help me, unless you are just aiming at a narrow
> audience.
>

Sorry, that's not usually my style, but I find that my level of math is
also lagging quite a bit behind the category-theory experts 😆
I will write an easier tutorial on this stuff...  most of the material is
already covered in the 1984 book "Topoi" by Robert Goldblatt.
It's really unbelievable (from my perspective) that so much of categorical
logic was already well developed at that time... and I'm
still struggling to understand that book 😆 ... which is not a very
friendly book for beginners.  I doubt there's a good beginners'
introduction to categorical logic... but most importantly, I'd like the
readers to see what this theory may offer to AGI development...

Maths is very fascinating... but it may not be super useful and may even
be quite disappointing...  but it's not useless either...
and it's hard for anyone to judge its potential...  This reminds me of the
invention of back-prop... it was rediscovered a couple of times by
different researchers...  the original formulation required some tedious
derivations...  but some people worked through them anyway...  it was hard
to see the value of a discovery until much later.



Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-04-29 Thread Mike Archbold
I know you have long been interested in fusing logic and neural networks,
which is very interesting and would help transparency a lot if successful.




Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-04-29 Thread Mike Archbold
It looks tantalizingly interesting, but somewhat more of an intuitive
narrative would help me, unless you are just aiming at a narrow audience.




Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-04-28 Thread James Bowery
The boundaryinstitute.org domain name has been taken over but it's archived:

https://web.archive.org/web/20060927064137/http://www.boundaryinstitute.org/articles/Dynamical_Markov.pdf

On Sun, Apr 28, 2024 at 10:00 AM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:

> On Sun, Apr 28, 2024 at 10:34 PM James Bowery  wrote:
>
>> See "Digram Boxes to the Rescue" in:
>>
>> http://www.boundaryinstitute.org/articles/Dynamical_Markov.pd
>> 
>>
>
> link to that article seems broken



Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-04-28 Thread Yan King Yin, 甄景贤
On Sun, Apr 28, 2024 at 10:34 PM James Bowery  wrote:

> See "Digram Boxes to the Rescue" in:
>
> http://www.boundaryinstitute.org/articles/Dynamical_Markov.pd
> 
>

link to that article seems broken



Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-04-28 Thread James Bowery
See "Digram Boxes to the Rescue" in:

http://www.boundaryinstitute.org/articles/Dynamical_Markov.pdf

"Digram box linking, which is based on the mathematics of relations rather
than of functions..."

Set-valued maps strike me as a premature degeneration of relations.  While
I understand the importance of such degeneration in computer systems (since
computers are deterministic state machines, as you obviously recognize) I'm
not so sure it is necessary to abandon relations in your formulation so
early in your project by degenerating them into set-valued maps (i.e.,
set-valued functions).

It's widely recognized that von Neumann screwed up quantum logic, but there
has been little success in reformulating it in such a manner as to permit
information theory to contribute to non-deterministic systems.

Specifically, what I was trying to do by hiring Tom Etter at the HP e-speak
project was to revisit the foundations of programming languages, and more
generally logic programming languages, in terms that would encompass
so-called quantum computing, and more generally quantum logic, via "general
Markov processes" that quite naturally exhibit a two-way flow of
information a la constraint logic programming, where abstract processes
(a la Whitehead) get spawned by virtue of the non-deterministic relations.
Aggregating those *processes* as "set values" is necessary only when
treating them as probability distributions to be sampled.  Of course that
*is* necessary in any deterministic computer system, but one should not get
ahead of oneself in the formulation.
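
To make the distinction concrete, a toy sketch (my own illustration, with
made-up data): a bare relation R ⊆ A×B answers queries in both directions,
while fixing it as a set-valued map commits to one direction up front.

    from collections import defaultdict

    # "taught" as a bare relation: a set of pairs, with no preferred direction
    R = {("socrates", "plato"), ("plato", "aristotle")}

    def as_set_valued_map(rel, forward=True):
        m = defaultdict(set)
        for a, b in rel:
            if forward:
                m[a].add(b)    # A -> P(B): whom did a teach?
            else:
                m[b].add(a)    # B -> P(A): who taught b?
        return m

    taught = as_set_valued_map(R, forward=True)
    taught_by = as_set_valued_map(R, forward=False)
    print(taught["plato"], taught_by["plato"])   # {'aristotle'} {'socrates'}

The constraint-logic-programming point above is essentially that the
two-way object is the primitive one, and each one-way set-valued map is a
derived view of it.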






Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-04-28 Thread Yan King Yin, 甄景贤
On Sun, Apr 28, 2024 at 9:24 PM James Bowery  wrote:

> Correction: not the abstract but just as bad, in the first paragraph.
>

LOL... the figure circulating on the web is $700K; I don't know why I made
that typo.



Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-04-28 Thread James Bowery
Correction: not the abstract but just as bad, in the first paragraph.




Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-04-28 Thread James Bowery
"The daily cost of training GPT-4 was rumored to be $100M by Sam Altman."

That is a reckless statement that unfortunately appears in a position (the
abstract) which derails your thesis from the outset.  I'm not dissuaded
from reading your paper by this, but you can rest assured others will be,
and quite likely among them will be those you would like to reach.

On Sun, Apr 28, 2024 at 7:13 AM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:

> Hi friends,
>
> This is my latest paper.  I have uploaded some minor revisions past the
> official deadline, not sure if they would be considered by the referees 😆
>
> In a sense this paper is still on-going research, inasmuch as AGI is still
> on-going research.  But it won't remain that way for long 😆
>
> I am also starting a DAO to develop and commercialize AGI.  I hope some
> people will start to join it.  Right now I'm alone in this world.  It seems
> that everyone are still uncomfortable with global collaboration (which
> implies competition, that may be the thing that hurts) and they want to
> stay in their old racist mode for a little while longer.
>
> To be able to lie, and force others to accept lies, confers a lot of
> political power.  Our current world order is still based on a lot of lies.
> North Korea doesn't allow their citizens to get on the internet for fear
> they will discover the truth about the outside world.  Lies are intricately
> tied to institutions and people tend to support powerful institutions,
> which is why it is so difficult to break away from old tradition.
>
> --
> YKY
> *"The ultimate goal of mathematics is to eliminate any need for
> intelligent thought"* -- Alfred North Whitehead
>
