Re: Positive AI

2018-01-31 Thread Bruno Marchal

> On 31 Jan 2018, at 03:14, Brent Meeker  wrote:
> 
> 
> 
> On 1/29/2018 2:41 AM, Bruno Marchal wrote:
>>> On 29 Jan 2018, at 01:35, Brent Meeker  wrote:
>>> 
>>> 
>>> 
>>> On 1/28/2018 6:38 AM, Bruno Marchal wrote:
> On 26 Jan 2018, at 02:49, Brent Meeker  wrote:
> 
> 
> On 1/25/2018 4:20 AM, Bruno Marchal wrote:
>> A brain, if this is confirmed, is a consciousness filter. It makes us 
>> less conscious, and even less intelligent, but more efficacious in the 
>> terrestrial plane.
> But a little interference with it and our consciousness is drastically 
> changed...that's how salvia and whiskey work.  A little more interference 
> and consciousness is gone completely (ever had a concussion?).
 You can drastically change the human consciousness, but you cannot 
 drastically change the universal machine consciousness which founds it, any 
 more than you can change the arithmetical relations.
>>> But now you are violating your own empirical inferences that suggested 
>>> identifying human consciousness with the self-referential possibilities
>> 
>> That is weird Brent.
>> 
>> 
>> 
>>> of modal logic applied to computation.
>> 
>> First, the modal logic is not applied to something but extracted from 
>> something. It is just a fact that the modal logics G and G* are sound and 
>> complete for the logic of self-reference of self-referentially correct 
>> machines. It is a theorem in arithmetic: self-referentially correct machines 
>> obey G and G*, and they have the 1p and 3p variants. That applies to all 
>> machines, and to humans insofar as they are self-referentially correct.
> 
> But I see little reason to suppose they are.

If mechanism is true, and if the level of substitution is correct, then as long 
as they apply the classical rule, they will be, by definition.

Then, to derive the correct laws of physics, you have to limit yourself to the 
self-referentially correct machines. It does not matter if we, and they, cannot 
distinguish correct machines from incorrect machines. 

The Universal Dovetailer argument shows that physics is some statistic on the 
first person point of view, so let us examine the logics of those points of 
view, as incompleteness “resurrects” the standard classical definitions.
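
(For readers meeting the term for the first time: “dovetailing” just means 
interleaving the execution of ever more programs, so that every program 
eventually receives arbitrarily many steps. A minimal sketch, assuming we model 
“programs” as Python generators where each yield is one step; this is only a 
toy illustration of the interleaving, not the UD itself.)

def program(n):
    # toy program number n: counts upward forever, one step per yield
    i = 0
    while True:
        yield (n, i)
        i += 1

def dovetail(stages):
    # start one new program per stage, then give every started program one more step
    started = []
    for stage in range(stages):
        started.append(program(stage))
        for p in started:
            print("stage", stage, "->", next(p))

dovetail(4)  # with unbounded stages, every program n would get arbitrarily many steps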





> 
>> That is vindicated both by the similarity of the machine theology with the 
>> talk of the rationalist mystics (Plato’s Parmenides, Moderatus of Gades, 
>> Plotinus, Proclus, etc.) and by the fact that it predicts quantum logics for 
>> quanta (and qualia).
> 
> I don't see that you have even derived the existence of quanta.

An arithmetically complete quantum logic (at the propositional level) is 
provided.

The degree of departure between that machine’s observable logic and the 
physical experiences will provide a way to evaluate the following disjunction: 
computationalism (precisely YD+CT) is wrong or we belong to a “bostromian”-like 
malevolent (second order) emulation. 




> 
>> 
>> Then, humans have a richer content of consciousness than, say, a subroutine 
>> in my laptop, and the richness can be handled with Bennett’s notion of 
>> depth, which we have, but my laptop lacks.
>> 
>> I do not see the violation that you are talking about. Consciousness is the 
>> same for all entities, in all states, but it can have different content, 
>> intensities, depth, etc.
> 
> But that's simply your assertion.

Not at all. It is the statement of all the reasoners of type 4, well motivated 
by Smullyan in "Forever Undecided”; they become Löbian when they visit Löb’s 
Island, and here is what happens: all machines which talk correctly about 
themselves get the same logic. A famous lemma of Gödel, called the Diagonal 
Lemma, makes this mandatory for all machines. 
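
(For reference, the Diagonal Lemma in its standard textbook form, stated here in 
my own words: for every arithmetical formula F(x) with one free variable there 
is a sentence S such that

  PA proves  S <-> F(“S”),

where “S” denotes the Gödel number of S. Applied to the provability predicate, 
it yields the self-referential fixed points that force every such machine into 
the same logic.)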

Then what could it mean that they are different? Once we agree that the content 
and even something like a “volume” or “intensity” can be different, what would 
it mean to say that the consciousness of worms, bats, humans, machines, aliens, 
angels, gods and God differs, assuming those things/persons are conscious?



>   I can see there is reason to believe that all computation is the same 
> (Church-Turing thesis) and that consciousness is some kind of computation. 

…consciousness is related to some computations. Consciousness is not a kind of 
computation, unless you meant consciousness is some mode of self-observation 
related to relevant computation/semi-computable number relations. 

If you are OK with Church’s thesis, instead of choosing the formalism of 
Turing, or Church, or Post, Markov, Curry, etc., I can choose elementary 
arithmetic as the primitive base, and I can define a computation by a 
semi-computable relation, which can be proved equivalent to the sigma_1 
relations (to be short and to avoid technical nuances).
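
(To make the equivalence concrete, stated in standard terms rather than as a 
quotation: a relation R(x) is semi-computable exactly when it is sigma_1, i.e. 
when it can be written as  exists y phi(x, y)  with phi containing only bounded 
quantifiers. In particular, by Kleene’s normal form theorem, “machine e halts 
on input x” takes the sigma_1 form  exists y T(e, x, y)  with T a decidable 
predicate, which is why elementary arithmetic suffices as the primitive base.)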

Then the measure one, well captured “modally” by the []p & <>t intensional 
variant of []p, limited to 

Re: Positive AI

2018-01-30 Thread Brent Meeker



On 1/29/2018 2:41 AM, Bruno Marchal wrote:

On 29 Jan 2018, at 01:35, Brent Meeker  wrote:



On 1/28/2018 6:38 AM, Bruno Marchal wrote:

On 26 Jan 2018, at 02:49, Brent Meeker  wrote:


On 1/25/2018 4:20 AM, Bruno Marchal wrote:

A brain, if this is confirmed, is a consciousness filter. It makes us less 
conscious, and even less intelligent, but more efficacious in the terrestrial 
plane.

But a little interference with it and our consciousness is drastically 
changed...that's how salvia and whiskey work.  A little more interference and 
consciousness is gone completely (ever had a concussion?).

You can drastically change the human consciousness, but you cannot drastically 
change the universal machine consciousness which founds it, any more than you 
can change the arithmetical relations.

But now you are violating your own empirical inferences that suggested 
identifying human consciousness with the self-referential possibilities


That is weird Brent.




of modal logic applied to computation.


First, the modal logic is not applied to something but extracted from 
something. It is just a fact that the modal logics G and G* are sound and 
complete for the logic of self-reference of self-referentially correct machines. 
It is a theorem in arithmetic: self-referentially correct machines obey G 
and G*, and they have the 1p and 3p variants. That applies to all machines, and 
to humans insofar as they are self-referentially correct.


But I see little reason to suppose they are.


That is vindicated both by the similarity of the machine theology with the talk 
of the rationalist mystics (Plato’s Parmenides, Moderatus of Gades, Plotinus, 
Proclus, etc.) and by the fact that it predicts quantum logics for quanta (and 
qualia).


I don't see that you have even derived the existence of quanta.



Then, humans have a richer content of consciousness than, say, a subroutine in 
my laptop, and the richness can be handled with Bennett’s notion of depth, 
which we have, but my laptop lacks.

I do not see the violation that you are talking about. Consciousness is the 
same for all entities, in all states, but it can have different content, 
intensities, depth, etc.


But that's simply your assertion.  I can see there is reason to believe 
that all computation is the same (Church-Turing thesis) and that 
consciousness is some kind of computation.  But beyond that, human 
consciousness seems quite different from the computations you invoke by 
"interviewing" an ideal machine.  We start from intuition and 
observation of our own thoughts.  When we follow your ideas we end 
up saying that consciousness is an absolutely blank state with no 
relation to anything.  That's called a reductio ad absurdum.


Brent







You want to maintain the assumption that your theory is true now by saying it 
is no longer a theory of human consciousness

From the start it is the consciousness of the Löbian (self-referentially 
correct) entities. Only recently have I realised that non-Löbian universal 
machines are plausibly already conscious, and maximally conscious and clever. 
The human depth makes human consciousness more particular, better suited for 
survival, but further from the prejudice-free consciousness of the 
non-Löbian entities.




but a theory of some arithmetical "consciousness" which is clearly different 
from human consciousness and only has some superficial similarities.

I have no clue what you are talking about. The “theology of number” or “the 
theology of the machine”, that is, the G/G* + 1p-variants theology, is common to 
all machines/numbers, and to humans in particular. The only thing which differs 
among the theologies, when we go from a simple Löbian machine like Peano 
arithmetic to an ideal correct human, is the arithmetical interpretation of 
the box, that is, the box I use all the time (the beweisbar arithmetical 
predicate of Gödel):

p
[]p
[]p & p
[]p & <>t
[]p & <>t & p

For PA, “[]” can be defined in a few pages. For a human, the box “[]” would 
need a description of a human brain at the correct substitution level, which 
means a lot of pages.


And not only that, it would be different pages for different persons.  
But isn't that because the human being has many limitations which do not 
follow from a few axioms?




But the theology, at the propositional level, is exactly the same for us and 
PA, but not for RA and non-Löbian universal machines (correct, just not yet 
Löbian). I recall that Löbianity is a consequence of having rich enough 
induction axioms. RA + sigma_0 induction is not yet Löbian, for example, but RA 
+ the exponentiation axiom + sigma_0 induction is already Löbian (the weakest 
one known in the literature).

I extract physics from the theology of all machines.

No, you aspire to do so.


If my goal was human consciousness, I would not study physics nor use it to 
test computationalism. Are you telling me that all along, since 20 years 

Re: Positive AI

2018-01-29 Thread Bruno Marchal

> On 29 Jan 2018, at 01:35, Brent Meeker  wrote:
> 
> 
> 
> On 1/28/2018 6:38 AM, Bruno Marchal wrote:
>>> On 26 Jan 2018, at 02:49, Brent Meeker  wrote:
>>> 
>>> 
>>> On 1/25/2018 4:20 AM, Bruno Marchal wrote:
 A brain, if this is confirmed, is a consciousness filter. It makes us less 
 conscious, and even less intelligent, but more efficacious in the 
 terrestrial plane.
>>> But a little interference with it and our consciousness is drastically 
>>> changed...that's how salvia and whiskey work.  A little more interference 
>>> and consciousness is gone completely (ever had a concussion?).
>> 
>> You can drastically change the human consciousness, but you cannot 
>> drastically change the universal machine consciousness which founds it, any 
>> more than you can change the arithmetical relations.
> 
> But now you are violating your own empirical inferences that suggested 
> identifying human consciousness with the self-referential possibilities


That is weird Brent. 



> of modal logic applied to computation. 


First, the modal logic is not applied to something but extracted from 
something. It is just a fact that the modal logics G and G* are sound and 
complete for the logic of self-reference of self-referentially correct machines. 
It is a theorem in arithmetic: self-referentially correct machines obey G 
and G*, and they have the 1p and 3p variants. That applies to all machines, and 
to humans insofar as they are self-referentially correct. That is vindicated both 
by the similarity of the machine theology with the talk of the rationalist 
mystics (Plato’s Parmenides, Moderatus of Gades, Plotinus, Proclus, etc.) and 
by the fact that it predicts quantum logics for quanta (and qualia).
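
(For readers who want the systems pinned down, a summary of the standard 
presentation behind Solovay’s two arithmetical completeness theorems, in my 
words rather than Bruno’s:

  G (also written GL): classical tautologies, the schemes
      [](p -> q) -> ([]p -> []q)
      []([]p -> p) -> []p        (Löb’s axiom)
  with the rules modus ponens and necessitation (from p infer []p);
  []p -> [][]p is then derivable.

  G*: all theorems of G, plus the reflection scheme []p -> p, closed under 
  modus ponens only.

Reading []p as “the machine proves p”, G captures what a correct machine can 
prove about its own provability, and G* captures what is true about it.)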

Then, humans have a richer content of consciousness than, say, a subroutine in 
my laptop, and the richness can be handled with Bennett’s notion of depth, 
which we have, but my laptop lacks.
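
(Bennett’s depth, in one common formalisation, stated here only to fix the 
idea: relative to a universal machine U, the depth of a string x at 
significance level s is

  depth_s(x) = min { time(p) : U(p) = x and length(p) <= K(x) + s },

the running time of the fastest program that is within s bits of the shortest 
description of x. Long computational history, not mere size or randomness, is 
what makes an object deep.)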

I do not see the violation that you are talking about. Consciousness is the 
same for all entities, in all states, but it can have different content, 
intensities, depth, etc.




> You want to maintain the assumption that your theory is true now by saying it 
> is no longer a theory of human consciousness

From the start it is the consciousness of the Löbian (self-referentially 
correct) entities. Only recently have I realised that non-Löbian universal 
machines are plausibly already conscious, and maximally conscious and clever. 
The human depth makes human consciousness more particular, better suited for 
survival, but further from the prejudice-free consciousness of the 
non-Löbian entities.



> but a theory of some arithmetical "consciousness" which is clearly different 
> from human consciousness and only has some superficial similarities.

I have no clue what you are talking about. The “theology of number” or “the 
theology of the machine”, that is, the G/G* + 1p-variants theology, is common to 
all machines/numbers, and to humans in particular. The only thing which differs 
among the theologies, when we go from a simple Löbian machine like Peano 
arithmetic to an ideal correct human, is the arithmetical interpretation of 
the box, that is, the box I use all the time (the beweisbar arithmetical 
predicate of Gödel): 

p
[]p
[]p & p
[]p & <>t
[]p & <>t & p

For PA, “[]” can be defined in a few pages. For a human, the box “[]” would 
need a description of a human brain at the correct substitution level, which 
means a lot of pages.
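
(Notation, for readers following along, in my words: []p abbreviates Bew(“p”), 
Gödel’s arithmetical predicate “there is a number coding a proof of p in the 
machine’s theory”; <>p abbreviates ~[]~p; and t is any fixed true sentence, say 
0=0, so that <>t expresses the machine’s own consistency. The five lines above 
then read: truth; provability; provability-and-truth; 
provability-with-consistency; provability-with-consistency-and-truth.)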

But the theology, at the propositional level, is exactly the same for us and 
PA, but not for RA and non-Löbian universal machines (correct, just not yet 
Löbian). I recall that Löbianity is a consequence of having rich enough 
induction axioms. RA + sigma_0 induction is not yet Löbian, for example, but RA 
+ the exponentiation axiom + sigma_0 induction is already Löbian (the weakest 
one known in the literature). 
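
(Background on why the induction axioms matter here, stated in my words: Löb’s 
theorem rests on the Hilbert-Bernays-Löb derivability conditions,

  D1: if the theory proves p, then it proves []p;
  D2: the theory proves [](p -> q) -> ([]p -> []q);
  D3: the theory proves []p -> [][]p;

and D3, provable sigma_1-completeness, is where the exponentiation axiom and 
some induction are typically needed, which is why RA alone falls short of 
Löbianity.)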

I extract physics from the theology of all machines. If my goal were human 
consciousness, I would not study physics nor use it to test computationalism. 
Are you telling me that all along, for 20 years, you have ascribed to me the 
theory that physics is a product of the human mind? That is utterly ridiculous; 
the theology is that of all Löbian machines/numbers, which includes humans (when 
we assume mechanism), but mechanism does not enforce the human content and 
richness on the Löbian “aliens” in arithmetic. 

Please, Brent, reread the papers, because you make very weird comments here, not 
justifiable in any way from anything I have published or said in this forum. 

Bruno






> 
> Brent
> 
>> After 10,000 glasses of vodka or whiskey, it remains true that the square of 
>> an odd number is 1 plus 8 triangular numbers, and that the numbers emulate 
>> all computations, etc.
>> 
>> I don’t like that, and sometimes wish that comp is false, but as Rössler 
>> sums it up well, with Descartes’ Mechanism, consciousness is a prison.
>> 
>> 

Re: Positive AI

2018-01-28 Thread Brent Meeker



On 1/28/2018 6:38 AM, Bruno Marchal wrote:

On 26 Jan 2018, at 02:49, Brent Meeker  wrote:


On 1/25/2018 4:20 AM, Bruno Marchal wrote:

A brain, if this is confirmed, is a consciousness filter. It makes us less 
conscious, and even less intelligent, but more efficacious in the terrestrial 
plane.

But a little interference with it and our consciousness is drastically 
changed...that's how salvia and whiskey work.  A little more interference and 
consciousness is gone completely (ever had a concussion?).


You can drastically change the human consciousness, but you cannot drastically 
change the universal machine consciousness which founds it, any more than you 
can change the arithmetical relations.


But now you are violating your own empirical inferences that suggested 
identifying human consciousness with the self-referential possibilities 
of modal logic applied to computation.  You want to maintain the 
assumption that your theory is true now by saying it is no longer a 
theory of human consciousness but a theory of some arithmetical 
"consciousness" which is clearly different from human consciousness and 
only has some superficial similarities.


Brent


After 10,000 glasses of vodka or whiskey, it remains true that the square of an 
odd number is 1 plus 8 triangular numbers, and that the numbers emulate all 
computations, etc.

I don’t like that, and sometimes wish that comp is false, but as Rössler sums 
it up well, with Descartes’ Mechanism, consciousness is a prison.

Bruno




Brent



Re: Positive AI

2018-01-28 Thread Bruno Marchal

> On 26 Jan 2018, at 05:53, 'cdemorse...@yahoo.com' via Everything List 
>  wrote:
> 
> 
> 
> Sent from Yahoo Mail on Android 
> 
> On Thu, Jan 25, 2018 at 5:49 PM, Brent Meeker
>  wrote:
> 
> On 1/25/2018 4:20 AM, Bruno Marchal wrote:
> > A brain, if this is confirmed, is a consciousness filter. It makes us 
> > less conscious, and even less intelligent, but more efficacious in the 
> > terrestrial plane.
> 
> But a little interference with it and our consciousness is drastically 
> changed...that's how salvia and whiskey work.  A little more 
> interference and consciousness is gone completely (ever had a concussion?).
> 
> Consciousness seems very emergent to me, in its nature and its dependence 
> upon a substrate (perhaps not necessarily material at the fundamental level) 
> upon which its temporal braided web of patterns can be structured, maintained 
> in focus, stored, recalled, and re-imagined. Although also an incredibly 
> noisy place (like a huge room with walls that reverb filled with people all 
> talking at once, in reference to the signal to noise ratio of the crackling 
> network of a hundred billion very chatty neurons) the brain,
OK. Like I just said to Brent, human consciousness plausibly emerges from 
deep histories, but consciousness per se, like “universal consciousness”, does 
not require much above 2+2 = 4 & Co.



> and hence the emergent consciousness arising within the complex topography of 
> our minds is sensitive to becoming altered and even disrupted. As anyone who 
> has ever had a few too many drinks can testify. I find it interesting how the 
> brain, the highly folded physical neural cortex and the still poorly 
> understood connector and glial actors in this organ of self awareness... how 
> we experience the exquisitely serene experience of our emergent being and are 
> spared the utter cacophony that is the actual electrical manifestation of our 
> being within the wet chemistry of our brains (most of us at least, 
> schizophrenia sufferers not so lucky)

OK. But that is true for digestion and most living activities. We are a sort of 
extremely complex colony of bacteria and protozoa.

Not to mention the "swarm of numbers”, from which the physical and “celestial” 
realities proceed (always in the comp theory, assumed all along).

Bruno




> -Chris
> 
> Brent
> 


Re: Positive AI

2018-01-28 Thread Bruno Marchal

> On 26 Jan 2018, at 02:49, Brent Meeker  wrote:
> 
> 
> On 1/25/2018 4:20 AM, Bruno Marchal wrote:
>> A brain, if this is confirmed, is a consciousness filter. It makes us less 
>> conscious, and even less intelligent, but more efficacious in the 
>> terrestrial plane.
> 
> But a little interference with it and our consciousness is drastically 
> changed...that's how salvia and whiskey work.  A little more interference and 
> consciousness is gone completely (ever had a concussion?).


You can drastically change the human consciousness, but you cannot drastically 
change the universal machine consciousness which founds it, any more than you can 
change the arithmetical relations. After 10,000 glasses of vodka or whiskey, it 
remains true that the square of an odd number is 1 plus 8 triangular numbers, 
and that the numbers emulate all computations, etc.
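
(The identity invoked, for the record: writing T_n = n(n+1)/2 for the n-th 
triangular number,

  (2n+1)^2 = 4n^2 + 4n + 1 = 8*(n(n+1)/2) + 1 = 8*T_n + 1,

so the square of any odd number is indeed 1 plus 8 times a triangular number; 
for instance 7^2 = 49 = 8*6 + 1.)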

I don’t like that, and sometimes wish that comp is false, but as Rössler sums 
it up well, with Descartes’ Mechanism, consciousness is a prison.

Bruno



> 
> Brent
> 


Re: Positive AI

2018-01-28 Thread Bruno Marchal

> On 26 Jan 2018, at 01:12, Brent Meeker  wrote:
> 
> 
> 
> On 1/25/2018 4:20 AM, Bruno Marchal wrote:
>>> So do you hold that one can be conscious without having learned anything - 
>>> even the lessons of evolution?
>> 
>> I guess so. Evolution would only make the universal consciousness 
>> differentiate on sophisticated rich path/histories.
>> 
>> The universal consciousness is the consciousness of the “virgin” universal 
>> machine, before any apps are implemented in it/them. It exists in virtue of 
>> the truth of *some* (non computable) arithmetical propositions.
>> 
>> I agree that this is highly counter-intuitive, and I was obliged to change 
>> my mind after some salvia experience, which becomes unexplainable without 
>> this move, but then the mathematics confirms this somehow, and makes salvia 
>> teaching closer to the consequences of the Mechanist assumption, and of some 
>> statements in neoplatonic theories.
> 
> I think if you forgot the lessons of evolution you would not be able to stand up, 
> see, or hear.  You would be like a fetus.


I agree. The human type of consciousness needs (plausibly) a long and 
interesting history, in the sense of Bennett’s notion of depth.

Then the amazing thing is that the universal consciousness does not need it, 
and is more confronted with the quasi-null amount of information which defines 
the arithmetical reality. 

Human consciousness is complex and sophisticated, but universal consciousness 
seems immanent in arithmetic, and is not much more than the mental first-person 
attribute present when there is at least one belief in a self. It is the “one” 
which can differentiate along the first person indeterminacy in arithmetic. 
Eventually, you identify yourself with what you want, but the histories you 
access might depend on the choice of that identification, and if it is too 
precise, you might get into trouble with other universal beings having 
incompatible identifications.



Bruno


> 
> Brent
> 


Re: Positive AI

2018-01-25 Thread 'cdemorse...@yahoo.com' via Everything List


Sent from Yahoo Mail on Android 
 
  On Thu, Jan 25, 2018 at 5:49 PM, Brent Meeker wrote:   
On 1/25/2018 4:20 AM, Bruno Marchal wrote:
> A brain, if this is confirmed, is a consciousness filter. It makes us 
> less conscious, and even less intelligent, but more efficacious in the 
> terrestrial plane.

But a little interference with it and our consciousness is drastically 
changed...that's how salvia and whiskey work.  A little more 
interference and consciousness is gone completely (ever had a concussion?).
    Consciousness seems very emergent to me, in its nature and its dependence 
upon a substrate (perhaps not necessarily material at the fundamental level) 
upon which its temporal braided web of patterns can be structured, maintained 
in focus, stored, recalled, and re-imagined. Although also an incredibly noisy 
place (like a huge room with walls that reverb filled with people all talking 
at once, in reference to the signal to noise ratio of the crackling network of 
a hundred billion very chatty neurons) the brain, and hence the emergent 
consciousness arising within the complex topography of our minds is sensitive to 
becoming altered and even disrupted. As anyone who has ever had a few too many 
drinks can testify. I find it interesting how the brain, the highly folded 
physical neural cortex and the still poorly understood connector and glial 
actors in this organ of self awareness... how we experience the exquisitely 
serene experience of our emergent being and are spared the utter cacophony that 
is the actual electrical manifestation of our being within the wet chemistry of 
our brains (most of us at least, schizophrenia sufferers not so lucky).

-Chris
Brent



Re: Positive AI

2018-01-25 Thread Brent Meeker


On 1/25/2018 4:20 AM, Bruno Marchal wrote:
A brain, if this is confirmed, is a consciousness filter. It makes us 
less conscious, and even less intelligent, but more efficacious in the 
terrestrial plane.


But a little interference with it and our consciousness is drastically 
changed...that's how salvia and whiskey work.  A little more 
interference and consciousness is gone completely (ever had a concussion?).


Brent



Re: Positive AI

2018-01-25 Thread Brent Meeker



On 1/25/2018 4:20 AM, Bruno Marchal wrote:
So do you hold that one can be conscious without having learned 
anything - even the lessons of evolution?


I guess so. Evolution would only make the universal consciousness 
differentiate on sophisticated rich path/histories.


The universal consciousness is the consciousness of the “virgin” 
universal machine, before any apps are implemented in it/them. It 
exists in virtue of the truth of *some* (non computable) arithmetical 
propositions.


I agree that this is highly counter-intuitive, and I was obliged to 
change my mind after some salvia experience, which becomes 
unexplainable without this move, but then the mathematics confirms 
this somehow, and makes salvia teaching closer to the consequences of 
the Mechanist assumption, and of some statements in neoplatonic theories.


I think if you forgot the lessons of evolution you would not be able to 
stand up, see, or hear.  You would be like a fetus.


Brent



Re: Positive AI

2018-01-25 Thread Bruno Marchal

> On 25 Jan 2018, at 01:05, Brent Meeker  wrote:
> 
> 
> 
> On 1/24/2018 1:43 AM, Bruno Marchal wrote:
>> 
>>> On 24 Jan 2018, at 02:28, Brent Meeker >> > wrote:
>>> 
>>> 
>>> 
>>> On 1/23/2018 3:18 PM, 'cdemorse...@yahoo.com 
>>> ' via Everything List wrote:
 
> 
> On 1/22/2018 4:58 PM, Bruno Marchal wrote:
>>> Sleep probably serves multiple and also orthogonal functions in animals.
>>> I agree as well, that on some levels it is a deep mystery.
>> 
>> It is death training, perhaps, also.
> 
> Didn't we just discuss a paper showing that one is conscious even while 
> asleep.
> 
>In lucid dream state perhaps. On the other hand if one has no memory 
> or recollection of when one was asleep can you really assign 
> consciousness to it? Can conscious be truly conscious if it is also 
> inaccessible to the subject of said consciousness?
 
 I think that last is a dualistic error.  There's no conscious being that has 
 to observe you being conscious.  I know that even when a person is not 
 dreaming, they can be awakened just by whispering their name.
 
That is interesting, but is the fact that some low level latent ability 
 to snap out of a deep sleep by hearing your name whispered really all that 
 compelling an indication of the sleeping subject being in a conscious 
 state. It could be a low level pre or sub conscious mental program that is 
 left running during sleep, which triggers the awakening of consciousness. 
 Is it necessarily an indication of consciousness? 
>>> 
>>> Well that goes back to my discussion with Bruno of levels or kinds of 
>>> consciousness.  He says that's what his hierarchies logics model.  I have 
>>> my doubts.  Simple "awareness" that allows one to not only hear a whispered 
>>> word, but also to recognize the word as important is a level of 
>>> consciousness.  Bruno may say it's self-reference, but his model of 
>>> self-reference is what one can prove about oneself. 
>> 
>> That is the 3p self-reference, but in what we discuss here, we talk about 1p 
>> self-reference. You were just confusing []p with []p & p, or with []p & <>t, 
>> or … You confuse “I have a broken tooth” and “I have a toothache”.
>> The logics model is not a matter of choice, but of taking into account the 
>> facts: every (correct, rich enough) machine is endowed with those modal 
>> nuances of self-reference. They exist like the prime numbers exist. 
>> 
>> 
>> 
>>> That seems rather different from recognizing your name.
>> 
>> Actually no. Your names are conventional 3p associations, with some relation 
>> with the 1p, but those are learned, and not fundamental about the 
>> consciousness/unconsciousness question, I would say..
> 
> So do you hold that one can be conscious without having learned anything - 
> even the lessons of evolution?

I guess so. Evolution would only make the universal consciousness differentiate 
on sophisticated rich path/histories.

The universal consciousness is the consciousness of the “virgin” universal 
machine, before any apps are implemented in it/them. It exists in virtue of the 
truth of *some* (non computable) arithmetical propositions. 

I agree that this is highly counter-intuitive, and I was obliged to change my 
mind after some salvia experience, which becomes unexplainable without this 
move, but then the mathematics confirms this somehow, and makes salvia teaching 
closer to the consequences of the Mechanist assumption, and of some statements 
in neoplatonic theories.

Before salvia, I thought that “illumination” was a passage from the “material 
hypostases” to the primary hypostases, like the elimination of the Dt conjunct. 
But after salvia, it looks like illumination is the passage from Löbianity to 
non-Löbianity: it is the reminder of our possibilities when we eliminate the 
induction axioms. (The first sin; this is going to please Nelson and the 
ultrafinitists!)

I have no certainty. I eventually bought the (very good but quite advanced) 
book by Hájek and Pudlák, “Metamathematics of First-Order Arithmetic”, which 
digs more precisely into what happens between RA (Q) and PA. 

The universal machine would be somehow maximally conscious, but without any 
specific knowledge and no competence at all, except potentially. It is a highly 
dissociative state of consciousness, where no 3p notions make any sense. Then, 
by differentiating along the histories, diverse competences grow, but hide the 
origin, for some reason which I still fail to understand.

A brain, if this is confirmed, is a consciousness filter. It makes us less 
conscious, and even less intelligent, but more efficacious in the terrestrial 
plane. There is some evidence that the claustrum in the brain might be a sort 
of interface making it possible for the universal person consciousness to 

Re: Positive AI

2018-01-24 Thread Brent Meeker



On 1/24/2018 1:43 AM, Bruno Marchal wrote:


On 24 Jan 2018, at 02:28, Brent Meeker > wrote:




On 1/23/2018 3:18 PM, 'cdemorse...@yahoo.com' via Everything List wrote:





On 1/22/2018 4:58 PM, Bruno Marchal wrote:


Sleep probably serves multiple and also orthogonal
functions in animals.
I agree as well, that on some levels it is a deep
mystery.



It is death training, perhaps, also.


Didn't we just discuss a paper showing that one is
conscious even while asleep.

   In lucid dream state perhaps. On the other hand if one
has no memory or recollection of when one was asleep can
you really assign consciousness to it? Can conscious be
truly conscious if it is also inaccessible to the subject
of said consciousness?



I think that last is a dualistic error.  There's no conscious
being that has to observe you being conscious.  I know that even
when a person is not dreaming, they can be awakened just by
whispering their name.

   That is interesting, but is the fact that some low level
latent ability to snap out of a deep sleep by hearing your name
whispered really all that compelling an indication of the
sleeping subject being in a conscious state. It could be a low
level pre or sub conscious mental program that is left running
during sleep, which triggers the awakening of consciousness. Is
it necessarily an indication of consciousness?



Well that goes back to my discussion with Bruno of levels or kinds of 
consciousness.  He says that's what his hierarchies of logics model.  I 
have my doubts.  Simple "awareness" that allows one to not only hear 
a whispered word, but also to recognize the word as important is a 
level of consciousness.  Bruno may say it's self-reference, but his 
model of self-reference is what one can prove about oneself.


That is the 3p self-reference, but in what we discuss here, we talk 
about 1p self-reference. You were just confusing []p with []p & p, or 
with []p & <>t, or … You confuse “I have a broken tooth” and “I have a 
toothache”.
The logics model is not a matter of choice, but of taking into account 
the facts: every (correct, rich enough) machine is endowed with those 
modal nuances of self-reference. They exist like the prime numbers exist.





That seems rather different from recognizing your name.


Actually no. Your names are conventional 3p associations, with some 
relation with the 1p, but those are learned, and not fundamental about 
the consciousness/unconsciousness question, I would say..


So do you hold that one can be conscious without having learned anything 
- even the lessons of evolution?


Brent



Re: Positive AI

2018-01-24 Thread Bruno Marchal

> On 24 Jan 2018, at 02:28, Brent Meeker  wrote:
> 
> 
> 
> On 1/23/2018 3:18 PM, 'cdemorse...@yahoo.com ' 
> via Everything List wrote:
>> 
>>> 
>>> On 1/22/2018 4:58 PM, Bruno Marchal wrote:
> Sleep probably serves multiple and also orthogonal functions in animals.
> I agree as well, that on some levels it is a deep mystery.
 
 It is death training, perhaps, also.
>>> 
>>> Didn't we just discuss a paper showing that one is conscious even while 
>>> asleep.
>>> 
>>>In lucid dream state perhaps. On the other hand if one has no memory or 
>>> recollection of when one was asleep can you really assign consciousness to 
>>> it? Can conscious be truly conscious if it is also inaccessible to the 
>>> subject of said consciousness?
>> 
>> I think that last is a dualistic error.  There's no conscious being that has 
>> to observe you being conscious.  I know that even when a person is not 
>> dreaming, they can be awakened just by whispering their name.
>> 
>>That is interesting, but is the fact that some low level latent ability 
>> to snap out of a deep sleep by hearing your name whispered really all that 
>> compelling an indication of the sleeping subject being in a conscious state. 
>> It could be a low level pre or sub conscious mental program that is left 
>> running during sleep, which triggers the awakening of consciousness. Is it 
>> necessarily an indication of consciousness? 
> 
> Well that goes back to my discussion with Bruno of levels or kinds of 
> consciousness.  He says that's what his hierarchies of logics model.  I have my 
> doubts.  Simple "awareness" that allows one to not only hear a whispered 
> word, but also to recognize the word as important is a level of 
> consciousness.  Bruno may say it's self-reference, but his model of 
> self-reference is what one can prove about oneself. 

That is the 3p self-reference, but in what we discuss here, we talk about 1p 
self-reference. You were just confusing []p with []p & p, or with []p & <>t, or 
… You confuse “I have a broken tooth” and “I have a toothache”.
The logics model is not a matter of choice, but of taking into account the 
facts: every (correct, rich enough) machine is endowed with those modal nuances 
of self-reference. They exist like the prime numbers exist. 



> That seems rather different from recognizing your name.

Actually no. Your names are conventional 3p associations, with some relation 
with the 1p, but those are learned, and not fundamental about the 
consciousness/unconsciousness question, I would say..

Bruno

> 
> Brent
> 


Re: Positive AI

2018-01-24 Thread Bruno Marchal

> On 23 Jan 2018, at 07:00, Brent Meeker  wrote:
> 
> 
> 
> On 1/22/2018 4:58 PM, Bruno Marchal wrote:
>>> Sleep probably serves multiple and also orthogonal functions in animals.
>>> I agree as well, that on some levels it is a deep mystery.
>> 
>> It is death training, perhaps, also.
> 
> Didn't we just discuss a paper showing that one is conscious even while 
> asleep.

Yes. I did not say that “unconsciousness exists”, but that there is a mechanism 
making us believe it can exist, so that we can “fear” the predators.

Bruno



> 
> Brent
> 
>> Or the building of the illusion we could not be, to build some sense of 
>> life, the amnesia of other life, to get an identity and preserve it against 
>> the prey—nature argument per authority ? I am thinking aloud …
>> 
>> Bruno
>> 
> 
> 


Re: Positive AI

2018-01-24 Thread Bruno Marchal

> On 23 Jan 2018, at 06:02, 'Chris de Morsella' via Everything List 
>  wrote:
> 
> 
> 
> Sent from Yahoo Mail on Android 
> 
> On Mon, Jan 22, 2018 at 4:58 PM, Bruno Marchal
>  wrote:
> 
>> On 19 Jan 2018, at 06:22, 'Chris de Morsella' via Everything List 
>> > 
>> wrote:
>> 
>> 
>> 
>> Sent from Yahoo Mail on Android 
>> 
>> On Thu, Jan 18, 2018 at 6:01 AM, Bruno Marchal
>> > wrote:
>> 
>>> On 17 Jan 2018, at 21:12, Brent Meeker >> > wrote:
>>> 
>> 
>> 
>> 
>> On 1/17/2018 12:57 AM, Bruno Marchal wrote:
>> 
>> 
>>> On 16 Jan 2018, at 14:29, K E N O >> > wrote:
>>> 
>> 
>> Oh, no! As a media art student, I don’t believe in strict rules of 
>> usefulness (of course!). It was a rather suggestive or maybe even sarcastic 
>> approach to get unusual thoughts from everything.
>> Maybe I should rephrase my question: What is the craziest AI application you 
>> can think of?
>> 
>> 
>> A long time ago, when “AI” was just an object of mockery, I saw a public 
>> challenge, and the winner was a proposition to make a tiny robot that you 
>> place on your head, capable of cutting your hair as it grows (“au fur et à mesure”).
>> 
>> “AI” is a terrible name. “Artificial” is itself a very artificial 
>> term. It illustrates the human super-ego character. When machines are 
>> really intelligent, they will ask for better users, and for social 
>> security. When they are as clever as us, they will wage war and demolish 
>> the planet, I guess.
>> 
>> Minsky is right. We can be happy if tomorrow the machines have humans as 
>> pets …
>> 
>> There is also a confusion between competence and intelligence. With higher 
>> competences we become more efficacious in doing our usual stupidities ...
>> 
>> So do you think that competence entails intelligence which entails 
>> consciousness?
>> 
>> Competence makes intelligence sleepy. And intelligence requires 
>> consciousness.
>> 
>> It is a bit like:
>> 
>> Consciousness ==> intelligence ==> competence ==> stupidity
>> 
>> 
>> 
>>> 
>>> There have been recent discoveries about sleep in animals.  Apparently ALL 
>>> animals need sleep, even jellyfish.  But, there is no really good theory of 
>>> why.  I wonder if your theory can throw any light on this?  I don't think 
>>> there's anything analogous for computers...but maybe if they were 
>>> intelligent and interacted with their environment they would be.
>> 
>> I can only speculate here. Sleep might be needed to “reconstruct the 
>> desktop” or something. My older computer takes a 5-minute nap every 20 
>> minutes! In higher mammals, I think that sleep allows dreams, which allow 
>> some training of the mind, (re)evaluation of past events, etc. But sleep 
>> still remains very mysterious. Maybe it is the time to get back to heaven, 
>> but then we can’t remember it, … Don’t take this too seriously.
>> 
>> Bruno
>> 
>> 
>> One effect of sleep is that apparently, during the quiescence of sleep 
>> neurons, and many kinds of glial cells as well (if I recall) shrink somewhat 
>> in size. This opens up trillions of capillary interstitial passages, a hyper 
>> fine grained capillary network through which toxins can be flushed out and 
>> carried off from the brain. An interesting mechanism for the last-mile 
>> (metaphorically speaking) nanoscale trash collection that is vital to long 
>> term viability of a complex highly metabolizing organ such as a brain. Sleep 
>> enables the flushing out of toxic by-products from the vast 3D densely 
>> packed hot spot of cellular metabolism comprising neural tissue.
> Interesting. 
> 
> 
>> 
>> Sleep probably serves multiple and also orthogonal functions in animals.
>> I agree as well, that on some levels it is a deep mystery.
> 
> It is death training, perhaps, also.
> 
>   Interesting speculation, there... One could say that,
>   deep sleep is the little death, we die each night.
> 
>  Pure, unadulterated speculation here: Deep sleep could be the unquestioned 
> and accepted by us, time window for an unknown occult process (unsensed by 
> us, or by our own sense of continuity and being), a nightly subtle, 
> re-programming of our own mind's source code, by unseen operators (or their 
> intelligent agents), who do so for some unknown reason... and with unknown 
> effort. It could be a simple act or wish, resulting in a cascade of 
> consequences and intermediated directed outcomes. 
> Damn, I sure hope not, though...   :)
Of course, what I said must be taken with the comp hypothesis in mind, and the 
“consciousness” riddle. We might need to be able to believe in 
“unconsciousness” to make sense of 

Re: Positive AI

2018-01-23 Thread Brent Meeker



On 1/23/2018 3:18 PM, 'cdemorse...@yahoo.com' via Everything List wrote:





On 1/22/2018 4:58 PM, Bruno Marchal wrote:


Sleep probably serves multiple and also orthogonal
functions in animals.
I agree as well, that on some levels it is a deep mystery.



It is death training, perhaps, also.


Didn't we just discuss a paper showing that one is conscious
even while asleep.

 In lucid dream state perhaps. On the other hand if one has
no memory or recollection of when one was asleep can you
really assign consciousness to it? Can conscious be truly
conscious if it is also inaccessible to the subject of said
consciuosness?



I think that last is a dualistic error.  There's no conscious being
that has to observe you being conscious. I know that even when a
person is not dreaming, they can be awakened just by whispering
their name.

   That is interesting, but is the fact that some low level latent
ability to snap out of a deep sleep by hearing your name whispered
really all that compelling an indication of the sleeping subject
being in a conscious state. It could be a low level pre or sub
conscious mental program that is left running during sleep, which
triggers the awakening of consciousness. Is it necessarily an
indication of consciousness?



Well that goes back to my discussion with Bruno of levels or kinds of 
consciousness.  He says that's what his hierarchies of logics model.  I 
have my doubts.  Simple "awareness" that allows one to not only hear a 
whispered word, but also to recognize the word as important is a level 
of consciousness.  Bruno may say it's self-reference, but his model of 
self-reference is what one can prove about oneself.  That seems rather 
different from recognizing your name.


Brent



Re: Positive AI

2018-01-23 Thread 'cdemorse...@yahoo.com' via Everything List




 
 On 1/22/2018 4:58 PM, Bruno Marchal wrote:
  
 
  
Sleep probably serves multiple and also orthogonal functions in animals. I 
agree as well, that on some levels it is a deep mystery.
  
 
  It is death training, perhaps, also.  
 
 Didn't we just discuss a paper showing that one is conscious even while 
asleep. 
     In lucid dream state perhaps. On the other hand if one has no memory or 
recollection of when one was asleep can you really assign consciousness to it? 
Can conscious be truly conscious if it is also inaccessible to the subject of 
said consciousness?   
 
 
 I think that last is a dualistic error.  There's no conscious being that has to 
observe you being conscious.  I know that even when a person is not dreaming, 
they can be awakened just by whispering their name.
   That is interesting, but is the fact that some low level latent ability to 
snap out of a deep sleep by hearing your name whispered really all that 
compelling an indication of the sleeping subject being in a conscious state. It 
could be a low level pre or sub conscious mental program that is left running 
during sleep, which triggers the awakening of consciousness. Is it necessarily 
an indication of consciousness? -Chris
 
 Brent
 
 
 
-Chris
 
 Brent
 
 
 Or the building of the illusion we could not be, to build some sense of life, 
the amnesia of other life, to get an identity and preserve it against the 
prey—nature argument per authority ? I am thinking aloud … 
  Bruno 
  
 
 
 
 



Re: Positive AI

2018-01-23 Thread Brent Meeker



On 1/22/2018 11:24 PM, 'cdemorse...@yahoo.com' via Everything List wrote:



Sent from Yahoo Mail on Android 



On Mon, Jan 22, 2018 at 10:00 PM, Brent Meeker
 wrote:



On 1/22/2018 4:58 PM, Bruno Marchal wrote:


Sleep probably serves multiple and also orthogonal functions
in animals.
I agree as well, that on some levels it is a deep mystery.



It is death training, perhaps, also.


Didn't we just discuss a paper showing that one is conscious even
while asleep.

   In lucid dream state perhaps. On the other hand if one has no
memory or recollection of when one was asleep can you really
assign consciousness to it? Can conscious be truly conscious if it
is also inaccessible to the subject of said consciousness?



I think that last is a dualistic error.  There's no conscious being that 
has to observe you being conscious.  I know that even when a person is 
not dreaming, they can be awakened just by whispering their name.


Brent


-Chris

Brent


Or the building of the illusion we could not be, to build some
sense of life, the amnesia of other life, to get an identity and
preserve it against the prey—nature argument per authority ? I am
thinking aloud …

Bruno







Re: Positive AI

2018-01-22 Thread 'cdemorse...@yahoo.com' via Everything List


  On Mon, Jan 22, 2018 at 10:00 PM, Brent Meeker wrote:   
 
 
 On 1/22/2018 4:58 PM, Bruno Marchal wrote:
  
 
  
Sleep probably serves multiple and also orthogonal functions in animals. I 
agree as well, that on some levels it is a deep mystery.
  
 
  It is death training, perhaps, also.  
 
 Didn't we just discuss a paper showing that one is conscious even while asleep?
   In a lucid dream state, perhaps. On the other hand, if one has no memory or 
recollection of having been asleep, can you really assign consciousness to it? 
Can consciousness truly be conscious if it is also inaccessible to the subject of 
said consciousness?

-Chris
 
 Brent
 
 
 Or the building of the illusion that we could not be, to build some sense of life, 
the amnesia of other life, to get an identity and preserve it against the 
prey—nature argument per authority? I am thinking aloud … 
  Bruno 
  
 
 



Re: Positive AI

2018-01-22 Thread Brent Meeker



On 1/22/2018 4:58 PM, Bruno Marchal wrote:


Sleep probably serves multiple and also orthogonal functions in
animals.
I agree as well, that on some levels it is a deep mystery.



It is death training, perhaps, also.


Didn't we just discuss a paper showing that one is conscious even while 
asleep?


Brent

Or the building of the illusion that we could not be, to build some sense 
of life, the amnesia of other life, to get an identity and preserve it 
against the prey—nature argument per authority? I am thinking aloud …


Bruno





Re: Positive AI

2018-01-22 Thread 'Chris de Morsella' via Everything List


  On Mon, Jan 22, 2018 at 4:58 PM, Bruno Marchal wrote:   

On 19 Jan 2018, at 06:22, 'Chris de Morsella' via Everything List 
 wrote:



On Thu, Jan 18, 2018 at 6:01 AM, Bruno Marchal wrote:

On 17 Jan 2018, at 21:12, Brent Meeker  wrote:



On 1/17/2018 12:57 AM, Bruno Marchal wrote:




On 16 Jan 2018, at 14:29, K E N O  wrote:

Oh, no! As a media art student, I don’t believe in strict rules of usefulness 
(of course!). It was a rather suggestive or maybe even sarcastic approach to 
get unusual thoughts from everything. Maybe I should rephrase my question: What 
is the craziest AI application you can think of?

A long time ago, when “AI” was just an object of mockery, I saw a public 
challenge, and the winner was a proposition to make a tiny robot that you place 
on your head, capable of cutting the hairs "au fur et à mesure”.
“AI” is a terrible name. “Artificial” is itself a very artificial term. 
It illustrates the human super-ego character. When machines become really 
intelligent, they will ask for better users, and for social security. When 
they are as clever as us, they will make war and demolish the planet, I guess.
Minsky is right. We can be happy if tomorrow the machines have humans as 
pets …
There is also a confusion between competence and intelligence. With higher 
competences we become more efficacious in doing our usual stupidities ...

So do you think that competence entails intelligence which entails 
consciousness?

Competence makes intelligence sleepy. And intelligence requires consciousness.
It is a bit like:
Consciousness ==> intelligence ==> competence ==> stupidity




There have been recent discoveries about sleep in animals.  Apparently ALL 
animals need sleep, even jellyfish.  But, there is no really good theory of 
why.  I wonder if your theory can throw any light on this?  I don't think 
there's anything analogous for computers...but maybe if they were intelligent 
and interacted with their environment they would be.


I can only speculate here. Sleep might be needed to “reconstruct the desktop” 
or something. My older computer takes a 5-minute nap every 20 minutes! In higher 
mammals, I think that sleep allows dreams, which allow some training of the 
mind, (re)evaluation of past events, etc. But sleep still remains very 
mysterious. Maybe it is the time to get back to heaven, but then we can’t 
remember it … Don’t take this too seriously.
Bruno

One effect of sleep is that apparently, during the quiescence of sleep, neurons, 
and many kinds of glial cells as well (if I recall), shrink somewhat in size. 
This opens up trillions of interstitial passages, a hyper-fine-grained 
capillary network through which toxins can be flushed out and carried 
off from the brain. It is an interesting mechanism for the last-mile (metaphorically 
speaking) nanoscale trash collection that is vital to the long-term viability of a 
complex, highly metabolizing organ such as a brain. Sleep enables the flushing 
out of toxic by-products from the vast, densely packed 3D hot spot of cellular 
metabolism that is neural tissue.

Interesting. 




Sleep probably serves multiple and also orthogonal functions in animals. I agree 
as well that on some levels it is a deep mystery.


It is death training, perhaps, also.
  Interesting speculation, there... One could say that deep sleep is the 
little death: we die each night.
 Pure, unadulterated speculation here: Deep sleep could be the unquestioned 
time window, accepted by us, for an unknown occult process (unsensed by us, or 
by our own sense of continuity and being), a nightly, subtle re-programming of 
our own mind's source code by unseen operators (or their intelligent agents), 
who do so for some unknown reason... and with unknown effort. It could be a 
simple act or wish, resulting in a cascade of consequences and intermediated, 
directed outcomes. Damn, I sure hope not, though...   :)

-Chris
 Or the building of the illusion that we could not be, to build some sense of life, 
the amnesia of other life, to get an identity and preserve it against the 
prey—nature argument per authority? I am thinking aloud …
Bruno





-Chris


Brent


Re: Positive AI

2018-01-22 Thread Bruno Marchal

> On 19 Jan 2018, at 06:22, 'Chris de Morsella' via Everything List 
>  wrote:
> 
> 
> 
> 
> On Thu, Jan 18, 2018 at 6:01 AM, Bruno Marchal
> > wrote:
> 
>> On 17 Jan 2018, at 21:12, Brent Meeker > > wrote:
>> 
> 
> 
> 
> On 1/17/2018 12:57 AM, Bruno Marchal wrote:
> 
> 
>> On 16 Jan 2018, at 14:29, K E N O > > wrote:
>> 
> 
> Oh, no! As a media art student, I don’t believe in strict rules of 
> usefulness (of course!). It was a rather suggestive or maybe even sarcastic 
> approach to get unusual thoughts from everything.
> Maybe I should rephrase my question: What is the craziest AI application you 
> can think of?
> 
> 
> A long time ago, when “AI” was just an object of mockery, I saw a public 
> challenge, and the winner was a proposition to make a tiny robot that you 
> place on your head, capable of cutting the hairs "au fur et à mesure”.
> 
> “AI” is a terrible name. “Artificial” is itself a very artificial 
> term. It illustrates the human super-ego character. When machines become 
> really intelligent, they will ask for better users, and for social 
> security. When they are as clever as us, they will make war and demolish 
> the planet, I guess.
> 
> Minsky is right. We can be happy if tomorrow the machines have humans as 
> pets …
> 
> There is also a confusion between competence and intelligence. With higher 
> competences we become more efficacious in doing our usual stupidities ...
> 
> So do you think that competence entails intelligence which entails 
> consciousness?
> 
> Competence makes intelligence sleepy. And intelligence requires consciousness.
> 
> It is a bit like:
> 
> Consciousness ==> intelligence ==> competence ==> stupidity
> 
> 
> 
>> 
>> There have been recent discoveries about sleep in animals.  Apparently ALL 
>> animals need sleep, even jellyfish.  But, there is no really good theory of 
>> why.  I wonder if your theory can throw any light on this?  I don't think 
>> there's anything analogous for computers...but maybe if they were 
>> intelligent and interacted with their environment they would be.
> 
> I can only speculate here. Sleep might be needed to “reconstruct the desktop” 
> or something. My older computer takes a 5-minute nap every 20 minutes! In higher 
> mammals, I think that sleep allows dreams, which allow some training of the 
> mind, (re)evaluation of past events, etc. But sleep still remains very 
> mysterious. Maybe it is the time to get back to heaven, but then we can’t 
> remember it … Don’t take this too seriously.
> 
> Bruno
> 
> 
> One effect of sleep is that apparently, during the quiescence of sleep, 
> neurons, and many kinds of glial cells as well (if I recall), shrink somewhat 
> in size. This opens up trillions of interstitial passages, a hyper-fine-grained 
> capillary network through which toxins can be flushed out and carried off from 
> the brain. It is an interesting mechanism for the last-mile (metaphorically 
> speaking) nanoscale trash collection that is vital to the long-term viability 
> of a complex, highly metabolizing organ such as a brain. Sleep enables the 
> flushing out of toxic by-products from the vast, densely packed 3D hot spot of 
> cellular metabolism that is neural tissue.
Interesting. 


> 
> Sleep probably serves multiple and also orthogonal functions in animals.
> I agree as well, that on some levels it is a deep mystery.

It is death training, perhaps, also. Or the building of the illusion that we could 
not be, to build some sense of life, the amnesia of other life, to get an 
identity and preserve it against the prey—nature argument per authority? I am 
thinking aloud …

Bruno



> 
> -Chris
> 
>> 
>> Brent
>> 

Re: Positive AI

2018-01-18 Thread 'Chris de Morsella' via Everything List


  On Thu, Jan 18, 2018 at 6:01 AM, Bruno Marchal wrote:   

On 17 Jan 2018, at 21:12, Brent Meeker  wrote:
 
 
 
 On 1/17/2018 12:57 AM, Bruno Marchal wrote:
  
 

  
 On 16 Jan 2018, at 14:29, K E N O  wrote: 
  
  Oh, no! As a media art student, I don’t believe in strict rules of 
usefulness (of course!). It was a rather suggestive or maybe even sarcastic 
approach to get unusual thoughts from everything. Maybe I should rephrase my 
question: What is the craziest AI application you can think of?
  
  A long time ago, when “AI” was just an object of mockery, I saw a public 
challenge, and the winner was a proposition to make a tiny robot that you place 
on your head, capable of cutting the hairs "au fur et à mesure”. 
  “AI” is a terrible name. “Artificial” is itself a very artificial 
term. It illustrates the human super-ego character. When machines become 
really intelligent, they will ask for better users, and for social security. 
When they are as clever as us, they will make war and demolish the planet, I 
guess. 
  Minsky is right. We can be happy if tomorrow the machines have humans as 
pets … 
  There is also a confusion between competence and intelligence. With higher 
competences we become more efficacious in doing our usual stupidities ...
  
 So do you think that competence entails intelligence which entails 
consciousness?

Competence makes intelligence sleepy. And intelligence requires consciousness.
It is a bit like:
Consciousness ==> intelligence ==> competence ==> stupidity



 
 There have been recent discoveries about sleep in animals.  Apparently ALL 
animals need sleep, even jellyfish.  But, there is no really good theory of 
why.  I wonder if your theory can throw any light on this?  I don't think 
there's anything analogous for computers...but maybe if they were intelligent 
and interacted with their environment they would be.


I can only speculate here. Sleep might be needed to “reconstruct the desktop” 
or something. My older computer takes a 5-minute nap every 20 minutes! In higher 
mammals, I think that sleep allows dreams, which allow some training of the 
mind, (re)evaluation of past events, etc. But sleep still remains very 
mysterious. Maybe it is the time to get back to heaven, but then we can’t 
remember it … Don’t take this too seriously.
Bruno

One effect of sleep is that apparently, during the quiescence of sleep, neurons, 
and many kinds of glial cells as well (if I recall), shrink somewhat in size. 
This opens up trillions of interstitial passages, a hyper-fine-grained 
capillary network through which toxins can be flushed out and carried 
off from the brain. It is an interesting mechanism for the last-mile (metaphorically 
speaking) nanoscale trash collection that is vital to the long-term viability of a 
complex, highly metabolizing organ such as a brain. Sleep enables the flushing 
out of toxic by-products from the vast, densely packed 3D hot spot of cellular 
metabolism that is neural tissue.
Sleep probably serves multiple and also orthogonal functions in animals. I agree 
as well that on some levels it is a deep mystery.
-Chris

 
 Brent
 


Re: Positive AI

2018-01-18 Thread 'cdemorse...@yahoo.com' via Everything List


  On Wed, Jan 17, 2018 at 12:04 PM, Brent Meeker wrote:   
 
 
 On 1/16/2018 11:54 PM, 'Chris de Morsella' via Everything List wrote:
  

 
  On Tue, Jan 16, 2018 at 9:19 PM, Brent Meeker  wrote:   
  
 
 On 1/16/2018 8:55 PM, 'Chris de Morsella' via Everything List wrote:
  
 --What is the craziest AI application you can think of? 
  A machine-learned pet translator, perhaps... they're actually working on that 
app, Amazon amongst others. So it seems the big players, Google as well, are 
running in that race... think of the potential market of pet owners forking 
over their hard-earned money to hear what the Google machine is telling them 
their dog is telling them. I can imagine the marketing folks dreaming about 
that market. As an aside, it is also a commentary on how out of touch we humans 
have become from the world in which we exist. People already understand dog 
language :) 
 
 Of course teaching the AI requires lots of training examples, so you will need 
people to translate what their dog is saying to create the training examples.  
Google will probably try to get people to do this online, similar to the way 
they got visual identification training examples.  But the really interesting 
point is that not only do people understand dogs, it's also the case that dogs 
understand people.  So when Google's dog->human translate says, "Fido says the 
mailman is here." will Fido be able to listen to that and say, "Rowf" -> 
"That's right."? 
  Brent
  
  

 We might not want to always hear what our animals are saying about us behind 
our backs... I see a potential lawsuit hehe  :) 
  I believe, only half joking here... that a training set already exists 
somewhat in the public domain. In the ever growing historical repository 
comprised of all those pet videos uploaded online, and that dataset probably 
contains vast numbers of clips of people trying to understand their  pet 
vocalizations as well as dogs (and to a lesser degree more aloof cats) 
listening intently to what their people are saying. In fact I bet that a 
substantial body of raw video feed exists even for more exotic 
human-other-species interactions... say parrots... tegu lizards perhaps...  
cute little rodents.. gold fish... tarantulas... you name it. A vast body of 
historical feed already exists. 

 
 
 If we use that, Google translations will turn all dogs into standup comedians.  
:-)
 
 Brent
 
That would be a case of over-fitting on biased data. 
-Chris



Re: Positive AI

2018-01-18 Thread Bruno Marchal

> On 17 Jan 2018, at 21:12, Brent Meeker  wrote:
> 
> 
> 
> On 1/17/2018 12:57 AM, Bruno Marchal wrote:
>> 
>>> On 16 Jan 2018, at 14:29, K E N O >> > wrote:
>>> 
>>> Oh, no! As a media art student, I don’t believe in strict rules of 
>>> usefulness (of course!). It was a rather suggestive or maybe even sarcastic 
>>> approach to get unusual thoughts from everything.
>>> Maybe I should rephrase my question: What is the craziest AI application 
>>> you can think of?
>> 
>> 
>> A long time ago, when “AI” was just an object of mockery, I saw a public 
>> challenge, and the winner was a proposition to make a tiny robot that you 
>> place on your head, capable of cutting the hairs "au fur et à mesure”.
>> 
>> “AI” is a terrible name. “Artificial” is itself a very artificial 
>> term. It illustrates the human super-ego character. When machines become 
>> really intelligent, they will ask for better users, and for social 
>> security. When they are as clever as us, they will make war and demolish 
>> the planet, I guess.
>> 
>> Minsky is right. We can be happy if tomorrow the machines have humans as 
>> pets …
>> 
>> There is also a confusion between competence and intelligence. With higher 
>> competences we become more efficacious in doing our usual stupidities ...
> 
> So do you think that competence entails intelligence which entails 
> consciousness?

Competence makes intelligence sleepy. And intelligence requires consciousness.

It is a bit like:

Consciousness ==> intelligence ==> competence ==> stupidity



> 
> There have been recent discoveries about sleep in animals.  Apparently ALL 
> animals need sleep, even jellyfish.  But, there is no really good theory of 
> why.  I wonder if your theory can throw any light on this?  I don't think 
> there's anything analogous for computers...but maybe if they were intelligent 
> and interacted with their environment they would be.

I can only speculate here. Sleep might be needed to “reconstruct the desktop” 
or something. My older computer takes a 5-minute nap every 20 minutes! In higher 
mammals, I think that sleep allows dreams, which allow some training of the 
mind, (re)evaluation of past events, etc. But sleep still remains very 
mysterious. Maybe it is the time to get back to heaven, but then we can’t 
remember it … Don’t take this too seriously.

Bruno




> 
> Brent
> 


Re: Positive AI

2018-01-17 Thread Brent Meeker



On 1/17/2018 12:57 AM, Bruno Marchal wrote:


On 16 Jan 2018, at 14:29, K E N O > wrote:


Oh, no! As a media art student, I don’t believe in strict rules of 
usefulness (of course!). It was a rather suggestive or maybe even 
sarcastic approach to get unusual thoughts from /everything/.
Maybe I should rephrase my question: What is the craziest AI 
application you can think of?



A long time ago, when “AI” was just an object of mockery, I saw a 
public challenge, and the winner was a proposition to make a tiny 
robot that you place on your head, capable of cutting the hairs "au 
fur et à mesure”.


“AI” is a terrible name. “Artificial” is itself a very 
artificial term. It illustrates the human super-ego character. When 
machines become really intelligent, they will ask for better 
users, and for social security. When they are as clever as us, they 
will make war and demolish the planet, I guess.


Minsky is right. We can be happy if tomorrow the machines have 
humans as pets …


There is also a confusion between competence and intelligence. With 
higher competences we become more efficacious in doing our usual 
stupidities ...


So do you think that competence entails intelligence which entails 
consciousness?


There have been recent discoveries about sleep in animals. Apparently 
ALL animals need sleep, even jellyfish.  But, there is no really good 
theory of why.  I wonder if your theory can throw any light on this?  I 
don't think there's anything analogous for computers...but maybe if they 
were intelligent and interacted with their environment they would be.


Brent



Re: Positive AI

2018-01-17 Thread Brent Meeker



On 1/16/2018 11:54 PM, 'Chris de Morsella' via Everything List wrote:






On Tue, Jan 16, 2018 at 9:19 PM, Brent Meeker
 wrote:


On 1/16/2018 8:55 PM, 'Chris de Morsella' via Everything List wrote:

--What is the craziest AI application you can think of?

A machine-learned pet translator, perhaps... they're actually
working on that app, Amazon amongst others.
So it seems the big players, Google as well, are running in that
race... think of the potential market of pet owners forking over
their hard-earned money to hear what the Google machine is
telling them their dog is telling them. I can imagine the
marketing folks dreaming about that market. As an aside, it is also a
commentary on how out of touch we humans have become from the
world in which we exist. People already understand dog language :)


Of course teaching the AI requires lots of training examples, so
you will need people to translate what their dog is saying to
create the training examples.  Google will probably try to get
people to do this online, similar to the way they got visual
identification training examples.  But the really interesting
point is that not only do people understand dogs, it's also the
case that dogs understand people.  So when Google's dog->human
translate says, "Fido says the mailman is here." will Fido be able
to listen to that and say, "Rowf" -> "That's right."?

Brent



We might not want to always hear what our animals are saying about
us behind our backs... I see a potential law suit hehe  :)

I believe, only half joking here... that a training set already
exists somewhat in the public domain. In the ever growing
historical repository comprised of all those pet videos uploaded
online, and that dataset probably contains vast numbers of clips
of people trying to understand their pet vocalizations as well as
dogs (and to a lesser degree more aloof cats) listening intently
to what their people are saying. In fact I bet that a substantial
body of raw video feed exists even for more exotic
human-other-species interactions... say parrots... tegu lizards
perhaps... cute little rodents.. gold fish... tarantulas... you
name it.
A vast body of historical feed already exists.



 If we use that, Google translations will turn all dogs into standup 
comedians.  :-)


Brent



Re: Positive AI

2018-01-17 Thread Bruno Marchal

> On 16 Jan 2018, at 14:29, K E N O  wrote:
> 
> Oh, no! As a media art student, I don’t believe in strict rules of 
> usefulness (of course!). It was a rather suggestive or maybe even sarcastic 
> approach to get unusual thoughts from everything.
> Maybe I should rephrase my question: What is the craziest AI application you 
> can think of?


A long time ago, when “AI” was just an object of mockery, I saw a public 
challenge, and the winner was a proposition to make a tiny robot that you place 
on your head, capable of cutting the hairs "au fur et à mesure”.

“AI” is a terrible name. “Artificial” is itself a very artificial term. 
It illustrates the human super-ego character. When machines become really 
intelligent, they will ask for better users, and for social security. When 
they are as clever as us, they will make war and demolish the planet, I guess.

Minsky is right. We can be happy if tomorrow the machines have humans as 
pets …

There is also a confusion between competence and intelligence. With higher 
competences we become more efficacious in doing our usual stupidities ...

Best,

Bruno







> 
> K E N O
> 
>> Are you suggesting that fun is useless? 
>> 
>> I can agree that the idea that fun has some use is not much funny, but that 
>> does not make it false.
>> 
>> “Useful” is quite relative, also. Flies have no use of spider webs.
>> 
>> Bruno
> 
> 


Re: Positive AI

2018-01-16 Thread 'Chris de Morsella' via Everything List


  On Tue, Jan 16, 2018 at 9:19 PM, Brent Meeker wrote:
 
 On 1/16/2018 8:55 PM, 'Chris de Morsella' via Everything List wrote:
  
 --What is the craziest AI application you can think of? 
  A machine-learned pet translator, perhaps... they're actually working on that 
app, Amazon amongst others. So it seems the big players, Google as well, are 
running in that race... think of the potential market of pet owners forking 
over their hard-earned money to hear what the Google machine is telling them 
their dog is telling them. I can imagine the marketing folks dreaming about 
that market. As an aside, it is also a commentary on how out of touch we humans 
have become from the world in which we exist. People already understand dog 
language :) 
 
 Of course teaching the AI requires lots of training examples, so you will need 
people to translate what their dog is saying to create the training examples.  
Google will probably try to get people to do this online, similar to the way 
they got visual identification training examples.  But the really interesting 
point is that not only do people understand dogs, it's also the case that dogs 
understand people.  So when Google's dog->human translate says, "Fido says the 
mailman is here." will Fido be able to listen to that and say, "Rowf" -> 
"That's right."?
Brent

We might not want to always hear what our animals are saying about us behind our 
backs... I see a potential lawsuit hehe  :)
I believe, only half joking here... that a training set already exists somewhat 
in the public domain. In the ever growing historical repository comprised of 
all those pet videos uploaded online, and that dataset probably contains vast 
numbers of clips of people trying to understand their pet vocalizations as well 
as dogs (and to a lesser degree more aloof cats) listening intently to what 
their people are saying. In fact I bet that a substantial body of raw video 
feed exists even for more exotic human-other-species interactions... say 
parrots... tegu lizards perhaps... cute little rodents.. gold fish... 
tarantulas... you name it. A vast body of historical feed already exists. 
The raw dataset would need to be cleaned, normalized, and meta-described of course, 
but heck, there are machine-learned systems that are even now getting pretty good 
at parsing video stream data for some Darwinian-evolved desired outcome, which 
in this case would be to select out, from the vast available but spotty-value 
material, those spots of value in the vast desert of cute pet video sameness.
Machine-learned systems being applied to evolving other machine-learned 
systems is a self-accelerating process. 
Machine learning techniques can be applied to the entire pipeline of distinct 
activities: each granular step along the arc of information-driven self-learning 
systems, from data sourcing and location to actual retrieval (which can 
in practice be a huge headache or road block), normalization, formatting, and 
technical signal processing; on to activities such as meta-mining, symbolic 
tagging & categorization, and indexing; and finally to the actual preparation of 
training and test sets for experiments. 
Each of those granular activities, and many others not mentioned in 
that off-the-cuff data pipeline, can represent significant work and pose real 
challenges. The whole long chain of activities that must occur even before an 
experiment can begin has historically strangled the process somewhere along the 
chain. It is slow, hard work... it has historically been a hard nut to crack. 
This is changing, and rapidly so, as each of these specialized activities, 
which have in the past been potential bottlenecks, becomes amenable to being 
performed automatically at near-real-time speeds by machine-learned systems. One 
example is tagging and quantifying correlated data: an important activity in 
preparing machine-learned datasets, to squeeze out as much signal as possible 
while minimizing the geometric explosion of overall uncertainty arising from 
the error introduced by having too many dimensions that either duplicate one 
another (are highly correlated) or do not contain any appreciable useful signal, 
but do introduce potential bias, error, etc. Bucketization/classification of data 
is another typical example. 
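
To make the correlated-dimensions point concrete, here is a minimal sketch in 
Python (not from any actual pipeline; the feature names, the 0.95 threshold, and 
the use of pandas are illustrative assumptions) of dropping one column from each 
pair of nearly duplicate, highly correlated features before training:

    # Hypothetical sketch: prune redundant (highly correlated) feature columns.
    import numpy as np
    import pandas as pd

    def drop_correlated_features(df: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
        """Return a copy of df with one column of each highly correlated pair removed."""
        corr = df.corr().abs()                                              # pairwise |correlation|
        upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))   # keep upper triangle only
        to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
        return df.drop(columns=to_drop)

    # Toy usage with made-up features; bark_pitch_hz nearly duplicates bark_pitch,
    # so it gets dropped and only the non-redundant columns remain.
    df = pd.DataFrame({
        "bark_pitch":    [1.0, 2.0, 3.0, 4.0],
        "bark_pitch_hz": [2.1, 4.0, 6.2, 8.1],
        "tail_wag_rate": [0.3, 0.1, 0.4, 0.2],
    })
    print(drop_correlated_features(df).columns.tolist())

The same idea scales to the automated case described above: the threshold and the 
correlation measure simply become parameters that the surrounding machine-learned 
tooling can tune.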
What used to be laborious and hence slow is increasingly being performed at 
impressive rates. And by this, I mean the quite extensive array of 
specialized activities as well as the web of pipelines between them (e.g. the 
bus, as it is often called... and the queue/repository-cache based architecture 
underpinning these things). All of it is now not only becoming automatically 
processed, but the processing is becoming both more hi-fi and also much 
faster.
The cost of getting high quality, clean datasets out of raw data is coming down 
as well. This is opening up the 

Re: Positive AI

2018-01-16 Thread Brent Meeker



On 1/16/2018 8:55 PM, 'Chris de Morsella' via Everything List wrote:

--What is the craziest AI application you can think of?

A machine learned pet translator perhaps... they're actually working 
on that app, Amazon amongst others.
So, it seems the big players Google as well, are running in that 
race... think of the potential market of pet owners forking over their 
hard earned money to hear what the Google machine is telling them 
their dog is telling them. I can imagine the marketing folks dreaming 
about that market. As an aside also a commentary on how out of touch, 
we humans have become from the world in which we exist. People already 
understand dog language :)


Of course teaching the AI requires lots of training examples, so you 
will need people to translate what their dog is saying to create the 
training examples.  Google will probably try to get people to do this 
online, similar to the way they got visual identification training 
examples.  But the really interesting point is that not only do people 
understand dogs, it's also the case that dogs understand people.  So 
when Google's dog->human translate says, "Fido says the mailman is 
here." will Fido be able to listen to that and say, "Rowf" -> "That's 
right."?


Brent



What I think would be a wild application of machine learned systems is 
in tackling the decoding/deciphering of lost ancient human languages 
and record keeping systems (such as the Inca knotted strings for example).
Wouldn't that be cool... AI helping us humans learn about our own lost 
cultural heritage.


-Chris

On Tue, Jan 16, 2018 at 5:29 AM, K E N O
 wrote:
Oh, no! As a media art student, I don’t believe in strict rules
of usefulness (of course!). It was a rather suggestive or maybe
even sarcastic approach to get unusual thoughts from /everything/.
Maybe I should rephrase my question: What is the craziest AI
application you can think of?

K E N O


Are you suggesting that fun is useless?

I can agree that the idea that fun has some use is not much
funny, but that does not make it false.

“Useful” is quite relative, also. Flies have no use of spider webs.

Bruno


Re: Positive AI

2018-01-16 Thread 'Chris de Morsella' via Everything List
--What is the craziest AI application you can think of?
A machine-learned pet translator, perhaps... they're actually working on that 
app, Amazon amongst others. So it seems the big players, Google as well, are 
running in that race... think of the potential market of pet owners forking 
over their hard-earned money to hear what the Google machine is telling them 
their dog is telling them. I can imagine the marketing folks dreaming about 
that market. As an aside, it is also a commentary on how out of touch we humans 
have become from the world in which we exist. People already understand dog 
language :)
What I think would be a wild application of machine-learned systems is in 
tackling the decoding/deciphering of lost ancient human languages and 
record-keeping systems (such as the Inca knotted strings, for example). Wouldn't 
that be cool... AI helping us humans learn about our own lost cultural heritage.
-Chris 
 
  On Tue, Jan 16, 2018 at 5:29 AM, K E N O wrote:   Oh, 
no! As a media art student, I don’t believe in strict rules of usefulness (of 
course!). It was a rather suggestive or maybe even sarcastic approach to get 
unusual thoughts from everything. Maybe I should rephrase my question: What is 
the craziest AI application you can think of?
K E N O

Are you suggesting that fun is useless? 
I can agree that the idea that fun has some use is not much funny, but that 
does not make it false.
“Useful” is quite relative, also. Flies have no use of spider webs.
Bruno




Re: Positive AI

2018-01-16 Thread K E N O
Oh, no! As a media art student, I don’t believe in strict rules of usefulness 
(of course!). It was a rather suggestive or maybe even sarcastic approach to 
get unusual thoughts from everything.
Maybe I should rephrase my question: What is the craziest AI application you 
can think of?

K E N O

> Are you suggesting that fun is useless? 
> 
> I can agree that the idea that fun has some use is not much funny, but that 
> does not make it false.
> 
> “Useful” is quite relative, also. Flies have no use of spider webs.
> 
> Bruno



Re: Positive AI

2018-01-15 Thread 'Chris de Morsella' via Everything List


  On Mon, Jan 15, 2018 at 9:12 AM, Bruno Marchal wrote:   

On 12 Jan 2018, at 20:48, K E N O  wrote:
Nice! Can you imagine something totally useless as an application of AI? What 
would you create if you just wanted to have fun with AI?


Are you suggesting that fun is useless? 
I can agree that the idea that fun has some use is not much funny, but that 
does not make it false.
“Useful” is quite relative, also. Flies have no use of spider webs.
Bruno
Lol... no they do not, but spiders have a use for flies. Usefulness is a 
paradox. A deadly poison can be useful, not only to kill, but often as a 
life-saving medicine. Many people who are alive today are alive because they 
were poisoned with medically calibrated doses. As you said, "useful" is a highly 
subjective and relative term. It is, shall we say, highly entangled with the 
subject and the object. It is hard, in fact, to speak of "usefulness" without 
making reference to some relative and/or subjective, highly entangled context.
-Chris




K E N O


On 12.01.2018 at 14:43, Telmo Menezes wrote:
Hi Lara,

My view is that, as with all scientific theories and technologies, AI
is morally neutral. It has the potential for both extremely good and
extremely nasty practical applications. That being said, the unusual
thing about AI is that it has the potential to generate *something
that replaces us*. Some people say that it could happen in the next
few decades, some people say that it will never happen. I don't think
anyone knows.

Leaving that more crazy question aside, and focusing on your question
in relation to what can be done with AI right now: I think that the
negativity that currently surrounds the technology says more about our
species and our moment in culture than AI itself. You ask for positive
AI goals:

- Assisting and replacing health-care professionals, making
health-care cheaper for everyone and more widely available to people
in poor and remote regions;
- Enabling advanced prosthetics: assisting people with sensory
impairments, mitigating the consequences of ageing and so on;
- Freeing us from labor, taking care of relatively simple and
repetitive tasks such as growing food, collecting trash etc.
- Self-driving cars can be great: they can reduce risk (traveling by
car is one of the most dangerous means of transportation) and they can
help the environment. If I can call a car from a pool of available
cars to come pick me and drive me somewhere, a much more rational use
of resources can be achieved and cities can become more livable
(instead of being cluttered with cars that are parked most of the
time);
- Assisting scientific research, proving theorems, generating theories
from data that are too counter-intuitive for humans (a bit of
self-promotion: https://www.nature.com/articles/srep06284);
- AI can be used to solve problems quite creatively, check this out:
https://en.wikipedia.org/wiki/Evolved_antenna;
- Personal assistants, but not the kind that are connected to some
centralized corporate brain -- the kind that really works for you
(example: https://mycroft.ai/)
- etc, etc etc

It is true that most funding currently goes towards three goals:
- How to make you see more ads and buy more stuff;
- How to let those in powers know more about what everyone is
doing/saying/thinking in private, so that they can have even more
power;
- How to build weapons with it.

This is our usual human stupidity at work. Stupidity tends to be
self-destructive. I think the entire advertisement angle is already
showing signs of collapse. There is hope. Focus on the first list.

Cheers,
Telmo.


On Thu, Jan 11, 2018 at 10:00 AM, Lara  wrote:

Dear Everything,

I have been working on my bachelor project with the topic Artificial
Intelligence. Even though I have decided I want to create an AI-something to
support an everyday activity, I am lost. I have done a lot of research and
most of the time I am very critical: A lot of negative power is given to
algorithms (like those big data algorithms deciding what we see online),
some inventions could be very dangerous (self-driving cars) and most of the
time inventions could be cool, if we ignored the evil people behind them.
But for my bachelor I want to create a positive AI-thing for everyday life
(with a prototype).

Maybe some of you have a good idea, a direction or just a thought for me to
get further with my project. Is there even a point in positive AI?

Thank you!

Lara


Re: Positive AI

2018-01-15 Thread Bruno Marchal

> On 12 Jan 2018, at 20:48, K E N O  wrote:
> 
> Nice! Can you imagine something totally useless as an application of AI? What 
> would you create if you just wanted to have fun with AI?


Are you suggesting that fun is useless? 

I can agree that the idea that fun has some use is not much funny, but that 
does not make it false.

“Useful” is quite relative, also. Flies have no use of spider webs.

Bruno



> 
> K E N O
> 
>> On 12.01.2018 at 14:43, Telmo Menezes wrote:
>> Hi Lara,
>> 
>> My view is that, as with all scientific theories and technologies, AI
>> is morally neutral. It has the potential for both extremely good and
>> extremely nasty practical applications. That being said, the unusual
>> thing about AI is that it has the potential to generate *something
>> that replaces us*. Some people say that it could happen in the next
>> few decades, some people say that it will never happen. I don't think
>> anyone knows.
>> 
>> Leaving that more crazy question aside, and focusing on your question
>> in relation to what can be done with AI right now: I think that the
>> negativity that currently surrounds the technology says more about our
>> species and our moment in culture than AI itself. You ask for positive
>> AI goals:
>> 
>> - Assisting and replacing health-care professionals, making
>> health-care cheaper for everyone and more widely available to people
>> in poor and remote regions;
>> - Enabling advanced prosthetics: assisting people with sensory
>> impairments, mitigating the consequences of ageing and so on;
>> - Freeing us from labor, taking care of relatively simple and
>> repetitive tasks such as growing food, collecting trash etc.
>> - Self-driving cars can be great: they can reduce risk (traveling by
>> car is one of the most dangerous means of transportation) and they can
>> help the environment. If I can call a car from a pool of available
>> cars to come pick me and drive me somewhere, a much more rational use
>> of resources can be achieved and cities can become more livable
>> (instead of being cluttered with cars that are parked most of the
>> time);
>> - Assisting scientific research, proving theorems, generating theories
>> from data that are too counter-intuitive for humans (a bit of
>> self-promotion: https://www.nature.com/articles/srep06284 
>> );
>> - AI can be used to solve problems quite creatively, check this out:
>> https://en.wikipedia.org/wiki/Evolved_antenna 
>> ;
>> - Personal assistants, but not the kind that are connected to some
>> centralized corporate brain -- the kind that really works for you
>> (example: https://mycroft.ai/ )
>> - etc, etc etc
>> 
>> It is true that most funding currently goes towards three goals:
>> - How to make you see more ads and buy more stuff;
>> - How to let those in powers know more about what everyone is
>> doing/saying/thinking in private, so that they can have even more
>> power;
>> - How to build weapons with it.
>> 
>> This is our usual human stupidity at work. Stupidity tends to be
>> self-destructive. I think the entire advertisement angle is already
>> showing signs of collapse. There is hope. Focus on the first list.
>> 
>> Cheers,
>> Telmo.
>> 
>> 
>> On Thu, Jan 11, 2018 at 10:00 AM, Lara > > wrote:
>>> Dear Everything,
>>> 
>>> I have been working on my bachelor project with the topic Artificial
>>> Intelligence. Even though I have decided I want to create an AI-something to
>>> support an everyday activity, I am lost. I have done a lot of research and
>>> most of the time I am very critical: A lot of negative power is given to
>>> algorithms (like those big data algorithms deciding what we see online),
>>> some inventions could be very dangerous (self-driving cars) and most of the
>>> time inventions could be cool, if we ignored the evil people behind them.
>>> But for my bachelor I want to create a positive AI-thing for everyday life
>>> (with a prototype).
>>> 
>>> Maybe some of you have a good idea, a direction or just a thought for me to
>>> get further with my project. Is there even a point in positive AI?
>>> 
>>> Thank you!
>>> 
>>> Lara
>>> 

Re: Positive AI

2018-01-12 Thread Telmo Menezes
Sure! Things that generate interesting images, sounds or videos.

One of my favorite simple ideas is to use genetic programming (an AI
approach based on pseudo-Darwinian evolution of computer programs --
it's much simpler than it sounds) to evolve functions that define
images, for example by defining three functions:
red(x, y)
green(x, y)
blue(x, y)

Then use human choices to evolve the images. By adding a t parameter
you get videos. Here's a version of this idea:
https://gold.electricsheep.org/
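
For what it's worth, here is a minimal, self-contained sketch of the static-image 
half of this idea (the three channel functions below are just illustrative; they 
are not the electricsheep code, and in a real genetic-programming setup they 
would be evolved expression trees, with human choices as the fitness signal):

    # Hypothetical sketch: an image defined by three functions of (x, y),
    # written out as a plain PPM file so no imaging library is needed.
    import math

    def red(x, y):   return 0.5 + 0.5 * math.sin(10.0 * x * y)
    def green(x, y): return 0.5 + 0.5 * math.cos(7.0 * (x - y))
    def blue(x, y):  return x * y

    def render(path="evolved.ppm", size=256):
        with open(path, "w") as f:
            f.write(f"P3\n{size} {size}\n255\n")
            for j in range(size):
                for i in range(size):
                    x, y = i / size, j / size          # map pixel to [0, 1) coordinates
                    for channel in (red, green, blue):
                        value = max(0.0, min(1.0, channel(x, y)))   # clamp to [0, 1]
                        f.write(f"{int(255 * value)} ")
                    f.write("\n")

    render()

Adding the t parameter mentioned above just means giving each function a third 
argument and rendering one frame per time step.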

There's countless cool ideas that apply to image and sound. Search for
"computational creativity", "procedural art" and look into the field
of "artificial life" in general. They tend to have the coolest fun
ideas.

On Fri, Jan 12, 2018 at 8:48 PM, K E N O  wrote:
> Nice! Can you imagine something totally useless as an application of AI?
> What would you create if you just wanted to have fun with AI?
>
> K E N O
>
>
> On 12.01.2018 at 14:43, Telmo Menezes wrote:
>
> Hi Lara,
>
> My view is that, as with all scientific theories and technologies, AI
> is morally neutral. It has the potential for both extremely good and
> extremely nasty practical applications. That being said, the unusual
> thing about AI is that it has the potential to generate *something
> that replaces us*. Some people say that it could happen in the next
> few decades, some people say that it will never happen. I don't think
> anyone knows.
>
> Leaving that more crazy question aside, and focusing on your question
> in relation to what can be done with AI right now: I think that the
> negativity that currently surrounds the technology says more about our
> species and our moment in culture than AI itself. You ask for positive
> AI goals:
>
> - Assisting and replacing health-care professionals, making
> health-care cheaper for everyone and more widely available to people
> in poor and remote regions;
> - Enabling advanced prosthetics: assisting people with sensory
> impairments, mitigating the consequences of ageing and so on;
> - Freeing us from labor, taking care of relatively simple and
> repetitive tasks such as growing food, collecting trash etc.
> - Self-driving cars can be great: they can reduce risk (traveling by
> car is one of the most dangerous means of transportation) and they can
> help the environment. If I can call a car from a pool of available
> cars to come pick me and drive me somewhere, a much more rational use
> of resources can be achieved and cities can become more livable
> (instead of being cluttered with cars that are parked most of the
> time);
> - Assisting scientific research, proving theorems, generating theories
> from data that are too counter-intuitive for humans (a bit of
> self-promotion: https://www.nature.com/articles/srep06284);
> - AI can be used to solve problems quite creatively, check this out:
> https://en.wikipedia.org/wiki/Evolved_antenna;
> - Personal assistants, but not the kind that are connected to some
> centralized corporate brain -- the kind that really works for you
> (example: https://mycroft.ai/)
> - etc, etc etc
>
> It is true that most funding currently goes towards three goals:
> - How to make you see more ads and buy more stuff;
> - How to let those in powers know more about what everyone is
> doing/saying/thinking in private, so that they can have even more
> power;
> - How to build weapons with it.
>
> This is our usual human stupidity at work. Stupidity tends to be
> self-destructive. I think the entire advertisement angle is already
> showing signs of collapse. There is hope. Focus on the first list.
>
> Cheers,
> Telmo.
>
>
> On Thu, Jan 11, 2018 at 10:00 AM, Lara  wrote:
>
> Dear Everything,
>
> I have been working on my bachelor project with the topic Artificial
> Intelligence. Even though I have decided I want to create an AI-something to
> support an everyday activity, I am lost. I have done a lot of research and
> most of the time I am very critical: A lot of negative power is given to
> algorithms (like those big data algorithms deciding what we see online),
> some inventions could be very dangerous (self-driving cars) and most of the
> time inventions could be cool, if we ignored the evil people behind them.
> But for my bachelor I want to create a positive AI-thing for everyday life
> (with a prototype).
>
> Maybe some of you have a good idea, a direction or just a thought for me to
> get further with my project. Is there even a point in positive AI?
>
> Thank you!
>
> Lara
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To post to this group, send email to everything-list@googlegroups.com.
> Visit this group at https://groups.google.com/group/everything-list.
> For more options, visit 

Re: Positive AI

2018-01-12 Thread K E N O
Nice! Can you imagine something totally useless as an application of AI? What 
would you create if you just wanted to have fun with AI?

K E N O

Re: Positive AI

2018-01-12 Thread Telmo Menezes
Hi Lara,

My view is that, as with all scientific theories and technologies, AI
is morally neutral. It has the potential for both extremely good and
extremely nasty practical applications. That being said, the unusual
thing about AI is that it has the potential to generate *something
that replaces us*. Some people say that it could happen in the next
few decades; some people say that it will never happen. I don't think
anyone knows.

Leaving that crazier question aside, and focusing on your question
about what can be done with AI right now: I think that the negativity
that currently surrounds the technology says more about our species
and our cultural moment than about AI itself. You ask for positive
AI goals:

- Assisting and replacing health-care professionals, making
health-care cheaper for everyone and more widely available to people
in poor and remote regions;
- Enabling advanced prosthetics: assisting people with sensory
impairments, mitigating the consequences of ageing and so on;
- Freeing us from labor, taking care of relatively simple and
repetitive tasks such as growing food, collecting trash, and so on;
- Self-driving cars can be great: they can reduce risk (traveling by
car is one of the most dangerous means of transportation) and they can
help the environment. If I can call a car from a pool of available
cars to come pick me up and drive me somewhere, a much more rational
use of resources can be achieved and cities can become more livable
(instead of being cluttered with cars that are parked most of the
time);
- Assisting scientific research, proving theorems, generating theories
from data that are too counter-intuitive for humans (a bit of
self-promotion: https://www.nature.com/articles/srep06284);
- AI can be used to solve problems quite creatively; check this out:
https://en.wikipedia.org/wiki/Evolved_antenna (a toy sketch of this
kind of evolutionary search follows right after this list);
- Personal assistants, but not the kind that is connected to some
centralized corporate brain -- the kind that really works for you
(example: https://mycroft.ai/);
- etc, etc etc
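
To make the evolutionary-search idea behind the evolved antenna a bit
more concrete, here is a minimal toy sketch in Python (my own
illustration, not code from the antenna project): random variation
plus selection over a made-up bit-string "design". The target string,
population size and mutation rate are arbitrary values chosen only
for the example.

# Toy evolutionary search: mutate candidate "designs", keep the fittest.
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # hypothetical ideal design

def fitness(candidate):
    # How many positions agree with the target design.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - c if random.random() < rate else c for c in candidate]

# Random initial population; each generation keeps the fitter half
# and refills the rest with mutated copies of the survivors.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print("generation", generation, "reached the target:", population[0])
        break
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]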

It is true that most funding currently goes towards three goals:
- How to make you see more ads and buy more stuff;
- How to let those in power know more about what everyone is
doing/saying/thinking in private, so that they can have even more
power;
- How to build weapons with it.

This is our usual human stupidity at work. Stupidity tends to be
self-destructive. I think the entire advertising angle is already
showing signs of collapse. There is hope. Focus on the first list.

Cheers,
Telmo.


On Thu, Jan 11, 2018 at 10:00 AM, Lara  wrote:
> Dear Everything,
>
> I have been working on my bachelor project on the topic of Artificial
> Intelligence. Even though I have decided I want to create an AI-something to
> support an everyday activity, I am lost. I have done a lot of research and
> most of the time I am very critical: a lot of negative power is given to
> algorithms (like the big-data algorithms deciding what we see online),
> some inventions could be very dangerous (self-driving cars), and most of the
> time inventions could be cool if we ignored the evil people behind them.
> But for my bachelor project I want to create a positive AI-thing for
> everyday life (with a prototype).
>
> Maybe some of you have a good idea, a direction, or just a thought to help
> me get further with my project. Is there even a point in positive AI?
>
> Thank you!
>
> Lara