Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread immortal . discoveries
Maybe I am a bot. Beep. -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M6d9db2f4c62c4fd55464177f Delivery options: https://agi.topicbox.com/groups/agi/subscription

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread Nanograte Knowledge Technologies
Usually, the acid test for this crow's nest of conjecture is a good whack in the head. If it shouts "Ow!!!", it must surely exist. The end. See why I suspect you of being a bot? From: immortal.discover...@gmail.com Sent: Saturday, 09 November 2019 22:04 To: AGI

Re: [agi] Re: Missing Data

2019-11-09 Thread John Rose
Perceptually lossless compression will often still involve loss for some humans. Some humans can see in the dark and some have hypersensitive hearing, so what is perceptually lossless to most will be perceptually lossy to them. But it depends, do they want perceptual lossless with unperceptual lossy or perceptual lossy with unperce

Re: [agi] Re: Missing Data

2019-11-09 Thread WriterOfMinds
Poor analogy. Suppose you receive a requirement from a customer for a "lossy compressor," and you design them a compressor that delivers lossless results for some data sets. No one will mind; you have met the requirement. Suppose you receive a requirement from a customer for a "lossless compr
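WriterOfMinds' distinction can be made concrete: a "lossless" requirement is a guarantee of exact round-trip recovery for every input, whereas a lossy codec that happens to reproduce some inputs exactly still meets a lossy spec. A minimal Python sketch of the round-trip check, using the standard `zlib` codec (the function name and sample data are illustrative, not from the thread):

```python
import zlib

def is_lossless_roundtrip(data: bytes) -> bool:
    """Check that compress/decompress reproduces the input exactly."""
    return zlib.decompress(zlib.compress(data)) == data

# A "lossless compressor" requirement demands this holds for EVERY input.
# Passing on a test set is necessary but not sufficient; only the
# algorithm's design can guarantee it.
samples = [b"", b"hello", bytes(range(256)) * 10]
assert all(is_lossless_roundtrip(s) for s in samples)
```

Note the asymmetry: a lossy spec is satisfied even if some round-trips happen to be exact, but a lossless spec fails the moment a single byte differs.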

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread immortal . discoveries
Good one. It's all in my head and it's only me. Best way to cover up what I said, isn't it? That it's just me thinking it all. But I can think using what I learnt, and what I learnt is what I see. It is that everything is matter, and so am I. I see my desktop right now, and part of my body, it's a

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread WriterOfMinds
What? You finally figured out "I think, therefore I am," sort of? It's about time. I'm perfectly happy to consider myself to be a ghost, or observer, or whatever you want to call it. I can't objectively measure/detect/verify the existence of *any other* consciousness.  I agree with Matt that fa

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread immortal . discoveries
*I have something shocking to tell you*. Strap your seat belt in. If the whole universe is just a bunch of particles and everything is a machine made of machines and nothing is alive or conscious and can all be moved, melted, squished, or rotated (as we do to hamburgers when they enter our mou

Re: [agi] arXiv endorsement request from Basile Starynkevitch for RefPerSys (a symbolic AGI project - design draft)

2019-11-09 Thread Matt Mahoney
Again skipping the requirements straight to design. Exactly what problem are you trying to solve? On Sat, Nov 9, 2019, 10:07 AM Mike Archbold wrote: > The abstract is a bit deflating. Why not take out "hobby"? If you're > this serious I wouldn't call it a hobby maybe just say "early > stage

Re: [agi] Re: Missing Data

2019-11-09 Thread John Rose
On Thursday, November 07, 2019, at 1:30 PM, WriterOfMinds wrote: >> Re: John Rose: "It might be effectively lossless; it’s not guaranteed to be >> lossy." > True. But I think the usual procedure is that unless the algorithm guarantees > losslessness, you treat the compressed output as lossy.  Loss

Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread John Rose
Perhaps we need definitions of stupidity. With all artificial intelligence there is artificial stupidity? Take the diff and correlate to bliss (ignorance). Blue pill me baby. Consumes fewer watts. More efficient? But survival is negentropy. So knowledge is potential energy. Causal entropic force?

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread John Rose
On Thursday, November 07, 2019, at 11:34 PM, immortal.discoveries wrote: > "consciousness" isn't a real thing and can't be tested in a lab... hm... I don't know. It's kind of like doing generalized principal component analysis on white noise. Something has to do it. Something has to do the c

Re: [agi] arXiv endorsement request from Basile Starynkevitch for RefPerSys (a symbolic AGI project - design draft)

2019-11-09 Thread Mike Archbold
The abstract is a bit deflating. Why not take out "hobby"? If you're this serious, I wouldn't call it a hobby; maybe just say "early stages... embryonic... first milestone" etc. Mike A On 11/9/19, Basile Starynkevitch wrote: > Hello all, > > I would like to submit the draft > http://staryn

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread John Rose
That worm coming out of the cricket was cringeworthy. Cymothoa exigua is another. It’s not the worm’s fault though; it’s just living its joyful and pleasurable life to the fullest. And the cricket is being open and submissive. I think there are nonphysical parasites that affect human beings...

Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread Bill Hibbard via AGI
> Philosophy is arguing about the meanings of words. For me, the great lesson of philosophy is that any language that is general enough to express all the ideas we need to express is able to express questions that do not have answers. For example, "Is there a god?" This may be related to the fact

Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread TimTyler
On 2019-11-08 15:58:PM, Matt Mahoney wrote: You can choose to model I/O peripherals as either part of the agent or part of the environment. Likewise for an input delay line. In one case it lowers intelligence and in the other case it doesn't. Thinking about it in computer science terms blurs t

Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread TimTyler
On 2019-11-08 17:53:PM, Matt Mahoney wrote: > we can approximate reward as dollars per hour over a set of > real environments of practical value. In that case, it does > matter how well you can see, hear, walk, and lift heavy objects. > Whether you think that's fair or not, it matters for AGI too

Re: [agi] Deviations from generality

2019-11-09 Thread TimTyler
On 2019-11-08 20:34:PM, rounce...@hotmail.com wrote: The thing about the adversary controlling the environment around the agent: his brain is working with the same physics as your feet hitting the floor, but it's not simulatable in a physics system, because it's not mechanical to start with, bu

[agi] arXiv endorsement request from Basile Starynkevitch for RefPerSys (a symbolic AGI project - design draft)

2019-11-09 Thread Basile Starynkevitch
Hello all, I would like to submit the draft http://starynkevitch.net/Basile/refpersys-design.pdf to arxiv. Basile Starynkevitch requests your endorsement to submit an article to the cs.AI section of arXiv. To tell us that you would (or would not) like to endorse this person, please visit th

Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread Nanograte Knowledge Technologies
I use rational in the sense of being reasonable. To me, the phrase: "It stands to reason." = "It seems rational." The difference between my version of 'rational' and your version seems rather odd to me too. Being rational is not being sentient. An animal, when acting outside the scope of its in