On Monday, January 7, 2013 7:24:24 PM UTC-5, telmo_menezes wrote:
>
> Hi Craig,
>
>
> On Tue, Jan 8, 2013 at 12:41 AM, Craig Weinberg <whats...@gmail.com> wrote:
>
>>
>>
>> On Monday, January 7, 2013 6:19:33 AM UTC-5, telmo_menezes wrote:
>>
>>>
>>>
>>>
>>> On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough <rcl...@verizon.net> wrote:
>>>
>>>>  Hi Craig Weinberg 
>>>>  
>>>> Sorry, everybody, I was snookered into believing that they had really 
>>>> accomplished the impossible.
>>>>
>>>
>>> So you think this paper is fiction and the video is fabricated? Do 
>>> people here know something I don't about the authors?
>>>
>>
>> The paper doesn't claim that images from the brain have been decoded, 
>>
>
> Yes it does, right in the abstract:
> "To demonstrate the power of our approach, we also constructed a Bayesian 
> decoder [8] by combining estimated encoding models with a sampled natural 
> movie prior. The decoder provides remarkable reconstructions of the viewed 
> movies."
>

The Bayesian decoder is not literally decoding the BOLD signals and fMRI 
patterns into images, any more than listing the ingredients of a bag of 
chips in alphabetical order turns potatoes into words. The key is the 
'sampled natural movie prior'. That means it is a figurative 
reconstruction. They are selecting one video from hundreds, then looking 
at the common patterns in several people's brains when they watch the same 
video. They are not decoding the patterns into videos. By 'reconstructions' 
they are not saying that they literally recreated any part of the visual 
experience, but rather that they were able to make a composite video from 
the videos that they used by plugging the Bayesian probabilities into the 
data sets. The videos that you see are YouTube videos superimposed, *not in 
any way* a decoded translation of neural correlates.
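
To make the gap concrete, here is a minimal sketch in Python of the 
matching step as I understand it from the paper's description. Every name 
here is hypothetical - this is not the authors' code - and it assumes an 
encoding model that predicts a voxel-response vector for any given clip:

import numpy as np

def reconstruct(measured_bold, prior_clips, encoding_model, top_n=100):
    # prior_clips: video arrays drawn from the 'sampled natural movie
    # prior' (e.g. YouTube clips), each shaped (frames, height, width).
    # encoding_model: hypothetical function mapping a clip to the voxel
    # responses it is predicted to evoke.
    scores = [np.corrcoef(encoding_model(clip), measured_bold)[0, 1]
              for clip in prior_clips]
    # clips whose *predicted* activity best matches the measured activity
    best = np.argsort(scores)[-top_n:]
    # The 'reconstruction' is a pixel-wise average (superimposition) of
    # clips that already existed; no pixel is derived from the brain.
    return np.mean([prior_clips[i] for i in best], axis=0)

Note that the brain data only ranks pre-existing videos; feed the routine 
any other signal with incidental correlations and it runs unchanged.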


> http://www.cell.com/current-biology/abstract/S0960-9822%2811%2900937-7
>
>  
>
>> but the sensational headlines imply that is what they did.
>>
>
> Starting with UC Berkeley itself:
> http://newscenter.berkeley.edu/2011/09/22/brain-movies/
>

Of course. Does that surprise you? University PR is notoriously hyped. 
Exciting the public is the stuff that endowments are made of.
 

>  
>
>> The video isn't supposed to be anything but fabricated.
>>
>
> ALL videos are fabricated in that sense.
>

Sure, but a video from a camera on the end of a wire in someone's esophagus 
is less of a fabrication than a collage of verbal descriptions of 
digestion. See what I'm driving at? The images are images they got off the 
internet, superimposed over each other - not someone's brain activity being 
interpreted by a computer. The only thing being interpreted or decoded is 
cross-referenced statistics. 

Try thinking about it this way. What would the video look like if they 
plugged the Bayesian decoder algorithm into the regions related to the 
memory of flavors? Show someone a picture of strawberries, and let's say 
you get a pattern in the olfactory-gustatory regions of the brain. Show 
someone else a bunch of pictures of tasty things, and lo and behold, 
through your statistical regression, you can match up pictures of 
strawberry candy, strawberry ice cream, etc. with the pictures of 
strawberries, pink milk, etc. You get a video of blurry pink stuff and 
proclaim that you have reconstructed the image of strawberry flavor. It's a 
neat bit of stage magic, but it has nothing at all to do with translating 
flavor into image - no more than searching for strawberries on Google gives 
routers and servers a taste of strawberry.

 
>
>>  It's a muddle of YouTube videos superimposed upon each other according 
>> to a Bayesian probability reduction. 
>>
>
> Yes, and the images you see on your computer screen are just a matrix of 
> molecules artificially made to align in a certain way so that the light 
> being emitted behind them arrives at your eyes in a way that resembles the 
> light emitted by some real world scene that it is meant to represent.
>

Photography is a direct optical analog. The pixels on a computer screen are 
a digitized analog of photography. The images 'reconstructed' here are not 
analogs at all; they are wholly synthetic guesses, reverse-engineered 
purely from probability. What you see are not in fact images, but 
mechanically curated noise that reminds us of images.
 

>  
>
>> Did you think that the video was coming from a brain feed like a TV 
>> broadcast? It is certainly not that at all.
>>
>
> Nice straw man + ad hominem you did there!
>

Sorry, I wasn't trying to do either, although I admit it was condescending. 
I was trying to point out that it seems like you were saying that brain 
activity was decoded into visual pixels. I'm not really clear on what your 
understanding of it is.

 
>
>>  
>>
>>>
>>> The hypothesis is that the brain has some encoding for images. 
>>>
>>
>> Where are the encoded images decoded into what we actually see?
>>
>
> In the computer that runs the Bayesian algorithm.
>

I'm asking where in the brain the images that we actually see are 
'decoded'.
 

>  
>
>>  
>>
>>> These images can come from the optic nerve, they could be stored in 
>>> memory or they could be constructed by sophisticated cognitive processes 
>>> related to creativity, pattern matching and so on. But if you believe that 
>>> the brain's neural network is a computer responsible for our cognitive 
>>> processes, the information must be stored there, physically, somehow.
>>>
>>
>> That is the assumption, but it is not necessarily a good one. The problem 
>> is that information is only understandable in the context of some form of 
>> awareness - an experience of being informed. A machine with no user can 
>> only produce different kinds of noise as there is nothing ultimately to 
>> discern the difference between a signal and a non-signal.
>>
>
> Sure. That's why the algorithm has to be trained with known videos. So it 
> can learn which brain activity correlates with what 3p accessible images we 
> can all agree upon.
>

Images aren't 3p. Images are 1p visual experiences inferred through 3p 
optical presentations. The algorithm can't learn anything about images 
because it will never experience them in any way.
 

>  
>
>>
>>  
>>> It's horribly hard to decode what's going on in the brain.
>>>
>>
>> Yet every newborn baby learns to do it all by themselves, without any 
>> sign of any decoding theater.
>>
>
> Yes. The newborn baby comes with the genetic material that generates the 
> optimal decoder.
>  
>
>>  
>>
>>>
>>> These researchers thought of a clever shortcut. They expose people to a 
>>> lot of images and record some measures of brain activity in the visual 
>>> cortex. Then they use machine learning to match brain states to images. Of 
>>> course it's probabilistic and noisy. But then they got a video that 
>>> actually approximates the real images. 
>>>
>>
>> You might get the same result out of precisely mapping the movements of 
>> the eyes instead.
>>
>
> Maybe. That's not where they took the information from though. They took 
> it from the visual cortex.
>

That's what makes people jump to the conclusion that they are looking at 
something that came from a brain rather than YouTube + video editing + a 
simple formula + data sets from experiments that have no particular 
relation to brains or consciousness.
 

>  
>
>> What they did may have absolutely nothing to do with how the brain 
>> encodes or experiences images, no more than your Google history can 
>> approximate the shape of your face.
>>
>
> Google history can only approximate the shape of my face if there is a 
> correlation between the two. In which case my Google history is, in fact, 
> also a description of the shape of my face.
>

Why would there be a correlation between your Google history and the shape 
of your face?
 

>  
>
>>  
>>
>>> So there must be some way to decode brain activity into images.
>>>
>>> The killer argument against that is that the brain has no sync signals 
>>>> to generate
>>>> the raster lines.
>>>>
>>>
>>> Neither does reality, but we somehow manage to show a representation of 
>>> it on tv, right?
>>>
>>
>> What human beings see on TV simulates one optical environment with 
>> another optical environment. You need to be a human being with a human 
>> visual system to be able to watch TV and mistake it for a representation of 
>> reality. Some household pets might be briefly fooled also, but mostly other 
>> species have no idea why we are staring at that flickering rectangle, or 
>> buzzing plastic sheet, or that large collection of liquid crystal flags. 
>> Representation is psychological, not material. The map is not the territory.
>>
>
> I agree. I never claimed this was an insight into 1p or anything to do 
> with consciousness. Just that you can extract information from human 
> brains, because that information is represented there somehow. But you're 
> only going to get 3p information.
>

The information being modeled here visually is not extracted from the human 
brain. Videos are matched to videos based on incidental correlations of 
brain activity. The same result could be achieved in many different ways 
having nothing to do with the brain at all. You could have people listen to 
one of several songs and draw a picture of how the music makes them feel, 
and then write a program which figures out which song they most likely drew 
based on the statistics of what known subjects drew - voila, you have a 
picture of music.
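
For what it's worth, here is a minimal sketch in Python of that thought 
experiment, with entirely made-up names, assuming the drawings are reduced 
to numeric feature vectors. It is the same correlation-matching that drives 
the video 'reconstruction', pointed at drawings instead of voxels:

import numpy as np

def guess_song(new_drawing, known_drawings, known_songs):
    # known_drawings: hypothetical feature vectors of drawings made by
    # prior subjects; known_songs: the song each subject listened to.
    # Return the song whose subject's drawing correlates best with the
    # new drawing - a statistical guess, not a 'picture of music'.
    best_song, best_r = None, -np.inf
    for drawing, song in zip(known_drawings, known_songs):
        r = np.corrcoef(drawing, new_drawing)[0, 1]
        if r > best_r:
            best_song, best_r = song, r
    return best_song

Nothing in that routine ever touches music or pictures as experiences; it 
only ranks correlations, which is all the fMRI version does too.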

Craig.

 
>
>>
>>
>>
>>  
>>>
>>>>   
>>>>  
>>>> [Roger Clough], [rcl...@verizon.net]
>>>> 1/6/2013 
>>>> "Forever is a long time, especially near the end." - Woody Allen
>>>>
>>>> ----- Original Message ----- 
>>>> *From:* Craig Weinberg 
>>>> *To:* everything-list 
>>>> *Sent:* 2013-01-05, 11:37:17
>>>> *Subject:* Re: Subjective states can be somehow extracted from brains 
>>>> via a computer
>>>>
>>>>  
>>>>
>>>> On Saturday, January 5, 2013 10:43:32 AM UTC-5, rclough wrote: 
>>>>>
>>>>>
>>>>> Subjective states can somehow be extracted from brains via a computer. 
>>>>>
>>>>
>>>> No, they can't.
>>>>  
>>>>
>>>>>
>>>>> The ingenious folks who were miraculously able to extract an image from 
>>>>> the brain 
>>>>> that we saw recently 
>>>>>
>>>>  
>>>>  
>>>>> http://gizmodo.com/5843117/scientists-reconstruct-video-clips-from-brain-activity
>>>>>  
>>>>>
>>>>> somehow did it entirely through computation. How was that possible? 
>>>>>
>>>>
>>>> By passing off a weak Bayesian regression analysis as a terrific 
>>>> consciousness breakthrough. Look again at the image comparisons. There is 
>>>> nothing being reconstructed, there is only the visual noise of many 
>>>> superimposed shapes which least dis-resembles the test image. It's not 
>>>> even 
>>>> stage magic, it's just a search engine.
>>>>  
>>>>
>>>>>
>>>>> There are at least two imaginable theories, neither of which I can 
>>>>> explain step by step: 
>>>>>
>>>>
>>>>
>>>> What they did was take lots of images and correlate patterns in the V1 
>>>> region of the brain with the corresponding V1 patterns in others who had 
>>>> viewed the known images. It's statistical guesswork and it is complete 
>>>> crap.
>>>>
>>>> "The computer analyzed 18 million seconds of random YouTube video, 
>>>> building a database of potential brain activity for each clip. From all 
>>>> these videos, the software picked the one hundred clips that caused a 
>>>> brain 
>>>> activity more similar to the ones the subject watched, combining them into 
>>>> one final movie"
>>>>
>>>> Crick and Koch found in their 1995 study that
>>>>
>>>> "The conscious visual representation is likely to be distributed over 
>>>>> more than one area of the cerebral cortex and possibly over certain 
>>>>> subcortical structures as well. We have argued (Crick and Koch, 1995a) 
>>>>> that 
>>>>> in primates, contrary to most received opinion, it is not located in 
>>>>> cortical area V1 (also called the striate cortex or area 17). Some of the 
>>>>> experimental evidence in support of this hypothesis is outlined below. 
>>>>> This 
>>>>> is not to say that what goes on in V1 is not important, and indeed may be 
>>>>> crucial, for most forms of vivid visual awareness. What we suggest is 
>>>>> that 
>>>>> the neural activity there is not directly correlated with what is seen."
>>>>>
>>>>
>>>> http://www.klab.caltech.edu/~koch/crick-koch-cc-97.html
>>>>
>>>> What they found in their study, through experiments which isolated the 
>>>> effects in the brain related to looking (i.e. directing your eyeballs to 
>>>> move around) from those related to seeing (the appearance of images, 
>>>> colors, etc.), is that the activity in V1 is exactly the same whether the 
>>>> person sees anything or not. 
>>>>
>>>> What the visual reconstruction is based on is the activity in the 
>>>> occipitotemporal visual cortex. (downstream of V1 
>>>> http://www.sciencedirect.com/science/article/pii/S0079612305490196)
>>>>
>>>> "Here we present a new motion-energy [10,
>>>>> 11] encoding model that largely overcomes this limitation.
>>>>> The model describes fast visual information and slow hemodynamics
>>>>> by separate components. We recorded BOLD
>>>>> signals in occipitotemporal visual cortex of human subjects
>>>>> who watched natural movies and fit the model separately
>>>>> to individual voxels." 
>>>>> https://sites.google.com/site/gallantlabucb/publications/nishimoto-et-al-2011
>>>>>
>>>>
>>>> So what they did is analogous to tracing the rectangular pattern that 
>>>> your eyes make when following the contrast boundary of a door-like 
>>>> image, and then comparing that pattern to the patterns made by other 
>>>> people's eyes tracing known images of doors. It's really no closer to any 
>>>> direct access to your interior state than a data-mining advertiser gets by 
>>>> chasing your web history to determine that you might buy prostate 
>>>> vitamins if you are watching a Rolling Stones video on YouTube.
>>>>
>>>>> 1) Computers are themselves conscious (which can neither be proven nor 
>>>>> disproven) 
>>>>>     and are therefore capable of perception. 
>>>>>
>>>>
>>>> Nothing can be considered conscious unless it has the capacity to act 
>>>> in its own interest. Computers, by virtue of their perpetual servitude to 
>>>> human will, are not conscious.
>>>>  
>>>>
>>>>>
>>>>>     or 
>>>>>
>>>>> 2) The flesh of the brain is simultaneously objective and subjective. 
>>>>>     Thus an ordinary (by which I mean not conscious) computer can work 
>>>>> on it 
>>>>>     objectively yet produce a subjective image by some manipulation of 
>>>>> the flesh 
>>>>>     of the brain. One perhaps might call this "milking" of the brain. 
>>>>>   
>>>>>
>>>>
>>>> The flesh of the brain is indeed simultaneously objective and 
>>>> subjective (as are all living cells and perhaps all molecules and atoms), 
>>>> but the noise comparisons being done in this experiment aren't milking 
>>>> anything but the hype machine of pop-sci neuro-fluff. It is cool that they 
>>>> are able to refine the matching of patterns in the brain to patterns which 
>>>> can be identified computationally, but without the expectation of a visual 
>>>> image corresponding to these patterns in the first place, it is 
>>>> meaningless 
>>>> as far as understanding consciousness. What it does do though is provide a 
>>>> new hunger for invasive neurological technologies to analyze the behavior 
>>>> of your brain and draw statistical conclusions from...something which 
>>>> promises nothing less than utopian/dystopian level developments. 
>>>>
>>>> Craig
>>>>  
>>>>
>>>>>
>>>>> [Roger Clough], [rcl...@verizon.net] 
>>>>> 1/5/2013   
>>>>> "Forever is a long time, especially near the end." - Woody Allen 
>>>>>