RE: OT?: AI, learning networks and pattern recognition (was: Apples actual response to the Flash issue)

2010-05-02 Thread Randall Lee Reetz
I can see how the word "revolution" in the context of this list has acquired so 
anemic and castrated a meaning.  I am sorry.  Next time I will use a word that 
means "all the way around", or "when a king is replaced by a democracy".

-Original Message-
From: Ian Wood 
Sent: Sunday, May 02, 2010 9:27 PM
To: How to use Revolution 
Subject: OT?: AI, learning networks and pattern recognition (was: Apples 
actual response to the Flash issue)

Now we're getting somewhere that actually has some vague relevance to  
the list.


On 2 May 2010, at 22:39, Randall Reetz wrote:

> I had assumed your questions were rhetorical.

If I ask the same questions multiple times you can be sure that  
they're not rhetorical.

> When I say that software hasn't changed I mean to say that it hasn't  
> jumped qualitative categories.  We are still living in a world where  
> computing exists as pre-written and compiled software that is  
> blindly executed by machines and stacked foundational code that has  
> no idea what it is processing, can only process linearly, all  
> semantics have been stripped, it doesn't learn from experience or  
> react to context unless this too has been pre-codified and frozen in  
> binary or byte code, etc., etc.  Hardware has been souped up, so  
> our little rote tricks can be made more elaborate within the  
> substantial confines mentioned.  These same in-paradigm restrictions  
> apply to both the software users slog through and the software we  
> use to write software.
>
> As a result, these very plastic machines with mercurial potential  
> are reduced to simple players that react to user interrupts.  They  
> are sequencing systems, not unlike the lead type setting racks of  
> Gutenberg-era printing presses.  Sure, we have taught them some  
> interesting-seeming tricks – if you can represent something as  
> digital media, be it sound, video, multi-dimensional graph space,  
> markup – our sequencer doesn't know enough to care.

So for you, for something to be 'revolutionary' it has to involve a  
full paradigm shift? That's a more extreme definition than most people  
use.

> Current processors are capable of 6.5 million instructions per  
> second but are used less than a billionth of available cycles by the  
> standard users running standard software.

From a pedantic, technical point of view, these days if the processor  
is being used that little then it will ramp down the clock speed,  
which has some environmental and practical benefits in itself. ;-)

> As regards photo editing software, anyone aware of the history of  
> image processing will recognize that most of the stuff seen in  
> photoshop and other programs was proposed and executed on systems  
> long before some guys in France democratized these algorithms for  
> consumer use and had their code acquired by Adobe.  It used to be  
> called array arithmetic and applied smoothly to images divided up  
> into a grid of pixels.  None of these systems "see" an image for its  
> content except as an array of numbers that can be crunched  
> sequentially like a spreadsheet.
>
> It was only when object recognition concepts were applied to photos  
> that any kind of compositional grammar could be extracted from an  
> image and compared as parts to other images similarly decomposed.   
> This is a form of semantic processing and has its parallels in other  
> media like text parsers and sound analysis software.
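The "array arithmetic" described in the quoted passage is easy to make
concrete: an image is just a 2-D grid of numbers, and a filter is nothing
more than arithmetic over each cell's neighbours. Here is a minimal sketch
in Python/NumPy (illustrative only, on a synthetic image, and not code from
Photoshop or any other product mentioned here) of a 3x3 box blur:

    import numpy as np

    def box_blur(gray: np.ndarray) -> np.ndarray:
        """Average each pixel with its 8 neighbours (a 3x3 box blur)."""
        padded = np.pad(gray, 1, mode="edge")      # repeat edge pixels
        out = np.zeros_like(gray, dtype=float)
        for dy in (-1, 0, 1):                      # sum the 3x3 neighbourhood
            for dx in (-1, 0, 1):
                out += padded[1 + dy : 1 + dy + gray.shape[0],
                              1 + dx : 1 + dx + gray.shape[1]]
        return out / 9.0                           # mean of the 9 samples

    # Example: an 8x8 synthetic "image" with a bright square in the middle.
    img = np.zeros((8, 8))
    img[3:5, 3:5] = 255
    print(box_blur(img).round(1))

Nothing in that loop knows it is looking at a photograph; it is exactly the
spreadsheet-style number crunching being described.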

You haven't looked up what content-aware fill *is*, have you? It's  
based on the same basic concepts of pattern-matching/feature detection  
that facial recognition software is based on but with a different  
emphasis.

To paraphrase, it's not facial recognition that you think is the only  
revolutionary feature in photography in twenty years, it's  
pattern-matching/detection/eigenvectors. A lot of time and frustration would  
have been saved if you'd said that in the first place.
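For anyone unfamiliar with the eigenvector approach being referred to, here
is a rough sketch of the idea in Python/NumPy (hand-rolled, on synthetic
data, and not Adobe's or anyone else's actual implementation): flatten known
patches to vectors, take the top principal components of their covariance as
"eigen-patches", and match a new patch by nearest neighbour in that reduced
feature space.

    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend these are 50 known 16x16 grayscale patches (e.g. face crops).
    patches = rng.random((50, 16 * 16))

    mean = patches.mean(axis=0)
    centered = patches - mean

    # Principal components via SVD; rows of vt are the eigen-patches.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:10]                      # keep the top 10 eigenvectors

    def embed(patch_vec: np.ndarray) -> np.ndarray:
        """Project a flattened patch onto the eigen-patch space."""
        return components @ (patch_vec - mean)

    train_features = centered @ components.T  # features for the known patches

    def best_match(patch_vec: np.ndarray) -> int:
        """Index of the known patch closest to the query in feature space."""
        query = embed(patch_vec)
        return int(np.argmin(np.linalg.norm(train_features - query, axis=1)))

    # A query that is a noisy copy of patch 7 should match patch 7.
    query_patch = patches[7] + 0.05 * rng.standard_normal(16 * 16)
    print(best_match(query_patch))

Facial recognition, content-aware fill and panorama stitching all build on
variations of this kind of feature matching, with different emphases.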

> Semantics opens the door to the building of systems that  
> "understand" the content they process.  That is the promised second  
> revolution in computation that really hasn't seen any practical  
> light of day as of yet.

You're jumping too many steps here - object recognition concepts are  
in *widespread* use in consumer software and devices, whether it's the  
aforementioned 'focus-on-a-face' digital cameras, healing brushes in  
many different pieces of software, feature recognition in panoramic  
stitching software or even live stitching in some of the new Sony  
cameras.

Semantic processing of content doesn't magically enable a computer to  
initiate action.

> Data mining really isn't semantically mindful, simply uses  
> statistical reduction mechanisms to guess at the existence of the  
> location of pattern (a good first step, but missing the grammatical  
> hierarchy necessary to work towards a self-optimized and  
> domain-independent ability to detect and represent salience in the  
> stacked grammar that makes up any complex system).

Combining pattern-matching with adaptive systems, whether they be  
neural networks or something else, is another matter - bu

RE: OT?: AI, learning networks and pattern recognition (was: Apples actual response to the Flash issue)

2010-05-02 Thread Randall Lee Reetz
Why don't you ask the guys at Adobe if their content is really aware?


Re: OT?: AI, learning networks and pattern recognition (was: Apples actual response to the Flash issue)

2010-05-02 Thread Matthias Rebbe
Dear all,

I think it has all been said. Please stop this annoying discussion.

This list is called "use-revolution", so maybe we can come back to that. 

Thank you!

Matthias
On 03.05.2010 at 07:47, Randall Lee Reetz wrote:

> Why don't you ask the guys at Adobe if their content is really aware?