I didn't mention variation in brightness though. Say you have an image of a cat 
upside down, and it's darker than the one you saw last time! The same trick 
handles this: we start with small features and check both where they occur and 
whether the difference in their brightness is small. For a single pixel in 
layer 1, it is just a one-pixel comparison.
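A minimal sketch of that one-pixel comparison, assuming intensities in 0-255 and a made-up tolerance; the function names and the tolerance value are illustrative, not from the original:

```python
# Hypothetical sketch: matching a tiny feature (down to a single pixel,
# as in a "layer 1" comparison) while tolerating a brightness shift.

def pixels_match(a, b, tolerance=40):
    """True if two pixel intensities (0-255) differ by at most `tolerance`."""
    return abs(a - b) <= tolerance

def patch_matches(seen, remembered, tolerance=40):
    """A small feature matches if every pixel is only a little off."""
    return all(pixels_match(s, r, tolerance) for s, r in zip(seen, remembered))

# The same 4-pixel patch, but the new image is ~30 units darker overall:
remembered = [200, 180, 190, 210]
darker     = [170, 150, 160, 180]
print(patch_matches(darker, remembered))  # True: each pixel is close enough
```

The point is that the darker cat still matches, because each small feature is compared with slack rather than exactly.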

So, in conclusion: continuous input needs to be quantized. We can tolerate 
slight variations in the arrangement of parts, like horizontal flip, rotation, 
scale, stretch, and brightness. All these many new inputs are really just the 
same thing, and that's how we recognize them. If text prediction were discrete, 
there would be no problem, but it isn't. Rearranged words are as continuous as 
a change in brightness, because relatively there is little difference: each 
feature of the sentence or image is only a little off from the features it 
should sit beside or apart from, and only a little off in brightness. In life 
there are these awkward inputs that are brighter, or have their words 
rearranged just a bit, but we can still recognize them.
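One way to picture "continuous needs to be quantized" is binning: map a continuous value into coarse buckets so that slightly different inputs land in the same bucket. This is only an illustrative sketch; the bin size is an assumption, not something specified in the original:

```python
# Hypothetical sketch: quantizing a continuous intensity (0-255) into
# coarse bins, so small variations map to the same discrete symbol.

def quantize(value, bin_size=32):
    """Return the bin index for a continuous value."""
    return int(value // bin_size)

# Two slightly different brightnesses fall into the same bin:
print(quantize(100), quantize(110))  # 3 3
```

After quantization, two inputs that differ only a little become literally identical, which is what lets a discrete matcher recognize them as the same thing.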
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T181e21f4aba061f3-M554be66a9bee0badc1322099