Here's an example of the average stupidity in AI / academia: How many functions (vector-scalar) must be implemented to realize a visual system similar to that of a primate? <https://www.researchgate.net/post/How_many_functions_vector-scalar_must_be_implemented_to_realize_a_visual_system_similar_to_that_of_a_primate>

According to some authors, including T. Poggio, the ventral "what" pathway, which starts at area V1 and runs down through the ventral areas to IT (S5), contains about 800 million neurons. If we assumed that each neuron computes one vector-scalar function (taking a vector as input and producing a single scalar as output), then it would seem to require 800 million functions. However, according to various authors, including Kurzweil, the cortex is organized into columns (or mini-columns) of roughly a hundred neurons each, with each column dedicated to recognizing a single structural feature, a "single shape", to use Kurzweil's language. In that case, 8 million functions would seem to suffice. (The calculation of 8 million vector-scalar functions may be within the reach of a single GPU...)
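The arithmetic above, plus the simplest possible reading of a "vector-scalar function", can be sketched in a few lines. This is only an illustration: the neuron below is a generic weighted sum with a nonlinearity and random placeholder weights, not a model anyone in the thread actually proposed.

```python
import numpy as np

# Back-of-the-envelope count from the post (figures as quoted, not verified):
# ~800 million neurons in the ventral "what" pathway, grouped into
# columns of ~100 neurons each.
NEURONS = 800_000_000
NEURONS_PER_COLUMN = 100
columns = NEURONS // NEURONS_PER_COLUMN
print(columns)  # 8000000 -> 8 million column-level functions

# One "vector-scalar function" in the simplest sense: take a vector,
# return a single scalar (weighted sum plus a nonlinearity).
def neuron(x, w, b):
    return np.tanh(np.dot(x, w) + b)

rng = np.random.default_rng(0)
x = rng.normal(size=128)  # input vector
w = rng.normal(size=128)  # placeholder weights
print(neuron(x, w, 0.0))  # a single scalar output
```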
Peter Husar said:

Dear Devis, you can implement 8 million functions, e.g. on a GPU. You can implement 80 million functions on more GPUs or FPGAs. Or you can implement 800 million functions on the European supercomputer. What I fear is not the hardware implementation but the method you would write into the function blocks. Do you have those functions? In my experience, such functions are non-linear in time and space, highly dynamic, and extremely complex. So let us suppose you have up to 800 million functions. What will you fill them with? Please let me know about your solution. Best, Peter

----

More like: if you knew the structural breakdown of the incoming input and had already processed it for the various layers emulating the human visual system, you'd already have the post-processing structure that the "functions" require. You'd only need a matching algorithm, just like the cortical columns that take in the various areas of the visual field and the vectorized elements coming from the higher-stage breakdowns. You end up with 800 million potential elements to search for a match against; with the right data representation, even the most basic binary search can find your element down any tree branch, and you'd end up with only a few hundred pattern-matching elements firing at any time, i.e. constantly validating the existing visual field for elements of change and identifying them. You'd never have to fire all 800 million unless you somehow shut down and had to reboot the whole image, and even then you'd really only have to fire for the number of elements in the image.

This is why I hate CS and AI people: they brute-force everything when you can work from a post-processing standpoint, at which point you only have rudimentary functions that can easily be parallelized on a GPU/APU, even on commodity hardware.
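The binary-search point can be made concrete under one big assumption: that each stored element can be reduced to a sortable feature code. The integer codes below are hypothetical stand-ins, not a real visual representation; the point is only the search cost, since log2(800,000,000) is about 30 comparisons per lookup, so a match never touches all 800 million elements.

```python
import math
from bisect import bisect_left

# Search depth for 800 million sorted elements: ~30 comparisons.
print(math.ceil(math.log2(800_000_000)))  # 30

def match(codes, query):
    """Binary search for a query code in a sorted list of feature codes.

    Returns the index of the matched element, or None if nothing matches.
    """
    i = bisect_left(codes, query)
    if i < len(codes) and codes[i] == query:
        return i
    return None

# Toy stand-in for the element store (hypothetical feature codes).
stored = sorted([3, 17, 42, 99, 256])
print(match(stored, 42))  # 2 (index of the matched code)
print(match(stored, 50))  # None (no stored element matches)
```

Of course, the hard part Husar points at is untouched here: building a representation under which real visual features *are* sortable and comparable like this.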
The only limitation is that you really have to understand the mesh of representation and functional form that lets you do *knowledge*-based processing without algorithmic complexity. Although, I will admit, it is just trading representation processing for holding the complexity; but data storage is definitely cheaper than GPU power, and a lot less energy-hungry.

-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
