Ed Helfin wrote:

> "It's been some time since I looked at this, but I believe my conclusion
> was that it wasn't all that reliable, i.e. low % accuracy for correct POS
> identification, etc.  I don't know if this gets you where you want to go,
> but it might be worth looking at."

I've looked at a number of different speech and text parsers for my project,
but haven't decided on any one solution yet.  I think in a couple of years
this technology will have advanced to the point of being 'plug and play', so
to speak, where you can include it as a standard library within, say, C++.

thanks for the suggestion :)


> BTW, it seems a better, more forward-looking approach to your architecture
> might be to implement audio parsing (AP - or speech recognition, SR?),
> natural language parsing (NLP) and cognitive processing (CP) or cognition
> as a coherent whole, not the other way around with separate and distinct
> audio parsing (AP), natural language parsing (NLP), and cognitive
> processing (CP) modules...as you suggest with your comments about an OO
> approach.

I've thought about this, and the conclusion I've come to is that, depending
on how you approach AI, each architecture has its pros and cons.

This is why I feel that a functional, modular approach to sensory processing
is the easiest, though certainly not the only correct, way of doing it --

If you show 10 people a simple object like a soda can or a pencil, and then
ask them to draw what they see without looking at the object, all ten
results are identifiably the same object.  This suggests to me that the
visual system itself is a highly reliable, predictable system.  Given the
same input, most individuals' visual systems will (assuming no colorblindness
or other variation) pass the same output to the conscious levels of the mind.
Differences in perception certainly exist, but the regularity of perception
among people is quite remarkable.


> In addition to the tremendous benefits of architecting something closer
> to real AGI, i.e. an obvious increase in the 'Goertzelian Real-AGI' level
> ;-), you would have the benefits of computational optimization,
> specifically, reduced # of ops to cognition, reduced object I/O, reduced
> latency, reduced processing redundancy, etc. assuming, of course, your
> implementation of the cognitive processing (CP) doesn't incur a tremendous
> overhead from the synthesis with the other two modules.

This is a quite perceptive summary of the benefits of the approach you
suggest :)

I take a quite non-mainstream approach to AI, and more generally to computer
science as a whole.  For one, I am not at all interested in the CPU-centric
paradigm that permeates the computer industry.

Dedicated-purpose hardware provides task-specific performance orders of
magnitude higher than that of a general-purpose CPU.  And task-specific
hardware need not be inordinately expensive - look at graphics and sound
boards as examples of this.

There is no reason you couldn't take every deterministic, polynomial-time
(P) algorithm in the standard C++ libraries and implement it in hardware.
Most programs would then be written largely in assembly language, with
constructions like

    binarysearch [sorted_array x, search_target y]

replacing sequences of add a, mov y, etc.

Not only do you get the efficiency boost of assembly language, but also
the speed boost of dedicated hardware!  I'm not suggesting eliminating
CPUs, just saying they should act as the conductor - not the conductor plus
the orchestra members plus the instruments plus the stage...

Also, software can be written in hardware.  Photoshop costs $500; an
entry-level computer from Dell that will run PS quite well costs $400.  This
is kinda nutty.  Put the fucker on a chip with some flash RAM to allow
patching, halve the price (who the hell pirates ICs?), and get at least an
order of magnitude increase in program speed *compared to current top-of-the-
line Intel/AMD processors running the software version of Photoshop*.  And
this speed would be more or less constant whether you put the Photoshop chip
in a $400 PC or a $4000 PC.  (Actually, the faster PCs could help out with
math-heavy stuff such as certain filters.)

ok that was all rather off topic :)

Anyway, back to the topic at hand - I personally am not so much interested
in either imitating the brain's architecture or designing a mind that is
highly efficient and 'smart' from the get-go.  I'm trying to solve the
problem of general cognition, and hence I don't care if an AI based on my
methods starts out with the smarts of a mouse :).  As long as the general
conceptual basis is sound, and scalable to human-level cognition or higher,
I would be a very, very happy person.

Ed, thanks for your insightful and thought-provoking comments :)  They have
my brain going off in all sorts of directions as a result of writing this
response, and that is definitely a good thing.

J Standley
