Hi,

Jeff says that HTM "is a memory system, not a computer". I have a
problem with this purported division of computational systems (whatever
the umbrella term) into these two categories.

Computer science knows many examples that fall somewhere between the
two, leading to the suspicion that there is a continuum -- and where
there is a continuum, maybe the distinction turns out to be an
implementation detail, or entirely superficial? Looking into this may
not just be an argument about semantics; it may help find areas of
application for today's early-stage HTMs, an important task. But more
on this later.

Let's first look at examples such as:
- To trade precision for performance, a 'computer' might refrain from
computing the results of "sin x" for various values of x and instead
look them up for the value nearest to each x in a table of 1000
pre-computed values. Memory system or computer?
- For much added precision at little performance cost, the stored values
are looked up for the nearest higher and lower neighbors of x in the
table and then interpolated between the two. Memory system or computer?
- With memoization, computer programs compute the value of pure
functions -- especially the non-arithmetic type -- once for each set of
inputs, then store the outputs in a table for future lookup. Memory
system or computer?
- In "logic programming" (think Prolog) programs 'compute' results
through a series of lookups of stored facts and rules which are then
processed using some Boolean logic. Memory system or computer?
- A geospatial database may find the stored latitude/longitude points
closest to a point x by computing the Euclidean distance of each to x
and then selecting the n lowest distances. Memory system or computer?
- Last but not least, retrieval of data from RAM may be seen as a
massively parallel computation where each memory cell compares (a
computation!) its own address to that on the address bus and puts its
content onto the data bus if they match. This becomes more interesting
if the 'address' is of a more complex structure (such as SDRs), e.g. see
the book "Sparse Distributed Memory" (Pentti Kanerva) for this. This
also turns the common wisdom of a "storage vs. computation trade-off"
into a "serial vs. parallel computation" trade-off. Is RAM a memory
system or a computer?
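The memoization example above can be sketched in a few lines of Python
(the function names here are invented for illustration, not taken from
any particular codebase):

```python
# Minimal memoization sketch: the first call with a given argument
# computes, every later call is pure memory retrieval.
def memoize(fn):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)  # compute once per input...
        return cache[args]           # ...then just look it up
    return wrapper

@memoize
def slow_square(x):
    return x * x

slow_square(12)  # computed
slow_square(12)  # looked up -- memory system or computer?
```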
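The geospatial example can be sketched the same way; here is a toy
version over flat 2-D points (the data and names are invented).
Retrieval here *is* a computation -- a distance to every stored point:

```python
import math

# Invented sample data standing in for stored latitude/longitude points.
points = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5), (5.0, 5.0)]

def nearest(x, n=2):
    # Compute the distance from x to every stored point, then keep
    # the n closest -- a lookup implemented as computation.
    dist = lambda p: math.hypot(p[0] - x[0], p[1] - x[1])
    return sorted(points, key=dist)[:n]

nearest((1.1, 0.9))  # the two stored points closest to x
```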

Now let's look at another seeming distinction between HTM memory systems
and computers. Computers seem to perform computations in steps, one
after another, with a program counter moving from one line of code to
the next. An HTM, on the other hand, is a set of connected autonomous
elements that pass messages between one another. This assumes you
really have a hierarchy, which the current version of the CLA does not
implement -- whether a CLA region in itself can be seen as a set of
autonomous message-passing elements remains to be seen.

So now, at least at this higher 'architectural' level, there is a
distinction? But the program-counter view of computer programming has
since been found wanting. It was probably never a good idea, but
various forces -- including having to resort to multi-core and
distributed systems for ever more processing power, and the need to
express conceptually independent temporal sequences even if they are
later computed serially -- have given rise to
dataflow/reactive/actor-based/etc. programming models (the same concept
under different names) that implement the HTM view of autonomous
communicating elements.
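A toy sketch of such autonomous message-passing elements, using plain
Python threads and queues (all names here are invented for
illustration): each element reacts to messages arriving in its inbox
and forwards its results, with no global program counter ordering the
work.

```python
import queue
import threading

def element(transform, inbox, outbox):
    # An autonomous element: wake up on a message, transform it,
    # pass it on. A None message is a shutdown sentinel.
    while True:
        msg = inbox.get()
        if msg is None:
            break
        outbox.put(transform(msg))

# Wire two elements into a tiny pipeline: inbox -> +1 -> a_to_b -> *2 -> b_out
inbox, a_to_b, b_out = queue.Queue(), queue.Queue(), queue.Queue()
t1 = threading.Thread(target=element, args=(lambda x: x + 1, inbox, a_to_b))
t2 = threading.Thread(target=element, args=(lambda x: x * 2, a_to_b, b_out))
t1.start(); t2.start()

inbox.put(10)
result = b_out.get()        # (10 + 1) * 2

# Shut both elements down.
inbox.put(None); a_to_b.put(None)
t1.join(); t2.join()
```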

Memory system or computer, still looking different?

Last but not least: HTM 'computes' using SDRs, whereas computers use
other data types such as integers, floats, strings and records
(structs). But are computers justified in doing so? Should data in
computer programs be replaced by SDRs? In at least some cases that's
not so unreasonable. The paper "Hyperdimensional Computing" (Pentti
Kanerva, again) shows how at least some ordinary types of data and
their corresponding computations can be turned into distributed
representations. This usually trades precision for operations in
constant space and time, which may be appropriate for many operations
currently implemented using traditional data types.
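To make that concrete, here is a rough sketch of one Kanerva-style
operation on dense binary hypervectors (the dimension and all names are
chosen here for illustration; this is not a faithful rendering of the
paper's full system): XOR "binds" two vectors, binding is its own
inverse, and the cost per operation is fixed by the vector width
regardless of what the vectors encode.

```python
import random

N = 10000  # hypervectors are very wide; 10,000 bits is a typical choice

def rand_hv():
    # A random binary hypervector; random vectors of this width are
    # nearly orthogonal (~0.5 Hamming distance) with high probability.
    return [random.randint(0, 1) for _ in range(N)]

def bind(a, b):
    # XOR binding associates two vectors; applying it again unbinds.
    return [x ^ y for x, y in zip(a, b)]

def hamming(a, b):
    # Similarity measure: fraction of differing bits.
    return sum(x != y for x, y in zip(a, b)) / N

role, filler = rand_hv(), rand_hv()
pair = bind(role, filler)        # store "role = filler" as one vector
recovered = bind(pair, role)     # unbind with the same role vector
# XOR is an involution, so recovery is exact: hamming(recovered, filler) == 0.0
```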

So, seeing that there is a sliding scale from memory systems to
computers, can we actually implement a metaphorical slider that can be
moved all the way from one end of the spectrum to the other? The
above-mentioned principle of pre-computing values of, say, sin x for
later lookup can be implemented on a sliding scale from all lookup and
no computing (i.e. pre-computing *all* values for a desired precision)
to no lookup and all computing.
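The slider itself is easy to sketch (function names invented here): a
single parameter moves us from pure computation (table size 0, every
call runs math.sin) toward mostly lookup (a large table of pre-computed
values with nearest-neighbor retrieval). The interpolation variant from
the examples above would be one notch further along the same scale.

```python
import math

def make_sin(table_size):
    # table_size is the "slider": 0 = all computing, large = all lookup.
    if table_size == 0:
        return math.sin                  # pure computation
    step = 2 * math.pi / table_size
    # Pre-compute table_size + 1 samples over one full period.
    table = [math.sin(i * step) for i in range(table_size + 1)]
    def lookup(x):
        # Retrieve the nearest pre-computed value; max error ~ step/2.
        i = round((x % (2 * math.pi)) / step)
        return table[i]
    return lookup

fast_sin = make_sin(1000)
fast_sin(1.0)  # small error compared to math.sin(1.0), no trig at call time
```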

Can the other aspects of traditional computer programs as discussed
above be "HTM-ified", too, in incremental steps, according to
yet-to-be-found "HTM-ification recipes"? Such as adding memoization or
logic-type lookup, then transforming, piece by piece, the 'imperative'
program counter type code into a dataflow/reactive model, then replacing
some of the traditional data types with SDRs?

A historical precedent here is the arrival of digital signal processing
in an electronic world that before had been all analog. However, no one
proclaimed an "analog-digital dichotomy" in which the digital
proponents ignored existing analog devices and only interfaced with the
analog world at the edges. Instead, existing analog devices were
digitized step by step, replacing analog components and concepts with
digital ones as they were invented, with analog and digital parts
tightly integrated and concepts coexisting peacefully. The TV industry
is only now throwing the last analog concepts, such as "tuning" into a
channel, out of TVs as TV signals go all-digital.

The expectation is that such an "HTM-ification" would open up
possibilities to include HTM-like qualities such as learning and
prediction in traditional computer systems -- possibilities that were
there all along but not evident. Say your hotel reservation system
becomes an occupancy prediction system. Finding applications for
today's toy-sized HTMs is a Hard Problem, both due to the
counterintuitive nature of the HTM as a tool (intelligent but not
human-like) and a suspected sort of Stockholm syndrome in software
development, where the available toolset has shaped the mindset.
HTM-ification of computer programs would make these applications
evident.

All of this is speculation at this point, but I work on projects on a
day-to-day basis -- mostly high-redundancy network applications -- that
are implemented as reactive hierarchies, prefer constant
memory/computation cost over precision, work more on lookups than
arithmetic computations, cache ("memoize") data, etc., and as such seem
to be easy targets for HTM-ification. This work is in progress.

Comments?

Rik

_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org