J has a deterministic memory model. The data header is
very small, so the space/time cost of any primitive, or
of any combination of primitives across a whole process,
can easily be estimated. That makes it easy to work out
near-optimal processing scenarios: chunk sizes, when to
load and unload data, what to run, and so on. J operates
on in-memory data very predictably, so you don't really
need a benchmark; it is even better to quickly build a
prototype for your own use cases.
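For example, the space/time foreigns make this concrete
(a minimal sketch; the array and the sentence are just
illustrative):

   a =: i. 1000 1000    NB. one million integers
   7!:5 <'a'            NB. bytes occupied by a (small header + data)
   7!:2 '+/ , a'        NB. working space needed to run the sentence
   6!:2 '+/ , a'        NB. seconds to run the sentence

From figures like these you can size your chunks before
ever touching the real data.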
On top of that, some bonus features work very nicely
with huge data sizes: memory-mapped files, 64-bit builds
that can address large arrays, and so on.
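A minimal mapped-file sketch, assuming the standard jmf
addon's map_jmf_ interface and a placeholder file name:

   require 'jmf'
   JCHAR map_jmf_ 'dat';'/tmp/big.bin'   NB. map the file as a literal array
   # dat                                 NB. length = file size; nothing read yet
   100 {. dat                            NB. touching data pages it in on demand
   unmap_jmf_ 'dat'

The OS pages data in and out for you, so "loading" a
multi-gigabyte file is effectively free until you use it.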
See the recent media/wav/viewer demo and run it over a
wav file of your choice, up to 2 GB, to watch the lens
zip through the whole file in real time with adjustable
window sizes.
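Under the hood that style of viewer only needs indexed
reads; here is a sketch of scanning a big file in chunks
(the file name and chunk size are placeholders):

   fn =: '/tmp/big.wav'                        NB. placeholder path
   sz =: 1!:4 <fn                              NB. file size in bytes
   chunk =: 1000000                            NB. window size; tune to taste
   read =: 3 : '1!:11 fn;y , chunk <. sz - y'  NB. bytes [y, y+chunk) of fn
   $ read 0                                    NB. first chunk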
--- Nick Kostirya <[EMAIL PROTECTED]> wrote:
> > Hello All.
> >
> > To estimate the situations where I can use J, I am looking for
> > success stories of processing data arrays with J.
>
> Huge data arrays are meant! :-)
>
> >
> > First of all I am interested in the data volumes, the space they
> > occupied in storage, and information about the hardware used for
> > the computation. Besides, knowledge of the specifics of operating
> > on those huge data arrays would be desirable.
> >
> > I'll be much obliged for the detailed information.
> >
> > All the best, Nick