On Thursday, October 13, 2016 at 1:17:32 AM UTC, cdm wrote:
> do you have traditional main memory RAM in mind here ... ?
Yes (I'm asking how big the arrays people work with are; but also, if the files are bigger, how big), and no:
> with flash memory facilitating tremendous advances
> in (near) in-memory processing, the lines between
> traditional RAM and flash memory have become
> considerably blurred.
I know, and I guess the distinction will disappear in the future (but yes,
I'm thinking of what you need to address as RAM, or what looks like RAM,
including virtual memory), at least with:
[already available] etc.
I'm thinking about how big pointers need to be; e.g. 64-bit seems to be
overkill.. or rather, indexes into arrays need not be that big.
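To make that concrete, here's a small Python sketch of the memory cost of full 64-bit references versus 32-bit indexes (illustrative only; it assumes the `array` type codes 'Q' and 'I' are 8 and 4 bytes, as on typical 64-bit platforms):

```python
from array import array

n = 1_000_000
ptrs = array('Q', [0] * n)  # 'Q': unsigned 64-bit, the size of a full pointer
idxs = array('I', [0] * n)  # 'I': unsigned int, 4 bytes on common platforms

# A 32-bit index can still address 2**32 (~4 billion) elements,
# at half the per-reference memory cost of a 64-bit pointer:
print(ptrs.itemsize * n)  # bytes for 64-bit references
print(idxs.itemsize * n)  # bytes for 32-bit indexes
```

So an index-based layout halves the bookkeeping overhead as long as the array stays under ~4 billion elements.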
Yes, there's also memory-mapped I/O.
We had 64-bit file systems before 64-bit [x86] CPUs, so the bitness of the
CPU doesn't (didn't; yes, wider is better(?) for memory-mapped I/O..) have
to align with big files (and we already have 128-bit ZFS, with a 2^78-byte
pool limit, but individual files are still limited to 64-bit sizes).
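For reference, a minimal memory-mapped I/O sketch using only the Python standard library (the temporary file and its contents are just illustrative): the OS pages file data in on demand, so you address the file through offsets rather than explicit read() calls.

```python
import mmap
import os
import tempfile

# Create a small scratch file to map (illustrative only).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello, mapped world")
    path = f.name

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        # mm behaves like a mutable byte buffer backed by the file;
        # slicing it touches only the pages actually accessed.
        print(mm[7:13].decode())  # prints "mapped"

os.remove(path)
```

With large files, the same pattern lets you work on data bigger than RAM, since only the accessed pages need to be resident.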
> ~ cdm
> On Wednesday, October 12, 2016 at 3:23:58 PM UTC-7, Páll Haraldsson wrote:
>> I'm most concerned, about how much needs to fit in *RAM*, and curious
>> what is considered big, in RAM (or not..).