While doofenstein makes some good points, especially about fragility when new 
fields are added but old data files remain, there is real efficiency charm to 
a "just memory map and go" pure-binary native-data approach. It's not always 
fun to be parsing data files again and again. So I would add another way to 
read the data. Note that you should probably use the `{.packed.}` pragma; 
note the use of `ptr UncheckedArray`, which should alert the reader to be 
careful; and note the commented-out inference of the number of objects in the 
file. My example code assumes you wrote out at least two objects from code 
like you had above (with a `{.packed.}` pragma). It will probably crash if 
the file is too short, etc. This is just to give you a flavor. 
    
    
    import memfiles
    type
      MyType* {.packed.} = object
        somefield*: array[10, int16]
      MyTypes* = ptr UncheckedArray[MyType]
    var t: MyType
    var f = memfiles.open("/tmp/tmp.mytype")
    # infer the number of objects from the file size (requires packed);
    # do your own range checking!
    # let n = f.size div sizeof(MyType)
    let myobs = cast[MyTypes](f.mem)  # reinterpret the mapped bytes in place
    echo myobs[0].somefield[1]
    echo myobs[1].somefield[0]
    f.close()                         # unmap when done; myobs is invalid after this
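
For completeness, a minimal sketch of the writer side that could have produced 
such a file (the field values here are made up for illustration; it assumes the 
same packed `MyType` and the same `/tmp/tmp.mytype` path as above): 

    type
      MyType {.packed.} = object
        somefield: array[10, int16]
    var a: array[2, MyType]
    a[0].somefield[1] = 42   # arbitrary test values
    a[1].somefield[0] = 7
    var outf = open("/tmp/tmp.mytype", fmWrite)
    # dump the raw bytes of the array; with {.packed.},
    # sizeof(a) == 2 * 10 * 2 == 40 bytes, no padding
    discard outf.writeBuffer(addr a, sizeof(a))
    outf.close()

Since there is no header or length field, the reader can only recover the 
object count from the file size, which is why the `{.packed.}` pragma matters 
on both sides. 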
    
