The SPLIT_MEMORY option was removed since apparently it was very slow and no one was using it (ref. https://github.com/emscripten-core/emscripten/pull/7465 and https://groups.google.com/forum/#!searchin/emscripten-discuss/SPLIT_MEMORY%7Csort:date/emscripten-discuss/x9hVnYytB6s/u8GWLJyuBQAJ). However, it seems to me that reading a file chunk by chunk is still necessary in some situations. In my case, I have to read large video files through the HTML5 File API in WebAssembly in order to extract GPMF <https://github.com/gopro/gpmf-parser> metadata with GPS information from them. Ideally, I should be able to handle files of multiple gigabytes, which will not fit in wasm memory all at once.
What would be the best way to achieve this? I guess I could allocate the memory manually, similar to how Qt does it <https://codereview.qt-project.org/gitweb?p=qt/qtbase.git;a=blob;f=src/corelib/io/qhtml5file_html5.cpp;h=054b69d16c2a7627bee576b5f9af6d044dc8f7a2;hb=f0489acbced4eacbc80dd165b644ad19975f5278#l109>, and then adapt that method to read the file chunk by chunk, roughly like the sketch below. But then I would need to convert any existing C code to read from that memory directly instead of through the file API, and I would need to implement the chunking logic on the JavaScript side myself rather than through an official API. Also, I guess this method would have similarly poor performance to the deprecated SPLIT_MEMORY option? If so, is there any way to get acceptable performance in an Emscripten wasm project that reads large files chunk by chunk?
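For concreteness, here is roughly the kind of chunked reader I have in mind on the JavaScript side. This is only a minimal sketch: handle_chunk is a hypothetical C function I would export from my module, and I am assuming _malloc/_free are exported as well (e.g. via EXPORTED_FUNCTIONS).

// Read a File in fixed-size slices and pass each slice to a
// hypothetical exported C function:
//   int handle_chunk(const uint8_t *data, size_t len, double offset);
const CHUNK_SIZE = 1 << 20; // 1 MiB per read; tune as needed

function readSlice(blob) {
  // Wrap FileReader in a Promise so the loop below can await it.
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(new Uint8Array(reader.result));
    reader.onerror = () => reject(reader.error);
    reader.readAsArrayBuffer(blob);
  });
}

async function processFileInChunks(file, module) {
  // Reuse one buffer on the wasm heap instead of allocating per chunk.
  const ptr = module._malloc(CHUNK_SIZE);
  try {
    for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
      const bytes = await readSlice(file.slice(offset, offset + CHUNK_SIZE));
      // Copy the slice into wasm linear memory and hand it to C.
      module.HEAPU8.set(bytes, ptr);
      module._handle_chunk(ptr, bytes.length, offset);
    }
  } finally {
    module._free(ptr);
  }
}

As far as I can tell, the C side would then have to maintain its own parse state across chunk boundaries, since the parser would never see the whole file at once.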
