Glad to hear :)

On Monday, February 27, 2017 at 9:54:31 PM UTC+1, Robert Goulet wrote:
>
> @Floh I actually went with your idea. Was pretty easy to implement after 
> all. We already used some sort of TOC file, and now we just provide the 
> hash (murmur64) for every entry.
>
> What I do is this:
>
>    1. Get TOC file from cache (no download) and populate map (if it 
>    didn't exist in cache, nothing is populated)
>    2. Get TOC file from remote (persist and always replace) and *update* 
>    the map (that way we can compare whether a file's hash changed)
>    3. Now we know whether we have to get each file from remote (which 
>    means replacing it in the cache) or from the cache
>
> Whenever we compile game data, we always produce this TOC file, so if any 
> file changed, its associated hash will change as well, keeping the TOC file 
> always up-to-date. Since we fetch the TOC file from both cache and remote, 
> we don't have to compute the hash at runtime, we can simply compare the 
> values.
>
> This turns out to be cleaner than I thought.
>
> Cheers!
>
> On Saturday, February 25, 2017 at 5:39:45 AM UTC-5, Floh wrote:
>>
>> For game assets we solve this problem in our own code, not through 
>> timestamps but through md5 content hashes. The md5 hashes for the actual 
>> files live in a table-of-contents file which is never cached (or, if you 
>> want to cache this too, transmit a hash for the TOC file first, for 
>> instance by asking a REST service directly).
>>
>> For the HTTP transfer, the md5 hash is added to the URL, either as a 
>> parameter (?ver=[md5]), or the md5 hash *is* the filename. This also makes 
>> sure that all caches in between (browser cache, proxies, CDNs, ...) don't 
>> accidentally deliver an old version.
>>
>> Not sure if this would solve your specific problem, but it works pretty 
>> well for game asset files, where some files change with each update and 
>> other files remain unchanged for a long time.
>>
>> Cheers,
>> -Floh.
>>
>> On Friday, February 24, 2017 at 9:31:14 PM UTC+1, Robert Goulet wrote:
>>>
>>> Greetings!
>>>
>>> So I managed to upgrade our engine to start using emscripten_fetch API, 
>>> which is very nice! Good job! Being able to get the content of a file from 
>>> either local storage (IndexedDB) or from the remote using a single 
>>> emscripten_fetch call is wonderful.
>>>
>>> I'm currently using the EMSCRIPTEN_FETCH_LOAD_TO_MEMORY | 
>>> EMSCRIPTEN_FETCH_PERSIST_FILE | EMSCRIPTEN_FETCH_APPEND attributes, 
>>> which is nice to make sure local storage content is used if it already 
>>> exists. However, this poses the problem that if the content changed on 
>>> the remote, the local storage copy will be used instead of the newer 
>>> remote content. I was hoping there would be a way to compare remote file 
>>> timestamps with local storage file timestamps, and replace the local 
>>> storage copy if the remote file's timestamp is newer, but I couldn't 
>>> find one.
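For reference, a minimal sketch of a fetch call with those attributes (the asset URL is made up; this requires an Emscripten build linked with -sFETCH, so it is not runnable outside the browser/Emscripten environment):

```cpp
#include <cstring>
#include <emscripten/fetch.h>

// Called when the file arrived either from IndexedDB or from the network.
static void onSuccess(emscripten_fetch_t* fetch) {
    // fetch->data / fetch->numBytes hold the contents because of
    // EMSCRIPTEN_FETCH_LOAD_TO_MEMORY.
    emscripten_fetch_close(fetch);
}

static void onError(emscripten_fetch_t* fetch) {
    emscripten_fetch_close(fetch);
}

void fetchAsset() {
    emscripten_fetch_attr_t attr;
    emscripten_fetch_attr_init(&attr);
    std::strcpy(attr.requestMethod, "GET");
    attr.attributes = EMSCRIPTEN_FETCH_LOAD_TO_MEMORY |
                      EMSCRIPTEN_FETCH_PERSIST_FILE |
                      EMSCRIPTEN_FETCH_APPEND;
    attr.onsuccess = onSuccess;
    attr.onerror = onError;
    emscripten_fetch(&attr, "assets/level1.pak");  // made-up URL
}
```

With these flags the cached IndexedDB copy wins whenever it exists, which is exactly the staleness problem described here.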
>>>
>>> Should I be implementing my own versioning system somehow? A 
>>> timestamp-based system would have been nice because it would allow 
>>> replacing single files instead of all files. Should I look into 
>>> implementing a timestamp-replace option in the Emscripten fetch API, or 
>>> should this entire versioning system stay outside the API?
>>>
>>> /discuss
>>>
>>> Thanks!
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"emscripten-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
