I should breathe before I type, but you probably already got that I meant 
redundant writes (not reads)...

Anyway.. I was talking with Esteban and he mentioned some kind of compatibility 
metadata.

If I'm going to take a leap of faith on filetree repos to save code, why should 
I care about mcz compatibility? Paying a toll for no reason is evil.

Maybe we could make that optional, so those who get no value from that 
feature can opt out?
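
Something in the spirit of this sketch is all I'm imagining. Every name below 
is made up (FTWriterSketch is a stand-in, I didn't check filetree's actual 
writer classes); the point is just one class-side flag the writer consults 
before emitting the compatibility files:

Object subclass: #FTWriterSketch
	instanceVariableNames: ''
	classVariableNames: 'WriteCompatibilityMetadata'
	poolDictionaries: ''
	category: 'FileTree-Sketch'

FTWriterSketch class >> writeCompatibilityMetadata
	"Default to true so mcz round-tripping keeps working for everybody else."
	^ WriteCompatibilityMetadata ifNil: [ true ]

FTWriterSketch class >> writeCompatibilityMetadata: aBoolean
	WriteCompatibilityMetadata := aBoolean

FTWriterSketch >> writeMetadata: aJsonString on: aFileReference
	"Skip the version/UUID/author file entirely when the user opted out."
	self class writeCompatibilityMetadata ifFalse: [ ^ self ].
	aFileReference writeStreamDo: [ :stream | stream nextPutAll: aJsonString ]

Opting out would then be a one-liner:

	FTWriterSketch writeCompatibilityMetadata: false.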

sebastian

o/

On Dec 11, 2013, at 12:44 PM, Sebastian Sastre <[email protected]> 
wrote:

> Hi Thierry
> 
> On Dec 11, 2013, at 12:43 PM, Goubier Thierry <[email protected]> wrote:
>>> 
>>> I have packages (on the order of hundreds of classes), and save delays
>>> and package-click delays are starting to demand patience in a way that
>>> doesn't feel like the right path
>> 
>> Which operations? I don't remember noticing much with 179 classes on a 
>> laptop without an SSD.
> 
> Choose one. Just clicking on a package, so it shows you the UUID, version 
> and author, makes me wait ~16 seconds. That sounds like a lot of overhead for 
> reading a small .json file.
> 
> But the writes are the most worrisome part.
> 
> 
>>> All that is with an SSD; otherwise save delays would be /way/ beyond
>>> unacceptable
>> 
>> I'd like to know more, and understand the reason, for sure. As far as I 
>> know, filetree will rewrite the whole package to disk every time... and maybe 
>> optimising that could be the solution.
>> 
> 
> Well, that explains a lot. Writing everything every time is the lazy approach 
> that's okay for a prototype or temporary code in a proof of concept, but such 
> massive redundant reads certainly don't sound like pro software. Especially 
> for SSDs, which have a limited number of writes.
> 
> 
>> Thierry
>> 
>>> sebastian <https://about.me/sebastianconcept>
>>> 
>>> o/
>>> 
>> 
>> -- 
>> Thierry Goubier
>> CEA list
>> Laboratoire des Fondations des Systèmes Temps Réel Embarqués
>> 91191 Gif sur Yvette Cedex
>> France
>> Phone/Fax: +33 (0) 1 69 08 32 92 / 83 95
>> 
> 
> 
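
PS: about the full rewrite quoted above, a cheap first optimisation could be 
to compare against what's already on disk and skip untouched files. Again, 
made-up selectors on the same stand-in class, not filetree's real API:

FTWriterSketch >> write: newSource to: aFileReference
	"Only touch the disk when the source actually changed, sparing the SSD 
	all those redundant writes."
	(aFileReference exists and: [ aFileReference contents = newSource ])
		ifTrue: [ ^ self ].
	aFileReference writeStreamDo: [ :stream | stream nextPutAll: newSource ]

(It still reads every file to do the comparison, so it fixes the write 
amplification but not the read cost; caching a hash per entry would spare 
the reads too.)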
