> On 13 Jun 2019, at 08:10, Konrad Hinsen <konrad.hin...@fastmail.net> wrote:
> 
> Stephan Eggermont <step...@stack.nl> writes:
> 
>>> All of http://files.pharo.org/ ? So how many GB is that?
>> 
>> It is only a few thousand changes per release. There is no reason why that
>> shouldn’t compress well
> 
> Did anybody try?
> 
> In IPFS, files are cut into blocks of roughly 256 KB. Blocks shared
> between multiple files are stored only once. So if changes in Pharo
> images from one version to the next happen mainly in one or two specific
> places (such as beginning and end), they would be stored efficiently on
> IPFS without any effort.
> 
> But again, the best way to know is to try it out.
> 

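Konrad's block-deduplication point can be illustrated with a minimal sketch. This is not real IPFS code: a plain dict stands in for the block store, a SHA-256 hex digest stands in for a CID, and only the default fixed block size of 256 KiB is modeled:

```python
import hashlib
import random

CHUNK_SIZE = 256 * 1024  # IPFS's default chunker cuts files into 256 KiB blocks

def store(data, block_store):
    """Add data to a content-addressed store; a block whose hash is
    already present is stored only once. Returns the block hashes."""
    cids = []
    for i in range(0, len(data), CHUNK_SIZE):
        block = data[i:i + CHUNK_SIZE]
        cid = hashlib.sha256(block).hexdigest()  # stand-in for a real CID
        block_store.setdefault(cid, block)
        cids.append(cid)
    return cids

# Two "images" of 4 blocks each that differ only in the last block,
# i.e. an in-place change near the end, as in the scenario above:
random.seed(0)
blocks = {}
v1 = random.randbytes(4 * CHUNK_SIZE)
v2 = v1[:-10] + b"x" * 10
store(v1, blocks)
store(v2, blocks)
print(len(blocks))  # → 5 unique blocks stored, not 8
```

Note that this only helps when a change leaves block boundaries aligned: an insertion near the beginning shifts every later block and defeats fixed-size chunking (IPFS also offers content-defined chunkers, e.g. Rabin, for that case).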
Indeed. Another thing to try: if someone has a local IPFS node running, would
it cache only those images that this machine or nearby machines have
requested?

This way we could have one “full” copy, and caching (e.g. in the “African
classroom” example) would happen automatically.

Of course the downside is that one needs to speak the IPFS protocol, and thus
run a client (e.g. the Go client)… so really transparent use would only be
possible if Pharo could implement the protocol itself…

(all based on an incomplete understanding of IPFS)

        Marcus

