While my example isn't about Puppet, it is about another templating
technology that, by technical design, is also completely open source.
We have a fully modularized and fully distributed deployment scenario
that is based on git.

The fundamental way git addresses this problem is explained here -
see especially the visualization.

I'd like to add a point of view that underlines git's behaviour as a
core requirement for distributed deployment - if there is any issue
with big files, that is a concern well worth addressing.

While platform-specific deployment tools such as rpm or msi packages
can be well justified and argued for on their own merits, the audit
trail of distributed software (the core requirement for source code
anyway) still remains. If a "big file" is part of the audit process,
its consistency needs to be guaranteed. And there is no way around
that except the git way: a complete, securely hashed history trail.
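To make the "securely hashed history" point concrete, here is a small
sketch (in Python, purely for illustration) of how git derives an
object ID from a file's contents. Because the ID covers every byte,
any change to a tracked file - large or small - changes its ID, and
that change propagates up through trees and commits:

```python
import hashlib

def git_blob_sha1(data: bytes) -> str:
    """Compute the object ID git assigns to a file's contents.

    Git hashes the header "blob <size>\\0" followed by the raw
    bytes, which is why consistency of even a huge file can be
    verified against the history.
    """
    header = b"blob %d\0" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# Matches `git hash-object` for the same contents:
print(git_blob_sha1(b""))        # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
print(git_blob_sha1(b"hello\n")) # ce013625030ba8dba906f756967f9e9ca394464a
```

The same scheme covers trees and commits, so the whole history forms
one tamper-evident chain of hashes.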

This is not a "nice to have" feature, but a critical requirement for
deploying embedded software on automated machinery that must satisfy
safety regulations. Hospital, aviation, and nuclear device software
all apply here. Distributed software processes and projects all
benefit from this - and trying to "cut short" some part of the
complete development audit trail will cause larger pain points
elsewhere.

So is there a real problem with big files, in performance or storage?
Anything other than "uncommon" slowness in calculating and comparing
SHA-1 hashes of large files?
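For a rough sense of that hashing cost, a throwaway measurement like
the following (sizes and data are arbitrary assumptions, not a real
benchmark) shows SHA-1 throughput on a large buffer hashed in chunks,
the way file contents would be streamed:

```python
import hashlib
import time

# Hash 64 MiB of data in 1 MiB chunks and report throughput.
size_mib = 64
chunk = b"\x00" * (1024 * 1024)

start = time.perf_counter()
h = hashlib.sha1()
for _ in range(size_mib):
    h.update(chunk)
elapsed = time.perf_counter() - start

print(f"hashed {size_mib} MiB in {elapsed:.3f}s "
      f"({size_mib / elapsed:.0f} MiB/s)")
```

On typical hardware this runs at hundreds of MiB/s, so for most
workflows the hash itself is unlikely to be the bottleneck compared
to storing and transferring the file data.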
