Ganesh Sittampalam <[email protected]> added the comment: Looks OK to me. I would be interested in whether making the hash in Blob lazy would work as well; but I guess it creates the risk of hanging on to entire files instead of hashing them and dropping the data.
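The approach the patch takes (as opposed to the lazy-hash alternative Ganesh raises) can be sketched with toy types. Note that `Item`, `Hash`, and `toyHash` below are hypothetical stand-ins, not the real hashed-storage API: blobs start out marked `NoHash`, and a single fix-up pass, analogous to `darcsAddMissingHashes` run at sync time, fills in hashes only where they are still missing, so repeated in-memory updates never pay for hashing.

```haskell
import Data.Char (ord)

-- Toy stand-ins for the Blob/Tree structures (hypothetical, for illustration)
data Hash = NoHash | Hash Int deriving (Eq, Show)
data Item = File String Hash          -- contents plus a possibly missing hash
          | Dir [Item]
          deriving (Eq, Show)

-- Stand-in for sha256: a trivial checksum over the contents
toyHash :: String -> Hash
toyHash = Hash . sum . map ord

-- Analogue of darcsAddMissingHashes: hash only blobs still marked NoHash;
-- already-hashed blobs are left untouched
addMissingHashes :: Item -> Item
addMissingHashes (Dir items)       = Dir (map addMissingHashes items)
addMissingHashes (File con NoHash) = File con (toyHash con)
addMissingHashes item              = item

main :: IO ()
main = print (addMissingHashes (Dir [File "abc" NoHash, File "x" (Hash 7)]))
```

The design point is that intermediate tree updates stay cheap and the cost of hashing is paid exactly once, at flush time; the trade-off of the lazy alternative would be thunks that keep whole file contents alive until the hash is demanded.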
[Defer SHA256 from MonadRW updates until disk flush, saving lots of cycles.
Petr Rockai <[email protected]>**20100117161040
 Ignore-this: c7b045db905819711c0083934c5ec52a
] hunk ./Storage/Hashed/Darcs.hs 144
+darcsAddMissingHashes :: (Monad m, Functor m) => Tree m -> m (Tree m)
+darcsAddMissingHashes = updateTree update
+    where update (SubTree t) = return . SubTree $ t { treeHash = darcsTreeHash t }
+          update (File blob@(Blob con NoHash)) =
+              do hash <- sha256 <$> readBlob blob
+                 return $ File (Blob con hash)
+          update x = return x
+
hunk ./Storage/Hashed/Darcs.hs 226
-    modify $ \st -> st { tree = darcsUpdateDirHashes $ tree st }
+    hashed <- liftIO . darcsAddMissingHashes =<< gets tree
+    modify $ \st -> st { tree = hashed }
hunk ./Storage/Hashed/Monad.hs 170
-          hash = sha256 con
+          hash = NoHash -- we would like to say "sha256 con" here, but due
+                        -- to strictness of Hash in Blob, this would often
+                        -- lead to unnecessary computation which would then
+                        -- be discarded anyway; we rely on the sync
+                        -- implementation to fix up any NoHash occurrences

----------
status: needs-review -> accepted-pending-tests

__________________________________
Darcs bug tracker <[email protected]>
<http://bugs.darcs.net/patch144>
__________________________________
_______________________________________________
darcs-users mailing list
[email protected]
http://lists.osuosl.org/mailman/listinfo/darcs-users
