Hey Dan,
Thanks for your comments.  I appreciate you taking a few moments to look
through this stuff.

> Meta-Comment: Is there a way to tap into a shared content-cache
> when creating images?  This would of course accelerate zone install.
> I guess that's a good addition to this work for later, if not there
> now.

At the moment we don't have code that does this; however, I imagine it
would be easy to write.  I don't know anything about how we configure
images for zone installs, but if you'd like to sit down and hash out the
details, I'm around for the rest of the week and should also be in the
office at the beginning of next week.

> client.py:
> 
> For a large download, does cleaning up cached content take
> a while?  I ask, since we might want a progress spinner
> for cleanup if so.

Yes, it takes _forever_.  This is why the feature is disabled by
default.  Bart, Stephen, Danek, Tom, Brock, and I talked about this last
week.  I tried to get you to come over and join the conversation, but
you were busy.  Anyway, I had proposed a couple of ideas to optimize the
removal of the content cache after a download.  We concluded that since
it'll be large, we should just skip removing the content and let the
administrator handle that for now.  If a user wants to change the
default and have the slow delete take place, that's also fine for now.

> client.py:
> 
> "Maximum number of timeouts ..."
> 
> We've seen this be a call (well, email) generator thus far, and
> in the event that timeouts keep happening to people, what should
> we tell them?  I think this error message in particular should
> guide the user to a solution or a way to let us know that our
> heuristic needs work.  I could imagine some or all of:
> 
>         - Pointing to a webpage on opensolaris.org/os/project/pkg
>         - Telling the user to look at a manpage in a "TIMEOUT"
>           section
>         - Telling the user about the relevant environment variables
>         - Telling the user to check/test their network connection
>         - Telling the user to try again, or try again later-- since
>           we're caching the downloaded data, that should eventually
>           help, right?
> 
> It might also be nice to let the user know how many timeouts
> the maximum number was, and how long the timeouts were.  That
> would let the user know the current setting, to make a choice
> about adjusting.

When I configured my test server to be very flaky, I still wasn't able
to hit the case where we exceed the maximum number of timeouts.  A user
would have to be a masochist to keep trying to talk to a server that
causes us to exceed our timeout count.  I can put a friendlier message
here if you think it matters; however, generating calls seems to be a
pretty informative mechanism for determining the effectiveness of our
heuristics. :P
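If we do decide a friendlier message matters, it could fold in most of your suggestions. A strawman sketch (the helper name and message wording are hypothetical, not anything in client.py):

```python
def max_timeouts_exceeded_msg(url, timeout_count, timeout_secs):
    # Hypothetical helper: report the current limits so the user can
    # decide about adjusting them, and point at possible remedies, per
    # Dan's list (check the connection, retry later since downloaded
    # data is cached, see the docs for the relevant knobs).
    return (
        "Transfer from '%s' failed: exceeded the maximum number of "
        "timeouts (%d, at %d seconds each).\n"
        "Check your network connection and try again later; downloaded "
        "data is cached, so a retry should make progress.\n"
        "See the documentation for the environment variables that "
        "adjust these limits." % (url, timeout_count, timeout_secs)
    )
```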

> filelist.py:add_action():
> 
> What would happen if a cached file was on disk, but truncated somehow?
> It seems like we have little protection against that...

How do you see this happening?  We download files into a temporary
download directory and only move them into the cache once the download
successfully completes.  If we die in the middle of the download, we
blow away partially completed files and start over.
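To make the ordering concrete, here's a minimal Python sketch of the temp-directory-then-rename scheme; the names are illustrative, not pkg's actual internals:

```python
import os
import tempfile

def commit_download(cache_dir, name, data):
    # Write into a temporary download area under the cache; a crash
    # mid-download leaves only files here, which get blown away on
    # restart, so the cache itself never holds a truncated file.
    tmpdir = os.path.join(cache_dir, "download")
    os.makedirs(tmpdir, exist_ok=True)
    fd, tmppath = tempfile.mkstemp(dir=tmpdir)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    # Only after the write completes does the file move into the cache;
    # rename is atomic within a single filesystem.
    final = os.path.join(cache_dir, name)
    os.rename(tmppath, final)
    return final
```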

> A la ZFS, should we be checksumming what we download?  Perhaps
> we do and I didn't realize.

We checksum as part of the decompress, but we throw away the output.
We've been doing this for quite some time now.
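Roughly, that verification looks like the sketch below: hash the bytes as they stream out of the decompressor and discard the output. This assumes a zlib-compressed payload whose published SHA-1 covers the decompressed bytes; it is not our actual code.

```python
import hashlib
import zlib

def verify_payload(compressed, expected_sha1, chunk=64 * 1024):
    # Decompress incrementally, feeding each decompressed chunk into
    # the hash and then dropping it, so we never hold the full
    # uncompressed payload in memory.
    d = zlib.decompressobj()
    h = hashlib.sha1()
    for i in range(0, len(compressed), chunk):
        h.update(d.decompress(compressed[i:i + chunk]))
    h.update(d.flush())
    return h.hexdigest() == expected_sha1
```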

> image.py:1217-- kind of odd vertical code formatting.
>          1222-- typo ("clean")

Fixed.  Thanks for catching these.

-j
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss