On Thu 12 Jun 2008 at 03:25PM, [EMAIL PROTECTED] wrote:
> Hey Dan,
> Thanks for your comments.  I appreciate you taking a few moments to look
> through this stuff.
> 
> > Meta-Comment: Is there a way to tap into a shared content-cache
> > when creating images?  This would of course accelerate zone install.
> > I guess that's a good addition to this work for later, if not there
> > now.
> 
> At the moment we don't have code that does this; however, I would
> imagine that code would be easy to write.  I don't know anything about
> how we configure images for zone installs, but if you'd like to sit down
> and hash out the details, I'm around for the rest of the week.  I should
> also be in the office at the beginning of next week.

It's pretty simple-- it's an image, and we install into it... so if
we have files available in a cache by hash somewhere, it'd "just work".
Perhaps we could have 'pkg --cache-dir=/some/place'.
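Something like the sketch below could back that flag.  (Purely
illustrative -- the layout, the helper names, and download_func are
all invented here, not actual pkg code.)

    import hashlib
    import os
    import shutil

    def cached_path(cache_dir, sha1_hex):
        # Fan out by the first two hex digits so no single
        # directory gets huge.
        return os.path.join(cache_dir, sha1_hex[:2], sha1_hex)

    def fetch_file(cache_dir, sha1_hex, download_func, dest):
        # If another image already pulled this file, reuse it;
        # otherwise download, verify, and seed the cache.
        cpath = cached_path(cache_dir, sha1_hex)
        if os.path.exists(cpath):
            shutil.copyfile(cpath, dest)
            return
        download_func(sha1_hex, dest)
        f = open(dest, "rb")
        try:
            actual = hashlib.sha1(f.read()).hexdigest()
        finally:
            f.close()
        if actual != sha1_hex:
            raise IOError("checksum mismatch for %s" % sha1_hex)
        d = os.path.dirname(cpath)
        if not os.path.isdir(d):
            os.makedirs(d)
        shutil.copyfile(dest, cpath)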

> > client.py:
> > 
> > For a large download, does cleaning up cached content take
> > a while?  I ask, since we might want a progress spinner
> > for cleanup if so.
> 
> Yes, it takes _forever_.  This is why the feature is disabled by
> default.  Bart, Stephen, Danek, Tom, Brock, and I talked about this last
> week.  I tried to get you to come over and join the conversation, but
> you were busy.  Anyway, I had proposed a couple of ideas to optimize the
> removal of the content cache after a download.  We concluded that since
> it'll be large, we should just skip removing the content and let the
> administrator handle that for now.  If a user wants to change the
> default and have the slow delete take place, that's also fine for now.

Ahh, ok, missed that, clearly :)
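For the archives, here's roughly how I picture the opt-in flush
working -- the property name and the img helpers are invented, and
the file-by-file unlink walk is presumably what makes it so slow:

    import os

    def flush_download_cache(img):
        # Default is off: leave the cached content for the
        # administrator to deal with.
        if not img.get_property("flush-content-cache-on-success"):
            return
        cache = img.cached_download_dir()
        # One unlink per cached file -- this is the slow part.
        for dirpath, dirnames, filenames in os.walk(cache, topdown=False):
            for fname in filenames:
                os.unlink(os.path.join(dirpath, fname))
            os.rmdir(dirpath)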

> When I configured my test server to be very flaky, I wasn't able to hit
> the case where we exceeded the maximum number of timeouts.  The user is
> going to have to be a masochist to want to continue to try to talk to a
> server that causes us to exceed our timeout count.  I can put a
> friendlier message here if you think it matters; however, the message we
> generate now seems to be a pretty informative mechanism for determining
> the effectiveness of our heuristics. :P

Ok.  We can work on refining later if the need arises.  I guess
one of the suggested remedies could be to seek a different mirror,
when we have that support.  It just seems that if a customer does
hit this, there's no suggested remedy.  Software like Firefox
usually puts up an error page in an attempt to point you in the
right direction:

    ------------------------------------------------------------------------
        Address Not Found

        Firefox can't find the server at foobar.foobar

        The browser could not find the host server for the provided address.

          * Did you make a mistake when typing the domain? (e.g.
            "ww.mozilla.org" instead of "www.mozilla.org")
          * Are you certain this domain address exists?  Its registration
            may have expired.
          * Are you unable to browse other sites?  Check your network
            connection and DNS server settings.
          * Is your computer or network protected by a firewall or proxy?
            Incorrect settings can interfere with Web browsing.
    ------------------------------------------------------------------------
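In the same spirit, here's a straw-man of what pkg could print when
it gives up after too many timeouts.  (The count, the wording, and
check_timeouts are all invented for illustration.)

    MAX_TIMEOUT_COUNT = 4

    def check_timeouts(failures, depot_url):
        if failures < MAX_TIMEOUT_COUNT:
            return
        print "pkg: unable to contact %s" % depot_url
        print "  * Is the depot server running and reachable?"
        print "  * Check your network connection, DNS, and proxy settings."
        print "  * Try again later, or point at a mirror once we have"
        print "    mirror support."
        raise SystemExit(1)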

As a separate matter, we should make a recommendation about what
the GUI should do/print in this case.

> > A la ZFS, should we be checksumming what we download?  Perhaps
> > we do and I didn't realize.
> 
> We checksum as part of the decompress, but we throw away the output.
> We've been doing this for quite some time now.

I didn't know.  What do you mean by "throw away the output"?
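My guess is something like the following -- the stored file is
gzipped, its name is presumably the hash of the *uncompressed*
content, so you decompress just to feed the hash, and the
decompressed bytes themselves go nowhere.  (A guess, not the actual
client code:)

    import hashlib
    import zlib

    def verify(path, expected_sha1):
        hsh = hashlib.sha1()
        # 16 + MAX_WBITS tells zlib to expect gzip framing.
        dco = zlib.decompressobj(16 + zlib.MAX_WBITS)
        f = open(path, "rb")
        try:
            chunk = f.read(64 * 1024)
            while chunk:
                # The decompressed bytes feed the hash...
                hsh.update(dco.decompress(chunk))
                chunk = f.read(64 * 1024)
            hsh.update(dco.flush())
        finally:
            f.close()
        # ...and are then thrown away; only the digest survives.
        return hsh.hexdigest() == expected_sha1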

        -dp

-- 
Daniel Price - Solaris Kernel Engineering - [EMAIL PROTECTED] - blogs.sun.com/dp