Chris Rijk wrote:
>> No, you won't have to do that, but I didn't make it very clear in
>> the version that was published. I'm envisioning that Sun or other
>> vendors of OpenSolaris-based systems would be able to provide a
>> direct installation service based off of some of the concepts our
>> WAN install technology uses. Burning ISO's should be a last resort,
>> actually.
> 
> Sorry for not giving more context in my original post on this. While
> WAN install is great in a number of ways (particularly if only the
> bits needed are downloaded which is nice for those wanting a small
> install), it has a couple of issues. The first is the complexity of
> setting up a WAN-capable network (though I'd guess this can overlap
> a lot with Solaris networking setup anyway). The other is that if
> you want a fast *install time* (i.e. minimum downtime), then network
> install is going to be noticeably slower than installing from local
> media, in most cases. Obviously some will have fast connections
> though - but most SMBs and "enthusiasts" (or developers wanting
> something they can use at home) would have relatively slow
> networking. My connection at home is actually faster than my one at
> work, but it still takes a long time to download a full Solaris
> install. However, at least I can do other things at the same time,
> which I couldn't for a WAN install. Long downloads are also more
> susceptible to intermittent networking problems.
> 
> I hope the above gives some context to why I think it would be worth
> being able to install from local media that isn't DVD. I'm glad you
> think "Burning ISO's should be a last resort".
> 
> Of course, over time, network connections will get faster and
> faster - most likely growing much faster than Solaris itself does
> (though I guess then JES will be added as standard or something
> ^-^). So WAN install will become more usable over time.
> 

The usability issues with WAN installation are a priority to fix; our 
current support for this is way too hard to set up on the server side.

What I see is that there are tradeoffs to be made by each user.  In 
section 4.1.8 of the paper, I broke down the performance into three 
broad categories, synopses of which were:

1.  download & burn media
2.  start the installer and provide any inputs
3.  lay down the bits and get them running

A Sun-provided WAN installation service minimizes #1, while increasing 
#3 (and perhaps #2); overall it should be the fastest option for a 
single install, and as WAN speed and reliability increase it seems the 
advantage will widen.  Avoiding burning also minimizes #1 to some 
extent.  If you're going to do more than a one-off install, you're 
better off creating a cache on the local LAN, so that you optimize #3.
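
To make the tradeoff concrete, here's a rough back-of-envelope sketch 
in C.  Every number in it is a made-up assumption (image size, link 
speeds, burn time), so treat it as illustrative only:

/*
 * Back-of-envelope comparison of the three phases above.
 * All figures are illustrative assumptions, not measurements.
 */
#include <stdio.h>

int main(void)
{
    double image_mb = 2500.0;   /* assumed install image size, MB */
    double wan_mbps = 4.0;      /* assumed WAN throughput, Mbit/s */
    double lan_mbps = 100.0;    /* assumed LAN throughput, Mbit/s */
    double burn_min = 20.0;     /* assumed time to burn a DVD */
    double input_min = 10.0;    /* assumed interactive input time */

    double wan_min = image_mb * 8.0 / wan_mbps / 60.0;
    double lan_min = image_mb * 8.0 / lan_mbps / 60.0;

    /* #1 is dominated by the WAN fetch plus the burn itself */
    printf("download+burn: %.0f min before the install even starts\n",
        wan_min + burn_min);
    /* direct WAN install: no #1, but #3 runs at WAN speed */
    printf("WAN install:   %.0f min, all of it downtime\n",
        input_min + wan_min);
    /* LAN cache: WAN cost paid once, each install runs at LAN speed */
    printf("LAN cache:     %.0f min per install after the one-time fetch\n",
        input_min + lan_min);
    return (0);
}

With those made-up numbers the direct WAN install wins for a one-off, 
and the LAN cache pays for itself as soon as you install a second 
machine - which is exactly the tradeoff I mean.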

> 
>> We'd been kicking around some ideas around using virtual machine 
>> technology to run the install under Windows or Linux as it would be
>>  simpler for us architecturally, but that also has the problem of 
>> requiring the user to have an OS and virtualization platform which
>> can do that. Being able to grab the install image out of a Windows
>> or other filesystem would be another possibility.
> 
> One related idea I had but decided not to mention (due to the
> implementation effort required) would be to do a basic ZFS port for
> Windows/Linux/etc (it would only need to be single-threaded, and
> handling a single disc would be enough for most users). That way,
> once a "spare" partition has been created, the basic ZFS port could
> be used to write the install to it from another OS. However, this
> would require a completely different installer since Solaris itself
> wouldn't be running - naturally, the OS virtualisation technique you
> mention above would get around this and other problems.
> 

Yeah, I'll be happy to get the installer running on OpenSolaris - other 
platforms are out of scope for now.

> 
> 
>> One of my colleagues suggested the other day that perhaps we should
>> just skip the whole "coexist in FDISK" problem in the belief that
>> virtual machine technology will prove to make installing multiple
>> OS's that are separably bootable a quaint practice. He does have a
>> point, but I'm not sure it's the right answer within the next
>> couple of years.
> 
> In about 18 months, probably more than half of all new x86 systems
> will have hardware virtualisation built into the CPU. I'm not sure
> how much that helps compared to current software virtualisation
> technology, though.
> 

It's a trend which will bear watching.

> 
>> One of our main problems there is how to handle the writable
>> portions, since the duty cycles for flash drives aren't quite up to
>> hard drive standards. I think we'll have to do a stronger job of
>> separating software from configuration and logging to make that a
>> reality. In principle, though, it's not too different from our
>> suggested diskless direction, wherein we'd just download and run
>> images in memory. So perhaps we'll have it almost there, anyway.
> 
> When you say "duty cycles for flash drives aren't quite up to hard
> drive standards", do you mean the number of re-writes Flash cells can
> handle without errors? If I recall correctly, Flash cells can
> generally handle many billion re-writes on average, and there is
> also support for offlining groups of cells in a similar way to how
> hard discs handle bad sectors. Sounds like you have looked at this
> more closely than me, though. Btw, wouldn't ZFS's copy-on-write help
> a bit by spreading the writes around...?
> 

I can't say I've looked at it that closely; my info is secondhand.  We 
certainly have hot spots in the system which would make me skeptical 
that it's a good idea right now.  ZFS might help, I guess, though I 
don't think it was explicitly designed for this purpose, so any 
wear-spreading is more a side-effect than an intention, and thus seems 
unlikely to be completely effective.
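
To illustrate what I mean about it being a side-effect, here's a toy 
model of the wear question.  It is not how ZFS actually allocates 
blocks - just the bare copy-on-write idea with a naive round-robin 
allocator:

/*
 * Toy model: update the same logical block many times and compare
 * per-physical-block write counts for in-place vs copy-on-write.
 * Illustrative only; not ZFS's actual allocation policy.
 */
#include <stdio.h>

#define NBLOCKS  8
#define NUPDATES 1000

int main(void)
{
    int inplace[NBLOCKS] = { 0 };
    int cow[NBLOCKS] = { 0 };
    int next = 0;   /* naive round-robin "allocator" for the COW case */
    int max = 0;
    int i;

    for (i = 0; i < NUPDATES; i++) {
        /* in place: logical block 0 always lands on physical block 0 */
        inplace[0]++;
        /* copy-on-write: each rewrite lands on a freshly chosen block */
        cow[next]++;
        next = (next + 1) % NBLOCKS;
    }

    for (i = 0; i < NBLOCKS; i++)
        if (cow[i] > max)
            max = cow[i];

    printf("worst-case writes to one block, in-place: %d\n", inplace[0]);
    printf("worst-case writes to one block, COW:      %d\n", max);
    return (0);
}

The in-place case hammers one physical block 1000 times; the toy COW 
case spreads that to 125 per block.  But nothing forces a real 
allocator to behave that way - hence "side-effect", and hence my 
skepticism about relying on it.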

> 
>> As one of the goals is to have the best automation capabilities,
>> these are good ideas to help achieve that. Keep an eye out as we
>> proceed on design, we're going to try to address it.
> 
> Great! From previous comments, it didn't sound like automation was
> that high a priority. Glad you liked the ideas.
> 
> 
>> This one gets into controversial territory. The hard-core
>> minimizers argue that any bits that they don't need to run should
>> never even be installed. We've had a lot of discussion on this
>> topic in recent months, and at some point I'm going to have to put
>> together a reasonable story to address it. But yes, we completely
>> agree that locking down shouldn't be purely an install-time
>> decision, and older, deprecated services shouldn't be on by default
>> in most cases. You'll start seeing some movement here in Solaris
>> Nevada really soon now.
> 
> It certainly is tricky, and there's no "one size fits all", that's
> for sure. Maybe an early question in an interactive installer should
> be "what is your general attitude towards security?", with options
> like "I want maximum ease of use", "a reasonable balance" and
> "paranoid" ^-^ (we'd probably need more technical options as well).
> Those high-level choices would then influence what's
> installed/activated, and maybe other things.
> 

Mostly, we want to find a way to not ask questions like that, because 
they're either too vague to really achieve the user's purpose, or so 
detailed that the ease-of-use just isn't there.  But I'm sure we'll 
end up with some amount of dialog on some path to let users tune this.

> As a side-note, how much use is being made of Solaris's Process
> Rights Management? That's certainly one way to help contain any
> possible damage. This could be applied to GUI applications as well.
> Maybe for a next-gen package management system, the options could
> include "open", "restricted" and "paranoid" (or whatever) security
> settings at install time, which would affect things like what PRMs
> are given and so on.
> 

Privileges are being used pretty extensively, but there are places we 
need to leverage them further.
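
For the curious, privilege bracketing at the code level looks roughly 
like this.  A minimal sketch - error handling is abbreviated and the 
two privilege names are just examples:

/*
 * Run with only the privileges this program actually needs.
 * Note that emptying the set also drops the "basic" privileges
 * (proc_fork, proc_exec, ...), so a real program would usually
 * start from the basic set instead.
 */
#include <priv.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    priv_set_t *wanted;

    if ((wanted = priv_allocset()) == NULL) {
        perror("priv_allocset");
        exit(1);
    }

    priv_emptyset(wanted);
    priv_addset(wanted, "net_privaddr");   /* bind ports < 1024 */
    priv_addset(wanted, "file_dac_read");  /* read files we don't own */

    /*
     * Replace the permitted and effective sets; privileges given
     * up here can never be reacquired by this process.
     */
    if (setppriv(PRIV_SET, PRIV_PERMITTED, wanted) != 0 ||
        setppriv(PRIV_SET, PRIV_EFFECTIVE, wanted) != 0) {
        perror("setppriv");
        exit(1);
    }
    priv_freeset(wanted);

    /* ... do the program's work with a minimal privilege set ... */
    return (0);
}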

> Going even further would be to integrate overall package management
> with per-package security settings and things like security alerts.
> Many security alerts aren't very useful in terms of what a sys-admin
> should do. "Wait for a patch and disable the program if possible in
> the meantime" isn't that handy. For example, if a vulnerability
> depends on a buffer-overflow attack but you have an UltraSPARC chip
> or an x86 chip with NX-bit support, then such attacks *may* be
> rendered useless - it would be nice to get a security alert from
> Solaris along the lines of "You have XYZ installed, but because you
> have anti-stack-cracking support active, you don't have to worry
> about this vulnerability". In other cases, the following would be
> interesting too: "The current application suffers from this security
> vulnerability. There are no known exploits currently available, but
> it is recommended that you disable this program until a patch is
> available. If this program is needed in the meantime, the following
> list of PRM setting changes can be made to reduce the scope for
> damage in case of an attack - see this web-page for more
> information".
> 

I agree, integrating security alerts with the software management tools 
would be a good thing to do.  Our recent acquisition of Aduva seems to 
have brought some technology in this area, though I haven't looked at it 
in detail yet.
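
Just to make the conditional-advisory idea concrete, here's a sketch 
of the rule you describe - entirely hypothetical, since I'm inventing 
both the vulnerability classes and the platform probe:

/*
 * Hypothetical: downgrade an alert when the vulnerability class is
 * mitigated by the local platform (e.g. non-executable stack).
 */
#include <stdio.h>

enum vuln_class { VULN_STACK_OVERFLOW, VULN_OTHER };

/*
 * Assumption: some probe reports whether a non-executable stack
 * (UltraSPARC, x86 NX bit) is active on this machine.
 */
static int
nonexec_stack_active(void)
{
    return (1);
}

static const char *
advise(enum vuln_class c)
{
    if (c == VULN_STACK_OVERFLOW && nonexec_stack_active())
        return ("mitigated locally: lower priority, patch when convenient");
    return ("no local mitigation: disable the service until patched");
}

int main(void)
{
    printf("stack-overflow vuln: %s\n", advise(VULN_STACK_OVERFLOW));
    printf("other vuln:          %s\n", advise(VULN_OTHER));
    return (0);
}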

Dave
