> No, you won't have to do that, but I didn't make it very clear in the
> version that was published. I'm envisioning that Sun or other vendors
> of OpenSolaris-based systems would be able to provide a direct
> installation service based off of some of the concepts our WAN install
> technology uses. Burning ISO's should be a last resort, actually.
Sorry for not giving more context in my original post on this. While WAN install is great in a number of ways (particularly if only the bits needed are downloaded, which is nice for those wanting a small install), it has a couple of issues. The first is the complexity of setting up a WAN-capable network (though I'd guess this can overlap a lot with Solaris networking setup anyway). The other is that if you want a fast *install time* (i.e. minimum downtime), a network install is going to be noticeably slower than installing from local media in most cases.

Obviously some people will have fast connections, but most SMBs and "enthusiasts" (or developers wanting something they can use at home) have relatively slow networking. My connection at home is actually faster than my one at work, but it still takes a long time to download a full Solaris install. At least I can do other things at the same time, which I couldn't during a WAN install. Long downloads are also more susceptible to intermittent network problems.

I hope the above gives some context to why I think it would be worth being able to install from local media that isn't DVD. I'm glad you think "Burning ISO's should be a last resort". Of course, over time, network connections will get increasingly faster - most likely much faster than Solaris itself grows (though I guess then JES will be added as standard or something ^-^). So WAN install will become more usable over time.

> We'd been kicking around some ideas around using virtual machine
> technology to run the install under Windows or Linux as it would be
> simpler for us architecturally, but that also has the problem of
> requiring the user to have an OS and virtualization platform which can
> do that. Being able to grab the install image out of a Windows or other
> filesystem would be another possibility.
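Going back to the download-time point for a moment, a rough back-of-the-envelope comparison makes it concrete. All figures here (image size, link speeds, DVD read rate) are purely illustrative assumptions, not measured values:

```python
# Rough comparison of install transfer times: network download vs. local DVD.
# All numbers below are illustrative assumptions, not measurements.

IMAGE_BYTES = 2.5 * 1024**3  # assume a ~2.5 GiB install image

def hours(bits_per_sec):
    """Time to transfer the whole image at a given raw link speed, in hours."""
    return IMAGE_BYTES * 8 / bits_per_sec / 3600

links = {
    "512 kbit/s ADSL":      512_000,
    "8 Mbit/s cable":       8_000_000,
    "local DVD (~10 MB/s)": 10 * 1024**2 * 8,
    "100 Mbit/s LAN":       100_000_000,
}

for name, speed in sorted(links.items(), key=lambda kv: kv[1]):
    print(f"{name:>22}: {hours(speed):6.2f} h")
```

At the slow-broadband end the gap is hours versus minutes, which is the "minimum downtime" concern above; only on a fast LAN does the network catch up with local media.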
One related idea I had but decided not to mention (due to the implementation effort required) would be to do a basic ZFS port for Windows/Linux/etc (it would only need to be single-threaded, and handling a single disc would be enough for most users). That way, once a "spare" partition has been created, the basic ZFS port could be used to write the install to it from another OS. However, this would require a completely different installer, since Solaris itself wouldn't be running - naturally, the OS virtualisation technique you mention above would get around this and other problems.

> One of my colleagues suggested the other day that perhaps we should just
> skip the whole "coexist in FDISK" problem in the belief that virtual
> machine technology will prove to make installing multiple OS's that are
> separably bootable a quaint practice. He does have a point, but I'm not
> sure it's the right answer within the next couple of years.

In about 18 months, probably more than half of all new x86 systems will have hardware virtualisation built into the CPU. I'm not sure how much that helps, though, compared to current software virtualisation technology.

> One of our main problems there is how to handle the writable portions,
> since the duty cycles for flash drives aren't quite up to hard drive
> standards. I think we'll have to do a stronger job of separating
> software from configuration and logging to make that a reality. In
> principle, though, it's not too different from our suggested diskless
> direction, wherein we'd just download and run images in memory. So
> perhaps we'll have it almost there, anyway.

When you say "duty cycles for flash drives aren't quite up to hard drive standards", do you mean the number of re-writes flash cells can handle without errors? If I recall correctly, flash cells can handle on the order of 100,000 re-writes each before wearing out, and flash also supports offlining groups of failing cells, similar to how hard discs remap bad sectors.
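The wear question can be sketched with a toy model. This is only an illustration of why spreading writes around (as copy-on-write allocation tends to do) helps flash endurance - it is not a description of how ZFS actually allocates blocks:

```python
# Toy model: repeatedly updating one logical block, with and without
# spreading the writes across physical blocks. Purely illustrative -
# not how ZFS or any real flash translation layer works.

def in_place_writes(n_writes, n_blocks):
    """Rewrite the same logical block in place: all wear lands on one cell."""
    wear = [0] * n_blocks
    for _ in range(n_writes):
        wear[0] += 1          # same physical block hit every time
    return max(wear)

def cow_writes(n_writes, n_blocks):
    """Copy-on-write style: each update goes to a fresh block, round-robin."""
    wear = [0] * n_blocks
    for i in range(n_writes):
        wear[i % n_blocks] += 1   # wear rotates across physical blocks
    return max(wear)

N_WRITES, N_BLOCKS = 100_000, 1_000
print("in-place, worst cell:", in_place_writes(N_WRITES, N_BLOCKS))  # 100000
print("spread,   worst cell:", cow_writes(N_WRITES, N_BLOCKS))       # 100
```

With 1,000 blocks to rotate through, the worst-case wear on any one cell drops by a factor of 1,000 - so if cells really do endure around 100,000 re-writes, spreading writes stretches the effective lifetime considerably.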
Sounds like you have looked at this more closely than me, though. btw, wouldn't ZFS's copy-on-write help a bit by spreading the writes around...?

> As one of the goals is to have the best automation capabilities, these
> are good ideas to help achieve that. Keep an eye out as we proceed on
> design, we're going to try to address it.

Great! From previous comments, it didn't sound like automation was that high a priority. Glad you liked the ideas.

> This one gets into controversial territory. The hard-core minimizers
> argue that any bits that they don't need to run should never even be
> installed. We've had a lot of discussion on this topic in recent
> months, and at some point I'm going to have to put together a reasonable
> story to address it. But yes, we completely agree that locking down
> shouldn't be purely an install-time decision, and older, deprecated
> services shouldn't be on by default in most cases. You'll start seeing
> some movement here in Solaris Nevada really soon now.

It certainly is tricky, and there's no "one size fits all", that's for sure. Maybe an early question in an interactive installer should be "what is your general attitude towards security?", with options like "I want maximum ease of use", "a reasonable balance" and "paranoid" ^-^ (it would probably need more technical options than that). Those high-level choices would then influence what's installed and activated, and maybe other things.

As a side note, how much use is being made of Solaris's Process Rights Management? That's certainly one way to help contain any possible damage, and it could be applied to GUI applications as well. Maybe for a next-gen package management system, the install-time options could include "open", "restricted" and "paranoid" (or whatever) security levels, which would affect things like which process rights are granted and so on. Going even further would be to integrate overall package management with per-package security settings and things like security alerts.
Many security alerts aren't very useful in terms of what a sysadmin should actually do. "Wait for a patch and disable the program if possible in the meantime" isn't that handy. For example, if a vulnerability depends on a buffer-overflow attack but you have an UltraSPARC chip, or an x86 chip with NX-bit support, then such attacks *may* be rendered useless - it would be nice to get a security alert from Solaris along the lines of "You have XYZ installed, but because you have anti-stack-cracking support active, you don't have to worry about this vulnerability".

In other cases, something like the following would be interesting too: "The current application suffers from this security vulnerability. There are no known exploits currently available, but it is recommended that you disable this program until a patch is available. If this program is needed in the meantime, the following list of PRM setting changes can be made to reduce the scope for damage in case of an attack - see this web page for more information".

> Seems like Apple's going out of their way to discourage this, but
> perhaps they'll see the error of their ways eventually. I imagine we'll
> be in the same situation as Linux here, but I haven't been paying too
> close attention to this specific issue. Perhaps someone else can pipe
> up if they have any thoughts.

Maybe we need a grass-roots internet campaign to persuade Apple that "the customer is always right" - that it would be in their own interest to support multi-booting (or at least not get in the way). I think that to many people, an Apple laptop with Mac OS X and Solaris/Linux installed would be a dream come true.

This message posted from opensolaris.org
