Ray,
I have deleted some stuff below not because I ignored it, but because I wanted to focus immediate discussion on a few of your points:
On Jul 5, 2004, at 11:07 PM, Ray Olszewski wrote:
At 09:41 PM 7/5/2004 -0700, Chad Carr wrote:
1) Unconfigured interfaces tend not to route packets to the proper destination (or accept them actually, even if you can figure out their address). A distinct problem for http-based configuration interfaces.
Yes. But http-based configuration of home routers (think Linksys or Netgear) is naturally done on the internal interface, which can easily have workable values as the default settings (the best the external interface can do, I suppose, is default to DHCP). I'm pretty sure the current Bering variants assume eth1 is internal, has address 192.168.1.254, and is on 192.168.1.0/24.
Functionally, LEAF will not be able to bootstrap ethernet interfaces via anything other than a console-based interface until we are off floppy, which is not a goal of this project, from what I have seen.
This is not quite true. Bering's kernel, like many small kernels, has limited NIC support built in. Right now I forget what NICs are supported out of the box, but tulip is a likely example. A LEAF system with 2 supported NICs could easily be brought up to the point where a Web-based interface was available on the LAN side, if the configuration chose sensible defaults. I believe the current defaults fit the need; if not, they would be easy to tweak, using the Linksys setup style as a mental model.
Until we can throw a bunch of modules on the boot media and detect hardware, this is simply not a possibility. Not a limitation (or a responsibility) of the configuration database or any of the infrastructure components.
Theoretically, a pre-configuration utility (as simple as a Makefile or as complex as a Java program) that makes use of the cdb structure will have a much easier time building a bootstrapped floppy image for a newbie (modules included!) than one that does not.
Years ago, back in LRP days, there was a Web-based configuration system that would construct a modules.lrp file for you. It handled other things, like the various ip_masq_* modules, but its main purpose was to simplify NIC support. It failed for no fundamental reason, just the site's developer losing interest and nobody else picking it up when LRP changed kernels.
It would be nice to think again about a configuration system that would build a floppy image that had a base kernel, suitable modules, an etc.lrp package with the right /etc/modules file, and the mix of packages suitable to a specific system ... maybe even including choices of firewall packages. But I think that is not our immediate concern for using cdb.
Let's start with the three sections above. I am all for making a preconfig tool part of the scope of this effort. I think it is well within the technical limits of the code we are talking about building, and it is an effort that will pay back in time saved on the newbie side (not that it will save _me_ time; I tend to ignore the buggers, but you and Tom have hearts of gold!). I also think it factors into the decisions we are making with respect to the config-db and associated tools (the package manager, for instance).
I think the basic premise of a preconfig system would be to take some user input (this could start small, like asking them to select their NICs from a list and what type of boot media they want), then generate a .bin or .iso for them with the modules built in and the initial config-db populated with default values.
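A very rough sketch of what that build step might look like, in Python just for illustration; the module pool path, the file names, and the key names are all invented to show the shape of the thing, not the real cdb layout or image tooling:

#!/usr/bin/env python
# Rough sketch only. Paths, file names and key names below are invented
# for illustration; the real cdb layout and image tooling may differ.

import os
import shutil

def build_staging_tree(nic_modules, boot_media, defaults, dest="staging"):
    """Stage the selected NIC modules plus an initial config-db of defaults.
    Wrapping the tree into a .bin or .iso for the chosen boot_media is left
    out here; that step would shell out to the usual image tools."""
    moddir = os.path.join(dest, "lib", "modules")
    if not os.path.isdir(moddir):
        os.makedirs(moddir)
    for mod in nic_modules:
        # hypothetical local pool of pre-built 2.4 kernel modules
        shutil.copy(os.path.join("module-pool", mod + ".o"), moddir)
    # write the initial config-db populated with default values
    cdb = open(os.path.join(dest, "cdb.defaults"), "w")
    for key in sorted(defaults):
        cdb.write("%s=%s\n" % (key, defaults[key]))
    cdb.close()
    return dest

if __name__ == "__main__":
    build_staging_tree(
        nic_modules=["tulip", "8139too"],   # whatever the user picked
        boot_media="floppy",                # or "cdrom" for an .iso
        defaults={"interfaces.eth1.ipaddr": "192.168.1.254",
                  "interfaces.eth1.netmask": "255.255.255.0"})

The point being that the preconfig tool and the on-box tools would share the same cdb structure, so whatever defaults get written here are the same values the on-box interface edits later.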
I have basically two questions I would like to put up to the group:
1) Should this tool be host-based (downloadable) or server-based (a web interface)?
2) What main functionality should it have? This is a list of possibilities to get the discussion going:
configure console (serial, vga, or headless)
configure NICs (possibly including initial IP addressing)
configure dns/dhcpd
configure time zone/ntp
configure additional packages to install (not the package configs themselves)
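To make that list a little more concrete, the kind of defaults the initial config-db might carry could look something like this (key names and values are invented, not the real cdb schema), and could be fed straight to a builder like the sketch above:

# Purely illustrative defaults for the items listed above; the key names
# are invented and the real cdb schema may look nothing like this.
DEFAULTS = {
    "console.type":            "vga",             # serial, vga, or headless
    "interfaces.eth0.proto":   "dhcp",            # external side
    "interfaces.eth1.ipaddr":  "192.168.1.254",   # internal side
    "interfaces.eth1.netmask": "255.255.255.0",
    "dnsmasq.enabled":         "yes",             # dns/dhcpd for the LAN
    "system.timezone":         "UTC",
    "ntp.server":              "pool.ntp.org",
    "packages.extra":          "dnsmasq,ntpdate", # extra packages to install,
                                                  # not their configuration
}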
If we decide to create a host-based tool, we need to answer two additional questions:
1) Do we require Linux as a host?
2) Do we require network access during preconfig?
11) The interface displays either an error or a success message. On success, the user is presented with a page offering to back up the system to the boot media, thus preserving the state of the system across reboots.
My preference would be always to back up the system (or at least the cdb package) as part of a commit. Both approaches have their strengths and their defects ... probably obvious to anyone interested in this thread ... so just listing the strengths of one and the defects of the other doesn't help much. On balance, I think prompt backup is more familiar to users, especially inexperienced ones, so that's why I favor it.
I prefer to apply the changes and ask the user to commit them explicitly. It is always harder to recover from subtle errors after writing to non-volatile media. If we follow the procedure above, it is closer to the Cisco way of doing things: apply all the changes you want, try them out, and if you bork something really bad, just power cycle the darn thing and you are guaranteed to come up in a consistent state.
This may be a matter of what you are used to. When I'm not working with Linux-based routers, I'm working with small routers from Linksys, Netgear, D-Link, and the like. Their UIs are not perfect, but they are way ahead of what I saw Linux developers coming up with for years ... and they commit changes (to some sort of NVRAM) immediately, with no rollback option ... at least the several models I've actually tested.
I suspect they have a better feel for beginning users of routers than whoever designs Cisco gear (which, last time I checked, you took classes to learn how to operate ... not LEAF's user base). So I'm more inclined to steal from ... uh, make that learn from ... Netgear and its ilk than from Cisco on UI matters.
Right again. Marvels of engineering, but they are a deep well.
I am working on cdb today, because after bad dreams last night I determined that it needs even more flexible array handling and the checkpoint/commit/rollback interface we spoke about. That will be needed regardless of the commit model chosen, so I might as well get started.
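Roughly, that interface looks like this -- a sketch only, with invented names, not the final cdb API:

# Sketch of the checkpoint/commit/rollback interface, with invented names.
# Changes apply to the running (RAM-only) copy immediately; nothing touches
# the boot media until commit(), so a power cycle always comes back up in
# the last committed state.

class ConfigDB:
    def __init__(self, committed):
        self.committed = dict(committed)   # what is on the boot media now
        self.running = dict(committed)     # working copy, lives in RAM

    def set(self, key, value):
        """Change the running config only; takes effect immediately."""
        self.running[key] = value

    def rollback(self):
        """Discard uncommitted changes (what a power cycle does for free)."""
        self.running = dict(self.committed)

    def commit(self, backup_path="cdb.committed"):
        """Make the running config the new baseline and write it back out;
        this is also where the backup to boot media would happen."""
        self.committed = dict(self.running)
        out = open(backup_path, "w")
        for key in sorted(self.committed):
            out.write("%s=%s\n" % (key, self.committed[key]))
        out.close()

The Linksys-style behavior Ray describes would then just be the case where the front end calls commit() immediately after every set(), so either commit model can sit on top of the same interface.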
Thanks, Chad