Garrett D'Amore writes:
> James Carlson wrote:
> > The reason it's still there is out of fear of badly-written user space
> > programs, and (in particular) SNMP.  Having a very large number of
> > addresses per interface could cause those applications either to
> > consume all memory or all CPU or perhaps both.
>
> Those "badly written" applications live in userland.  This particular
> problem is exactly what "resource limits" are designed to cover.
>
> The site that runs into such a problem can easily alleviate the
> situation by reducing the number of interfaces actually configured.
> Since these are normally manually configured, it shouldn't be a big
> problem.
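(For concreteness, the kind of limit referred to above is an ordinary
per-process resource limit.  A minimal sketch -- assuming plain POSIX
rlimits rather than Solaris resource controls, and with an arbitrary
256 MB cap -- might look like this:

    #include <sys/resource.h>
    #include <stdio.h>

    /*
     * Illustration only: cap the virtual address space of an
     * SNMP-style daemon before it walks the interface address
     * list, so a pathological address count exhausts the
     * process's own limit rather than system memory.  The
     * 256 MB figure is arbitrary.
     */
    int
    main(void)
    {
            struct rlimit rl;

            rl.rlim_cur = 256UL * 1024 * 1024;   /* soft limit */
            rl.rlim_max = 256UL * 1024 * 1024;   /* hard limit */

            if (setrlimit(RLIMIT_AS, &rl) != 0) {
                    perror("setrlimit");
                    return (1);
            }

            /* ... continue into the address-walking code ... */
            return (0);
    }

The same policy could instead be attached to a project or zone via
resource controls rather than hard-coded in the daemon.)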
Sure.  The concern, though, was for having customers unwittingly walk
right into a known scaling problem.  (No, I don't think it was really
the right solution, and it became even less right after SolarMAX
integrated.)

> Having a tunable to work around these applications is Just Wrong, IMO.

Agreed.

> We don't provide similar limits for any other kind of resource to
> protect applications with crummy assumptions -- e.g. maximum number of
> filesystems, maximum number of users, largest file size, system memory,
> maximum number of processes, etc.

This one's a little different in that scaling those other things is
unsurprising.  Many applications weren't designed with the idea that a
machine might have two IP addresses, let alone 10,000.

In any event, I'm not trying to defend the variable.  It's a pain in
the posterior, and gets in the way of people trying to scale up Zones.
I'm just offering some context on why it wasn't removed in the past
(particularly with SolarMAX), so that if we do nuke it now, we know
what sorts of problems we should decide to accept.  (It's one of those
unusual things that has a reasonably clear history, unlike much of the
ndd swamp.)

-- 
James Carlson, Solaris Networking            <james.d.carlson at sun.com>
Sun Microsystems / 35 Network Drive        71.232W Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N Fax +1 781 442 1677
