OK, I have a slightly odd situation. I have multiple systems with dead on-board NICs. We have added a new NIC to a PCI slot in each of these machines and disabled the on-board NICs. The problem is that, for our automated deployment system to work smoothly, these new devices _must_ be eth0 when the system comes up. The kernel always detects the on-board NICs as eth0 and eth1, so the new NIC we've added shows up as eth2.

I know that one way around this is to pass netdev=<irq>,<dma>,eth0 on the kernel append line, but unfortunately these devices don't all share the same IRQ or memory address. I've seen various references to disabling ACPI and/or passing device-driver-specific options; certain pci= options may also help. I'm hoping someone has dealt with this before.
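For reference, this is roughly what I mean by the append line. It's only a sketch: the IRQ and I/O-base values below are placeholders taken from one box, and of course they differ on every machine, which is exactly the problem.

  # /etc/lilo.conf fragment -- values are illustrative only
  image=/boot/vmlinuz
      label=linux
      root=/dev/hda1
      read-only
      # try to bind the PCI card to eth0; any ACPI or pci= tweaks
      # (e.g. acpi=off or pci=noacpi) would presumably go here as well
      append="netdev=11,0x9000,eth0"

I'd rather not maintain a per-machine append line like this, hence the question.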

Any ideas?

Thanks,
-Blake
