Re: configuring remote headless servers
> But this led me to wonder how I would cope if, for instance, a server
> came up in single-user mode requiring an fsck.

The standard way to deal with this in DC deployments is to use IPMI:

1) Redirect the BIOS console to the IPMI virtual console.
2) Redirect the boot loader prompts to the IPMI virtual console device.
3) Spawn a getty on the virtual IPMI console device.

We do this on pretty much everything in our DCs. We don't have any gear running NetBSD, but for the OpenBSD and FreeBSD machines, (2) involves one line in boot.conf, and (3) is an entry in gettytab. (1) is OS agnostic, and involves configuring the machine's BIOS with IP addresses and login credentials.

--lyndon
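[For reference, a minimal sketch of steps (2) and (3) on an OpenBSD box; the device names and speed are illustrative and depend on which UART the BMC exposes as the virtual console:]

```
# /etc/boot.conf -- step (2): send the boot loader to the first serial port
set tty com0

# /etc/ttys -- step (3): spawn a getty on that port
tty00  "/usr/libexec/getty std.9600"  vt220  on secure
```

On FreeBSD the equivalent of (2) goes in /boot.config or loader.conf, and (3) is likewise a ttys(5) entry referencing a gettytab(5) class.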
Re: configuring remote headless servers
On Wed, 31 Aug 2016, Steve Blinkhorn wrote:

> It took three days for an engineer with sufficiently developed skills to
> become available: He solved the problem by switching the server on.

Having found no good way to truly address issues like this without some control of my own, I don't deal with an ISP that won't give me power control and a console. HP iLOs are a good solution since they can do a hard power cycle and give you a real remote console. If your console stays in text mode, you don't even have to license the iLO. Many server BIOSes have a mode whereby they can provide console support via a dedicated serial port (Tyan comes to mind as one of these). If you combine that functionality with something that can do remote power control (like a Baytech RPC or an APC network PDU) then you've got the same features.

> But this led me to wonder how I would cope if, for instance, a server
> came up in single-user mode requiring an fsck.

If you have true console access it wouldn't matter. You'd do the fsck and then keep truckin'.

> I can see from the man pages for shutdown(8) and fastboot(8) that there
> is provision related to this kind of circumstance.

I'll apologize up front, because I doubt my response is what you were looking for. I'll simply say this: when it comes to hosted systems, the faster the system can bring up the network and ssh with the absolute minimum of dependencies, the better. I've never seen an OS that really "gets" this, as evidenced by the fact that even though OSes *could* use their ramdisks/miniroots to launch OpenSSH (statically linked), they rarely do (there are some reasons, but I usually disagree with their importance). For a server without a decent console, having OpenSSH started is de facto the same thing as having a usable server, so the strategy should place a categorically *premium value* on starting it as early in the boot process as possible, with the least number of dependencies.
Also, NetBSD can redirect the console to a serial port as soon as the kernel starts booting, though you'd need a serial console in place before you can take advantage of that. In your scenario of the system stopping the boot process for an fsck, it'd save you from having to call on some data-centre hands and eyes at your ISP.

-Swift
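[A sketch of that redirection on NetBSD/x86, assuming the stock bootstrap and the first serial port; see boot.cfg(5) for the exact syntax on your release, and note the port and speed here are illustrative:]

```
# /boot.cfg -- ask the NetBSD bootstrap (and the kernel it loads)
# to use the first serial port as the console
consdev=com0
timeout=5
```

A getty on the matching tty in /etc/ttys then gives you a login prompt on the same line once multi-user is reached.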
Re: configuring remote headless servers
st...@prd.co.uk (Steve Blinkhorn) writes:

> But this led me to wonder how I would cope if, for instance, a server
> came up in single-user mode requiring an fsck.

This is handled by using server hardware that has an out-of-band management console, i.e. a BMC, iLO, DRAC, or iRMC, or just a serial console with a terminal server and a remote power switch.

Configuring some kind of emergency network before doing fsck is difficult, as you need to run with read-only disks, and it wouldn't help with other types of errors. It's too late to answer a kernel or even a boot loader prompt.

Greetings,
-- 
                                Michael van Elst
Internet: mlel...@serpens.de
                                "A potential Snark may lurk in every tree."
Re: configuring remote headless servers
On 31 August 2016 at 11:34, Steve Blinkhorn wrote:

> Following on from the recent saga of upgrading from 2.0 to 7.0 which
> assiduous readers may recall, the servers were re-installed in their
> racks in the data centre. All was well with one of them but the
> other apparently failed. It took three days for an engineer with
> sufficiently developed skills to become available: He solved the
> problem by switching the server on.
>
> But this led me to wonder how I would cope if, for instance, a server
> came up in single-user mode requiring an fsck. Once upon a time I
> was able to assume that this would be a circumstance familiar to data
> centre staff, but no longer. What I would need would be a boot
> sequence that started the network before any file system checking and
> allowed remote login. Alternatively, file system checking could be
> disabled by default - even if the system went down by power cycling
> the machine.
>
> I can see from the man pages for shutdown(8) and fastboot(8) that
> there is provision related to this kind of circumstance. Would it
> simply be a matter of having an empty file named /fastboot in the root
> directory? If it matters, these are i386 machines.
>
> Any gotchas with this approach?

As a data point: I had a USB key set up to boot, run dhcpcd, and then start openvpn and sshd, and I set the server to boot from USB first. In the event of a server issue the remote hands only had to plug in the USB key and hit the power switch. (The OpenVPN was in case someone had managed to bork the firewall as well, such that inbound ssh was disallowed - don't ask.) It was generic enough that they could plug it into most any box with ethernet and have an expectation of it working :)
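[The startup on such a rescue key could be as simple as an rc.local that brings the pieces up in order. The poster doesn't say how theirs was built, so the script below is a hypothetical sketch: the use of rc.local, the openvpn binary path, and the config file name are all assumptions.]

```sh
#!/bin/sh
# /etc/rc.local on the rescue key (hypothetical layout):
# network first, then an outbound tunnel, then a way in.
dhcpcd                                    # get an address on whatever NIC is present
/usr/pkg/sbin/openvpn --daemon \
    --config /etc/openvpn/rescue.conf     # tunnel out in case inbound ssh is firewalled
/usr/sbin/sshd                            # accept logins (directly, or via the tunnel)
```

The point of the ordering is that each step only depends on the one before it, so a broken root file system on the real disks never enters the picture.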
configuring remote headless servers
Following on from the recent saga of upgrading from 2.0 to 7.0 which assiduous readers may recall, the servers were re-installed in their racks in the data centre. All was well with one of them but the other apparently failed. It took three days for an engineer with sufficiently developed skills to become available: He solved the problem by switching the server on.

But this led me to wonder how I would cope if, for instance, a server came up in single-user mode requiring an fsck. Once upon a time I was able to assume that this would be a circumstance familiar to data centre staff, but no longer. What I would need would be a boot sequence that started the network before any file system checking and allowed remote login. Alternatively, file system checking could be disabled by default - even if the system went down by power cycling the machine.

I can see from the man pages for shutdown(8) and fastboot(8) that there is provision related to this kind of circumstance. Would it simply be a matter of having an empty file named /fastboot in the root directory? If it matters, these are i386 machines.

Any gotchas with this approach?

--
Steve Blinkhorn
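[On the /fastboot question: per fastboot(8), the traditional mechanism is exactly an empty /fastboot in the root directory, which the boot-time rc scripts check, and remove, to skip the fsck on the next boot. A minimal sketch; whether rc honours the flag, and whether it survives anything other than a clean reboot, depends on the NetBSD version, so check /etc/rc on the machine in question:]

```
# arrange for the next boot to skip file system checks, then reboot
touch /fastboot
shutdown -r now
```

The fastboot(8) utility itself is essentially a wrapper for these two steps. Note that skipping fsck after an unclean shutdown trades availability for the risk of mounting a damaged file system read-write.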
Re: installing on a VPS
On Tue, Aug 30, 2016 at 03:52:43PM -0400, Al Zick wrote:

> Hi,
>
> I really hope that there is someone who can help me. I want to install
> NetBSD on a rackspace VPS. I have NetBSD running on a VPS hosted by someone
> else, and they use a very different system for the install and I could mount
> a NetBSD ISO to do the install. This worked well. However, the install on
> rackspace is more of a problem. I would like to create an image of the
> running VPS with dd and ssh. Then I would like to put the rackspace VPS in
> recovery mode and once again use dd to install it on /dev/xvdb (which will
> be /dev/xvda after switching back from recovery mode). They said on
> rackspace that I don't need a Xen kernel.
>
> I would like to use the NetBSD boot loader, but the current install of Linux
> uses grub. What boot loader should I be using?

It looks like this is an HVM guest, then. You can't just dd the image of your actual Xen VPS; that won't boot. You can handle an HVM guest just like a bare-metal box. You should be able to use what I did here: http://mail-index.netbsd.org/netbsd-users/2015/09/20/msg016929.html

You may need adjustments for the network setup.

--
Manuel Bouyer
     NetBSD: 26 years of experience will always make the difference
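[For what it's worth, the dd-over-ssh transfer the original poster describes might look like this, run from the recovery system. The device names are illustrative: rwd0d is the whole-disk raw device on a NetBSD/i386 source, and you should double-check which xvd device is the spare before writing to it, since dd will overwrite it without asking.]

```
# pull a raw image of the source machine's disk and write it to the
# spare disk on the recovery VPS; gzip keeps the bytes on the wire down
ssh root@source-vps 'dd if=/dev/rwd0d bs=1m | gzip -c' \
    | gzip -dc | dd of=/dev/xvdb bs=1m
```

As noted above, though, an image taken from a non-HVM Xen guest won't simply boot on an HVM one, so a fresh install per the linked message is the safer route.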