Hi Jeff,

On Wed, 2018-01-10 at 11:30 -0800, Jeff Van wrote:
> Hi Al,
>
> Hmm, it seems as though I've configured everything correctly then.
>
> I also tried checking the ARP cache like you suggested, but in the
> output there were no IP/MAC addresses that matched the IP/MAC address
> of the node I am trying to communicate with. However, if my
> understanding of ARP is correct, that shouldn't be unusual, since my
> head node has not been able to receive any kind of communication from
> the node I am trying to communicate with.
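(For reference, on a Linux head node such as your Ubuntu 14.04 machine,
the client-side cache can be listed with either of the commands below;
10.1.10.134 is the BMC address from your first mail. An empty result
matches what you describe.

  $ ip neigh show | grep 10.1.10.134
  $ arp -an | grep 10.1.10.134
)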
This is a good piece of data. If the problem were on the server side
(i.e. the IPMI chip isn't responding to your client packets), then
there should still be a cache entry on the client side. (This assumes
you have a typical setup where recent ARP entries are cached.) But the
lack of an entry suggests this is a client-side problem: the client
can't send packets at all because it cannot resolve the IP to a MAC
address.

You may want to check the ARP configuration on the BMC (on the server)
itself. You can see it with "ipmi-config --checkout".

Al

> I've also checked the ipmi-oem manpage, and I wasn't able to find
> anything particularly useful.
>
> However, I noticed that when I use this command on the head node:
> `sudo bmc-device --get-bmc-global-enables`, I receive the following
> output:
>
> Receive Message Queue Interrupt : disabled
> Event Message Buffer Full       : disabled
> Event Message Buffer            : enabled
> System Event Logging            : enabled
> OEM 0                           : disabled
> OEM 1                           : disabled
> OEM 2                           : disabled
>
> Perhaps the fact that OEM 0-2 are disabled might provide a clue as to
> what my problem could be?
>
> If you happen to have any other ideas on how I might go about
> debugging this, or if there are other places to which I can direct my
> questions, please let me know. Thank you again for taking the time to
> help me with this!
>
> On Tue, Jan 9, 2018 at 5:32 PM, Albert Chu <ch...@llnl.gov> wrote:
> > Hi Jeff,
> >
> > On Tue, 2018-01-09 at 17:14 -0800, Jeff Van wrote:
> > > Hi Al,
> > >
> > > Thank you for the response!
> > >
> > > As a follow-up question, would you happen to be able to describe
> > > how one might do BIOS configs to determine which port allows
> > > IPMI?
> >
> > Every motherboard is different, so you'll ultimately have to look
> > into the product info for your board.
> >
> > > After searching the internet on how to do it, I've tried the
> > > following on the head node: pressed F2 to enter system setup
> > > after the system Power-On Self-Test (POST, i.e. while it's
> > > booting up). However, after combing through the settings options,
> > > I could not find any information about IPMI and associated ports.
> > > The closest thing I could find was under "Integrated Devices",
> > > where it displayed the capabilities of the NIC ports, of which
> > > I've included a screenshot in this email. However, none of the
> > > NIC ports display IPMI or iDRAC under the "capabilities" section.
> > > Additionally, in my configuration I have NIC Selection set to
> > > Shared on both the head node and the node I am trying to
> > > communicate with, in case that makes a difference.
> >
> > I haven't worked on a PowerEdge for years, so I can only go off
> > recollection. My recollection is that somewhere in the BIOS there
> > is some type of "baseboard management" config where one could pick
> > things like "dedicated" or "shared". Take this with a grain of
> > salt, as I may be confusing this with other motherboards.
> >
> > In the ipmi-oem manpage, "shared" indicates IPMI only works on
> > NIC1. So that seems like a reasonable config if you're using NIC1.
> >
> > Al
> >
> > > I've also tried pressing Ctrl+E after POST, but that takes me to
> > > configurations that are very similar to pressing F10.
> > >
> > > I've also tried pressing F10 to enter hardware configuration
> > > settings, but I was not able to find any useful information there
> > > either.
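(A side note: if the BIOS doesn't expose it, recent FreeIPMI versions
can usually query and set the NIC selection in-band through the Dell
OEM extension described in the ipmi-oem manpage, e.g. run locally on
the node:

  $ sudo ipmi-oem dell get-nic-selection
  $ sudo ipmi-oem dell set-nic-selection shared

Per that manpage, "shared" means IPMI traffic rides on NIC1.)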
> > > If there is any other useful information or advice that you could
> > > give me, I would very much appreciate it! I understand that you
> > > must be busy, so thank you again for taking the time to help me
> > > with this.
> > >
> > > Additionally, I have included a diagram of the physical ports on
> > > the back of my Dell PowerEdge R710. Currently, the cable
> > > connecting the two nodes is plugged into the left-most port, at
> > > number 10 in the diagram.
> > >
> > > On Mon, Jan 8, 2018 at 5:22 PM, Albert Chu <ch...@llnl.gov> wrote:
> > > > Hi Jeff,
> > > >
> > > > > I have double checked the BMC configuration for these nodes,
> > > > > and they seem to be correct (I can provide screenshots if
> > > > > necessary); however, I am still unable to communicate with
> > > > > these servers using IPMI (e.g. ipmipower, ipmiping, etc. all
> > > > > do not work), and I am not sure what else I can do. If
> > > > > anybody could help me or point me in the right direction, I
> > > > > would very much appreciate it!
> > > >
> > > > It's hard to know for sure what the problem could or could not
> > > > be. But if ipmiping isn't even working, that means something
> > > > very basic is not configured correctly. Here are some ideas of
> > > > what to look at:
> > > >
> > > > A) Ensure that basic networking is set up correctly on both
> > > > client and server, i.e. valid IP, subnet, MAC, etc. Most use an
> > > > IP address for the BMC that is different from the node's "main"
> > > > IP address. Ensure that this IP address can be resolved on the
> > > > client (i.e. the IP converted into a MAC address). You can
> > > > check the ARP cache to see if that has been done correctly.
> > > >
> > > > B) Ensure IPMI is enabled (most notably the LAN_Channel
> > > > configuration).
> > > >
> > > > C) Ensure you're using the right networking port. Many
> > > > motherboards only support IPMI on one of their networking ports
> > > > if there is more than one. Or sometimes BIOS configs need to be
> > > > done to determine which port allows IPMI.
> > > >
> > > > I've personally worked with PowerEdges before, and suspect C
> > > > above is your problem. You may want to look into the ipmi-oem
> > > > command manpage and read through some of the extensions in
> > > > there for "Dell" and any commands that apply to PowerEdges.
> > > >
> > > > Al
> > > >
> > > > On Mon, 2018-01-08 at 17:04 -0800, Jeff Van wrote:
> > > > > Hello! I'm not sure if this is the right place to ask this
> > > > > question, but if not, please let me know where I can direct
> > > > > my question.
> > > > >
> > > > > I am currently using MAAS version 1.9.5+bzr4599-0ubuntu1 with
> > > > > Ubuntu 14.04.5, with three nodes (all Dell PowerEdge R710). I
> > > > > am currently trying to use IPMI version 2.0.
> > > > >
> > > > > However, I am unable to power up these nodes with IPMI, due
> > > > > to an error:
> > > > >
> > > > > Failed to query node's BMC - Power state could not be
> > > > > queried: 10.1.10.134: connection timeout
> > > > >
> > > > > In /var/log/maas/maas.log, an identical error message
> > > > > appears.
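(A timeout like the one quoted above can also be narrowed down outside
of MAAS. Assuming FreeIPMI is installed, something along these lines
exercises items A and B from the list above: ipmiping from the head
node, and the config checkouts run locally on the problem node, since
LAN access to it is exactly what's failing. "Lan_Conf" and
"Lan_Channel" are the standard ipmi-config section names for the BMC's
network and channel-access settings:

  $ ipmiping -c 3 10.1.10.134
  $ sudo ipmi-config --checkout --section=Lan_Conf
  $ sudo ipmi-config --checkout --section=Lan_Channel

If ipmiping times out while the checked-out config looks sane, the
usual suspects are the physical port and the NIC selection discussed
above.)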
> > > > >
> > > > > I have double checked the BMC configuration for these nodes,
> > > > > and they seem to be correct (I can provide screenshots if
> > > > > necessary); however, I am still unable to communicate with
> > > > > these servers using IPMI (e.g. ipmipower, ipmiping, etc. all
> > > > > do not work), and I am not sure what else I can do. If
> > > > > anybody could help me or point me in the right direction, I
> > > > > would very much appreciate it!
> > > > >
> > > > > If there's any more useful information that I can provide,
> > > > > please let me know.

--
Albert Chu
ch...@llnl.gov
Computer Scientist
High Performance Systems Division
Lawrence Livermore National Laboratory

_______________________________________________
Freeipmi-users mailing list
Freeipmi-users@gnu.org
https://lists.gnu.org/mailman/listinfo/freeipmi-users