On 15/08/2007, at 17:01, Michael Edwards wrote:

> Hmm... you would definitely need to use a boot CD then, since the
> bonding is set up so late in the boot process.

Exactly, otherwise it gets too weird. I would only need the boot CD
when I have to reinstall a node. That seems easy enough.

>
> What happens if you put something like
>
> modprobe bonding
> install bond<N> /sbin/modprobe bonding -o bond<N> miimon=100 mode=0
>
> in a SIS pre-install script (assuming you're using a Red Hat UYOK)?  You might
> have to do "./install_cluster bond0" when you start OSCAR too, or hack
> the SIS master script so it uses that interface on the nodes...
>

I'm using FC5 x86_64, and I have already run "./install_cluster bond0".
On the master side, everything is working fine. Now I have to deploy the
images, and the SIS image has to configure the channel bonding.

I don't have the machines available for these tests today; I will try
tomorrow.
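For the record, the classic way to make bonding persistent on a Fedora-era
image such as FC5 is via /etc/modprobe.conf plus ifcfg files baked into the
SIS image. The interface names and the IP address below are assumptions for
illustration only, not taken from this thread:

    # /etc/modprobe.conf (inside the SIS image)
    alias bond0 bonding
    options bonding miimon=100 mode=0

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.0.101      # assumed node address
    NETMASK=255.255.255.0

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth1 analogous)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

With files like these in the image, the bond is brought up by the normal
network init scripts at boot, with no manual modprobe needed.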

I will inform you ;)

Thanks!


> On 8/15/07, Adrià Forés Herranz <[EMAIL PROTECTED]> wrote:
>>
>> On 15/08/2007, at 14:58, Michael Edwards wrote:
>>
>>> turn off the trunking on the switch until after you image the nodes?
>>
>> The problem with that is that on every reboot the nodes will ask
>> what to do via PXE, not only at installation time. But it's an
>> option.
>>
>>>
>>> You might be able to do a pre-install script for SIS which  
>>> configures
>>> the channel bonding too...
>>
>> Right now I'm working on the SATA controller in the SIS image, using
>> the doc you contributed to the main wiki, because the nodes don't have
>> the same hardware as the master.
>>
>> It would be interesting to configure the channel bonding in the SIS
>> image and deploy that image by CD, since with PXE we also run into the
>> problem above. What do I have to do to configure the bonding in the
>> SIS image?
>>
>> Thanks Michael,
>> Adrià
>>
>>
>>
>>
>>>
>>> On 8/15/07, Adrià Forés Herranz <[EMAIL PROTECTED]> wrote:
>>>> Hello!
>>>>
>>>> I'm building an HPC cluster with OSCAR using 3 machines.
>>>>
>>>> The master has 3 NICs and the nodes have 2 NICs. I'm planning to
>>>> configure channel bonding on every machine.
>>>>
>>>> The channel bonding introduces a booting problem: the switch has
>>>> trunking enabled on both links of every machine for the channel
>>>> bonding, and will not bring up just a single NIC for PXE.
>>>>
>>>> Is there a solution, or is the only option a CD installation (if
>>>> it's possible to configure the channel bonding on that boot CD)?
>>>>
>>>> Does anyone have a cluster with channel bonding and only 2 NICs on
>>>> the compute nodes? How does it work?
>>>>
>>>> Thanks
>>>>
>>>
>>
>>
>


-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >>  http://get.splunk.com/
_______________________________________________
Oscar-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/oscar-devel
