Re: [CentOS] HP ProLiant ML150: how do I access disks?

2010-02-17 Thread Kris Buytaert
On Wed, 2010-02-17 at 15:20 +0100, Rainer Duffner wrote:
 
  On 02/17/2010 03:38 PM, Rainer Duffner wrote:

  Hello there,
 
  I don't know about MLs, but with the DL series CentOS doesn't have any
  problems at all, and none with seeing disks in particular. So I presume
  Rainer is absolutely right: you have to build an array first.
  Check this link
  http://docs.hp.com/en/9320/acu.pdf
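 
  For reference, with the hpacucli CLI the equivalent is something like
  this (a sketch; the controller slot and drive IDs are hypothetical,
  check your own layout first):
 
    # show controllers, arrays and physical drives
    hpacucli ctrl all show config
    # build a RAID 1 logical drive from two unassigned drives
    hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1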
 

 
 
 
 BTW: does 5.4 work on the latest G6 hardware?

It does on my DL360 and BL460s.





Re: [CentOS-virt] Live migration and DRBD

2009-12-01 Thread Kris Buytaert
On Mon, 2009-11-30 at 11:43 -0500, Tait Clarridge wrote:
 On Mon, 2009-11-30 at 13:53 -0200, Gilberto Nunes wrote:
  Hi folks
  
  I deployed two Dell PowerEdge T300s to test virtualization with
  kvm+drbd+heartbeat.
  
  KVM, DRBD and Heartbeat work properly.
  
  However, I have a doubt!
  
  When the primary node goes down, the secondary node starts the VM that
  was originally running on the primary node...
  So this requires a full stop of the whole system...
  That is not what we want here...
  
  Is there some way to live migrate a VM off a primary node that has
  been shut down?
  
  I have no idea how to make this stuff work...
  
  Thanks for any help
  
 Currently there is work being done on a project for Xen called Remus.
 I am not sure about KVM, but Remus is still in development; although it
 has been merged into the xen-unstable repository, it isn't completely
 ready yet (the developers are working very hard on it, though).
 
 Basically it performs the first part of a live migration and, if the
 connection is lost, it jumps the virtual machine over to the secondary
 host.
 
 http://nss.cs.ubc.ca/remus/
 
 
 It appears that Red Hat is including high availability for KVM in
 their Red Hat Enterprise Virtualization Manager for Servers.
 
 Not sure if this is going to make it to CentOS; can someone
 confirm or deny?

There are 3 different things being discussed in this thread so far:

 1. Live Migration 
 2. Virtual Machine HA 
 3. Continuous Mirroring / Replication 


(1) First of all, Live Migration is not an HA solution: if your primary
machine dies, there is no way to initiate a Live Migration anymore, as
live migration requires the home node to still be active. It can be used
to migrate workloads away during maintenance slots or to spread load,
but not for HA.
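
To make that concrete: a live migration is always initiated against the
still-running source host, e.g. (domain and host names here are
hypothetical):

  # Xen: move running domain vm01 to node2
  xm migrate --live vm01 node2
  # KVM/libvirt equivalent
  virsh migrate --live vm01 qemu+ssh://node2/system

If the source host is dead, there is nothing left to hand over the
memory pages, so neither command can do anything.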



(2) So what people typically configure with Heartbeat is indeed the
restart of a virtual machine from the same shared storage device.
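
A minimal sketch of that pattern with Heartbeat v1 (node, resource and
device names are hypothetical): the active node brings up the DRBD disk,
mounts it, and starts the Xen domains; on failover the peer repeats the
same sequence:

  # /etc/ha.d/haresources -- one resource group, preferred owner node1
  node1 drbddisk::vmstore \
        Filesystem::/dev/drbd0::/vmstore::ext3 \
        xendomains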

(3) Remus and Kemari are the new kids in town: they are going for
real-time mirroring and will therefore implement real virtual machine
HA. Remus is headed for Xen inclusion, while Kemari has just announced
that they are also starting to work on a KVM port; their current
version only targeted Xen.

http://virtualization.com/guest-posts/2009/11/15/remus-and-kemari-still-going-strong/


Hope that clarifies some stuff ..




Re: [CentOS-virt] XEN and RH 6

2009-11-12 Thread Kris Buytaert
On Tue, 2009-11-10 at 16:26 +0200, Pasi Kärkkäinen wrote:
 
 
 Both Novell and Oracle have been deeply involved in Xen lately; both
 are developing and supporting their own products based on Xen.
 


Given that Unbreakable is a source rebuild of RHEL, and that Oracle is
putting a lot of effort behind Xen (they have hosted the Xen Summit
before, and after the acquisition of Virtual Iron they clearly stated
that their roadmap was Xen-based, not even counting the potential
Xen-based platforms they might gain if the Sun acquisition eventually
falls through), it looks to me that they will have to build a
Dom0-based distribution anyhow.

The bigger question, however, is if and how this work can come back
upstream and potentially be used in CentOS.

What's the thinking on that?

greetings

Kris





Re: [CentOS] [DRBD-user] Unexplained reboots in DRBD82 + OCFS2 setup

2009-06-30 Thread Kris Buytaert
On Thu, 2009-06-25 at 11:42 +0200, Kris Buytaert wrote:

  Use a serial console, attach that to some monitoring host
  (you can use USB-to-serial adapters, they are cheap and work), and log
  on that one. You'll get the last messages from there.
  
 I had indeed hoped to see some output on the serial console when the
 reboots happened, but the best I got so far was a partial timestamp
 with no further explanation before the reboot output started again.
 
 Any other ideas?
 

Update:

The problem is indeed ocfs2 fencing off the systems. The logging,
however, does not show up on the serial console; it DOES show up when
using netconsole.
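
For completeness, the sender side is just the netconsole module pointed
at the listening host (a sketch; interface, port and IP here are
hypothetical):

  # log kernel messages to 192.168.1.10:6666 via eth0
  modprobe netconsole netconsole=@/eth0,6666@192.168.1.10/

The capture below is what arrived on the listener: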


[base-r...@ccmt-a ~]# nc -l -u -p 
(8,0):o2hb_write_timeout:166 ERROR: Heartbeat write timeout to device
drbd0 after 478000 milliseconds
(8,0):o2hb_stop_all_regions:1873 ERROR: stopping heartbeat on all active
regions.
ocfs2 is very sorry to be fencing this system by restarting

You'd think it would write to the serial console before it logs over
the network :)  It doesn't.




Next step is that I'll start fiddling some more with the timeout
values :)


 



Re: [CentOS] [Ocfs2-users] Unexplained reboots in DRBD82 + OCFS2 setup

2009-06-25 Thread Kris Buytaert
On Wed, 2009-06-24 at 12:02 -0700, Sunil Mushran wrote:
 Do you have a separate network path for drbd traffic? If you do not,
 then you are probably overloading the network. In this case, I believe
 drbd is unable to replicate the I/Os fast enough and is thus blocking
 the o2cb disk heartbeat. One workaround is to increase
 O2CB_HEARTBEAT_THRESHOLD to more than the default of 60 secs. Refer to
 the ocfs2 FAQ or the ocfs2 1.4 user's guide for more on this.
 
I've already modified O2CB_HEARTBEAT_THRESHOLD to different values
(120, 240 etc.), with no change.
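
For reference, changing it means editing /etc/sysconfig/o2cb on both
nodes and restarting o2cb (a sketch; the threshold counts 2-second
heartbeat iterations, so the effective timeout is roughly
(threshold - 1) * 2 seconds):

  # /etc/sysconfig/o2cb
  O2CB_HEARTBEAT_THRESHOLD=120

  service o2cb restart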


 And if you want to capture the logs, set up netconsole.
 
/dev/console is a serial device connected to a terminal server; so far
the best I got was a partial timestamp before I saw the output of the
reboot again.

It tries to log, but doesn't finish writing it :(  Mostly, though,
there is no activity at all on the serial console :(

Any other ideas?

greetings


Kris 




 Kris Buytaert wrote:
  We're trying to set up a dual-primary DRBD environment with a shared
  disk running either OCFS2 or GFS. The environment is CentOS 5.3 with
  DRBD82 (but we also tried DRBD83 from testing).
 
  Setting up a single-primary disk and running bonnie++ on it works.
  Setting up a dual-primary disk, only mounting it on one node (ext3),
  and running bonnie++ works.
 
  When setting up ocfs2 on the /dev/drbd0 disk and mounting it on both
  nodes, basic functionality seems in place, but usually less than 5-10
  minutes after I start bonnie++ as a test on one of the nodes, both
  nodes power cycle, with no errors in the logfiles, just a crash.
 
  When at the console at the time of the crash, it looks like a disk IO
  block happens (you can type, but no actions happen), then a reboot;
  no panics, no oops, nothing (sysctl panic values set to timeouts
  etc.). Setting up a dual-primary disk with ocfs2, only mounting it on
  one node and starting bonnie++, causes only that node to crash.
 
  At DRBD level I get the following error when that node disappears:
 
  drbd0: PingAck did not arrive in time.
  drbd0: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure )
  pdsk( UpToDate -> DUnknown )
  drbd0: asender terminated
  drbd0: Terminating asender thread
 
  That, however, is an expected error given the reboot.
 
  At first I assumed OCFS2 to be the root of this problem, so I moved
  forward and set up an iSCSI target on a 3rd node, and used that
  device with the same OCFS2 setup. There, no crashes occurred and
  bonnie++ flawlessly completed its test run.
 
  So my attention went back to the combination of DRBD and OCFS2.
 
  I tried both DRBD 8.2 (drbd82-8.2.6-1.el5.centos,
  kmod-drbd82-8.2.6-2) and the 8.3 variant from CentOS Testing.
 
  At first I was trying the ocfs2 1.4.1-1.el5.i386.rpm version, but
  upgrading to 1.4.2-1.el5.i386.rpm didn't change the behaviour.
 
  Does anyone have an idea on this?
  How can we get more debug info from OCFS2, apart from heartbeat
  tracing, which hasn't taught me anything yet, in order to potentially
  file a valuable bug report?

 



[CentOS] Unexplained reboots in DRBD82 + OCFS2 setup

2009-06-24 Thread Kris Buytaert


We're trying to set up a dual-primary DRBD environment with a shared
disk running either OCFS2 or GFS. The environment is CentOS 5.3 with
DRBD82 (but we also tried DRBD83 from testing).

Setting up a single-primary disk and running bonnie++ on it works.
Setting up a dual-primary disk, only mounting it on one node (ext3),
and running bonnie++ works.
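
For reference, a minimal sketch of the dual-primary part of our DRBD
resource definition (hostnames, backing disks and addresses here are
hypothetical):

  resource drbd0 {
    protocol C;
    net {
      allow-two-primaries;        # needed for dual-primary operation
    }
    startup {
      become-primary-on both;     # promote both nodes at startup
    }
    on node-a {
      device    /dev/drbd0;
      disk      /dev/sda3;
      address   10.0.0.1:7788;
      meta-disk internal;
    }
    on node-b {
      device    /dev/drbd0;
      disk      /dev/sda3;
      address   10.0.0.2:7788;
      meta-disk internal;
    }
  }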

When setting up ocfs2 on the /dev/drbd0 disk and mounting it on both
nodes, basic functionality seems in place, but usually less than 5-10
minutes after I start bonnie++ as a test on one of the nodes, both
nodes power cycle, with no errors in the logfiles, just a crash.

When at the console at the time of the crash, it looks like a disk IO
block happens (you can type, but no actions happen), then a reboot; no
panics, no oops, nothing (sysctl panic values set to timeouts etc.).
Setting up a dual-primary disk with ocfs2, only mounting it on one node
and starting bonnie++, causes only that node to crash.

At DRBD level I get the following error when that node disappears:

drbd0: PingAck did not arrive in time.
drbd0: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure )
pdsk( UpToDate -> DUnknown )
drbd0: asender terminated
drbd0: Terminating asender thread

That, however, is an expected error given the reboot.

At first I assumed OCFS2 to be the root of this problem, so I moved
forward and set up an iSCSI target on a 3rd node, and used that device
with the same OCFS2 setup. There, no crashes occurred and bonnie++
flawlessly completed its test run.

So my attention went back to the combination of DRBD and OCFS2.

I tried both DRBD 8.2 (drbd82-8.2.6-1.el5.centos, kmod-drbd82-8.2.6-2)
and the 8.3 variant from CentOS Testing.

At first I was trying the ocfs2 1.4.1-1.el5.i386.rpm version, but
upgrading to 1.4.2-1.el5.i386.rpm didn't change the behaviour.

Does anyone have an idea on this?
How can we get more debug info from OCFS2, apart from heartbeat
tracing, which hasn't taught me anything yet, in order to potentially
file a valuable bug report?
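
The heartbeat tracing mentioned above is the debugfs.ocfs2 log-mask
interface, for anyone who wants to reproduce (a sketch, assuming the
ocfs2-tools 1.4 syntax):

  # list available log masks and their current state
  debugfs.ocfs2 -l
  # enable heartbeat tracing
  debugfs.ocfs2 -l HEARTBEAT allow
  # disable it again
  debugfs.ocfs2 -l HEARTBEAT off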


Thanks in advance,

Kris 




Re: [CentOS-virt] Web based management system for Xen

2008-05-15 Thread Kris Buytaert
On Wed, 2008-05-14 at 21:59 +0200, Fabian Arrotin wrote:
 On Wed, 14 May 2008, Karanbir Singh wrote:
 
  Can anyone recommend a web-based management panel that would let me
  bring up / tear down and do some basic management for a bunch of Xen
  VMs?
 
  Special bonus points if the panel can manage remote Xen hosts :D
 
 I've never used/tested it, but it seems OpenQRM can manage Xen VMs (as
 well as deploy new DomUs). There is a virtual appliance (built on
 CentOS ;-) ) that can be downloaded from the openQRM website:
 http://www.openqrm.org/openqrm-virtual-appliance.html
 
 I can't speak for openQRM myself, as I have no experience with it, but
 I know that some experienced people (subscribed to this list) use it
 on a day-to-day basis, isn't that right, Kris? ;-)

Mailing lists need name highlighting :)

I'm not using openQRM on a day-to-day basis, but I have used it before
to implement virtual machine deployment, migration and management.



http://www.krisbuytaert.be/published_articles/openQRM-Xen/

(Note that this is for the 3.x series.)



It also features live migration, as documented here (I hope the link
works, as YouTube is blocked at the customer site I am at today):


http://www.youtube.com/watch?v=EeqQC5SoPvs


greetz

Kris

