Re: [CentOS] crash after update

2012-11-17 Thread Banyan He
Error? Log? Screenshot? Backtrace?


Banyan He
Blog: http://www.rootong.com
Email: ban...@rootong.com

On 2012-11-17 11:53 PM, Hossein Lanjanian wrote:
 Hi everybody!
 I installed CentOS 6 64-bit on my Sony laptop.
 I updated its kernel with yum and rebooted it.
 But it crashes after the GRUB page, right before the CentOS login page.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] crash after update

2012-11-17 Thread Earl Ramirez
On 17 November 2012 23:53, Hossein Lanjanian hossein.lanjan...@gmail.com wrote:

 Hi everybody!
 I installed CentOS 6 64-bit on my Sony laptop.
 I updated its kernel with yum and rebooted it.
 But it crashes after the GRUB page, right before the CentOS login page.

 --
 With The Best
 H.Lanjanian



Are you able to boot into single user?
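
If it won't boot at all, it is also worth trying the previous kernel from
the GRUB menu, or booting the new one into single user to see what it
logs. A rough sketch only, assuming the stock GRUB setup and the usual
EL6 log locations:

  # at the GRUB menu, highlight the new kernel, press 'a',
  # append " single" to the kernel arguments, then boot

  # once you get a shell, see what the new kernel complained about:
  dmesg | less
  less /var/log/messages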

-- 
Kind Regards
Earl Ramirez
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] iSCSI Question

2012-11-17 Thread Steven Crothers
On Sat, Nov 17, 2012 at 12:34 AM, Ian Pilcher arequip...@gmail.com wrote:


 There's a reason that those proprietary vendors are able to charge big
 $$$ for this functionality.


That's the truth... I was hoping they were based off some open source
implementation of iSCSI somewhere.

I mean I could probably dedicate a single machine to run iSCSI and just
schedule downtime, but that's something I wanted to avoid.

I've been looking at something like Open-E, but it's active/passive with
what is essentially a DRBD link between them. Again, not ideal. Speaking of
which, why do people rely so much on DRBD for LAN deployments lately?
Everyone always seems to cheap out and setup
DRBD/Pacemaker/Heartbeat/*insert some HA software here* instead of using
proper clustered file systems. DRBD to me has always screamed WAN
replication. Maybe I just don't put enough value in it, who knows.

Anyway, back to my hunt for a way to implement my Ceph cluster on Windows
2008.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] iSCSI Question

2012-11-17 Thread Digimer
On 11/17/2012 06:08 PM, Steven Crothers wrote:
 On Sat, Nov 17, 2012 at 12:34 AM, Ian Pilcher arequip...@gmail.com wrote:
 

 There's a reason that those proprietary vendors are able to charge big
 $$$ for this functionality.

 
 That's the truth... I was hoping they were based off some open source
 implementation of iSCSI somewhere.
 
 I mean I could probably dedicate a single machine to run iSCSI and just
 schedule downtime, but that's something I wanted to avoid.

You could take two nodes, setup DRBD to replicate the data
(synchronously), manage a floating/virtual IP in pacemaker or rgmanager
and export the DRBD storage as an iSCSI LUN using tgtd. Then you can
migrate to the backup node, take down the primary node for maintenance
and restore with minimal/no downtime. Run this over mode=1 bonding with
each leg on two different switches and you get network HA as well.
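
Roughly, and only as an untested sketch (the resource name, hostnames,
devices and IPs below are invented for illustration; check the syntax
against your DRBD and tgtd versions):

  # /etc/drbd.d/r0.res, identical on both nodes
  resource r0 {
      protocol C;                      # synchronous replication
      on node1 {
          device    /dev/drbd0;
          disk      /dev/sdb1;         # backing device
          address   10.10.10.1:7788;
          meta-disk internal;
      }
      on node2 {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   10.10.10.2:7788;
          meta-disk internal;
      }
  }

  # /etc/tgt/targets.conf, export the DRBD device as the LUN
  <target iqn.2012-11.com.example:store.lun0>
      backing-store /dev/drbd0
  </target>

The floating IP and tgtd then get handed to pacemaker (or rgmanager) so
that they always follow whichever node is currently Primary.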

I've done this to provide storage to a cluster of VMs and I could even
fail the primary node and the backup would take over without losing any
of my VMs.

I didn't speak up earlier because of all the other features you asked
for, but this will at least give you your HA requirements.

 I've been looking at something like Open-E, but it's active/passive with
 what is essentially a DRBD link between them. Again, not ideal. Speaking of
 which, why do people rely so much on DRBD for LAN deployments lately?
 Everyone always seems to cheap out and setup
 DRBD/Pacemaker/Heartbeat/*insert some HA software here* instead of using
 proper clustered file systems. DRBD to me has always screamed WAN
 replication. Maybe I just don't put enough value in it, who knows.
 
 Anyway, back to my hunt for a way to implement my Ceph cluster on Windows
 2008.

Clustered filesystems like GFS2 and OCFS2 come with a non-trivial
performance hit, so it's usually a case of avoiding them when possible.
Using DRBD is not cheaping out. I prefer it to fancy SANs, as it gives
you more HA than a SAN does.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] iSCSI Question

2012-11-17 Thread Steven Crothers
On Sat, Nov 17, 2012 at 6:23 PM, Digimer li...@alteeve.ca wrote:

 You could take two nodes, setup DRBD to replicate the data
 (synchronously), manage a floating/virtual IP in pacemaker or rgmanager
 and export the DRBD storage as an iSCSI LUN using tgtd. Then you can
 migrate to the backup node, take down the primary node for maintenance
 and restore with minimal/no downtime. Run this over mode=1 bonding with
 each leg on two different switches and you get network HA as well.


There is nothing active/active about DRBD, though; it also doesn't solve
the problem of trying to utilize two heads.

It's just failover. Nothing more.

I'm looking for an active/active failover scenario, to utilize the multiple
physical paths for additional throughput and bandwidth. Yes, I know I can
add more NICs. More NICs don't provide failover of the physical node.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] iSCSI Question

2012-11-17 Thread John R Pierce
On 11/17/12 6:58 PM, Steven Crothers wrote:
 On Sat, Nov 17, 2012 at 6:23 PM, Digimer li...@alteeve.ca wrote:

 You could take two nodes, setup DRBD to replicate the data
 (synchronously), manage a floating/virtual IP in pacemaker or rgmanager
 and export the DRBD storage as an iSCSI LUN using tgtd. Then you can
 migrate to the backup node, take down the primary node for maintenance
 and restore with minimal/no downtime. Run this over mode=1 bonding with
 each leg on two different switches and you get network HA as well.
 
 There is nothing active/active about DRBD though, it also doesn't solve the
 problem of trying to utilize two heads.

 It's just failover. Nothing more.

 I'm looking for an active/active failover scenario, to utilize the multiple
 physical paths for additional throughput and bandwidth. Yes, I know I can
 add more nics. More nics doesn't provide failover of the physical node


any sort of active-active storage system has difficult issues with 
concurrent operations ...





-- 
john r pierce                          N 37, W 122
santa cruz ca mid-left coast

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] XFCE4 group missing on Centos 5.x?

2012-11-17 Thread Jake Shipton
On Tue, 13 Nov 2012 21:20:28 +0000 (GMT)
Keith Roberts ke...@karsites.net wrote:

 On Tue, 13 Nov 2012, Johnny Hughes wrote:
 
  To: centos@centos.org
  From: Johnny Hughes joh...@centos.org
  Subject: Re: [CentOS] XFCE4 group missing on Centos 5.x?
  
  On 11/13/2012 02:41 PM, Keith Roberts wrote:
  On Tue, 13 Nov 2012, Johnny Hughes wrote:
 
  To: centos@centos.org
  From: Johnny Hughes joh...@centos.org
  Subject: Re: [CentOS] XFCE4 group missing on Centos 5.x?
 
  On 11/13/2012 01:55 PM, Keith Roberts wrote:
  I had XFCE group installed and working on C5.8 32 bit.
 
  I have done a fresh installation using the C 5.5 DVD.
 
  I cannot seem to find the XFCE group now. Has this been
  removed from Centos 5.x ?
 
  There is a version in CentOS Extras ... however, it is outdated.
 
  I was going to upgrade it ... BUT ... I found that it is now being
  maintained in EPEL for EL5.
 
  I would recommend that you use the EPEL version of XFCE.
  Thanks Johnny.
 
  This is what I'm getting now:
 
  [root@karsites ~]# yum groupinfo XFCE
  Loaded plugins: fastestmirror, priorities
  Setting up Group Process
  Loading mirror speeds from cached hostfile
* base: mirror.for.me.uk
* epel: mirrors.ukfast.co.uk
* extras: mirror.for.me.uk
* rpmforge: nl.mirror.eurid.eu
* updates: mirror.for.me.uk
  Warning: Group XFCE does not exist.
 
  I was installing xfce4 with:
 
  yum -y groupinstall XFCE
 
  Has the name been changed?
 
  The CentOS extras group name is:
 
  XFCE-4.4
 
  I don't think the EPEL version has groups.
 
 [root@karsites ~]# yum groupinfo XFCE-4.4
 Loaded plugins: fastestmirror, priorities
 Setting up Group Process
 Loading mirror speeds from cached hostfile
   * base: mirror.for.me.uk
   * epel: mirrors.ukfast.co.uk
   * extras: mirror.for.me.uk
   * rpmforge: nl.mirror.eurid.eu
   * updates: mirror.for.me.uk
 Warning: Group XFCE-4.4 does not exist.
 
 Maybe it's been removed now from extras as it's old?
 
 OK - got it now, Johnny. So I just install every xfce*
 package from EPEL and that takes care of it?
 
 Name   : xfce4-session
 Arch   : i386
 Version: 4.6.2
 Release: 1.el5
 Size   : 662 k
 Repo   : epel
 Summary: Xfce session manager
 URL: http://www.xfce.org/
 License: GPLv2+
 Description: xfce4-session is the session manager for the
  : Xfce desktop environment.
 
 Kind Regards,
 
 Keith
 
 ---
 Websites:
 http://www.karsites.net
 http://www.php-debuggers.net
 http://www.raised-from-the-dead.org.uk
 
 All email addresses are challenge-response protected with
 TMDA [http://tmda.net]
 ---

Hi,

This may or may not be helpful, but I'll put this out there anyway just
in case :-)

In the CentOS 6 version of the EPEL repository the group is
called Xfce (case-sensitive), so:

yum groupinstall Xfce

should do the trick.

If that doesn't work, try yum grouplist and look for a group related to
XFCE. :-)
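
For example, to see what the repos you have enabled actually call the
group (exact output will vary):

  yum grouplist | grep -i xfce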

Hope this helps.

-- 
Jake Shipton (JakeMS)
GPG Key: 0xE3C31D8F
GPG Fingerprint: 7515 CC63 19BD 06F9 400A DE8A 1D0B A5CF E3C3 1D8F


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] iSCSI Question

2012-11-17 Thread Digimer
On 11/17/2012 09:58 PM, Steven Crothers wrote:
 On Sat, Nov 17, 2012 at 6:23 PM, Digimer li...@alteeve.ca wrote:
 
 You could take two nodes, setup DRBD to replicate the data
 (synchronously), manage a floating/virtual IP in pacemaker or rgmanager
 and export the DRBD storage as an iSCSI LUN using tgtd. Then you can
 migrate to the backup node, take down the primary node for maintenance
 and restore with minimal/no downtime. Run this over mode=1 bonding with
 each leg on two different switches and you get network HA as well.
 
 
 There is nothing active/active about DRBD though, it also doesn't solve
 the problem of trying to utilize two heads.
 
 It's just failover. Nothing more.
 
 I'm looking for an active/active failover scenario, to utilize the
 multiple physical paths for additional throughput and bandwidth. Yes, I
 know I can add more nics. More nics doesn't provide failover of the
 physical node.

First, you can run DRBD in dual-primary (aka, Active/Active) just fine.
It will faithfully replicate in real time and in both directions. Of
course, then you need something to synchronize the data at the logical
level (DRBD is just a block device), and that is where GFS2 or OCFS2
comes in, though the performance hit will go counter to your goals.
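
For reference, dual-primary is only a couple of extra options in the
resource definition -- something along these lines (untested sketch,
option names as I remember them from the DRBD 8.3 docs, so double-check
against your version):

  resource r0 {
      net {
          allow-two-primaries;
          # pick sane split-brain policies if you go this route
          after-sb-0pri discard-zero-changes;
          after-sb-1pri discard-secondary;
          after-sb-2pri disconnect;
      }
      startup {
          become-primary-on both;
      }
      # ... plus the usual on <host> { ... } sections
  }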

You could do multi-path to both nodes, technically, but it's not wise
because the cache on the storage can cause problems[1].

Also, you will note that I suggested mode=1, which is active/passive
bonding and provides no aggregated bandwidth. This was on purpose; I've
tested all the bonding modes, and *only* mode=1 reliably failed and
recovered without interruption.
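
On CentOS 6 that is just the normal ifcfg files, something like this
(device names and addresses are examples only):

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  BONDING_OPTS="mode=1 miimon=100"
  IPADDR=10.10.10.1
  NETMASK=255.255.255.0
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  # (eth1 is the same, one leg per switch)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  BOOTPROTO=none
  ONBOOT=yes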

As for failover, if you run DRBD in dual-primary, but keep access
through one node at a time only, the only thing that is needed to
migrate after the failure of the node that had the IP is to fence the
node, take over the IP and start tgtd. This can happen quickly and, in
my tests, iSCSI on the clients recovered fine. In my case, I had the
LUNs acting as PVs in a clustered LVM with each LV backing a VM. None of
the VMs failed or needed to be rebooted.
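
If you drive that with pacemaker, it boils down to a small group, e.g.
with the crm shell (resource names here are made up, lsb:tgtd assumes the
stock tgtd init script, and fencing/stonith must be configured for any of
this to be safe):

  crm configure primitive p_ip ocf:heartbeat:IPaddr2 \
      params ip=10.10.10.100 cidr_netmask=24 op monitor interval=30s
  crm configure primitive p_tgtd lsb:tgtd op monitor interval=30s
  crm configure group g_iscsi p_ip p_tgtd

The group keeps the IP and tgtd together and starts them in order, so
after a fence the surviving node simply brings both up.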

So from what I can gather of your needs, you can get everything you want
from open source. The only caveat is that if you need more speed, you
need to beef up your network, not aggregate (for reasons not related to
HA). If this is not good enough, then there are plenty of commercial
products ready to lighten your wallet by a good measure.

Digimer

1.
http://fghaas.wordpress.com/2011/11/29/dual-primary-drbd-iscsi-and-multipath-dont-do-that/

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] iSCSI Question

2012-11-17 Thread Digimer
On 11/17/2012 10:40 PM, John R Pierce wrote:
 On 11/17/12 6:58 PM, Steven Crothers wrote:
 On Sat, Nov 17, 2012 at 6:23 PM, Digimer li...@alteeve.ca wrote:

 You could take two nodes, setup DRBD to replicate the data
 (synchronously), manage a floating/virtual IP in pacemaker or rgmanager
 and export the DRBD storage as an iSCSI LUN using tgtd. Then you can
 migrate to the backup node, take down the primary node for maintenance
 and restore with minimal/no downtime. Run this over mode=1 bonding with
 each leg on two different switches and you get network HA as well.

 There is nothing active/active about DRBD though, it also doesn't solve the
 problem of trying to utilize two heads.

 It's just failover. Nothing more.

 I'm looking for an active/active failover scenario, to utilize the multiple
 physical paths for additional throughput and bandwidth. Yes, I know I can
 add more nics. More nics doesn't provide failover of the physical node
 
 
 any sort of active-active storage system has difficult issues with 
 concurrent operations ...

Exactly what is discussed here, as linked in my other reply;

http://fghaas.wordpress.com/2011/11/29/dual-primary-drbd-iscsi-and-multipath-dont-do-that/

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] iSCSI Question

2012-11-17 Thread Steven Crothers
DRBD is not active/active. I cannot utilize both servers as an active
session. DRBD replication latency will, in fact, break my storage.

I do not want active/passive or hot-standby failover... DRBD is off-topic
from my original post, as it is not the correct solution.

Steven Crothers
steven.croth...@gmail.com



On Sun, Nov 18, 2012 at 12:07 AM, Digimer li...@alteeve.ca wrote:

 On 11/17/2012 10:40 PM, John R Pierce wrote:
  On 11/17/12 6:58 PM, Steven Crothers wrote:
  On Sat, Nov 17, 2012 at 6:23 PM, Digimer li...@alteeve.ca wrote:
 
  You could take two nodes, setup DRBD to replicate the data
  (synchronously), manage a floating/virtual IP in pacemaker or
 rgmanager
  and export the DRBD storage as an iSCSI LUN using tgtd. Then you can
  migrate to the backup node, take down the primary node for maintenance
  and restore with minimal/no downtime. Run this over mode=1 bonding
 with
  each leg on two different switches and you get network HA as well.
 
  There is nothing active/active about DRBD though, it also doesn't solve
 the
  problem of trying to utilize two heads.
 
  It's just failover. Nothing more.
 
  I'm looking for an active/active failover scenario, to utilize the
 multiple
  physical paths for additional throughput and bandwidth. Yes, I know I
 can
  add more nics. More nics doesn't provide failover of the physical node
 
 
  any sort of active-active storage system has difficult issues with
  concurrent operations ...

 Exactly what is discussed here, as linked in my other reply;


 http://fghaas.wordpress.com/2011/11/29/dual-primary-drbd-iscsi-and-multipath-dont-do-that/

 --
 Digimer
 Papers and Projects: https://alteeve.ca/w/
 What if the cure for cancer is trapped in the mind of a person without
 access to education?

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos