Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Chris Murphy
OK, it's extremely rude to cross-post the same question across multiple
lists like this at exactly the same time, and without at least indicating
the cross-posting. I just replied to the one on Fedora users before I
saw this post. This sort of thing wastes people's time. Pick one list
based on the best chance of a response and give it 24 hours.


Chris Murphy
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] disk space trouble on ec2 instance

2015-02-27 Thread Frank Cox
On Sat, 28 Feb 2015 01:46:15 -0500
Tim Dunphy wrote:

 /dev/sda1 9.9G  9.3G   49M 100% /

49MB out of 9.9GB is less than one-half of one percent, so the df command is 
probably rounding that up to 100% instead of showing you 99.51%.  Whatever is 
checking for free disk space is likely doing the same thing.

-- 
MELVILLE THEATRE ~ Real D 3D Digital Cinema ~ www.melvilletheatre.com


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Chris Murphy
https://lists.fedoraproject.org/pipermail/users/2015-February/458923.html


I don't see how the VG metadata is restored with any of the commands
suggested thus far. I think that's vgcfgrestore. Otherwise I'd think
that LVM has no idea how to do the LE to PE mapping.
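For what it's worth, a sketch of the vgcfgrestore path (untested here;
the VG name 'myvg' and the device names are made up for illustration).
LVM keeps automatic metadata backups under /etc/lvm/archive, so roughly:

# grep description /etc/lvm/archive/myvg_*.vg    ## find the right backup
# pvcreate --restorefile /etc/lvm/archive/myvg_00001.vg \
    --uuid <UUID-of-the-lost-PV> /dev/sdX        ## recreate the PV header
# vgcfgrestore -f /etc/lvm/archive/myvg_00001.vg myvg

pvcreate --uuid/--restorefile and vgcfgrestore are standard LVM2
commands, but treat the exact invocation as a sketch, not a recipe.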

In any case, this sounds like a data-scraping operation to me. XFS
might be a bit more tolerant because AGs are distributed across all 4
PVs in this case, and each AG keeps its own metadata. But I still
don't think the filesystem will be mountable, even read-only. Maybe
testdisk can deal with it, and if not then debugfs -c rdump might be
able to get some of the directories. But for sure the LV has to be
active. And I expect modifications (resizing anything, fscking)
astronomically increase the chance of total data loss. If it's XFS,
xfs_db itself is going to take longer to read up on and understand than
just restoring from backup (XFS has dense capabilities).

On the other hand, Btrfs can handle this situation somewhat well so
long as the fs metadata is raid1, which is the mkfs default for
multiple devices. It will permit degraded mounting in such a case so
recovery is straightforward. Missing files are recorded in dmesg.

Chris Murphy


[CentOS] disk space trouble on ec2 instance

2015-02-27 Thread Tim Dunphy
Hey all,

 Ok, so I've been having some trouble for a while with an EC2 instance
running CentOS 5.11 with  a disk volume reporting 100% usage. Root is on an
EBS volume.

 So I've tried the whole 'du -sk | sort -nr | head -10' routine all around
this volume getting rid of files.  At first I was getting rid of about 50MB
of files. Yet the volume remains at 100% capacity.

 Thinking that maybe the OS was just not letting go of the inodes for the
files on the disk, I attempted rebooting the instance. After logging in
again I did a df -h on the root volume. And look! Still at 100% capacity
used. Grrr

Ok so I then did a du -h on the /var/www directory, which was mounted on
the root volume. And saw that it was gobbling up 190MB of disk space.

So then I reasoned that I could create an EBS volume, rsync the data there,
blow away the contents of /var/www/* and then mount the EBS volume on the
/var/www directory. So I went through that exercise and lo and behold.
Still at 100% capacity. Rebooted the instance again. Logged in and.. still
at 100% capacity.

Here's how the volumes are looking now.

[root@ops:~] #df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda1 9.9G  9.3G   49M 100% /
none  312M 0  312M   0% /dev/shm
/dev/sdi  148G  116G   25G  83% /backup/tapes
/dev/sdh  9.9G  385M  9.0G   5% /backup/tapes/bacula-restores
/dev/sdf  9.9G  2.1G  7.4G  22% /var/lib/mysql
fuse  256T 0  256T   0% /backup/mysql
fuse  256T 0  256T   0% /backup/svn
/dev/sdg  197G  377M  187G   1% /var/www

There are some really important functions I need this volume to perform
that it simply can't because the root volume is at 100% capacity. Like the
fact that neither mysql nor my backup program, bacula, will even think of
starting up and functioning!

I'm at a loss to explain how I can delete 190MB worth of data, reboot the
instance and still be at 100% usage.

I'm at my wits end over this. Can someone please offer some advice on how
to solve this problem?

Thanks
Tim





-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Chris Murphy
On Fri, Feb 27, 2015 at 8:24 PM, John R Pierce pie...@hogranch.com wrote:
 On 2/27/2015 4:52 PM, Khemara Lyn wrote:

 I understand; I tried it in the hope that, I could activate the LV again
 with a new PV replacing the damaged one. But still I could not activate
 it.

 What is the right way to recover the remaining PVs left?


 take a filing cabinet packed full of 10s of 1000s of files of 100s of pages
 each, with the index cards interleaved in the files, and remove 1/4th of
 the pages in the folders, including some of the indexes... and toss
 everything else on the floor... this is what you have. 3 out of 4
 pages, semi-random, with no idea what's what.

 an LV built from PVs that are just simple drives is something like RAID0,
 which isn't RAID at all, as there's no redundancy; it's AID-0.

If the LE-to-PE relationship is exactly linear, as in the PV, VG, and LV
were all made at the same time, it's not entirely hopeless. There will
be some superblocks intact, so scraping is possible.

I just tried this with a 4 disk LV and XFS. I removed the 3rd drive. I
was able to activate the LV using:

vgchange -a y --activationmode partial

I was able to mount -o ro but I do get errors in dmesg:
[ 1594.835766] XFS (dm-1): Mounting V4 Filesystem
[ 1594.884172] XFS (dm-1): Ending clean mount
[ 1602.753606] XFS (dm-1): metadata I/O error: block 0x5d780040
(xfs_trans_read_buf_map) error 5 numblks 16
[ 1602.753623] XFS (dm-1): xfs_imap_to_bp: xfs_trans_read_buf()
returned error -5.

# ls -l
ls: cannot access 4: Input/output error
total 0
drwxr-xr-x. 3 root root 16 Feb 27 20:40 1
drwxr-xr-x. 3 root root 16 Feb 27 20:43 2
drwxr-xr-x. 3 root root 16 Feb 27 20:47 3
??? ? ?? ?? 4

# cp -a 1/ /mnt/btrfs
cp: cannot stat ‘1/usr/include’: Input/output error
cp: cannot stat ‘1/usr/lib/alsa/init’: Input/output error
cp: cannot stat ‘1/usr/lib/cups’: Input/output error
cp: cannot stat ‘1/usr/lib/debug’: Input/output error
[...]

And now in dmesg, thousands of
[ 1663.722490] XFS (dm-1): metadata I/O error: block 0x425f96d0
(xfs_trans_read_buf_map) error 5 numblks 8

Out of what should have been 3.5GB of data in 1/, I was able to get 452MB.

That's not so bad for just a normal mount and copy. I am in fact
shocked the file system mounts, and stays mounted. Yay XFS.


-- 
Chris Murphy


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Chris Murphy
OK, so ext4 this time, with new disk images. I notice at mkfs.ext4 time
that each virtual disk goes from 2MB to 130-150MB. That's a lot of
fs metadata, and it's fairly evenly distributed across the drives.

Copied 3.5GB to the volume. Unmount. Poweroff. Killed the 3rd of 4. Boot.
Mounts fine. No errors. Huh, surprising. As soon as I use ls, though:

[  182.461819] EXT4-fs error (device dm-1): __ext4_get_inode_loc:3806:
inode #43384833: block 173539360: comm ls: unable to read itable block

# cp -a usr /mnt/btrfs
cp: cannot stat ‘usr’: Input/output error

[  214.411859] EXT4-fs error (device dm-1): __ext4_get_inode_loc:3806:
inode #43384833: block 173539360: comm ls: unable to read itable block
[  221.067689] EXT4-fs error (device dm-1): __ext4_get_inode_loc:3806:
inode #43384833: block 173539360: comm cp: unable to read itable block

I can't get anything off the drive. And what I have here are ideal
conditions, because it's a brand-new clean file system: no
fragmentation, nothing about the LVM volume has been modified, no fsck
done. So nothing is corrupt; it's just missing a 1/4 hunk of its PEs.
I'd say an older filesystem that's seen production use has zero chance
of recovery via mounting.

So this is now a scraping operation with ext4.



Chris Murphy


Re: [CentOS] disk space trouble on ec2 instance

2015-02-27 Thread John R Pierce

On 2/27/2015 10:46 PM, Tim Dunphy wrote:

I'm at a loss to explain how I can delete 190MB worth of data, reboot the
instance and still be at 100% usage.


190MB is only about two percent of 9.9GB (9900MB)

BTW, for cases like this, I'd suggest using df -k or -m rather than -h 
to get more precise and consistent values.



also note, Unix (and Linux) file systems usually have reserved 
free space that only root can write to; most modern file systems 
suffer from severe fragmentation if you completely fill them.   On ext*fs, 
you adjust this with `tune2fs -m 1 /dev/sdXX`. XFS treats its reserved 
blocks as inviolable, so they don't show up as free space; they can be 
changed with xfs_io, but modify them at your own risk.
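To make that concrete (device name hypothetical; -l and -m are standard
tune2fs options):

# tune2fs -l /dev/sda1 | grep -i 'reserved block'   ## show current reservation
# tune2fs -m 1 /dev/sda1                            ## drop it from the default 5% to 1%

On a 9.9GB filesystem that would hand back roughly 400MB of space
visible to non-root processes.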






--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread John R Pierce

On 2/27/2015 4:52 PM, Khemara Lyn wrote:

I understand; I tried it in the hope that, I could activate the LV again
with a new PV replacing the damaged one. But still I could not activate
it.

What is the right way to recover the remaining PVs left?


take a filing cabinet packed full of 10s of 1000s of files of 100s of 
pages each, with the index cards interleaved in the files, and remove 
1/4th of the pages in the folders, including some of the indexes... and 
toss everything else on the floor... this is what you have. 3 out 
of 4 pages, semi-random, with no idea what's what.


an LV built from PVs that are just simple drives is something like 
RAID0, which isn't RAID at all, as there's no redundancy; it's AID-0.






--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Chris Murphy
On Fri, Feb 27, 2015 at 9:00 PM, Marko Vojinovic vvma...@gmail.com wrote:
 And this is why I don't like LVM to begin with. If one of the drives
 dies, you're screwed not only for the data on that drive, but even for
 data on remaining healthy drives.

It has its uses, just like RAID0 has uses. But yes, as the number of
drives in the pool increases, the risk of catastrophic failure
increases. So you have to bet on consistent backups and be OK with any
intervening data loss. If not, well, use RAID1+ or a
distributed-replication cluster like GlusterFS or Ceph.

 Hardware fails, and storing data without a backup is just simply
 a disaster waiting to happen.

I agree. I kind of get a wee bit aggressive and say: if you don't have
backups, the data is by (your own) definition not important.

Anyway, changing the underlying storage as little as possible gives
the best chance of success. The linux-raid@ list is full of raid5/6
implosions caused by people panicking, reading a bunch of stuff, not
identifying their actual problem, and just typing a bunch of commands
until they end up with user-induced data loss.

In the case of this thread, I'd say the best chance for success is to
not remove or replace the dead PV, but to do a partial activation.
# vgchange -a y --activationmode partial

And then for ext4 it's a scrape operation with debugfs -c. And for XFS
it looks like some amount of data is recoverable with just an ro
mount. I didn't try any scrape operation; too tedious to test.
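Roughly, the ext4 scrape would look like this (LV path hypothetical;
-c and rdump are documented in debugfs(8)):

# vgchange -a y --activationmode partial
# debugfs -c /dev/myvg/mylv
debugfs:  rdump /home /mnt/rescue    ## recursively dump a directory to another volume

Expect I/O errors for anything whose inode table lived on the missing PV.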


-- 
Chris Murphy


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Marko Vojinovic
On Fri, 27 Feb 2015 19:24:57 -0800
John R Pierce pie...@hogranch.com wrote:
 On 2/27/2015 4:52 PM, Khemara Lyn wrote:
 
  What is the right way to recover the remaining PVs left?
 
 take a filing cabinet packed full of 10s of 1000s of files of 100s of 
 pages each, with the index cards interleaved in the files, and
 remove 1/4th of the pages in the folders, including some of the
 indexes... and toss everything else on the floor... this is what
 you have. 3 out of 4 pages, semi-random, with no idea what's what.

And this is why I don't like LVM to begin with. If one of the drives
dies, you're screwed not only for the data on that drive, but even for
data on remaining healthy drives.

I never really saw the point of LVM. Storing data on plain physical
partitions, having an intelligent directory structure and a few wise
well-placed symlinks across the drives can go a long way in having
flexible storage, which is way more robust than LVM. With today's huge
drive capacities, I really see no reason to adjust the sizes of
partitions on-the-fly, and putting several TB of data in a single
directory is just Bad Design to begin with.

That said, if you have a multi-TB amount of critical data while not
having at least a simple RAID-1 backup, you are already standing in a
big pile of sh*t just waiting to become obvious, regardless of LVM and
stuff. Hardware fails, and storing data without a backup is just simply
a disaster waiting to happen.

Best, :-)
Marko



Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Chris Murphy
And then Btrfs (no LVM).
mkfs.btrfs -d single /dev/sd[bcde]
mount /dev/sdb /mnt/bigbtr
cp -a /usr /mnt/bigbtr

Unmount. Poweroff. Kill 3rd of 4 drives. Poweron.

mount -o degraded,ro /dev/sdb /mnt/bigbtr   ## degraded,ro is required, or mount fails
cp -a /mnt/bigbtr/usr/ /mnt/btrfs           ## copy to a different volume

No dmesg errors. A bunch of I/O errors, but only when it was trying to
copy data that was on the 3rd drive. But it continues.

# du -sh /mnt/btrfs/usr
2.5G usr

Exactly 1GB was on the missing drive. So I recovered everything that
wasn't on that drive.

One gotcha that applies to all three fs's that I'm not testing: in-use
drive failure. I'm simulating drive failure by first cleanly unmounting
and powering off. Super ideal. How the file system and anything
underneath it (LVM and maybe RAID) handles drive failure while in
use is a huge factor.


Chris Murphy


Re: [CentOS] OT: AF 4k sector drives with 512 emulation

2015-02-27 Thread Robert Arkiletian
On Fri, Feb 27, 2015 at 2:59 PM, Chris Murphy li...@colorremedies.com
wrote:

 On Fri, Feb 27, 2015 at 1:53 PM, Robert Arkiletian rob...@gmail.com
 wrote:
  Still have good quality older sata hardware raid cards that require 512
  bytes/sector. As far as I know HD manufacturers are not making native 512
  bytes/sector drives any more.

 512n drives still exist, although they tend to be a bit smaller, 2TB or
 less.

 http://www.hgst.com/tech/techlib.nsf/techdocs/FD3F376DC2ECCE68882579D40082C393/$file/US7K4000_ds.pdf


I too noticed that HGST (now owned by WD) makes native 512n drives. That
pdf states that they come in 2,3,4 TB models. (A6 in the model # represents
512n). But there are almost no reviews on these HGST native 512n drives
online.


 4Kn drives are appearing now also. I don't expect these drives to be
 bootable except possibly by systems with UEFI firmware. It's also
 possible hardware RAID will reject them unless explicitly supported.

 http://www.hgst.com/tech/techlib.nsf/techdocs/29C9312E3B7D10CE88257D41000D8D16/$file/Ultrastar-7K6000-DS.pdf


  Some have better 512e emulation than others. Looking for some advice on
  which to avoid and which are recommended. Thanks. PS this is for a
 CentOS6
  server.

 The emulation implementations don't come into play if the alignment is
 correct from the start. The better implementations have significantly
 less pathological behavior if alignment is wrong, but that's
 anecdotal, I don't have any empirical data available. But I'd say in
 any case you want it properly aligned.


According to this pdf [1], alignment is important, but from what I
understand 512e emulation still has a small RMW performance hit for
writes that are smaller than 4K or not a multiple of 4K.

Also, it's probably not a good idea to mix 512e with 512n drives in a
RAID set, although this may be hard to avoid as drives fail in the future.

[1]
http://i.dell.com/sites/doccontent/shared-content/data-sheets/en/Documents/512e_4Kn_Disk_Formats_120413.pdf
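For anyone checking their own drives, the kernel exposes both sector
sizes (device name hypothetical), which distinguishes 512n / 512e / 4Kn
and lets you sanity-check alignment:

# cat /sys/block/sda/queue/logical_block_size    ## 512 on 512n and 512e, 4096 on 4Kn
# cat /sys/block/sda/queue/physical_block_size   ## 4096 on 512e and 4Kn
# parted /dev/sda align-check optimal 1          ## is partition 1 aligned?

512e is the combination logical=512, physical=4096; that mismatch is
where the RMW penalty comes from.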


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread John R Pierce

On 2/27/2015 8:00 PM, Marko Vojinovic wrote:

And this is why I don't like LVM to begin with. If one of the drives
dies, you're screwed not only for the data on that drive, but even for
data on remaining healthy drives.


with classic LVM, you were supposed to use RAID for your PVs.   The new 
LVM in 6.3+ has integrated RAID at the LV level; you just have to declare 
all your LVs with appropriate RAID levels.




--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Chris Murphy
On Fri, Feb 27, 2015 at 9:44 PM, John R Pierce pie...@hogranch.com wrote:
 On 2/27/2015 8:00 PM, Marko Vojinovic wrote:

 And this is why I don't like LVM to begin with. If one of the drives
 dies, you're screwed not only for the data on that drive, but even for
 data on remaining healthy drives.


 with classic LVM, you were supposed to use RAID for your PVs.   The new LVM
 in 6.3+ has integrated RAID at the LV level; you just have to declare all
 your LVs with appropriate RAID levels.

I think since the inception of LVM2, type mirror has been available,
which is now legacy (but still available). The current type since
CentOS 6.3 is raid1. But yes, for anything raid4+ you previously had
to create it with mdadm or use hardware RAID (which of course you can
still do; most people still prefer managing software raid with mdadm
over lvm's tools).
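For the archives, a sketch of the two syntaxes (names and sizes
hypothetical):

# lvcreate --type raid1 -m 1 -L 10G -n mylv myvg     ## current MD-backed raid1 type
# lvcreate --type mirror -m 1 -L 10G -n oldlv myvg   ## legacy mirror type, still accepted

The raid1 type reuses the MD kernel code underneath, which is largely
why it superseded the old mirror implementation.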

-- 
Chris Murphy


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Khemara Lyn
Ok, sorry about that.

On Sat, February 28, 2015 9:13 am, Chris Murphy wrote:
 OK It's extremely rude to cross post the same question across multiple
 lists like this at exactly the same time, and without at least showing the
 cross posting. I just replied to the one on Fedora users before I saw this
 post. This sort of thing wastes people's time. Pick one list based on the
 best case chance for response and give it 24 hours.


 Chris Murphy






Re: [CentOS] Package group X Window System has disappeared

2015-02-27 Thread Ned Slider


On 27/02/15 13:27, Greg Bailey wrote:
 On 02/27/2015 02:54 AM, Niki Kovacs wrote:
 Hi,

 Until last week, I could install a CentOS 7 based desktop using the
 following approach:

 1. Install minimal system.

 2. yum groupinstall X Window System

 3. yum install gdm gnome-classic-session gnome-terminal liberation-fonts

 4. Install applications as needed.

 This morning, the package group X Window System seems to have
 disappeared. This is embarrassing.

 What happened?

 Niki
 
 Works for me, although I have to do yum group list hidden to see the
 X Window System group (both as available for installation, and as
 installed once I've done the group install invocation).  yum group list
 hidden and yum group list hidden ids are 2 variations I only learned
 about recently...
 

Nice tip! I've added it to the Yum tips and tricks section on the wiki
as there wasn't anything there on working with yum groups:

http://wiki.centos.org/TipsAndTricks/YumAndRPM#head-b3159dc0594ab59a5ae0c27d86c3815085064419

Note 'yum group list' only works on el7 whereas 'yum grouplist' works
across el5/6/7 so I've gone with that syntax on the wiki.
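In case it saves anyone a lookup, the spellings side by side (I haven't
verified every combination on every release, so treat these as a sketch):

# yum grouplist hidden                     ## works on el5/6/7
# yum group list hidden                    ## el7 syntax
# yum groupinstall 'X Window System'       ## works wherever the group exists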

 Nice recipe, BTW, for a simple GUI install.  The spacing of the font
 doesn't look very good in Gnome terminal though; I must be missing
 whatever the default font is configured to be.
 
 -Greg
 


Re: [CentOS] Package group X Window System has disappeared

2015-02-27 Thread Greg Bailey

On 02/27/2015 02:54 AM, Niki Kovacs wrote:

Hi,

Until last week, I could install a CentOS 7 based desktop using the 
following approach:


1. Install minimal system.

2. yum groupinstall X Window System

3. yum install gdm gnome-classic-session gnome-terminal liberation-fonts

4. Install applications as needed.

This morning, the package group X Window System seems to have 
disappeared. This is embarrassing.


What happened?

Niki


Works for me, although I have to do yum group list hidden to see the 
X Window System group (both as available for installation, and as 
installed once I've done the group install invocation).  yum group list 
hidden and yum group list hidden ids are 2 variations I only learned 
about recently...


Nice recipe, BTW, for a simple GUI install.  The spacing of the font 
doesn't look very good in Gnome terminal though; I must be missing 
whatever the default font is configured to be.


-Greg



[CentOS] repositories

2015-02-27 Thread Pol Hallen

Hi all :-)
This is my first post: I'm coming from debian/bsd world.

A question about repositories:

minimal installation (version 7) provides:

CentOS-Base.repo
CentOS-CR.repo
CentOS-Debuginfo.repo
CentOS-fasttrack.repo
CentOS-Sources.repo
CentOS-Vault.repo

I know there are other repositories, such as:

RPMForge, EPEL, REMI, ATrpms, Webtatic (and maybe others)

So, what kind of repositories are these?

Do they replace packages from the main repos, or only add new 
packages?


thanks for advices and help!

Pol


Re: [CentOS] repositories

2015-02-27 Thread Ned Slider


On 27/02/15 12:30, Pol Hallen wrote:
 Hi all :-)
 This is my first post: I'm coming from debian/bsd world.
 
 A question about repositories:
 
 minimal installation (version 7) provides:
 
 CentOS-Base.repo
 CentOS-CR.repo
 CentOS-Debuginfo.repo
 CentOS-fasttrack.repo
 CentOS-Sources.repo
 CentOS-Vault.repo
 
 I know there are other repositories, such as:
 
 RPMForge, EPEL, REMI, ATrpms, Webtatic (and maybe others)
 
 So, what kind of repositories are these?
 
 Do they replace packages from the main repos, or only add new
 packages?
 
 thanks for advices and help!
 
 Pol


Hi Pol,

Welcome to CentOS and the mailing list.

Start here for more information on 3rd party repositories:

http://wiki.centos.org/AdditionalResources/Repositories

Some may replace distro packages, whereas others may have a policy not to
replace distro packages (so they only contain packages not in the distro).
Some repositories split into channels, where the main repo doesn't
contain distro packages but an extras channel contains any packages
that replace distro packages.

Ultimately CentOS has little influence over what 3rd-party repos do, so
the decisions / policies are down to each individual repo.

The yum priorities plugin can be used to prevent 3rd party repositories
from replacing distro packages:

http://wiki.centos.org/PackageManagement/Yum/Priorities
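For example (the priority values below are placeholders; the plugin
package is yum-plugin-priorities on el6/7, yum-priorities on el5):

# yum install yum-plugin-priorities

then add to each section of /etc/yum.repos.d/CentOS-Base.repo:

priority=1

and to the 3rd-party repo's .repo file:

priority=10

Lower numbers win, so the 3rd-party repo can't replace distro packages.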

Hope that at least gets you started.



[CentOS] CentOS 7 hand-edit the network configuration files

2015-02-27 Thread Helmut Drodofsky

Hello,

on
http://wiki.centos.org/FAQ/CentOS7

in
3. And what if I want the old naming back?
is written:
/etc/udev/rules.d/60-net.rules seems necessary to override 
/usr/lib/udev/rules.d/60-net.rules


According to my experience, the text should be changed to:
/etc/udev/rules.d/70-persistent-net.rules has to describe the naming 
rule according to this example:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", 
ATTR{address}=="00:00:00:00:00:00", ATTR{type}=="1", KERNEL=="eth0", 
NAME="eth0"

where ATTR{address} is the HWADDR of the network interface.

Static assignment: GATEWAY has to be configured in /etc/sysconfig/network
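A minimal example of the static pieces (all addresses hypothetical):

/etc/sysconfig/network:
NETWORKING=yes
GATEWAY=192.168.1.1

/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
HWADDR=00:00:00:00:00:00
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes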


--
Viele Grüße
Helmut Drodofsky
 
Internet XS Service GmbH

Heßbrühlstraße 15
70565 Stuttgart
  
Geschäftsführung

Dr.-Ing. Roswitha Hahn-Drodofsky
HRB 21091 Stuttgart
USt.ID: DE190582774
Tel. 0711 781941 0
Fax: 0711 781941 79
Mail: i...@internet-xs.de
www.internet-xs.de




Re: [CentOS] Cyrus 2.4 and Centos6

2015-02-27 Thread Jonathan Billings
On Fri, Feb 27, 2015 at 11:19:55AM +0100, Timothy Kesten wrote:
 I'd like to install cyrus-imapd 2.4 in CentOS6.
 Found rpm cyrus 2.4 for CentOS6 on rpmseek.
 cyrus-imapd-2.4.17-30.1.x86_64.rpm
 
 But there are conflicts with postfix 2.6.6.
 
 Can I ignore these conflicts, or is there a suitable version of postfix
 available?

The supported version of Cyrus IMAPd in CentOS6 is v2.3.  If you found
v2.4 someplace on the internet, they aren't CentOS packages, and I
really doubt that they were intended for CentOS6 if they conflict with
postfix.  I suggest contacting whoever created those v2.4 packages and
asking them about the conflict, or use the CentOS6 packages.

You could also try backporting the CentOS7 packages to CentOS6,
keeping in mind that it relies on systemd and not Upstart to start the
service. 

-- 
Jonathan Billings billi...@negate.org


Re: [CentOS-es] Ayuda rutas reglas firewall

2015-02-27 Thread César Martinez
Greetings, fellow list members. I'm asking for your help again. I've 
been able to partially solve the problem I described; here is what I've done:


On the Linux server I created these firewall rules:

$IPTABLES -A FORWARD -i eth3 -j ACCEPT
$IPTABLES -t nat -A POSTROUTING -s 192.168.197.0/24 -p tcp -o eth3 -j 
SNAT --to 192.168.4.8



With these rules I can now ping from the Linux server (IP 
192.168.197.4) to the 192.168.4.x segment in the other office, and I 
can also connect via remote desktop to any machine on the 192.168.4.x 
segment from any machine on my 192.168.197.x network.


What I can't explain is why I can't ping the 192.168.0.x segment from 
any machine on the 192.168.197.x network, even though from the Linux 
server itself I can ping any machine on the 192.168.0.x network. The 
gateway to reach that segment is 192.168.4.8. I know it isn't a routing 
problem, because pings from the Linux server do work.


I'd appreciate any help with this last part, to see whether I can 
solve the problem once and for all.
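One thing worth checking (an observation about the rules quoted above,
not a confirmed diagnosis): the SNAT rule is restricted to TCP (-p tcp),
so forwarded ICMP from 192.168.197.x is never translated. That would
explain why remote desktop (TCP) works from the client machines while
ping does not, and why pings from the server itself (which aren't
forwarded) do work. A protocol-agnostic sketch:

$IPTABLES -t nat -A POSTROUTING -s 192.168.197.0/24 -o eth3 -j SNAT --to 192.168.4.8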



--
Saludos Cordiales

|César Martínez | Ingeniero de Sistemas | SERVICOM
|Tel: (593-2)554-271 2221-386 | Ext 4501
|Celular: 0999374317 |Skype servicomecuador
|Web www.servicomecuador.com Síguenos en:
|Twitter: @servicomecuador |Facebook: servicomec
|Zona Clientes: www.servicomecuador.com/billing
|Blog: http://servicomecuador.com/blog
|Dir. Av. 10 de Agosto N29-140 Entre
|Acuña y  Cuero y Caicedo
|Quito - Ecuador - Sudamérica

On 24/02/15 21:01, César Martinez wrote:
Thanks, I do have that option enabled by default. In addition, should I 
create some iptables POSTROUTING rule to send the traffic?




___
CentOS-es mailing list
CentOS-es@centos.org
http://lists.centos.org/mailman/listinfo/centos-es


Re: [CentOS] Package group X Window System has disappeared

2015-02-27 Thread Ned Slider


On 27/02/15 09:54, Niki Kovacs wrote:
 Hi,
 
 Until last week, I could install a CentOS 7 based desktop using the
 following approach:
 
 1. Install minimal system.
 
 2. yum groupinstall X Window System
 
 3. yum install gdm gnome-classic-session gnome-terminal liberation-fonts
 
 4. Install applications as needed.
 
 This morning, the package group X Window System seems to have
 disappeared. This is embarrassing.
 
 What happened?
 
 Niki

Not sure as I don't have a CentOS 7 install to hand, only RHEL7 on which
the X Window System group does not exist.

It does exist on RHEL 5/6 - perhaps you are getting confused between
versions?

You can see the list of valid groups with:

yum grouplist



Re: [CentOS] repositories

2015-02-27 Thread Pol Hallen

Hope that at least gets you started.


Hi Ned, thanks for help :-)

Pol


Re: [CentOS-es] ayudar servidor se me apaga

2015-02-27 Thread José Luis Respeto Alvarez
Sounds like a cliché, but disable SELinux!
On 27/02/2015 16:42, Aldo Rivadeneira aldo.rivadene...@gmail.com
wrote:

 The only thing I can detect is that IPv6 was activated, and that
 requires a system reboot before it takes effect.
 The second detail is the RAID: I don't know whether the controller is
 causing you problems; it would be a matter of checking whether the
 kernel was updated and there is a problem with it.

 As others have said, it's a matter of testing:
 Check step by step whether it's the kernel or something physical.

 If that's the case, try the previous kernel.

 Update the kernel again.

 Update the RAID controller firmware.

 Regards,



 On Wednesday, January 7, 2015, Guillermo Henríquez 
 guillermoma...@gmail.com
 wrote:

  Friends:
 
  I need help: a server keeps shutting itself down with the message shutting down
 for
  system halt, and then when booting it shows the following log:
 
 
  kernel: Linux version 2.6.18-348.1.1.el5 (mockbu...@builder10.centos.org
  javascript:;)
  (gcc version 4.1.2 20080704 (Red Hat 4.1.2-54)) #1 SMP Tue Jan 22
 16:19:19
  EST 2013
  kernel: Command line: ro root=/dev/md2
  kernel: BIOS-provided physical RAM map:
  kernel:  BIOS-e820: 0001 - 00093800 (usable)
  kernel:  BIOS-e820: 00093800 - 00093c00 (reserved)
  kernel:  BIOS-e820: 00098000 - 000a (reserved)
  kernel:  BIOS-e820: 000f - 0010 (reserved)
  kernel:  BIOS-e820: 0010 - 7d7d4000 (usable)
  kernel:  BIOS-e820: 7d7d4000 - 7d7de000 (ACPI data)
  kernel:  BIOS-e820: 7d7de000 - 7d7df000 (usable)
  kernel:  BIOS-e820: 7d7df000 - 8000 (reserved)
  kernel:  BIOS-e820: f400 - f800 (reserved)
  kernel:  BIOS-e820: fec0 - fee1 (reserved)
  kernel:  BIOS-e820: ff80 - 0001 (reserved)
  kernel: DMI 2.7 present.
  kernel: No NUMA configuration found
  kernel: Faking a node at -7d7df000
  kernel: Bootmem setup node 0 -7d7df000
  kernel: Memory for crash kernel (0x0 to 0x0) notwithin permissible range
  kernel: disabling kdump
  kernel: ACPI: PM-Timer IO Port: 0x908
  kernel: ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
  kernel: Processor #0 7:10 APIC version 21
  kernel: ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
  kernel: Processor #2 7:10 APIC version 21
  kernel: ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
  kernel: Processor #4 7:10 APIC version 21
  kernel: ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] enabled)
  kernel: Processor #6 7:10 APIC version 21
  kernel: ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] disabled)
  last message repeated 59 times
  kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
  kernel: ACPI: IOAPIC (id[0x08] address[0xfec0] gsi_base[0])
  kernel: IOAPIC[0]: apic_id 8, version 32, address 0xfec0, GSI 0-23
  kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
  kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
  kernel: Setting APIC routing to physical flat
  kernel: ACPI: HPET id: 0x8086a201 base: 0xfed0
  kernel: Using ACPI (MADT) for SMP configuration information
  kernel: Nosave address range: 00093000 - 00094000
  kernel: Nosave address range: 00093000 - 00098000
  kernel: Nosave address range: 00098000 - 000a
  kernel: Nosave address range: 000a - 000f
  kernel: Nosave address range: 000f - 0010
  kernel: Nosave address range: 7d7d4000 - 7d7de000
  kernel: Allocating PCI resources starting at 8800 (gap:
  8000:7400)
  kernel: SMP: Allowing 64 CPUs, 60 hotplug CPUs
  kernel: Built 1 zonelists.  Total pages: 505346
  kernel: Kernel command line: ro root=/dev/md2
  kernel: Initializing CPU#0
  kernel: PID hash table entries: 4096 (order: 12, 32768 bytes)
  kernel: Console: colour VGA+ 80x25
  kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
  kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
  kernel: Checking aperture...
  kernel: Memory: 2012576k/2056060k available (2627k kernel code, 42944k
  reserved, 1677k data, 224k init)
  kernel: Calibrating delay loop (skipped), value calculated using timer
  frequency.. 6186.24 BogoMIPS (lpj=3093121)
  kernel: Security Framework v1.0.0 initialized
  kernel: SELinux:  Initializing.
  kernel: selinux_register_security:  Registering secondary module
 capability
  kernel: Capability LSM initialized as secondary
  kernel: Mount-cache hash table entries: 256
  kernel: CPU: L1 I cache: 32K, L1 D cache: 32K
  kernel: CPU: L2 cache: 256K
  kernel: CPU: L3 cache: 8192K
  kernel: using mwait in idle threads.
  kernel: CPU: Physical Processor ID: 0
  kernel: CPU: Processor Core ID: 0
  kernel: MCE: Machine Check Exception Reporting is disabled.
  kernel: SMP alternatives: switching to UP code
 

[CentOS] Glibc sources?

2015-02-27 Thread ANDY KENNEDY
All,

Please excuse any ignorance in this e-mail as I am not a RH/CentOS/Fedora user 
and may
blunder my way through the correct terminology for my request.

I'm tasked with reconstructing the CentOS version of the GlibC library for 
testing with
gethostbyname().  My mission is to show that we are not affected by the latest 
exploit for
the product we are shipping targeted for RHEL and CentOS.  To do so, I want to 
equip
gethostbyname() with additional code.

My objective is to rebuild from source the EXACT version of GlibC for CentOS 
6.6.
Afterwards, I will make my changes in the code, rebuild and complete my testing.

libc.so.6 reports:
GNU C Library stable release version 2.12, by Roland McGrath et al.
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
Compiled by GNU CC version 4.4.7 20120313 (Red Hat 4.4.7-11).
Compiled on a Linux 2.6.32 system on 2015-01-27.
Available extensions:
The C stubs add-on version 2.1.2.
crypt add-on version 2.1 by Michael Glad and others
GNU Libidn by Simon Josefsson
Native POSIX Threads Library by Ulrich Drepper et al
BIND-8.2.3-T5B
RT using linux kernel aio
libc ABIs: UNIQUE IFUNC
For bug reporting instructions, please see:
http://www.gnu.org/software/libc/bugs.html.

But, when looking through the source code for this version on the CentOS 
servers I only see:
http://vault.centos.org/6.6/updates/Source/SPackages/
glibc-2.12-1.149.el6_6.4.src.rpm    07-Jan-2015 22:45    15M
glibc-2.12-1.149.el6_6.5.src.rpm    27-Jan-2015 23:13    15M

Please point me to the correct source tarball, and all required patches so that 
I can
reconstruct my loaded version of GlibC.  A yum command is also acceptable.

Thanks,
Andy
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Glibc sources?

2015-02-27 Thread Earl A Ramirez
On 27 February 2015 at 13:49, ANDY KENNEDY andy.kenn...@adtran.com wrote:

 All,

 Please excuse any ignorance in this e-mail as I am not a RH/CentOS/Fedora
 user and may
 blunder my way through the correct terminology for my request.

 I'm tasked with reconstructing the CentOS version of the GlibC library for
 testing with
 gethostbyname().  My mission is to show that we are not affected by the
 latest exploit for
 the product we are shipping targeted for RHEL and CentOS.  To do so, I
 want to equip
 gethostbyname() with additional code.

 My objective is to rebuild from source the EXACT version of GlibC for
 CentOS 6.6.
 Afterwards, I will make my changes in the code, rebuild and complete my
 testing.

 libc.so.6 reports:
 GNU C Library stable release version 2.12, by Roland McGrath et al.
 Copyright (C) 2010 Free Software Foundation, Inc.
 This is free software; see the source for copying conditions.
 There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
 PARTICULAR PURPOSE.
 Compiled by GNU CC version 4.4.7 20120313 (Red Hat 4.4.7-11).
 Compiled on a Linux 2.6.32 system on 2015-01-27.
 Available extensions:
 The C stubs add-on version 2.1.2.
 crypt add-on version 2.1 by Michael Glad and others
 GNU Libidn by Simon Josefsson
 Native POSIX Threads Library by Ulrich Drepper et al
 BIND-8.2.3-T5B
 RT using linux kernel aio
 libc ABIs: UNIQUE IFUNC
 For bug reporting instructions, please see:
 http://www.gnu.org/software/libc/bugs.html.

 But, when looking through the source code for this version on the CentOS
 servers I only see:
 http://vault.centos.org/6.6/updates/Source/SPackages/
 glibc-2.12-1.149.el6_6.4.src.rpm    07-Jan-2015 22:45    15M
 glibc-2.12-1.149.el6_6.5.src.rpm    27-Jan-2015 23:13    15M

 Please point me to the correct source tarball, and all required patches so
 that I can
 reconstruct my loaded version of GlibC.  A yum command is also acceptable.

 Thanks,
 Andy
 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos


Hi Andy,

You can use yumdownloader to download the source

$ yumdownloader --source glibc

$ rpm -ivh package.src.rpm
This will give you all the relevant files required for building the package.


-- 
Kind Regards
Earl Ramirez
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
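Earl's steps can be carried one stage further; a hedged sketch of the rest of the workflow (assumes yum-utils and rpm-build are installed, and the default ~/rpmbuild layout used by RPM 4.6+ as on EL6):

```shell
# Network-dependent steps from the reply above are left as comments:
#   yumdownloader --source glibc
#   rpm -ivh glibc-2.12-1.149.el6_6.5.src.rpm      # unpack tarball + patches into ~/rpmbuild
#   rpmbuild -bp "$HOME/rpmbuild/SPECS/glibc.spec" # run %prep: apply every patch
# After -bp the fully patched tree (the exact shipped source) sits under
# ~/rpmbuild/BUILD/; instrument gethostbyname there, then rebuild with rpmbuild -bb.
topdir="$HOME/rpmbuild"
spec="$topdir/SPECS/glibc.spec"
echo "patched tree will land under: $topdir/BUILD"
```

The `-bp` stage stops after patching, which is exactly what is wanted here: a source tree that matches the shipped binaries before any local edits.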


Re: [CentOS] Glibc sources?

2015-02-27 Thread Stephen Harris
On Fri, Feb 27, 2015 at 06:49:23PM +, ANDY KENNEDY wrote:

 But, when looking through the source code for this version on the CentOS 
 servers I only see:
 http://vault.centos.org/6.6/updates/Source/SPackages/
 glibc-2.12-1.149.el6_6.5.src.rpm    27-Jan-2015 23:13    15M

This is the latest version for a fully patched CentOS 6 system.

  % rpm -q glibc
  glibc-2.12-1.149.el6_6.5.x86_64
  glibc-2.12-1.149.el6_6.5.i686


-- 

rgds
Stephen
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
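To confirm the mapping Stephen describes, the SRPM filename and vault URL follow mechanically from the installed NVR (the `rpm -q glibc` output minus the arch suffix); a small sketch using the version reported in this thread:

```shell
# SRPM filename = installed name-version-release + ".src.rpm";
# the vault path is the one quoted earlier in the thread.
nvr='glibc-2.12-1.149.el6_6.5'
srpm="${nvr}.src.rpm"
url="http://vault.centos.org/6.6/updates/Source/SPackages/${srpm}"
echo "$url"
```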


Re: [CentOS] Glibc sources?

2015-02-27 Thread Frank Cox
On Fri, 27 Feb 2015 18:49:23 +
ANDY KENNEDY wrote:


 Compiled on a Linux 2.6.32 system on 2015-01-27.

glibc-2.12-1.149.el6_6.5.src.rpm   27-Jan-2015 23:13   15M

The date on that rpm matches the compiled on date that you posted.

-- 
MELVILLE THEATRE ~ Real D 3D Digital Cinema ~ www.melvilletheatre.com
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Package group X Window System has disappeared

2015-02-27 Thread m . roth
Greg Bailey wrote:
 On 02/27/2015 02:54 AM, Niki Kovacs wrote:

 Until last week, I could install a CentOS 7 based desktop using the
 following approach:

 1. Install minimal system.
 2. yum groupinstall "X Window System"
 3. yum install gdm gnome-classic-session gnome-terminal liberation-fonts
 4. Install applications as needed.

 This morning, the package group X Window System seems to have
 disappeared. This is embarrassing.
snip
 Works for me, although I have to do yum group list hidden to see the

That's *weird*. Why would you even want hidden groups?

I was looking on a 7 box, and did see an environment group of Server
with GUI; I also see, under available groups, Desktop, and Desktop
Platform. I have no idea what the difference is. I wish they'd have
user install groups and expert install groups, so that when the CentOS
desktop clobbers Ubuntu, and Mint, and all the rest, those of us who know
what we're doing, and/or working on servers, can install more easily.

   mark

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-es] help, my server keeps shutting down

2015-02-27 Thread Aldo Rivadeneira
The only thing I can detect is that IPv6 was enabled, and that requires a
system reboot to become active.
The second detail is the RAID: I don't know whether the controller is
causing you problems; it would be worth checking whether the kernel was
updated and whether there is a problem with it.

As others have said, it would be a matter of trying the following:
Check the kernel step by step, and whether the problem is physical.

Try the previous kernel, in case that is the cause.

Update the kernel again.

Update the RAID controller firmware.

Regards,



On Wednesday, January 7, 2015, Guillermo Henríquez guillermoma...@gmail.com
wrote:

 Friends:

 I need help: a server keeps shutting itself down with the message shutting
 down for system halt, and then on booting it shows the following log:


 kernel: Linux version 2.6.18-348.1.1.el5 (mockbu...@builder10.centos.org
 javascript:;)
 (gcc version 4.1.2 20080704 (Red Hat 4.1.2-54)) #1 SMP Tue Jan 22 16:19:19
 EST 2013
 kernel: Command line: ro root=/dev/md2
 kernel: BIOS-provided physical RAM map:
 kernel:  BIOS-e820: 0001 - 00093800 (usable)
 kernel:  BIOS-e820: 00093800 - 00093c00 (reserved)
 kernel:  BIOS-e820: 00098000 - 000a (reserved)
 kernel:  BIOS-e820: 000f - 0010 (reserved)
 kernel:  BIOS-e820: 0010 - 7d7d4000 (usable)
 kernel:  BIOS-e820: 7d7d4000 - 7d7de000 (ACPI data)
 kernel:  BIOS-e820: 7d7de000 - 7d7df000 (usable)
 kernel:  BIOS-e820: 7d7df000 - 8000 (reserved)
 kernel:  BIOS-e820: f400 - f800 (reserved)
 kernel:  BIOS-e820: fec0 - fee1 (reserved)
 kernel:  BIOS-e820: ff80 - 0001 (reserved)
 kernel: DMI 2.7 present.
 kernel: No NUMA configuration found
 kernel: Faking a node at -7d7df000
 kernel: Bootmem setup node 0 -7d7df000
 kernel: Memory for crash kernel (0x0 to 0x0) notwithin permissible range
 kernel: disabling kdump
 kernel: ACPI: PM-Timer IO Port: 0x908
 kernel: ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 kernel: Processor #0 7:10 APIC version 21
 kernel: ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
 kernel: Processor #2 7:10 APIC version 21
 kernel: ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
 kernel: Processor #4 7:10 APIC version 21
 kernel: ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] enabled)
 kernel: Processor #6 7:10 APIC version 21
 kernel: ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] disabled)
 last message repeated 59 times
 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
 kernel: ACPI: IOAPIC (id[0x08] address[0xfec0] gsi_base[0])
 kernel: IOAPIC[0]: apic_id 8, version 32, address 0xfec0, GSI 0-23
 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 kernel: Setting APIC routing to physical flat
 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed0
 kernel: Using ACPI (MADT) for SMP configuration information
 kernel: Nosave address range: 00093000 - 00094000
 kernel: Nosave address range: 00093000 - 00098000
 kernel: Nosave address range: 00098000 - 000a
 kernel: Nosave address range: 000a - 000f
 kernel: Nosave address range: 000f - 0010
 kernel: Nosave address range: 7d7d4000 - 7d7de000
 kernel: Allocating PCI resources starting at 8800 (gap:
 8000:7400)
 kernel: SMP: Allowing 64 CPUs, 60 hotplug CPUs
 kernel: Built 1 zonelists.  Total pages: 505346
 kernel: Kernel command line: ro root=/dev/md2
 kernel: Initializing CPU#0
 kernel: PID hash table entries: 4096 (order: 12, 32768 bytes)
 kernel: Console: colour VGA+ 80x25
 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
 kernel: Checking aperture...
 kernel: Memory: 2012576k/2056060k available (2627k kernel code, 42944k
 reserved, 1677k data, 224k init)
 kernel: Calibrating delay loop (skipped), value calculated using timer
 frequency.. 6186.24 BogoMIPS (lpj=3093121)
 kernel: Security Framework v1.0.0 initialized
 kernel: SELinux:  Initializing.
 kernel: selinux_register_security:  Registering secondary module capability
 kernel: Capability LSM initialized as secondary
 kernel: Mount-cache hash table entries: 256
 kernel: CPU: L1 I cache: 32K, L1 D cache: 32K
 kernel: CPU: L2 cache: 256K
 kernel: CPU: L3 cache: 8192K
 kernel: using mwait in idle threads.
 kernel: CPU: Physical Processor ID: 0
 kernel: CPU: Processor Core ID: 0
 kernel: MCE: Machine Check Exception Reporting is disabled.
 kernel: SMP alternatives: switching to UP code
 kernel: ACPI: Core revision 20060707
 kernel: Using local APIC timer interrupts.
 kernel: Detected 6.236 MHz APIC timer.
 kernel: SMP alternatives: switching to SMP code
 kernel: Booting processor 1/4 APIC 0x2
 

Re: [CentOS] Package group X Window System has disappeared

2015-02-27 Thread Niki Kovacs



On 27/02/2015 16:01, m.r...@5-cent.us wrote:

That's *weird*. Why would you even want hidden groups?


Weird and... not very intelligent. To say it politely.

:o)

--
Microlinux - Solutions informatiques 100% Linux et logiciels libres
7, place de l'église - 30730 Montpezat
Web  : http://www.microlinux.fr
Mail : i...@microlinux.fr
Tél. : 04 66 63 10 32
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Cyrus 2.4 and Centos6

2015-02-27 Thread Mike McCarthy, W1NR
Is there a reason why you need 2.4 vs. the 2.3 package from the CentOS6
repos?

Mike

On 02/27/2015 05:19 AM, Timothy Kesten wrote:
 Hi Folks,

 I'd like to install cyrus-imapd 2.4 in CentOS6.
 Found rpm cyrus 2.4 for CentOS6 on rpmseek.
 cyrus-imapd-2.4.17-30.1.x86_64.rpm

 But there are conflicts with postfix 2.6.6.

 Can I ignore these conflicts, or is there a suitable version of postfix
 available?

 Thx
 Timothy
 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Odd nfs mount problem

2015-02-27 Thread m . roth
I'm exporting a directory, firewall's open on both machines (one CentOS
6.6, the other RHEL 6.6), it automounts on the exporting machine, but the
other server, not so much.

ls /mountpoint/directory eventually times out (directory being the NFS
mount). mount -t nfs server:/location/being/exported /mnt works... but an
immediate ls /mnt gives me stale file handle.

The twist on this: the directory being exported is on an xfs filesystem...
one that's 33TB (it's an external RAID 6 appliance).

Any ideas?

  mark

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Odd nfs mount problem

2015-02-27 Thread m . roth
m.r...@5-cent.us wrote:
 I'm exporting a directory, firewall's open on both machines (one CentOS
 6.6, the other RHEL 6.6), it automounts on the exporting machine, but the
 other server, not so much.

 ls /mountpoint/directory eventually times out (directory being the NFS
 mount). mount -t nfs server:/location/being/exported /mnt works... but an
 immediate ls /mnt gives me stale file handle.

 The twist on this: the directory being exported is on an xfs filesystem...
 one that's 33TB (it's an external RAID 6 appliance).

 Any ideas?

Oh, yes: I did just think to install xfsprogs, and did that, but still no
joy.

 mark

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] OT: AF 4k sector drives with 512 emulation

2015-02-27 Thread Robert Arkiletian
I still have good-quality older SATA hardware RAID cards that require 512
bytes/sector. As far as I know, HD manufacturers are not making native
512-bytes/sector drives any more.

Some have better 512e emulation than others. Looking for some advice on
which to avoid and which are recommended. Thanks. PS this is for a CentOS6
server.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] OT: AF 4k sector drives with 512 emulation

2015-02-27 Thread John R Pierce

On 2/27/2015 12:53 PM, Robert Arkiletian wrote:

Still have good quality older sata hardware raid cards that require 512
bytes/sector. As far as I know HD manufacturers are not making native 512
bytes/sector drives any more.

Some have better 512e emulation than others. Looking for some advice on
which to avoid and which are recommended. Thanks. PS this is for a CentOS6
server.


any of the 'enterprise' nearline storage or NAS drives should be fine.   
I wouldn't use anything else in a RAID setup.


Seagate NS series, for instance, or WD Red or Re, etc.



--
john r pierce  37N 122W
somewhere on the middle of the left coast

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] yum causing RPC timed out?

2015-02-27 Thread Dave Burns
Apparently CentOS-7 - Base is failing; what does that mean? How do I
contact the upstream for the repo? How do I find a working upstream?

More info from command execution:
do_ypcall: clnt_call: RPC: Timed out
do_ypcall: clnt_call: RPC: Timed out
http://mirror.supremebytes.com/centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
[Errno 12] Timeout on
http://mirror.supremebytes.com/centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
(28, 'Resolving timed out after 30384 milliseconds')
Trying other mirror.


 One of the configured repositories failed (CentOS-7 - Base),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work fix this:

 1. Contact the upstream for the repository and get them to fix the
problem.

 2. Reconfigure the baseurl/etc. for the repository, to point to a
working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).

 3. Disable the repository, so yum won't use it by default. Yum will
then
just ignore the repository until you permanently enable it again or
use
--enablerepo for temporary usage:

yum-config-manager --disable base

 4. Configure the failing repository to be skipped, if it is
unavailable.
Note that yum will try to contact the repo. when it runs most
commands,
so will have to try and fail each time (and thus. yum will be be
much
slower). If it is a very temporary problem though, this is often a
nice
compromise:

yum-config-manager --save --setopt=base.skip_if_unavailable=true

failure:
repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2
from base: [Errno 256] No more mirrors to try.
http://centos.corenetworks.net/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
[Errno 12] Timeout on
http://centos.corenetworks.net/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
(28, 'Resolving timed out after 30385 milliseconds')
http://mirror.us.oneandone.net/linux/distributions/centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
[Errno 12] Timeout on
http://mirror.us.oneandone.net/linux/distributions/centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
(28, 'Resolving timed out after 30385 milliseconds')
http://centos.sonn.com/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
[Errno 12] Timeout on
http://centos.sonn.com/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
(28, 'Resolving timed out after 30383 milliseconds')
http://centos-distro.cavecreek.net/centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
[Errno 12] Timeout on
http://centos-distro.cavecreek.net/centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
(28, 'Resolving timed out after 30383 milliseconds')
http://mirror.clarkson.edu/centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
[Errno 12] Timeout on
http://mirror.clarkson.edu/centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
(28, 'Resolving timed out after 30383 milliseconds')
http://mirror.thelinuxfix.com/CentOS/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
[Errno 12] Timeout on
http://mirror.thelinuxfix.com/CentOS/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
(28, 'Resolving timed out after 30383 milliseconds')
http://mirrors.psychz.net/Centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
[Errno 12] Timeout on
http://mirrors.psychz.net/Centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
(28, 'Resolving timed out after 30383 milliseconds')
http://repos.mia.quadranet.com/centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
[Errno 12] Timeout on

[CentOS] yum causing RPC timed out?

2015-02-27 Thread Dave Burns
I just installed CentOS 7, and yum is acting strange, experiencing RPC
time-outs. Sometimes disabling the additional repos (epel and rpmforge)
seems to make things act normal. But not this time (see below).

Could I have some misconfiguration? Network glitch? What hypotheses should
I be considering?
Thanks,
Dave

[root@localhost ~]# yum repolist
repo id             repo name                                       status
base/7/x86_64       CentOS-7 - Base                                  8,465
epel/x86_64         Extra Packages for Enterprise Linux 7 - x86_64   7,312
extras/7/x86_64     CentOS-7 - Extras                                  104
rpmforge            RHEL 7 - RPMforge.net - dag                        245
updates/7/x86_64    CentOS-7 - Updates                               1,721
repolist: 17,847
[root@localhost ~]# yum --disablerepo=epel --disablerepo=rpmforge provides '*/applydeltarpm'
do_ypcall: clnt_call: RPC: Timed out
do_ypcall: clnt_call: RPC: Timed out
http://centos.corenetworks.net/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
[Errno 12] Timeout on
http://centos.corenetworks.net/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
(28, 'Resolving timed out after 30385 milliseconds')
Trying other mirror.
do_ypcall: clnt_call: RPC: Timed out
[etc. etc.]
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Odd nfs mount problem [SOLVED]

2015-02-27 Thread m . roth
m.r...@5-cent.us wrote:
 m.r...@5-cent.us wrote:
 I'm exporting a directory, firewall's open on both machines (one CentOS
 6.6, the other RHEL 6.6), it automounts on the exporting machine, but
 the
 other server, not so much.

 ls /mountpoint/directory eventually times out (directory being the NFS
 mount). mount -t nfs server:/location/being/exported /mnt works... but
 an
 immediate ls /mnt gives me stale file handle.

 The twist on this: the directory being exported is on an xfs
 filesystem...
 one that's 33TB (it's an external RAID 6 appliance).

 Any ideas?

 Oh, yes: I did just think to install xfs_progs, and did that, but still no
 joy.


Since we got the RAID appliance mounted, we'd started with a project
directory on it, and that exported just fine. So what seems to work is to
put the new directory under that, and then export *that*.  That is,
/path/to/ourproj, which mounts under /ourproj, and we wanted to mount
something else under /otherproj, (note that ourproj is the large xfs
filesystem), so instead of /path/to/otherproj, I just exported
/path/to/ourproj/otherproj, and mounted that on the other system as
/otherproj.

Does that make sense? Clear as mud? Anyway, it looks like we have our
workaround.

   mark wish nfs could handle an option of inode64

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
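mark's closing wish hints at one untested avenue (an assumption on my part, not a confirmed diagnosis): a >16TB XFS filesystem allocates 64-bit inode numbers, and stale-handle problems on freshly mounted exports are often worked around by pinning an explicit filesystem id on the export line. A cheap-to-test sketch of /etc/exports, where fsid=1234 and the client name are purely hypothetical:

```shell
# /etc/exports sketch -- fsid value and client name are examples only:
#   /path/to/ourproj/otherproj  client.example.com(rw,fsid=1234)
# then re-export without restarting nfsd:
#   exportfs -ra
```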


Re: [CentOS] yum causing RPC timed out?

2015-02-27 Thread Thomas Eriksson
On 02/27/2015 01:11 PM, Dave Burns wrote:
 Apparently CentOS-7 - Base is failing, what does that mean? How do I
 contact the upstream for the repo? How do I find a working upstream?
 
 More info from command execution:
 do_ypcall: clnt_call: RPC: Timed out
 do_ypcall: clnt_call: RPC: Timed out
 http://mirror.supremebytes.com/centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
 [Errno 12] Timeout on
 http://mirror.supremebytes.com/centos/7.0.1406/os/x86_64/repodata/3cda64d1c161dd0fead8398a62ef9c691e78ee02fe56d04566f850c94929f61f-filelists.sqlite.bz2:
 (28, 'Resolving timed out after 30384 milliseconds')
 Trying other mirror.
 

This has nothing to do with yum.

You are using NIS for name lookup and your NIS server is not responding.

-Thomas

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] C7, igb and DCB support for pause frame ?

2015-02-27 Thread Laurent Wandrebeck


Steven Tardy sjt5a...@gmail.com wrote:


 DCB requires Priority Flow Control(PFC) aka 802.1Qbb.
Flow Control is 802.3x.

The two are often confused and not compatible.

http://www.intel.com/content/www/us/en/ethernet-controllers/ethernet-controller-i350-datasheet.html

Mentions flow control several times, but never
PFC/priority-flow-control/802.1Qbb.

PFC capable switches purposefully disable 802.3x flow control. Also PFC has
to negotiate between two devices/switches matching QoS/CoS/no-drop policies.

Some good reading for beginner PFC knowledge:

http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/ieee-802-1-data-center-bridging/at_a_glance_c45-460907.pdf

What exactly are you trying to pause? Typically FCoE/iSCSI is set to
no-drop and Ethernet traffic is paused/dropped in favor of storage
traffic. If there is only one type/class/CoS of traffic PFC won't gain much
over regular flow control/802.3x.

Hope that helps.


Hello Steven,

You’ve been really helpful!

Our switches indeed do support 802.3x and not 802.1Qbb.

Ethtool telling:
Supported pause frame use: Symmetric
Advertised pause frame use: Symmetric

I guess (I’m more of a sysad guy than netad) we’re on the right track  
and have no need of DCB* and lldpad.


Actually, our masters will be metadata server for the distributed FS  
(RozoFS not to name it), and will export a system image via NFS to  
nodes (2×1gbps, 802.3ad) which are « diskless » (no disk for OS but  
disks for distributed FS storage only).

FC (802.3x) usage is mandatory for RozoFS.
There will be some other traffic due to HTCondor (nodes will be  
execute nodes too), syslog being centralized on masters…
I know, that not the perfect config, but we had to do that way due to  
budget constraints.
Now I need to find how to get a single image for all the nodes :)  
(PXE, dhcpd, dracut and yum --installroot should do the trick I hope).


Thanks again for the heads-up!

Regards,
Laurent.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] L2TP over IPSEC?

2015-02-27 Thread dixan rodriges
I know, I have configured it.

On Thu, Feb 26, 2015 at 11:38 PM, CS DBA cs_...@consistentstate.com wrote:

 Hi All;

 anyone have any info on setting up a L2TP over IPSEC client vpn
 connection?

 Thanks in advance


 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos




-- 
Thanks and Regards,
---
Dixson Rodriges,

MoB:+91-9249500540
Email: dixa...@gmail.com,amdi...@gmail.com
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Easy way to strip down CentOS?

2015-02-27 Thread Scott Robbins
On Fri, Feb 27, 2015 at 08:36:58AM +0100, Niki Kovacs wrote:
 Le 26/02/2015 15:53, David Both a écrit :
 Ok, I understand, now. I just leave multiple desktops in place and
 switch between them as I want. But perhaps you have reasons to do it as
 you do. That is one thing I really appreciate about Linux, the fact that
 there are many, many ways to accomplish almost everything and that what
 is right and works for me may not be what works best for you.

I find that it's quite easy to get a minimal desktop going.  I tend to use
a custom compiled dwm, but this will work with most window managers.

http://srobb.net/minimaldesktop.html


-- 
Scott Robbins
PGP keyID EB3467D6
( 1B48 077D 66F6 9DB0 FDC2 A409 FA54 EB34 67D6 )
gpg --keyserver pgp.mit.edu --recv-keys EB3467D6

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] yum causing RPC timed out?

2015-02-27 Thread Dave Burns
On Fri, Feb 27, 2015 at 12:41 PM, Stephen Harris li...@spuddy.org wrote:

  do_ypcall is a NIS error message.  (Previously, NIS was called yellow
  pages; the yp in do_ypcall is a reference to that).

 Maybe you have hosts: files nis in /etc/nsswitch.conf or something
 else that's causing the OS to want to talk to NIS.


grep hosts /etc/nsswitch.conf
hosts:  files nis dns myhostname

Maybe I should change to
hosts:  files dns nis myhostname
?



 You _DO_ have a problem with your NIS setup somewhere.


It is a problem if yum expects it to be something else. NIS passes all the
tests I have (ypcat and various other commands output what I expect).
thanks,
Dave
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
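Dave's proposed reorder can be rehearsed on a copy of the line before touching the live file; a minimal sketch (the commented sed -i is the real change; keep a backup):

```shell
# Demonstrate the reorder on a copy of the exact line from the thread:
line='hosts:  files nis dns myhostname'
fixed=$(printf '%s\n' "$line" | sed 's/nis dns/dns nis/')
echo "$fixed"    # hosts:  files dns nis myhostname
# To apply for real (assumes root, and a backup kept):
#   cp -p /etc/nsswitch.conf /etc/nsswitch.conf.bak
#   sed -i 's/^hosts:\(.*\)nis dns/hosts:\1dns nis/' /etc/nsswitch.conf
```

With DNS ahead of NIS, host lookups (such as yum resolving mirrors) no longer block on a slow or unreachable NIS server.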


[CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Khemara Lyn
Dear All,

I am in desperate need for LVM data rescue for my server.
I have a VG called vg_hosting consisting of 4 PVs, each contained in a
separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
One LV, lv_home, was created to use all the space of the 4 PVs.

Right now, the third hard drive is damaged, and therefore the third PV
(/dev/sdc1) cannot be accessed anymore. I would like to recover whatever is
left in the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).

I have tried with the following:

1. Removing the broken PV:

# vgreduce --force vg_hosting /dev/sdc1
  Physical volume /dev/sdc1 still in use

# pvmove /dev/sdc1
  No extents available for allocation

2. Replacing the broken PV:

I was able to create a new PV and restore the VG Config/meta data:

# pvcreate --restorefile ... --uuid ... /dev/sdc1
# vgcfgrestore --file ... vg_hosting

However, vgchange would give this error:

# vgchange -a y
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume vg_hosting-lv_home (253:4)
  0 logical volume(s) in volume group vg_hosting now active

Could someone help me please???
I'm in dire need of help to save the data, at least some of it if possible.

Regards,
Khem


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
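Not a guaranteed path, but a hedged sketch of a lower-risk sequence using standard lvm2 commands. The --partial flag is an assumption to verify against the local lvm2 build: it activates a VG whose PV is missing, mapping the lost extents to an error target, so what remains can be imaged and scraped rather than touching the live LV:

```shell
# Destructive/system-specific steps are left as comments on purpose
# (/rescue is a hypothetical destination on a separate, healthy disk):
#   vgchange -ay --partial vg_hosting
#   dd if=/dev/vg_hosting/lv_home of=/rescue/lv_home.img bs=1M conv=noerror,sync
# conv=noerror,sync continues past read errors and pads unreadable blocks,
# preserving offsets in the image; run testdisk or the filesystem's own
# read-only tools against the image, never the LV itself.
lv_dev=/dev/vg_hosting/lv_home
echo "scrape target: $lv_dev"
```

Any resize, fsck, or further metadata surgery on the degraded VG risks the remaining data; imaging first keeps a stable copy to experiment on.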


Re: [CentOS] yum causing RPC timed out?

2015-02-27 Thread Stephen Harris
On Fri, Feb 27, 2015 at 12:38:06PM -1000, Dave Burns wrote:
 What makes you think NIS is involved?

 Is Errno 12 a clue? I tried searching for (do_ypcall: clnt_call: rpc: timed

do_ypcall is a NIS error message.  (Previously, NIS was called "yellow
pages"; the "yp" in do_ypcall is a reference to that).

Maybe you have hosts: files nis in /etc/nsswitch.conf or something
else that's causing the OS to want to talk to NIS.

You _DO_ have a problem with your NIS setup somewhere. 

-- 

rgds
Stephen


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread John R Pierce

On 2/27/2015 4:25 PM, Khemara Lyn wrote:

Right now, the third hard drive is damaged; and therefore the third PV
(/dev/sdc1) cannot be accessed anymore. I would like to recover whatever
left in the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).


your data is spread across all 4 drives, and you lost 25% of it. so only 
3 out of 4 blocks of data still exist.  good luck with recovery.




--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread James A. Peltier


- Original Message -
| Dear All,
| 
| I am in desperate need for LVM data rescue for my server.
| I have an VG call vg_hosting consisting of 4 PVs each contained in a
| separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
| And this LV: lv_home was created to use all the space of the 4 PVs.
| 
| Right now, the third hard drive is damaged; and therefore the third PV
| (/dev/sdc1) cannot be accessed anymore. I would like to recover whatever
| left in the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
| 
| I have tried with the following:
| 
| 1. Removing the broken PV:
| 
| # vgreduce --force vg_hosting /dev/sdc1
|   Physical volume /dev/sdc1 still in use
| 
| # pvmove /dev/sdc1
|   No extents available for allocation

This would indicate that you don't have sufficient extents to move the data off 
of this disk.  If you have another disk then you could try adding it to the VG 
and then moving the extents.

| 2. Replacing the broken PV:
| 
| I was able to create a new PV and restore the VG Config/meta data:
| 
| # pvcreate --restorefile ... --uuid ... /dev/sdc1
| # vgcfgrestore --file ... vg_hosting
| 
| However, vgchange would give this error:
| 
| # vgchange -a y
| device-mapper: resume ioctl on  failed: Invalid argument
| Unable to resume vg_hosting-lv_home (253:4)
| 0 logical volume(s) in volume group vg_hosting now active

There should be no need to create a PV and then restore the VG unless the 
entire VG is damaged.  The configuration should still be available on the other 
disks and adding the new PV and moving the extents should be enough.  

| Could someone help me please???
| I'm in dire need for help to save the data, at least some of it if possible.

Can you not see the PV/VG/LV at all?

-- 
James A. Peltier
IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone   : 778-782-6573
Fax : 778-782-3045
E-Mail  : jpelt...@sfu.ca
Website : http://www.sfu.ca/itservices
Twitter : @sfu_rcg
Powering Engagement Through Technology
Build upon strengths and weaknesses will generally take care of themselves - 
Joyce C. Lock



Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Khemara Lyn
Thank you, John for your quick reply.
That is what I hope. But how to do it? I cannot even activate the LV with
the remaining PVs.

Thanks,
Khem

On Sat, February 28, 2015 7:34 am, John R Pierce wrote:
 On 2/27/2015 4:25 PM, Khemara Lyn wrote:

 Right now, the third hard drive is damaged; and therefore the third PV
 (/dev/sdc1) cannot be accessed anymore. I would like to recover whatever
  left in the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).

 your data is spread across all 4 drives, and you lost 25% of it. so only 3
 out of 4 blocks of data still exist.  good luck with recovery.



 --
 john r pierce  37N 122W somewhere on
 the middle of the left coast







Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Khemara Lyn
Dear James,

Thank you for being quick to help.
Yes, I could see all of them:

# vgs
# lvs
# pvs

Regards,
Khem

On Sat, February 28, 2015 7:37 am, James A. Peltier wrote:



 - Original Message -
 | Dear All,
 |
 | I am in desperate need for LVM data rescue for my server.
 | I have an VG call vg_hosting consisting of 4 PVs each contained in a
 | separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
 | And this LV: lv_home was created to use all the space of the 4 PVs.
 |
 | Right now, the third hard drive is damaged; and therefore the third PV
 | (/dev/sdc1) cannot be accessed anymore. I would like to recover whatever
  | left in the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
 |
 | I have tried with the following:
 |
 | 1. Removing the broken PV:
 |
 | # vgreduce --force vg_hosting /dev/sdc1
 |   Physical volume /dev/sdc1 still in use
 |
 | # pvmove /dev/sdc1
 |   No extents available for allocation


 This would indicate that you don't have sufficient extents to move the
 data off of this disk.  If you have another disk then you could try
 adding it to the VG and then moving the extents.

 | 2. Replacing the broken PV:
 |
 | I was able to create a new PV and restore the VG Config/meta data:
 |
 | # pvcreate --restorefile ... --uuid ... /dev/sdc1
 | # vgcfgrestore --file ... vg_hosting
 |
 | However, vgchange would give this error:
 |
 | # vgchange -a y
 |   device-mapper: resume ioctl on  failed: Invalid argument
 |   Unable to resume vg_hosting-lv_home (253:4)
 |   0 logical volume(s) in volume group vg_hosting now active


 There should be no need to create a PV and then restore the VG unless the
 entire VG is damaged.  The configuration should still be available on the
 other disks and adding the new PV and moving the extents should be
 enough.

 | Could someone help me please???
 | I'm in dire need for help to save the data, at least some of it if
 possible.

 Can you not see the PV/VG/LV at all?


 --
 James A. Peltier
 IT Services - Research Computing Group
 Simon Fraser University - Burnaby Campus
 Phone   : 778-782-6573
 Fax : 778-782-3045
 E-Mail  : jpelt...@sfu.ca
 Website : http://www.sfu.ca/itservices
 Twitter : @sfu_rcg
 Powering Engagement Through Technology
 Build upon strengths and weaknesses will generally take care of
 themselves - Joyce C. Lock







Re: [CentOS] OT: AF 4k sector drives with 512 emulation

2015-02-27 Thread Chris Murphy
The default for fdisk, parted, and gdisk is starting the first
partition on LBA 2048, which is 8 sector aligned. You don't need any
options. The alternative is to simply not partition the drives or the
resulting RAID and just format it.
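[Editor's note, not from the original post: "8 sector aligned" is just divisibility — with 512-byte logical sectors and a 4096-byte physical sector, a partition start is aligned when its starting LBA is a multiple of 8. A minimal check:]

```shell
# 2048 is the default first-partition start LBA used by fdisk, parted,
# and gdisk; 2048 % 8 == 0, so it is aligned to 4KiB physical sectors.
start_lba=2048
if [ $(( start_lba % 8 )) -eq 0 ]; then
    echo "LBA ${start_lba} is 4KiB-aligned"
else
    echo "LBA ${start_lba} is misaligned"
fi
```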

Chris Murphy


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread John R Pierce

On 2/27/2015 4:37 PM, James A. Peltier wrote:

| I was able to create a new PV and restore the VG Config/meta data:
|
| # pvcreate --restorefile ... --uuid ... /dev/sdc1
|


oh, that step means you won't be able to recover ANY of the data that 
was formerly on that PV.




--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Khemara Lyn
Dear John,

I understand; I tried it in the hope that I could activate the LV again
with a new PV replacing the damaged one. But I still could not activate
it.

What is the right way to recover the remaining PVs left?

Regards,
Khem

On Sat, February 28, 2015 7:42 am, John R Pierce wrote:
 On 2/27/2015 4:37 PM, James A. Peltier wrote:

 | I was able to create a new PV and restore the VG Config/meta data:
 |
 | # pvcreate --restorefile ... --uuid ... /dev/sdc1
 |


 oh, that step means you won't be able to recover ANY of the data that was
 formerly on that PV.



 --
 john r pierce  37N 122W somewhere on
 the middle of the left coast







Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Khemara Lyn
Hello James and All,

For your information, here's what the listing looks like:

[root@localhost ~]# pvs
  PV VG Fmt  Attr PSize PFree
  /dev/sda1  vg_hosting lvm2 a--  1.82t0
  /dev/sdb2  vg_hosting lvm2 a--  1.82t0
  /dev/sdc1  vg_hosting lvm2 a--  1.82t0
  /dev/sdd1  vg_hosting lvm2 a--  1.82t0
[root@localhost ~]# lvs
  LV  VG Attr   LSize  Pool Origin Data%  Meta%  Move Log
Cpy%Sync Convert
  lv_home vg_hosting -wi-s-  7.22t
  lv_root vg_hosting -wi-a- 50.00g
  lv_swap vg_hosting -wi-a- 11.80g
[root@localhost ~]# vgs
  VG #PV #LV #SN Attr   VSize VFree
  vg_hosting   4   3   0 wz--n- 7.28t0
[root@localhost ~]#

The problem is, when I do:

[root@localhost ~]# vgchange -a y
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume vg_hosting-lv_home (253:4)
  3 logical volume(s) in volume group vg_hosting now active

Only lv_root and lv_swap are activated; but lv_home is not, with the error
above (on the vgchange command).

How can I activate lv_home with only the 3 remaining PVs?
The PV /dev/sdb2 is the one that was lost. I created it from a new blank
hard disk and restored the VG using:

# pvcreate --restorefile ... --uuid ... /dev/sdb2
# vgcfgrestore --file ... vg_hosting

Regards,
Khem

On Sat, February 28, 2015 7:42 am, Khemara Lyn wrote:
 Dear James,


 Thank you for being quick to help.
 Yes, I could see all of them:


 # vgs
 # lvs
 # pvs


 Regards,
 Khem


 On Sat, February 28, 2015 7:37 am, James A. Peltier wrote:




 - Original Message -
 | Dear All,
 |
 | I am in desperate need for LVM data rescue for my server.
 | I have an VG call vg_hosting consisting of 4 PVs each contained in a
 | separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
 | And this LV: lv_home was created to use all the space of the 4 PVs.
 |
 | Right now, the third hard drive is damaged; and therefore the third PV
  | (/dev/sdc1) cannot be accessed anymore. I would like to recover
 whatever | left in the other 3 PVs (/dev/sda1, /dev/sdb1, and
 /dev/sdd1).
 |
 | I have tried with the following:
 |
 | 1. Removing the broken PV:
 |
 | # vgreduce --force vg_hosting /dev/sdc1
 |   Physical volume /dev/sdc1 still in use
 |
 | # pvmove /dev/sdc1
 |   No extents available for allocation



 This would indicate that you don't have sufficient extents to move the
 data off of this disk.  If you have another disk then you could try
 adding it to the VG and then moving the extents.

 | 2. Replacing the broken PV:
 |
 | I was able to create a new PV and restore the VG Config/meta data:
 |
 | # pvcreate --restorefile ... --uuid ... /dev/sdc1
 | # vgcfgrestore --file ... vg_hosting
 |
 | However, vgchange would give this error:
 |
 | # vgchange -a y
 |  device-mapper: resume ioctl on  failed: Invalid argument
 |  Unable to resume vg_hosting-lv_home (253:4)
 |  0 logical volume(s) in volume group vg_hosting now active



 There should be no need to create a PV and then restore the VG unless
 the entire VG is damaged.  The configuration should still be available
 on the other disks and adding the new PV and moving the extents should
 be enough.

 | Could someone help me please???
 | I'm in dire need for help to save the data, at least some of it if
 possible.

 Can you not see the PV/VG/LV at all?



 --
 James A. Peltier
 IT Services - Research Computing Group
 Simon Fraser University - Burnaby Campus
 Phone   : 778-782-6573
 Fax : 778-782-3045
 E-Mail  : jpelt...@sfu.ca
 Website : http://www.sfu.ca/itservices
 Twitter : @sfu_rcg
 Powering Engagement Through Technology
 Build upon strengths and weaknesses will generally take care of
 themselves - Joyce C. Lock












Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread S.Tindall
On Sat, 2015-02-28 at 07:25 +0700, Khemara Lyn wrote:

 I have tried with the following:
 
 1. Removing the broken PV:
 
 # vgreduce --force vg_hosting /dev/sdc1
   Physical volume /dev/sdc1 still in use

Next time, try "vgreduce --removemissing <VG>" first.

In my experience, any lvm command using --force often has undesirable
side effects.


Regarding getting the lvm functioning again, there is also a --partial
option that is sometimes useful with the various vg* commands with a
missing PV (see man lvm).

And vgdisplay -v often regenerates missing metadata (as in getting a
functioning lvm back).
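[Editor's note: a hedged sketch of the less destructive sequence Steve describes, using the VG name from this thread. --removemissing may refuse to proceed while LVs still reference the missing PV, and --partial activation maps the lost extents to an error target, so any recovery after this is best-effort; image the surviving disks first if at all possible.]

```shell
# Sketch only -- operate on images/copies of the surviving disks if you can.
vgreduce --removemissing vg_hosting   # drop the unreachable PV from metadata
vgchange -a y --partial vg_hosting    # activate despite the missing PV
vgdisplay -v vg_hosting               # inspect the resulting metadata state
```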

Steve



Re: [CentOS] yum causing RPC timed out?

2015-02-27 Thread Thomas Eriksson
On 02/27/2015 04:16 PM, Dave Burns wrote:
 On Fri, Feb 27, 2015 at 12:41 PM, Stephen Harris li...@spuddy.org wrote:

 do_ypcall is a NIS error message.  (Previous NIS was called yellow
 pages; the yp in do_ypcall is a reference to that).

 Maybe you have hosts: files nis in /etc/nsswitch.conf or something
 else that's causing the OS to want to talk to NIS.

 
 grep hosts /etc/nsswitch.conf
 hosts:  files nis dns myhostname
 
 Maybe I should change to
 hosts:  files dns nis myhostname
 ?
 
 

 You _DO_ have a problem with your NIS setup somewhere.

 
 It is a problem if yum expects it to be something else. NIS passes all the
 tests I have (ypcat & various other commands output what I expect).
 thanks,

Yum is blissfully unaware of how the hostname is resolved. It just uses
a system call and expects to get an answer within a reasonable time.

The message do_ypcall: clnt_call: RPC: Timed out is coming from ypbind
and indicates that NIS is not working as it should.

Swapping the order of dns and nis in nsswitch.conf will probably get you
going for this particular case, provided that dns is working. But
the NIS problem is going to bite you sooner or later if you don't sort
it out.
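[Editor's note: the reordering Thomas suggests is a one-line change. A sketch of the resulting /etc/nsswitch.conf line, keeping NIS as a last-resort fallback:]

```
# /etc/nsswitch.conf -- consult local files, then DNS, then NIS,
# so a hung NIS server no longer blocks ordinary host lookups:
hosts:  files dns nis myhostname
```

Running `getent hosts <name>` exercises this same NSS lookup path, which is a quick way to confirm the change before retrying yum.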

-Thomas


[CentOS] Package group X Window System has disappeared

2015-02-27 Thread Niki Kovacs

Hi,

Until last week, I could install a CentOS 7 based desktop using the 
following approach:


1. Install minimal system.

2. yum groupinstall X Window System

3. yum install gdm gnome-classic-session gnome-terminal liberation-fonts

4. Install applications as needed.

This morning, the package group X Window System seems to have 
disappeared. This is embarrassing.


What happened?

Niki
--
Microlinux - Solutions informatiques 100% Linux et logiciels libres
7, place de l'église - 30730 Montpezat
Web  : http://www.microlinux.fr
Mail : i...@microlinux.fr
Tél. : 04 66 63 10 32


[CentOS] Cyrus 2.4 and Centos6

2015-02-27 Thread Timothy Kesten
Hi Folks,

I'd like to install cyrus-imapd 2.4 in CentOS6.
Found rpm cyrus 2.4 for CentOS6 on rpmseek.
cyrus-imapd-2.4.17-30.1.x86_64.rpm

But there are conflicts with postfix 2.6.6.

Can I ignore these conflicts, or is there a suitable version of postfix
available?

Thx
Timothy


Re: [CentOS] yum causing RPC timed out?

2015-02-27 Thread Dave Burns
On Fri, Feb 27, 2015 at 11:57 AM, Thomas Eriksson 
thomas.eriks...@slac.stanford.edu wrote:


 This has nothing to do with yum.

 You are using NIS for name lookup and your NIS server is not responding.


NIS is working fine, at least, for what I expect it to do.

What makes you think NIS is involved?
What does yum use NIS for?
Is there a test command I could use to see whether NIS is working for yum?
What names is yum looking up?
Is Errno 12 a clue? I tried searching for (do_ypcall: clnt_call: rpc: timed
out & errno 12), got many confusing hits, nothing obviously helpful.

thanks,
Dave


Re: [CentOS] OT: AF 4k sector drives with 512 emulation

2015-02-27 Thread Chris Murphy
On Fri, Feb 27, 2015 at 1:53 PM, Robert Arkiletian rob...@gmail.com wrote:
 Still have good quality older sata hardware raid cards that require 512
 bytes/sector. As far as I know HD manufacturers are not making native 512
 bytes/sector drives any more.

512n drives still exist, although they tend to be a bit smaller, 2TB or less.
http://www.hgst.com/tech/techlib.nsf/techdocs/FD3F376DC2ECCE68882579D40082C393/$file/US7K4000_ds.pdf


4Kn drives are appearing now also. I don't expect these drives to be
bootable except possibly by systems with UEFI firmware. It's also
possible hardware RAID will reject them unless explicitly supported.
http://www.hgst.com/tech/techlib.nsf/techdocs/29C9312E3B7D10CE88257D41000D8D16/$file/Ultrastar-7K6000-DS.pdf


 Some have better 512e emulation than others. Looking for some advice on
 which to avoid and which are recommended. Thanks. PS this is for a CentOS6
 server.

The emulation implementations don't come into play if the alignment is
correct from the start. The better implementations have significantly
less pathological behavior if alignment is wrong, but that's
anecdotal, I don't have any empirical data available. But I'd say in
any case you want it properly aligned.
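[Editor's note: a hedged way to verify alignment on an existing drive; /dev/sda and partition number 1 below are placeholders.]

```shell
# Ask parted whether partition 1 meets the device's optimal alignment:
parted /dev/sda align-check optimal 1
# Or read the raw start sector; a multiple of 8 means 4KiB alignment:
cat /sys/block/sda/sda1/start
```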


-- 
Chris Murphy


Re: [CentOS] OT: AF 4k sector drives with 512 emulation

2015-02-27 Thread m . roth
Chris Murphy wrote:
snip
 The emulation implementations don't come into play if the alignment is
 correct from the start. The better implementations have significantly
 less pathological behavior if alignment is wrong, but that's
 anecdotal, I don't have any empirical data available. But I'd say in
 any case you want it properly aligned.

You really, really want it properly aligned. We ran into that problem when
we started getting 3TB drives a couple-three years ago. Proper alignment
made a measured... trying to remember, but I think it was at *least* 20%
difference in throughput.

Alignment's easy: using parted (the user-hostile program), if you do go in
with parted -a optimal /dev/drive, and do
mkpart pri ext4 0.0GB 100% (for non-root drives, for example), it's
aligned correctly.

mark



Re: [CentOS] OT: AF 4k sector drives with 512 emulation

2015-02-27 Thread John R Pierce

On 2/27/2015 3:06 PM, m.r...@5-cent.us wrote:

Alignment's easy: using parted (the user-hostile program), if you do go in
with parted -a optimal /dev/drive, and do
mkpart pri ext4 0.0GB 100% (for non-root drives, for example), it's
aligned correctly.


I found -a optimal to do weird things, and it almost always complains. I
just use -a none now, and specify the partition start in (512-byte) sectors, like:


# parted -a none /dev/sdc
mklabel gpt
mkpart pri 512s -1s

Don't start at 0, as that's where the MBR or GPT has to go. 512
sectors is 256K bytes, which puts you on an erase-block boundary with
most SSDs as well as HDs. -1s is the end of the disk.




--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [CentOS] Package group X Window System has disappeared

2015-02-27 Thread anax

Try

yum groupinstall Xfce

or
yum groupinstall MATE Desktop

or
yum groupinstall GNOME Desktop

or
yum groupinstall Server with GUI

suomi



On 02/27/2015 10:54 AM, Niki Kovacs wrote:

Hi,

Until last week, I could install a CentOS 7 based desktop using the
following approach:

1. Install minimal system.

2. yum groupinstall X Window System

3. yum install gdm gnome-classic-session gnome-terminal liberation-fonts

4. Install applications as needed.

This morning, the package group X Window System seems to have
disappeared. This is embarrassing.

What happened?

Niki
