[CentOS] nameserver issue

2017-04-19 Thread Fred Smith
Hi all!

This question is, at best, somewhat peripheral to Centos, but I'm
hoping to be forgiven, and that someone here can give me a clue.

I've just brought up a nameserver on my household LAN, bind9 on a
Raspberry Pi.

The connection with CentOS is this: my main desktop is C7, and its
hardwired network is also configured manually, not via DHCP. I've edited the
IPv4 config (in NM) and changed the DNS setting from 192.168.2.1 (the router)
to 192.168.2.2 (the RPi). I've also manually tweaked /etc/resolv.conf
to contain 192.168.2.2 instead of 192.168.2.1.

It works fine until I fire up a VPN. Having done that, looking in
/etc/resolv.conf (while the VPN is connected), it has reverted to
192.168.2.1.

After shutting down the VPN, 192.168.2.1 remains in resolv.conf.

What am I overlooking here?
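
(A minimal sketch, assuming CentOS 7's NetworkManager, of one common way to
stop NM and any VPN connection it manages from rewriting /etc/resolv.conf;
this is just one possible approach, not necessarily the right fix here:)

# Tell NetworkManager to stop managing /etc/resolv.conf entirely
printf '[main]\ndns=none\n' > /etc/NetworkManager/conf.d/90-dns-none.conf
systemctl restart NetworkManager
# Then keep a hand-maintained resolv.conf pointing at the RPi
echo "nameserver 192.168.2.2" > /etc/resolv.conf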

Now the not-so-CentOS-related question:
I've changed the DHCP settings in my router so it should deliver
192.168.2.2 to the DHCP clients instead of 192.168.2.1. And it does,
sort of: all the systems that use DHCP are now configured with two
DNS server addresses, 192.168.2.2 and 192.168.2.1, and I have no
clue why 192.168.2.1 is still showing. Both Windows and Linux systems
show this behavior.
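
(A hedged way to see what the DHCP server is actually handing out on a Linux
client; the interface name eth0 is a placeholder, and on Windows "ipconfig /all"
shows the equivalent:)

# DNS servers NetworkManager received for this interface
nmcli device show eth0 | grep IP4.DNS
# Or inspect the raw dhclient lease
grep domain-name-servers /var/lib/dhclient/*.leases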

Clues appreciated!

And thanks in advance.
-- 
 Fred Smith -- fre...@fcshome.stoneham.ma.us -
   But God demonstrates his own love for us in this: 
 While we were still sinners, 
  Christ died for us.
--- Romans 5:8 (niv) --
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS-virt] qemu-kvm-ev ppc64le release builds

2017-04-19 Thread Lance Albertson
Hi,

We're using qemu-kvm-ev on ppc64le and I've noticed that it's included in
the extras repo for ppc64le but in the qemu-kvm-ev repo for x86_64. I've also
noticed that the ppc64le version is lagging behind x86_64. I see that ppc64le
is being built for this [1]; however, it isn't tagged for virt7-kvm-common-release
and thus isn't showing up under /virt/kvm-common on the mirrors.
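
(A hedged sketch of checking the tagging directly, assuming a configured
'cbs' koji client for cbs.centos.org; the command just lists which builds
carry a given tag:)

cbs list-tagged virt7-kvm-common-release qemu-kvm-ev
# the corresponding -testing tag name below is a guess
cbs list-tagged virt7-kvm-common-testing qemu-kvm-ev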

Is there any particular reason why this is happening? If possible, I can
certainly provide some testing to get this moving along.

Thank you!

[1] http://cbs.centos.org/koji/packageinfo?packageID=539

-- 
Lance Albertson
Director
Oregon State University | Open Source Lab
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] PUPPET - group IDS

2017-04-19 Thread Leroy Tennison
I'm not familiar with the syntax you're using but the below worked for me using 
'puppet apply grp-usr.pp' on my laptop where grp-usr.pp contained:

group { 'poc':
  ensure  => present,
  gid     => '1002'
}

user { 'one':
  ensure  => present,
  uid     => '1005',
  gid     => '1002',
  require => Group['poc']
}

user { 'two':
  ensure  => present,
  uid     => '1006',
  gid     => '1002',
  require => Group['poc']
}

The run produced no errors and

grep poc /etc/group

produced:

poc:x:1002:

with

egrep 'one|two' /etc/passwd

producing (with a couple of extraneous entries):

nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
whoopsie:x:109:116::/nonexistent:/bin/false
two:x:1006:1002::/home/two:
one:x:1005:1002::/home/one:



- Original Message -
From: "Paul Heinlein" 
To: "centos" 
Sent: Wednesday, April 19, 2017 4:20:08 PM
Subject: Re: [CentOS] PUPPET - group IDS

On Wed, 19 Apr 2017, Ian Diddams wrote:

> hope this comes under the remit of this mailing list...
>
>
>
> We use puppet, and I'm trying to come up with "code" that will create two user
> accounts with a shared group ID, e.g.
> user1 with UID 1000, user2 with UID 1001,
> but I would like them BOTH to share the GID of 2000.
> I've tried the following:
> accounts::groups:
>     jointgroup:
>         gid: '2000'
> accounts::users:
>     user1:
>         uid: '1000'
>         gid: '2000'
>         home: '/home/user1'
>         shell: '/bin/bash'
>         password: ''
>     user2:
>         uid: '1001'
>         gid: '200'
>         home: '/home/user2'
>         shell: '/bin/bash'
>         password: ''
> But when I try to use this, puppet agent -tv complains when trying to create
> user2 that GID 2000 is already used.
>
> how may I manage this?

I haven't used the "allowdupe" option, so I don't know if it works for 
GIDs, but supposedly this works:

   user { 'user1':
 uid => 1000, gid => 2000, ...,
 allowdupe => true
   }

   user { 'user2':
 uid => 1001, gid => 2000, ...,
 allowdupe => true
   }

In YAML-ese, I guess you'd just add

accounts::users:
   user1:
 allowdupe: 'true'

-- 
Paul Heinlein <> heinl...@madboa.com <> http://www.madboa.com/
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] centos 7 and nvme

2017-04-19 Thread jsl6uy js16uy
Sweet!
Thanks much for the update, sir! We are locked on a version (a repo/package
time slice within the CentOS 7.2 release run), but this could be a reason to
push the pointer forward in our infrastructure.
I can live with those headaches.
Thanks again.
Will report back if we do anything beyond normal tweaking.

On Wed, Apr 19, 2017 at 4:23 PM, Jonathan Billings 
wrote:

> On Apr 19, 2017, at 4:25 PM, jsl6uy js16uy  wrote:
> > Hello all, and hope all is well
> > Has anyone installed / on an nvme ssd for Cent 7? Would anyone know if
> that
> > is supported?
> > I have installed using Arch Linux, but at the time, mid last year, had to
> > patch grub to recognize nvme. Arch is obviously running a much more
> recent
> > kernel.
> > Not afraid to do some empirical leg work. Just asking if anyone had tried
> > already.
>
> I installed RHEL 7.3 (same kernel as CentOS7) and it worked fine except
> for the fact that ‘efibootmgr’ defaults to using /dev/sda, and the disk is
> called /dev/nvme0n1, so doing stuff like changing the default boot back to
> Windows (dual boot) didn’t work without some tweaking.
>
> --
> Jonathan Billings 
>
>
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
>
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS-es] Log analysis.

2017-04-19 Thread Wilmer Arambula
Good afternoon, what is the best program for log analysis on CentOS 7?

Regards,

-- 
*Wilmer Arambula. *
___
CentOS-es mailing list
CentOS-es@centos.org
https://lists.centos.org/mailman/listinfo/centos-es


Re: [CentOS] centos 7 and nvme

2017-04-19 Thread Jonathan Billings
On Apr 19, 2017, at 4:25 PM, jsl6uy js16uy  wrote:
> Hello all, and hope all is well
> Has anyone installed / on an nvme ssd for Cent 7? Would anyone know if that
> is supported?
> I have installed using Arch Linux, but at the time, mid last year, had to
> patch grub to recognize nvme. Arch is obviously running a much more recent
> kernel.
> Not afraid to do some empirical leg work. Just asking if anyone had tried
> already.

I installed RHEL 7.3 (same kernel as CentOS7) and it worked fine except for the 
fact that ‘efibootmgr’ defaults to using /dev/sda, and the disk is called 
/dev/nvme0n1, so doing stuff like changing the default boot back to Windows 
(dual boot) didn’t work without some tweaking.
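
(For anyone hitting the same thing, a hedged sketch of pointing efibootmgr at
the NVMe disk explicitly; the partition number, label, and loader path below
are examples and will differ per install:)

# List current UEFI boot entries and boot order
efibootmgr -v
# Create an entry on the NVMe disk instead of the /dev/sda default
efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Windows Boot Manager" \
    -l '\EFI\Microsoft\Boot\bootmgfw.efi'
# Or just reorder existing entries / set a one-time next boot
efibootmgr -o 0001,0000
efibootmgr -n 0001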

--
Jonathan Billings 


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] PUPPET - group IDS

2017-04-19 Thread Paul Heinlein

On Wed, 19 Apr 2017, Ian Diddams wrote:


hope this comes under the remit of this mailing list...



We use puppet, and I'm trying to come up with "code" that will create two user
accounts with a shared group ID, e.g.
user1 with UID 1000, user2 with UID 1001,
but I would like them BOTH to share the GID of 2000.
I've tried the following:
accounts::groups:
    jointgroup:
        gid: '2000'
accounts::users:
    user1:
        uid: '1000'
        gid: '2000'
        home: '/home/user1'
        shell: '/bin/bash'
        password: ''
    user2:
        uid: '1001'
        gid: '200'
        home: '/home/user2'
        shell: '/bin/bash'
        password: ''
But when I try to use this, puppet agent -tv complains when trying to create
user2 that GID 2000 is already used.

How may I manage this?


I haven't used the "allowdupe" option, so I don't know if it works for 
GIDs, but supposedly this works:


  user { 'user1':
uid => 1000, gid => 2000, ...,
allowdupe => true
  }

  user { 'user2':
uid => 1001, gid => 2000, ...,
allowdupe => true
  }

In YAML-ese, I guess you'd just add

accounts::users:
  user1:
allowdupe: 'true'

--
Paul Heinlein <> heinl...@madboa.com <> http://www.madboa.com/
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] PUPPET - group IDS

2017-04-19 Thread Ian Diddams
hope this comes under the remit of this mailing list...



We use puppet, and I'm trying to come up with "code" that will create two user
accounts with a shared group ID, e.g.
user1 with UID 1000, user2 with UID 1001,
but I would like them BOTH to share the GID of 2000.
I've tried the following:
accounts::groups:
    jointgroup:
        gid: '2000'
accounts::users:
    user1:
        uid: '1000'
        gid: '2000'
        home: '/home/user1'
        shell: '/bin/bash'
        password: ''
    user2:
        uid: '1001'
        gid: '200'
        home: '/home/user2'
        shell: '/bin/bash'
        password: ''
But when I try to use this, puppet agent -tv complains when trying to create
user2 that GID 2000 is already used.

How may I manage this?
(Obviously I could have all users with their own GID and add users to a separate
group m... but this is just tidier to my mind?)
cheers
didds
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] centos 7 and nvme

2017-04-19 Thread jsl6uy js16uy
Hello all, and hope all is well.
Has anyone installed / (the root filesystem) on an NVMe SSD for CentOS 7? Would
anyone know if that is supported?
I have installed using Arch Linux, but at the time, mid last year, I had to
patch grub to recognize NVMe. Arch is obviously running a much more recent
kernel.
Not afraid to do some empirical leg work. Just asking if anyone had tried
already.

thanks all for any/all help
regards
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] OT: systemd Poll - So Long, and Thanks for All the fish.

2017-04-19 Thread Chris Murphy
On Wed, Apr 19, 2017 at 5:21 AM, James B. Byrne  wrote:
>
> On Mon, April 17, 2017 17:13, Warren Young wrote:
>
>>
>> Also, I’ll remind the list that one of the *prior* times the systemd
>> topic came up, I was the one reminding people that most of our jobs
>> summarize as “Cope with change.”
>>
>
> At some point 'coping with change' is discovered to consume a
> disproportionate amount of resources for the benefits obtained.  In my
> sole opinion the Linux community appears to have a
> change-for-change-sake fetish. This is entirely appropriate for an
> experimental project.  The mistake that I made many years ago was
> inferring that Linux was nonetheless suitable for business.
>
> To experimenters a ten year product cycle may seem an eternity. To
> many organisations ten years is barely time to work out all the kinks
> and adapt internal processes to automated equivalents.  And the
> smaller the business the more applicable that statement becomes.
>
> I do not have any strong opinion about systemd as I have virtually no
> experience with it.  But the regular infliction of massively
> disruptive changes to fundamental software has convinced us that Linux
> does not meet our business needs. Systemd and Upstart are not the
> cause of that.  They are symptoms of a fundamental difference of focus
> between what our firm needs and what the Linux community wants.

Apple has had massively disruptive changes on OS X and iOS. Windows
has had a fairly disruptive set of changes in Windows 10. About the
only things that don't change are industrial OS's.

When it comes to breaking user space, there are explicit rules against
that in Linux kernel development. And internally consistent API/ABI
stability is something you're getting in CentOS/RHEL kernels; it's one
of the reasons these distributions exist. But the idea that Windows and
OS X have better overall API stability is, I think, untrue, having
spoken to a very wide assortment of developers who build primarily
user-space apps.

What does happen is that in-kernel ABI changes can break your driver,
as there's no upstream promise of ABI compatibility within the kernel
itself. The effect of this is very real on, say, Android, and might be
one of the reasons for Google's Fuchsia project, which puts most of the
drivers, including video drivers, into user space. And Microsoft also
rarely changes things in their kernel, so again drivers tend not to
break.


-- 
Chris Murphy
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-virt] virsh error: driver is not whitelisted

2017-04-19 Thread Marco Aurelio L. Gomes

Hi,

As you suggested, I built the qemu-kvm-ev package from the SRPM, modified
the configure script to include the vvfat driver in the whitelist, and
installed the new binaries; now the instance starts.


Thanks for your help!

Marco Gomes
NCC - UNESP

On 2017-04-19 13:52, Johnny Hughes wrote:

On 04/19/2017 10:00 AM, Marco Aurelio L. Gomes wrote:

Hi,

I'm using virsh to instance a VM in my environment, but I'm running into
some issues.




I got the following error:

error: Failed to create domain from domain.xml
error: internal error: qemu unexpectedly closed the monitor:
2017-04-17T17:00:37.012369Z qemu-kvm: -drive
file=fat:/usr/src/dpdk-stable-16.11.1,if=none,id=drive-virtio-disk1,readonly=on:
Driver 'vvfat' is not whitelisted

If I comment out the disk that causes this error, the instance starts without
error. Is there a way to whitelist this vvfat driver to instance this VM?


And the strange thing about this error is that when I check the
available drives, there is vvfat in the list:

/usr/libexec/qemu-kvm -drive format=?
Supported formats: ftps http null-aio null-co file quorum blkverify
vvfat blkreplay qed raw qcow2 bochs dmg vmdk parallels vhdx vpc https
sheepdog host_cdrom ssh host_device nbd gluster qcow iscsi rbd tftp 
ftp

vdi blkdebug luks cloop

Here some information about the environment:

cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)

virsh --version
2.0.0

/usr/libexec/qemu-kvm --version
QEMU emulator version 2.6.0 (qemu-kvm-ev-2.6.0-28.el7_3.6.1), 
Copyright

(c) 2003-2008 Fabrice Bellard

Thanks in advance for the help




If you look here:

https://rwmj.wordpress.com/2015/09/25/virt-v2v-libguestfs-and-qemu-remote-drivers-in-rhel-7/

The things supported by qemu-img and qemu are not necessarily the same.

If you look at the last qemu-kvm.spec file, you can see what is set to
rw and ro:

https://git.centos.org/raw/rpms!qemu-kvm/976a86fff9adb9a2a6968b9f73fe9a615266f59b/SPECS!qemu-kvm.spec

--block-drv-rw-whitelist=qcow2,raw,file,host_device,blkdebug,nbd,iscsi,gluster,rbd


--block-drv-ro-whitelist=vmdk,vhdx,vpc,ssh,https

So, those listed files are the only ones that will work.

You would need to recompile the qemu-kvm RPMs after modifying those
whitelist lines in the spec file if you want to add things to the 
whitelist.





___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt

___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Xen C6 kernel 4.9.13 and testing 4.9.15 only reboots.

2017-04-19 Thread Johnny Hughes
On 04/19/2017 12:18 PM, PJ Welsh wrote:
> 
> On Wed, Apr 19, 2017 at 5:40 AM, Johnny Hughes  > wrote:
> 
> On 04/18/2017 12:39 PM, PJ Welsh wrote:
> > Here is something interesting... I went through the BIOS options and
> > found that one R710 that *is* functioning only differed in that "Logical
> > Processor"/Hyperthreading was *enabled* while the one that is *not*
> > functioning had HT *disabled*. Enabled Logical Processor and the system
> > starts without issue! I've rebooted 3 times now without issue.
> > Dell R710 BIOS version 6.4.0
> > 2x Intel(R) Xeon(R) CPU L5639  @ 2.13GHz
> > 4.9.20-26.el7.x86_64 #1 SMP Tue Apr 4 11:19:26 CDT 2017 x86_64 x86_64
> > x86_64 GNU/Linux
> >
> 
> Outstanding .. I have now released a 4.9.23-26.el6 and .el7 to the
> system as normal updates.  It should be available later today.
> 
> 
> 
>  
> I've verified with a second Dell R710 that disabling
> Hyperthreading/Logical Processor causes the primary xen booting kernel
> to fail and reboot. Consequently, enabling allows for the system to
> start as expected and without any issue:
> Current tested kernel was: 4.9.13-22.el7.x86_64 #1 SMP Sun Feb 26
> 22:15:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
> 
> I just attempted an update and the 4.9.23-26 is not yet up. Does this
> update address the Hyperthreading issue in any way?
> 

I don't think so .. at least I did not specifically add anything to do so.

You can get it here for testing:

https://buildlogs.centos.org/centos/7/virt/x86_64/xen/

(or from /6/ as well for CentOS-6)
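
(One hedged way to pull that in for testing is a throwaway repo definition;
the repo name below is made up, and the packages there are unsigned, hence
gpgcheck=0:)

cat > /etc/yum.repos.d/virt-xen-testing.repo <<'EOF'
[virt-xen-testing]
name=CentOS Virt SIG Xen testing (buildlogs, unsigned)
baseurl=https://buildlogs.centos.org/centos/7/virt/x86_64/xen/
enabled=0
gpgcheck=0
EOF
yum --enablerepo=virt-xen-testing update kernel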

Not sure why it did not go out on the signing run .. will check that server.





___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-announce] CESA-2017:1095 Important CentOS 7 bind Security Update

2017-04-19 Thread Johnny Hughes

CentOS Errata and Security Advisory 2017:1095 Important

Upstream details at : https://rhn.redhat.com/errata/RHSA-2017-1095.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

x86_64:
41aac9c0065db17450f133fb0e744b2e0b9d5810cd3e3b33f7ac92bb67b3d953  
bind-9.9.4-38.el7_3.3.x86_64.rpm
37aafd25848fabc139431ffde194c5fe87c2bf6021e2ff16bceb80a6d3775f4d  
bind-chroot-9.9.4-38.el7_3.3.x86_64.rpm
e06f209f8ca60f3e631b59997aa135fe522355a11527a96bba680045aa916239  
bind-devel-9.9.4-38.el7_3.3.i686.rpm
ec6d11535efa4dfd0dc0317656eb9bb889ad38566be166ceed04bd67936462b1  
bind-devel-9.9.4-38.el7_3.3.x86_64.rpm
7038fb1ea60b7299643824b25e39af1cc4573634c08fea402ebf33123af311bb  
bind-libs-9.9.4-38.el7_3.3.i686.rpm
2442703f191c4fd5ed3e36fa7b267da0b52b15414f4c4551b621b00955ee6411  
bind-libs-9.9.4-38.el7_3.3.x86_64.rpm
626cedc17e03b8575562e78d7717a32d82dd513123239b7ff73c18738d2ff9c5  
bind-libs-lite-9.9.4-38.el7_3.3.i686.rpm
d97c677eeef268520e10951d7a1de580e240ca17cf6e0e7b63249f715a2de391  
bind-libs-lite-9.9.4-38.el7_3.3.x86_64.rpm
8a40293dc07d63a009c9559818f3977b55201c04502c054078d5b4f659b9f6b9  
bind-license-9.9.4-38.el7_3.3.noarch.rpm
58193a8c1660325e0d9a1b63c91fc6dae591d7bdf3c41232462c3e488612316e  
bind-lite-devel-9.9.4-38.el7_3.3.i686.rpm
9a8bb442c028bd7682fa9710ca085548083f5bbf602909f52f21887d9314c8d3  
bind-lite-devel-9.9.4-38.el7_3.3.x86_64.rpm
91c55d24adbac7682a3fd5ce096e835cda7a05ae069564c31268375cb474a638  
bind-pkcs11-9.9.4-38.el7_3.3.x86_64.rpm
0acf41b2baddb00640934e69f64931eb908064c2b890bfc6edb3e84a21c005d9  
bind-pkcs11-devel-9.9.4-38.el7_3.3.i686.rpm
f1b518addd7717a5fde9e8972c0c71d9b28a0129bba8201ca82fe6d7735e542d  
bind-pkcs11-devel-9.9.4-38.el7_3.3.x86_64.rpm
00aca50af4e66b5db2be7e0a37fbe3924d48ba76e20b688ed70cdb842edce924  
bind-pkcs11-libs-9.9.4-38.el7_3.3.i686.rpm
bf9f7f748442c78f3483443450af89bc77b601338041b71602bdce0b306394cf  
bind-pkcs11-libs-9.9.4-38.el7_3.3.x86_64.rpm
462fe7bcf54effb70c91c107a46c5e04748b90fcc10550b2fcce2699278d2da9  
bind-pkcs11-utils-9.9.4-38.el7_3.3.x86_64.rpm
ccee502f2da49959e99e1b62da8c4cd9b4382c4ca79408e47f8c3ae89d265787  
bind-sdb-9.9.4-38.el7_3.3.x86_64.rpm
4b9183c9d659864810c904be5f85cc84e8a594b7db2539ebdec2a80347e0547f  
bind-sdb-chroot-9.9.4-38.el7_3.3.x86_64.rpm
45d5c457513673354589679542b03d7523cf40e211eee0fafc40005f7ed0cae2  
bind-utils-9.9.4-38.el7_3.3.x86_64.rpm

Source:
ba64bef4d94b3bb7e4fd00b95d77975d60b0a40ed5cc00fc1e1fe93d7971093b  
bind-9.9.4-38.el7_3.3.src.rpm



-- 
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net
Twitter: @JohnnyCentOS

___
CentOS-announce mailing list
CentOS-announce@centos.org
https://lists.centos.org/mailman/listinfo/centos-announce


Re: [CentOS-virt] Xen C6 kernel 4.9.13 and testing 4.9.15 only reboots.

2017-04-19 Thread PJ Welsh
On Wed, Apr 19, 2017 at 5:40 AM, Johnny Hughes  wrote:

> On 04/18/2017 12:39 PM, PJ Welsh wrote:
> > Here is something interesting... I went through the BIOS options and
> > found that one R710 that *is* functioning only differed in that "Logical
> > Processor"/Hyperthreading was *enabled* while the one that is *not*
> > functioning had HT *disabled*. Enabled Logical Processor and the system
> > starts without issue! I've rebooted 3 times now without issue.
> > Dell R710 BIOS version 6.4.0
> > 2x Intel(R) Xeon(R) CPU L5639  @ 2.13GHz
> > 4.9.20-26.el7.x86_64 #1 SMP Tue Apr 4 11:19:26 CDT 2017 x86_64 x86_64
> > x86_64 GNU/Linux
> >
>
> Outstanding .. I have now released a 4.9.23-26.el6 and .el7 to the
> system as normal updates.  It should be available later today.
>
> 
>
>
I've verified with a second Dell R710 that disabling Hyperthreading/Logical
Processor causes the primary xen booting kernel to fail and reboot.
Consequently, enabling allows for the system to start as expected and
without any issue:
Current tested kernel was: 4.9.13-22.el7.x86_64 #1 SMP Sun Feb 26 22:15:59
UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
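
(A quick sketch for confirming from the running system whether
Hyperthreading/Logical Processor is actually enabled:)

# "Thread(s) per core: 2" means HT/logical processors are on
lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'
# Or compare sibling count vs. core count directly
grep -E 'siblings|cpu cores' /proc/cpuinfo | sort -u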

I just attempted an update and the 4.9.23-26 is not yet up. Does this
update address the Hyperthreading issue in any way?

Thanks
PJ
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-announce] CEBA-2017:0993 CentOS 7 libqb BugFix Update

2017-04-19 Thread Johnny Hughes

CentOS Errata and Bugfix Advisory 2017:0993 

Upstream details at : https://rhn.redhat.com/errata/RHBA-2017-0993.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

x86_64:
56037bd50c881f91b88ea955adc898464006bec6e0386578a4ca16485bacb6ae  
libqb-1.0-1.el7_3.1.i686.rpm
ee103fb75c84c0a99d6b8171a13843b49366b03e0c3ab11cac3b3cdadfea320b  
libqb-1.0-1.el7_3.1.x86_64.rpm
4a7e07c78aa7ce725df4874ae954f64406527bf89ee8e35f3d5165729481571a  
libqb-devel-1.0-1.el7_3.1.i686.rpm
ac32c572e4401fdc944dcb170eeac4a6792e2090732fcfa739eb54985edd205c  
libqb-devel-1.0-1.el7_3.1.x86_64.rpm

Source:
ba64af866558466177a33cf4154bf4d601efd42b3a713cb11691a9845bfb5fc5  
libqb-1.0-1.el7_3.1.src.rpm



-- 
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net
Twitter: @JohnnyCentOS

___
CentOS-announce mailing list
CentOS-announce@centos.org
https://lists.centos.org/mailman/listinfo/centos-announce


[CentOS-announce] CEBA-2017:0992 CentOS 7 lvm2 BugFix Update

2017-04-19 Thread Johnny Hughes

CentOS Errata and Bugfix Advisory 2017:0992 

Upstream details at : https://rhn.redhat.com/errata/RHBA-2017-0992.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

x86_64:
d1807a28bed10ec7acf8011356ed5d75b99fa13b530f92051d28d043414db5b7  
cmirror-2.02.166-1.el7_3.4.x86_64.rpm
5fb1880fa7981be0ea09fc18b271b017ec507e2b232dd7bdccfce2846e6be56a  
device-mapper-1.02.135-1.el7_3.4.x86_64.rpm
1dc2535d0a6587f364a273f55cad983787ab5882786738d65411568d28c72bf4  
device-mapper-devel-1.02.135-1.el7_3.4.i686.rpm
ee87fd2344fe891a69bce92bf8738776deda95713a717063e7057a069ea199ff  
device-mapper-devel-1.02.135-1.el7_3.4.x86_64.rpm
5819cecf9b2d40c01eb173f5a7df4038a46580a8378bb493656d364de9ea4c5b  
device-mapper-event-1.02.135-1.el7_3.4.x86_64.rpm
cd5d12827b85bf6425502a64a5af1c91a7e40296f595ea9639c92f22e6b144c7  
device-mapper-event-devel-1.02.135-1.el7_3.4.i686.rpm
bd6135c60ab01807e01964ce85f731b79e71c016d6a83bdb7f1cee5081936661  
device-mapper-event-devel-1.02.135-1.el7_3.4.x86_64.rpm
2783956c6691566291268d619108722f80fc1dd2aff8204f7e214181c6d94b7a  
device-mapper-event-libs-1.02.135-1.el7_3.4.i686.rpm
26710f523a61eb0c0729b386e2b458e83089c8b4b16a7aada701f4a6dcb16295  
device-mapper-event-libs-1.02.135-1.el7_3.4.x86_64.rpm
5b5b11fa697db5e78c6d3e3c6a4ebe60186d49909307a16381af8c03807f46a6  
device-mapper-libs-1.02.135-1.el7_3.4.i686.rpm
64738ef54103a9234506d99a45316546de8aa7bada7d025cfc002ca9a7ed4499  
device-mapper-libs-1.02.135-1.el7_3.4.x86_64.rpm
ea44f6a54fb238fb5b7892c0d1027cb858fae6d283164ef380a5b7bda70e96b1  
lvm2-2.02.166-1.el7_3.4.x86_64.rpm
f3d5af879b3c78299d3da9e5a9bbc582aa2d3908540585d324081fe3e72acf55  
lvm2-cluster-2.02.166-1.el7_3.4.x86_64.rpm
dd9bef904dade9ca180d7ffb6ea02ada6f7eb42b3a058fd35401252871824b2b  
lvm2-devel-2.02.166-1.el7_3.4.i686.rpm
60fa65b9bf3ade31a9ba36eb3e1faa6848ba67165490769f3ed885a5081bac2e  
lvm2-devel-2.02.166-1.el7_3.4.x86_64.rpm
c1ce43ade1ce9c403323c67be610e786c3ddfc83172af4457d09cc02903c5b91  
lvm2-libs-2.02.166-1.el7_3.4.i686.rpm
5612c13ecc039fe4230d773a0e37cfc754b452bc81a0d789150a540b887668d2  
lvm2-libs-2.02.166-1.el7_3.4.x86_64.rpm
0343ead658577d20ab2cbb67abdb3a7bef378c456fca7726679e230ac2786aa8  
lvm2-lockd-2.02.166-1.el7_3.4.x86_64.rpm
5c024206e6d836c81852be73d336f159b75878a58936d8e8a0b633938263db47  
lvm2-python-libs-2.02.166-1.el7_3.4.x86_64.rpm
90b3ab67f6c6053318c92864d976227d6617aab0fd4cb12c838b0cdf9af90a8f  
lvm2-sysvinit-2.02.166-1.el7_3.4.x86_64.rpm

Source:
2d935efd7141351d2fa1144a568dbff5188103cac45f534fa95c87b86bb53185  
lvm2-2.02.166-1.el7_3.4.src.rpm



-- 
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net
Twitter: @JohnnyCentOS

___
CentOS-announce mailing list
CentOS-announce@centos.org
https://lists.centos.org/mailman/listinfo/centos-announce


[CentOS-announce] CESA-2017:0987 Important CentOS 7 qemu-kvm Security Update

2017-04-19 Thread Johnny Hughes

CentOS Errata and Security Advisory 2017:0987 Important

Upstream details at : https://rhn.redhat.com/errata/RHSA-2017-0987.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

x86_64:
8f5b6e393d90eaf271f3dc164c440e7e3b2440f2fd53e377ae5b83c8b8790e4e  
qemu-img-1.5.3-126.el7_3.6.x86_64.rpm
2156b1d4b2a1144325325df2359e59dcdc75731fbaea24807118f81194ca830f  
qemu-kvm-1.5.3-126.el7_3.6.x86_64.rpm
9c0235699ce78451732fd91771db2205c1ab0a891b0a10feff62d79e9a0d2ae1  
qemu-kvm-common-1.5.3-126.el7_3.6.x86_64.rpm
3f3e04d49c3f9dabb00ff323e6fe66b365e5cd76dd24fa913c716d5f8307efbe  
qemu-kvm-tools-1.5.3-126.el7_3.6.x86_64.rpm

Source:
87af97f4a3faad567481be22a3fc96e8a6134a25e31f52b9afa26d256c260e5b  
qemu-kvm-1.5.3-126.el7_3.6.src.rpm



-- 
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net
Twitter: @JohnnyCentOS

___
CentOS-announce mailing list
CentOS-announce@centos.org
https://lists.centos.org/mailman/listinfo/centos-announce


Re: [CentOS-virt] virsh error: driver is not whitelisted

2017-04-19 Thread Johnny Hughes
On 04/19/2017 10:00 AM, Marco Aurelio L. Gomes wrote:
> Hi,
> 
> I'm using virsh to instance a VM in my environment, but I'm running into
> some issues.


> I got the following error:
> 
> error: Failed to create domain from domain.xml
> error: internal error: qemu unexpectedly closed the monitor:
> 2017-04-17T17:00:37.012369Z qemu-kvm: -drive
> file=fat:/usr/src/dpdk-stable-16.11.1,if=none,id=drive-virtio-disk1,readonly=on:
> Driver 'vvfat' is not whitelisted
> 
> If I comment out the disk that causes this error, the instance starts without
> error. Is there a way to whitelist this vvfat driver to instance this VM?
> 
> And the strange thing about this error is that when I check the
> available drives, there is vvfat in the list:
> 
> /usr/libexec/qemu-kvm -drive format=?
> Supported formats: ftps http null-aio null-co file quorum blkverify
> vvfat blkreplay qed raw qcow2 bochs dmg vmdk parallels vhdx vpc https
> sheepdog host_cdrom ssh host_device nbd gluster qcow iscsi rbd tftp ftp
> vdi blkdebug luks cloop
> 
> Here some information about the environment:
> 
> cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> 
> virsh --version
> 2.0.0
> 
> /usr/libexec/qemu-kvm --version
> QEMU emulator version 2.6.0 (qemu-kvm-ev-2.6.0-28.el7_3.6.1), Copyright
> (c) 2003-2008 Fabrice Bellard
> 
> Thanks in advance for the help
> 


If you look here:

https://rwmj.wordpress.com/2015/09/25/virt-v2v-libguestfs-and-qemu-remote-drivers-in-rhel-7/

The things supported by qemu-img and qemu are not necessarily the same.

If you look at the last qemu-kvm.spec file, you can see what is set to
rw and ro:

https://git.centos.org/raw/rpms!qemu-kvm/976a86fff9adb9a2a6968b9f73fe9a615266f59b/SPECS!qemu-kvm.spec

--block-drv-rw-whitelist=qcow2,raw,file,host_device,blkdebug,nbd,iscsi,gluster,rbd


--block-drv-ro-whitelist=vmdk,vhdx,vpc,ssh,https

So, those listed files are the only ones that will work.

You would need to recompile the qemu-kvm RPMs after modifying those
whitelist lines in the spec file if you want to add things to the whitelist.
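
(A hedged sketch of that rebuild workflow on CentOS 7; package names and the
exact spec edit vary between the qemu-kvm and qemu-kvm-ev SRPMs, and the
source repo must be reachable for yumdownloader, otherwise fetch the SRPM by
hand:)

yumdownloader --source qemu-kvm
rpm -ivh qemu-kvm-*.src.rpm
cd ~/rpmbuild/SPECS
# edit the --block-drv-rw-whitelist= / --block-drv-ro-whitelist= lines
# in qemu-kvm.spec to add vvfat (or whatever is needed), then:
yum-builddep -y qemu-kvm.spec
rpmbuild -ba qemu-kvm.spec
# install the rebuilt packages from ~/rpmbuild/RPMS/x86_64/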





___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] anaconda/kickstart: bonding device not created as expected

2017-04-19 Thread Tris Hoar

On 18/04/2017 15:54, Frank Thommen wrote:

Hi,

I am currently struggling with the right way to configure a bonding
device via kickstart (via PXE).

I am installing servers which have "eno" network interfaces.  Instead of
the expected bonding device with two active slaves (bonding mode is
balance-alb), I get a bonding device with only one active slave and an
independent, non-bonded network device.  Also the bonding device gets
its MAC address from the second instead of from the first device.

I appreciate any hint (or rtfm with the name of the correct fm ;-) on
how to achieve the desired setup through kickstart.  Please find the
used PXE and kickstart settings and resulting network configuration below.

I did this with CentOS 7.2.1511.  We cannot go further due to Infiniband
and lustre drivers which are currently only supported for this CentOS
7.x version

Cheers
frank

--

The used PXE configuration is

LABEL CentOS-7
kernel centos-7/vmlinuz
append initrd=centos-7/initrd.img ip=dhcp nameserver=xx.xx.xx.xx
ksdevice=eno1 inst.repo=http://our.mirror.server/7/os/x86_64
inst.ks.sendmac inst.ks=http://our.kickstart.server/ks.cgi


and the network settings in the kickstart file are

network --device bond0 --bondslaves=eno1,eno2
--bondopts=mode=balance-alb --bootproto=dhcp --hostname=myhost --activate


I would have expected to get a bonding device with eno1 and eno2 as
slave devices, the bonding device inheriting the MAC address from eno1
(otherwise DHCP won't work).  Instead the result is a bonding device
with eno2 as - sole - slave device and eno1 as a single active device
with the main IP address of the host:


bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
inet6 fe80::42f2:e9ff:fec7:b5f1  prefixlen 64  scopeid 0x20<link>
ether 40:f2:e9:c7:b5:f1  txqueuelen 0  (Ethernet)
RX packets 29  bytes 5274 (5.1 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 39  bytes 3486 (3.4 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet xx.xx.xx.xx  netmask 255.255.255.0  broadcast xx.xx.xx.xx
inet6 fe80::42f2:e9ff:fec7:b5f0  prefixlen 64  scopeid 0x20<link>
ether 40:f2:e9:c7:b5:f0  txqueuelen 1000  (Ethernet)
RX packets 4303  bytes 798163 (779.4 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1686  bytes 481585 (470.2 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
device interrupt 16

eno2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
ether 40:f2:e9:c7:b5:f1  txqueuelen 1000  (Ethernet)
RX packets 29  bytes 5274 (5.1 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 39  bytes 3486 (3.4 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
device interrupt 17


The ifcfg-files look basically ok, but there are two for the eno1 device.

ifcfg of the bonding device:

$ cat ifcfg-bond0
# Generated by parse-kickstart
IPV6INIT="yes"
DHCP_HOSTNAME="myhost"
NAME="Bond connection bond0"
BONDING_MASTER="yes"
BOOTPROTO="dhcp"
BONDING_OPTS="mode=balance-alb"
DEVICE="bond0"
TYPE="Bond"
ONBOOT="yes"
UUID="35910614-4a7c-43c9-8e44-dcf44b783358"
$


ifcfg of the two slave devices

$ cat ifcfg-bond0_slave_1
# Generated by parse-kickstart
NAME="bond0 slave 1"
MASTER="35910614-4a7c-43c9-8e44-dcf44b783358"
HWADDR="40:f2:e9:c7:b5:f0"
TYPE="Ethernet"
ONBOOT="yes"
UUID="f3a0a007-861c-42b6-8264-6efba62232ce"
$


$ cat ifcfg-bond0_slave_2
# Generated by parse-kickstart
NAME="bond0 slave 2"
MASTER="35910614-4a7c-43c9-8e44-dcf44b783358"
HWADDR="40:f2:e9:c7:b5:f1"
TYPE="Ethernet"
ONBOOT="yes"
UUID="ee3f7c84-d4cb-412e-887d-6b1c753eb913"
$


ifcfg of eno1 (which physically has the MAC address 40:f2:e9:c7:b5:f0,
which is the same as in ifcfg-bond0_slave_1):

$ cat ifcfg-eno1
# Generated by dracut initrd
NAME="eno1"
DEVICE="eno1"
ONBOOT=yes
NETBOOT=yes
UUID="d20645a0-8093-45f3-9630-d0249f76726b"
IPV6INIT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
DNS1="192.55.188.177"
$



Hi Frank,

This is from my satellite kickstart where I'm building the bond at the
point of PXE booting, using static addressing (I'm working on doing this with
DHCP and tagged VLANs but currently can't get to the hardware needed
since I messed up the BMC config :( )


LABEL linux
KERNEL boot/RedHat-7.3-x86_64-vmlinuz
APPEND initrd=boot/RedHat-7.3-x86_64-initrd.img 
ks=http://example.com/host.ks ks.device=bootif network ks.sendmac 
bond=bond0:eno1,eno2:mode=802.3ad vlan=bond0.10:bond0 
ip=10.10.0.2::10.10.0.1:255.255.255.0:host.example.com:bond0.10:none 
nameserver=10.10.0.1



Then in the KS we have
network  --bootproto=static --device=link --gateway=10.10.0.1 
--hostname=host.example.com --ip=10.10.0.2 
--nameserver=10.10.0.1,10.11.0.1 --netmask=255.255.255.0


It should be fairly simple to convert that to use DHCP as 

[CentOS-virt] virsh error: driver is not whitelisted

2017-04-19 Thread Marco Aurelio L. Gomes

Hi,

I'm using virsh to instance a VM in my environment, but I'm running into
some issues. I created the following domain file:

[The libvirt domain XML pasted here was mangled by the list archive, which
stripped the markup; only fragments survive (domain name 'demovm', UUID
4a9b3f53-fa2a-47f3-a757-dd87720d9d1d, 4194304 KiB of memory, an 'hvm' OS type,
the emulator /usr/libexec/qemu-kvm, and the attributes memAccess='shared' and
mode='client').]

When I run:

virsh create domain.xml

I got the following error:

error: Failed to create domain from domain.xml
error: internal error: qemu unexpectedly closed the monitor: 
2017-04-17T17:00:37.012369Z qemu-kvm: -drive 
file=fat:/usr/src/dpdk-stable-16.11.1,if=none,id=drive-virtio-disk1,readonly=on: 
Driver 'vvfat' is not whitelisted


If I comment out the disk that causes this error, the instance starts without
error. Is there a way to whitelist this vvfat driver to instance this VM?


And the strange thing about this error is that when I check the 
available drives, there is vvfat in the list:


/usr/libexec/qemu-kvm -drive format=?
Supported formats: ftps http null-aio null-co file quorum blkverify 
vvfat blkreplay qed raw qcow2 bochs dmg vmdk parallels vhdx vpc https 
sheepdog host_cdrom ssh host_device nbd gluster qcow iscsi rbd tftp ftp 
vdi blkdebug luks cloop


Here some information about the environment:

cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)

virsh --version
2.0.0

/usr/libexec/qemu-kvm --version
QEMU emulator version 2.6.0 (qemu-kvm-ev-2.6.0-28.el7_3.6.1), Copyright 
(c) 2003-2008 Fabrice Bellard


Thanks in advance for the help

Marco Aurelio Gomes
NCC - UNESP
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] [SPAM?] Re: kmod-jfs on Centos 6

2017-04-19 Thread H

On 04/18/2017 08:30 PM, m.r...@5-cent.us wrote:

H wrote:

A  couple of days ago I submitted a request to ElRepo and kmod-jfs is now
available for CentOS 7 as well.

On 04/12/2017 12:58 AM, H wrote:

Thank you, installed it and it worked fine. Now I am looking for the
same for CentOS 7... It did not look like you have that in your
repository?


I don't know about this - at home, I'm running C 6. Would this let me talk
to my Barnes&(ig)Noble Nook?

   mark

On 3/13/2017 1:09 PM, Nux! wrote:

yum -y install
http://mirrors.coreix.net/elrepo/elrepo/el6/x86_64/RPMS/kmod-jfs-0.0-1.el6.elrepo.x86_64.rpm

(that's for 64bit, adjust the url accordingly for 32bit)

it won't hose your system

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -

From: "H" 
To: "CentOS mailing list" 
Sent: Friday, 10 March, 2017 19:20:37
Subject: [CentOS] kmod-jfs on Centos 6
I am a bit of a noob with Linux and Centos but would like to be able
to access
an old external USB disk formatted JFS by OS/2. I have seen there is a
kmod-jfs
package on elrepo that ought to work with Centos 6 but am unsure how
to install
kmods without hosing my existing system...

If anyone would like to be so kind to give me a short how-to, I would
be very
grateful.

Thank you.

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos



___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


I doubt that. I was looking to transfer files from old hard disks formatted
using JFS under OS/2.
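
(Once kmod-jfs is installed, a minimal sketch of reading such a disk; the
device name /dev/sdb1 is a placeholder, and mounting read-only is just a
precaution:)

modprobe jfs
lsblk -f                       # find the JFS partition
mkdir -p /mnt/os2
mount -t jfs -o ro /dev/sdb1 /mnt/os2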

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS-announce] CESA-2017:0979 Moderate CentOS 6 libreoffice Security Update

2017-04-19 Thread Johnny Hughes

CentOS Errata and Security Advisory 2017:0979 Moderate

Upstream details at : https://rhn.redhat.com/errata/RHSA-2017-0979.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

i386:
d51563e3f68c496946cf48865de63fa37c6f75b45441527367441486483cdfe6  
autocorr-af-4.3.7.2-2.el6_9.1.noarch.rpm
eedfca4cf81181bca1d908ad8954a8add788f699ba707501c4d7e5690f3d1334  
autocorr-bg-4.3.7.2-2.el6_9.1.noarch.rpm
82d5be578ee70605365140ead4089f3ef7bef661a63f1b40ee3c584c03288fac  
autocorr-ca-4.3.7.2-2.el6_9.1.noarch.rpm
9335f2208789c38b58e79661316541d49fb234b621f47f85779d7be13fdc41bc  
autocorr-cs-4.3.7.2-2.el6_9.1.noarch.rpm
c15d5816161695a348ffc4785f19f3ebffdda49c0c6081da5e546454c3653464  
autocorr-da-4.3.7.2-2.el6_9.1.noarch.rpm
92142d08a38b7d0b1861b0c6fca530776e80d61e6b8f7d44fbfbe2d516e256a9  
autocorr-de-4.3.7.2-2.el6_9.1.noarch.rpm
1f36c6b84490f435c890b25328b6f12e8fe8bea90101448b8293734ab3b8d3a7  
autocorr-en-4.3.7.2-2.el6_9.1.noarch.rpm
59ef61f5c6b9606ec0b035c6b3dc9b9551cdbbf16b2c4132613d67e0ae7b1b3e  
autocorr-es-4.3.7.2-2.el6_9.1.noarch.rpm
ae1549f1c51685d4152909da95431e4e276451466a404ace861e3f5dbeffcb1c  
autocorr-fa-4.3.7.2-2.el6_9.1.noarch.rpm
8b8b4ea6dca1b628cfd0eced133016c67b5c38baa71d6d3e9472eb7d55a9d27e  
autocorr-fi-4.3.7.2-2.el6_9.1.noarch.rpm
6591faa45c1e9fab30509cf7b0120fcb68c0a91737694a53b364a583a0ce4fd7  
autocorr-fr-4.3.7.2-2.el6_9.1.noarch.rpm
57b14ed0c2e887bbfada4b864dd2d3722b92069296c2f0a68d870d20e2b75da6  
autocorr-ga-4.3.7.2-2.el6_9.1.noarch.rpm
d6da61f91a365b55356e75bc398e0da9141f49c93ac141d2e36be0e4b196db6f  
autocorr-hr-4.3.7.2-2.el6_9.1.noarch.rpm
e36727a04c43d6fada715722b2cdc8ae391cbdf28a6d5c3d826c50b4121eb759  
autocorr-hu-4.3.7.2-2.el6_9.1.noarch.rpm
6afcb021be37adea055aabcf5114764fc7306dcbb2c5bf64c3a36191b1356250  
autocorr-is-4.3.7.2-2.el6_9.1.noarch.rpm
24eb4aa62c7161f130affd1cf7c50adecddf787c31782d1e7713b14353f3d6ae  
autocorr-it-4.3.7.2-2.el6_9.1.noarch.rpm
20558ba271e722ec6def43d9efba92b6cc74016b66655970e7edce4955d39f59  
autocorr-ja-4.3.7.2-2.el6_9.1.noarch.rpm
7678de2817e3ac9df748dc7c0295b4624d0863e716940a0ce64ab29918103687  
autocorr-ko-4.3.7.2-2.el6_9.1.noarch.rpm
bd21944a4c533d980e9f17966b8c6d32b444fb40f7a3954d9a90e8d890fd403c  
autocorr-lb-4.3.7.2-2.el6_9.1.noarch.rpm
7ea583353157678bedc7142d1b00e75e1b46889b5214fd7ab2a402f6c11a4c03  
autocorr-lt-4.3.7.2-2.el6_9.1.noarch.rpm
1b427448bd461b12e40ca7f0fb1a84f9993dfe49c62102eae0164abc96a98c59  
autocorr-mn-4.3.7.2-2.el6_9.1.noarch.rpm
fbc3d9aca975e61ddaaee11ea79d94d0adad211e9c707f1082ecfba9293b1fd1  
autocorr-nl-4.3.7.2-2.el6_9.1.noarch.rpm
434d70b81c3842837015b8c6fcbf088ce2778eac74b4c5b6b132a855f18170b6  
autocorr-pl-4.3.7.2-2.el6_9.1.noarch.rpm
fb132363202c20be5e3143483ff4c39d4aa18313555964db81533c6e6b9ca94a  
autocorr-pt-4.3.7.2-2.el6_9.1.noarch.rpm
6d3c8cf7f5f2e91816174cfe26cf55b7df32e6b70e3c19fbbb9f2202d26d8c60  
autocorr-ro-4.3.7.2-2.el6_9.1.noarch.rpm
9d9b703c2685fe76a3ef3eca1c9ea2be4e0e3dc576f0b88273854aac1f126539  
autocorr-ru-4.3.7.2-2.el6_9.1.noarch.rpm
41cfd0a728e9a2a3f4ff14a2f3df47bd31294943e490bcef4c96c1a27b8282e5  
autocorr-sk-4.3.7.2-2.el6_9.1.noarch.rpm
5440f1cf55b34bfce25740a7e19fde56c3b408e9b1764b42a616aee1e3abd560  
autocorr-sl-4.3.7.2-2.el6_9.1.noarch.rpm
e717182b25f1fcb20bbcd98ca2e897369cb408b25d4031c5c9ed6710c9626228  
autocorr-sr-4.3.7.2-2.el6_9.1.noarch.rpm
98d3d397d5c67b448e90a7cfc547cc9ef1794e9b1abe934e1780cdb9e5c6fc2d  
autocorr-sv-4.3.7.2-2.el6_9.1.noarch.rpm
ca28c4905ba73ac580a9f819b19fe779405df3f1f3f8456ef519fc72cc3dde6a  
autocorr-tr-4.3.7.2-2.el6_9.1.noarch.rpm
ab796174d3609349631a915c6561da57a6fa93d9f449183d3c33a4f7ad460b27  
autocorr-vi-4.3.7.2-2.el6_9.1.noarch.rpm
f2635bf8aad9f78ddc70846967ec81c5bb8a305e629710c7ba5c790567450cbd  
autocorr-zh-4.3.7.2-2.el6_9.1.noarch.rpm
3d3157b081c9f3e6219f3c77a5d85f737d050737ee7edd62890549e5475e  
libreoffice-4.3.7.2-2.el6_9.1.i686.rpm
5c7784a1bc2f8a5f3ad8812dbbfc7b854c0aef37ae373a36626b47926510c7cc  
libreoffice-base-4.3.7.2-2.el6_9.1.i686.rpm
1af1d6d7c502647737d16d24d6cf764df842c2989e6133e40d02a45c0b114dc7  
libreoffice-bsh-4.3.7.2-2.el6_9.1.i686.rpm
78f0e5747bdd3541ba1cb549916cd78d2fd44f3519b78e98e6b50b07107d5f3c  
libreoffice-calc-4.3.7.2-2.el6_9.1.i686.rpm
c7893379150bb032279cfb49b13ecdef40b065be843bde875a53669831f3215f  
libreoffice-core-4.3.7.2-2.el6_9.1.i686.rpm
2de5d5948c0a4be3deb718c7bfed5b3d4b11342dd73416eeae23e15d530bcd92  
libreoffice-draw-4.3.7.2-2.el6_9.1.i686.rpm
fdba42a94745f3fc74fa46123f3764bbf3bf63f48d3e4ebb887b82b393ad0f54  
libreoffice-emailmerge-4.3.7.2-2.el6_9.1.i686.rpm
6f51ebd9e2d1b28b6ec792048de4f85ba241a5dc88915fe09827be30ca578dea  
libreoffice-filters-4.3.7.2-2.el6_9.1.i686.rpm
3468c45276f82c30e29acd510259c2842c921bd403cac2d73730e97ff2fecd6d  
libreoffice-gdb-debug-support-4.3.7.2-2.el6_9.1.i686.rpm
9babbdb87948d7ddbf36219265998ecfc34a98a3d9f7f4aa5fbb83475b4acd12  
libreoffice-glade-4.3.7.2-2.el6_9.1.i686.rpm

Re: [CentOS] OT: systemd Poll - So Long, and Thanks for All the fish.

2017-04-19 Thread James B. Byrne

On Mon, April 17, 2017 17:13, Warren Young wrote:

>
> Also, I’ll remind the list that one of the *prior* times the systemd
> topic came up, I was the one reminding people that most of our jobs
> summarize as “Cope with change.”
>

At some point 'coping with change' is discovered to consume a
disproportionate amount of resources for the benefits obtained.  In my
sole opinion the Linux community appears to have a
change-for-change-sake fetish. This is entirely appropriate for an
experimental project.  The mistake that I made many years ago was
inferring that Linux was nonetheless suitable for business.

To experimenters a ten year product cycle may seem an eternity. To
many organisations ten years is barely time to work out all the kinks
and adapt internal processes to automated equivalents.  And the
smaller the business the more applicable that statement becomes.

I do not have any strong opinion about systemd as I have virtually no
experience with it.  But the regular infliction of massively
disruptive changes to fundamental software has convinced us that Linux
does not meet our business needs. Systemd and Upstart are not the
cause of that.  They are symptoms of a fundamental difference of focus
between what our firm needs and what the Linux community wants.

-- 
***  e-Mail is NOT a SECURE channel  ***
Do NOT transmit sensitive data via e-Mail
 Do NOT open attachments nor follow links sent by e-Mail

James B. Byrne  mailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problems With Booting CentOS on Dell T7910

2017-04-19 Thread Zdenek Sedlak
On 2017-04-18 23:29, Paul E. Virgo wrote:
>
> Hello!
>
> Does anyone have any experience with installing CentOS 6 (specfically,
> 6.8), on a Dell T7910? I've tried at least a dozen installs,
> everything gets configured, and when I have the system reboot, I get
> 'No boot device found press any key to reboot the machine'. In BIOS,
> I've enabled AHCI, Legacy boot and modes, and enabled the SAS
> controller. The disks are seen and written to. It uses a MPT SAS3 RAID
> controller, and I've even specified a 'diskdrive' statement pointing
> to the known drivers for the controller, and still nothing. This
> system is listed as compatible, but I don't know. Any suggestions or
> help appreciated..
>
>
Hello,

How is the RAID configured?

Are you using a single disk, or multiple disks configured in RAID?

Did you try installing CentOS 7.3 to verify whether you get the same issue?

Try booting the CentOS Live media and check how the disk(s) are presented.
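
(From the live environment, a hedged sketch of the kind of checks that help
here; device names are examples:)

# how the live kernel sees the disks and partitions
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
# partition tables and boot flags
parted -l
# confirm the SAS controller and its driver were picked up
lspci -nn | grep -i sas
lsmod | grep mpt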

//Zdenek
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos