[CentOS] How to restore the old network interface name?

2019-07-02 Thread Leroy Tennison
Might look into 70-persistent-net.rules in addition to the article below (do
your web research for that and CentOS 7). It's a file you probably have to
create (not necessarily auto-generated as some documentation says) under
/etc/udev/rules.d. There have been two known formats for that file, and a given
format doesn't work in all cases. Here are the formats I've seen, hope it
helps (everything below is literal except what's contained in the less/greater
than delimiters):

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", 
ATTR{address}=="", 
ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME=""

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", 
ATTR{address}=="", 
ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME=""

Note the missing KERNEL==... in the latter form.
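
For illustration only, here is what the first format might look like filled in,
in /etc/udev/rules.d/70-persistent-net.rules (the MAC address below is made up;
use the NIC's real address and whatever name you want):

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

A reboot is typically needed for the rename to take effect.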

From: CentOS  on behalf of Ralf Prengel 

Sent: Tuesday, July 2, 2019 4:56 AM
To: CentOS mailing list
Subject: [EXTERNAL] [CentOS] How to restore the old network interface name?

Hello,

I need the device eth0 for one tool using CentOS 7.6.
Using this tutorial
https://www.certdepot.net/rhel7-restore-old-network-interface-name/
doesn't work.

Thanks for a hint.

Ralf


Harriscomputer

Leroy Tennison
Network Information/Cyber Security Specialist
E: le...@datavoiceint.com




2220 Bush Dr
McKinney, Texas
75070
www.datavoiceint.com




Re: [CentOS] Anyone with RedHat Subscription?

2019-07-02 Thread Jon Pruente
On Tue, Jul 2, 2019 at 8:28 AM Jason Pyeron  wrote:

> This is kind of why it makes sense to purchase at least one license.
>

Red Hat does now offer free developer subscriptions, which include access
to the Red Hat Customer Portal. You officially need a business or
enterprise email address, which I verified when they rejected my personal
Gmail address the first time. It recommended that I change it to a business
or enterprise one, but instead I just used a Gmail-supported + alias (
me+red...@gmail.com ), which Red Hat accepted. (
https://gmail.googleblog.com/2008/03/2-hidden-ways-to-get-more-from-your.html )
It's very worthwhile to have a Red Hat Dev Subscription so you can try
newer releases before they get released downstream by CentOS and others,
and also to get access to otherwise hidden subscriber content.


Re: [CentOS] Anyone with RedHat Subscription?

2019-07-02 Thread Elliot

On 7/2/19 6:18 AM, Giles Coochey wrote:
Is anyone with a RedHat subscription able to give a hint as to what
the solution to the following knowledgebase article is:


https://access.redhat.com/solutions/2801051
You only need a Red Hat account, not a subscription. I can read it after
logging in with my Red Hat account, and I have no subscription for any product.


With a free Red Hat account you can also get free RHEL for development 
purposes (do read the terms).


--
Elliot


Re: [CentOS] Anyone with RedHat Subscription?

2019-07-02 Thread Giles Coochey

On 02/07/2019 14:28, Jason Pyeron wrote:

This is kind of why it makes sense to purchase at least one license.

I would start with a loop back test on both ends. Dirty ports happen.

Did you grab the most recent version of ethtool and build it?


OK, so this is a third-party product built on CentOS/RHEL, and the
product provider does not allow us to install or modify anything. So we're
stuck with the tools on the system and cannot make or build modifications
on it; in fact we have neither CentOS nor Red Hat in this environment, so I
was just curious to hear upstream's view on what a possible solution
might be.


We have a plan to do many things as part of the diagnosis, but I'm
currently performing an information-gathering exercise to discern the
order of our future steps.


I have received an answer to my query that the optical RX/TX
information is only available on RHEL 7 and not on RHEL 6. We will
therefore look to boot this host into diagnostic mode for further
troubleshooting.



-Original Message-
From: CentOS  On Behalf Of Giles Coochey
Sent: Tuesday, July 2, 2019 9:19 AM
To: CentOS mailing list 
Subject: [CentOS] Anyone with RedHat Subscription?

Is anyone with a RedHat subscription able to give a hint as to what
the solution to the following knowledgebase article is:

https://access.redhat.com/solutions/2801051

I'm having a similar issue with an SFP on a Centos host, and am
searching for a way to view Optical RX/TX Power on the SFP.

  From the switch side, I'm not seeing any RX Power from the Centos host.

Thanks in advance

Giles




Re: [CentOS] Anyone with RedHat Subscription?

2019-07-02 Thread Giles Coochey



On 02/07/2019 14:35, Scott Silverman wrote:

Their "resolution" is: Update to RHEL 7 to get the more recent ethtool
output format.

You should be able to build a newer ethtool from source (or, depending on
your NIC manufacturer, they may supply a tool with more recent features;
Solarflare, for example, provides 'sfctool', basically new ethtool features
for old kernels).


I was a bit economical with the full situation in my original post.

This system is using third-party repos, i.e. neither CentOS nor Red Hat,
although it is clearly based on CentOS. The repos do not have any
development tool-chains, so we would have to put together another
system, build ethtool on that, create an rpm and then invalidate the
third party's warranty by installing it on the production system.


I think we'll just boot into diagnostic mode and see what we can discern 
from there.




Re: [CentOS] Anyone with RedHat Subscription?

2019-07-02 Thread Scott Silverman
Their "resolution" is: Update to RHEL 7 to get the more recent ethtool
output format.

You should be able to build a newer ethtool from source (or, depending on
your NIC manufacturer, they may supply a tool with more recent features;
Solarflare, for example, provides 'sfctool', basically new ethtool features
for old kernels).
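
On a related note, where the NIC driver and a recent enough ethtool support it,
the SFP's digital diagnostics (including optical RX/TX power, if the module
exposes them) can be read with the module EEPROM option; the interface name
below is just an example:

$ ethtool -m eth0        # same as --module-info / --dump-module-eeprom

Whether this works at all depends on the driver, the ethtool version, and the
transceiver itself.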


Thanks,

Scott


On Tue, Jul 2, 2019 at 8:19 AM Giles Coochey  wrote:

> Is anyone with a RedHat subscription able to give a hint as to what
> the solution to the following knowledgebase article is:
>
> https://access.redhat.com/solutions/2801051
>
> I'm having a similar issue with an SFP on a Centos host, and am
> searching for a way to view Optical RX/TX Power on the SFP.
>
>  From the switch side, I'm not seeing any RX Power from the Centos host.
>
> Thanks in advance
>
> Giles
>
>


Re: [CentOS] Anyone with RedHat Subscription?

2019-07-02 Thread Jason Pyeron
This is kind of why it makes sense to purchase at least one license.

I would start with a loop back test on both ends. Dirty ports happen.

Did you grab the most recent version of ethtool and build it?

> -Original Message-
> From: CentOS  On Behalf Of Giles Coochey
> Sent: Tuesday, July 2, 2019 9:19 AM
> To: CentOS mailing list 
> Subject: [CentOS] Anyone with RedHat Subscription?
> 
> Is anyone with a RedHat subscription able to give a hint as to what
> the solution to the following knowledgebase article is:
> 
> https://access.redhat.com/solutions/2801051
> 
> I'm having a similar issue with an SFP on a Centos host, and am
> searching for a way to view Optical RX/TX Power on the SFP.
> 
>  From the switch side, I'm not seeing any RX Power from the Centos host.
> 
> Thanks in advance
> 
> Giles
> 
> 


[CentOS] Anyone with RedHat Subscription?

2019-07-02 Thread Giles Coochey
Is anyone with a RedHat subscription able to give a hint as to what
the solution to the following knowledgebase article is:


https://access.redhat.com/solutions/2801051

I'm having a similar issue with an SFP on a Centos host, and am 
searching for a way to view Optical RX/TX Power on the SFP.


From the switch side, I'm not seeing any RX Power from the Centos host.

Thanks in advance

Giles




Re: [CentOS] How to restore the old network interface name?

2019-07-02 Thread Jonathan Billings
On Tue, Jul 02, 2019 at 11:56:03AM +0200, Ralf Prengel wrote:
> I need the device eth0 for one tool using CentOS 7.6.
> Using this tutorial
> https://www.certdepot.net/rhel7-restore-old-network-interface-name/ doesn't
> work.

This Red Hat documentation explains how the naming works:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-understanding_the_device_renaming_procedure

You don't need to set kernel parameters to set the interface name.
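
If I recall correctly, on CentOS 7 the initscripts udev helper (rename_device)
will rename an interface to match an ifcfg file whose HWADDR matches the NIC,
so a sketch like the following (the MAC address is made up), plus a reboot, is
usually enough; check the document above for the exact procedure:

# /etc/sysconfig/network-scripts/ifcfg-eth0  (sketch only)
DEVICE=eth0
HWADDR=00:11:22:33:44:55
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp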

-- 
Jonathan Billings 


[CentOS] CentOS-announce Digest, Vol 173, Issue 1

2019-07-02 Thread centos-announce-request
Send CentOS-announce mailing list submissions to
centos-annou...@centos.org

To subscribe or unsubscribe via the World Wide Web, visit
https://lists.centos.org/mailman/listinfo/centos-announce
or, via email, send a message with subject or body 'help' to
centos-announce-requ...@centos.org

You can reach the person managing the list at
centos-announce-ow...@centos.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of CentOS-announce digest..."


Today's Topics:

   1. CESA-2019:1603 Critical CentOS 7 firefox Security Update
  (Johnny Hughes)
   2. CESA-2019:1626 Important CentOS 7 thunderbird Security Update
  (Johnny Hughes)
   3. CEEA-2019:1612 CentOS 7 microcode_ctl Enhancement Update
  (Johnny Hughes)
   4. CESA-2019:1619 Important CentOS 7 vim Security Update
  (Johnny Hughes)
   5. CESA-2019:1604 Critical CentOS 6 firefox Security Update
  (Johnny Hughes)
   6. CESA-2019:1624 Important CentOS 6 thunderbird Security Update
  (Johnny Hughes)


--

Message: 1
Date: Mon, 1 Jul 2019 15:53:24 +
From: Johnny Hughes 
To: centos-annou...@centos.org
Subject: [CentOS-announce] CESA-2019:1603 Critical CentOS 7 firefox
Security Update
Message-ID: <20190701155324.ga30...@bstore1.rdu2.centos.org>
Content-Type: text/plain; charset=us-ascii


CentOS Errata and Security Advisory 2019:1603 Critical

Upstream details at : https://access.redhat.com/errata/RHSA-2019:1603

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

x86_64:
08d2604423c6a69b2a32c0356784acc9e0f21cb277a7f78da828e8367f5d9c72  
firefox-60.7.2-1.el7.centos.i686.rpm
2e7d25bb455d5c0474656115c3fb6864f1887988b78e7697f69e2a77d5be5da0  
firefox-60.7.2-1.el7.centos.x86_64.rpm

Source:
59bb4421beed2b74a7df4222f7c9decf996914c8b0731f1b175900a5dbe2bd2b  
firefox-60.7.2-1.el7.centos.src.rpm
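
For reference, a downloaded package can be checked against the hashes above
with, for example:

sha256sum firefox-60.7.2-1.el7.centos.x86_64.rpm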



-- 
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net
Twitter: @JohnnyCentOS



--

Message: 2
Date: Mon, 1 Jul 2019 15:54:02 +
From: Johnny Hughes 
To: centos-annou...@centos.org
Subject: [CentOS-announce] CESA-2019:1626 Important CentOS 7
thunderbird Security Update
Message-ID: <20190701155402.ga30...@bstore1.rdu2.centos.org>
Content-Type: text/plain; charset=us-ascii


CentOS Errata and Security Advisory 2019:1626 Important

Upstream details at : https://access.redhat.com/errata/RHSA-2019:1626

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

x86_64:
3def848ee0e2f0c98b8c94eeddaf6768e3385c1b69ff751ff7205a1613f1ddf4  
thunderbird-60.7.2-2.el7.centos.x86_64.rpm

Source:
48f315bc83bbaf08dd7b2e7a413760bea001b471499e5200b4e42ebfaa19e1f9  
thunderbird-60.7.2-2.el7.centos.src.rpm



-- 
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net
Twitter: @JohnnyCentOS



--

Message: 3
Date: Mon, 1 Jul 2019 15:54:45 +
From: Johnny Hughes 
To: centos-annou...@centos.org
Subject: [CentOS-announce] CEEA-2019:1612 CentOS 7 microcode_ctl
Enhancement Update
Message-ID: <20190701155445.ga31...@bstore1.rdu2.centos.org>
Content-Type: text/plain; charset=us-ascii


CentOS Errata and Enhancement Advisory 2019:1612 

Upstream details at : https://access.redhat.com/errata/RHEA-2019:1612

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

x86_64:
fdc9dc05c868dd21990e27415fc46ba47907c308a2ac721e090674f79270ad25  
microcode_ctl-2.1-47.5.el7_6.x86_64.rpm

Source:
7ccb912f8c7a6fd6e59732c364877856e9b8d8b93166597d597d88452cb2c902  
microcode_ctl-2.1-47.5.el7_6.src.rpm



-- 
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net
Twitter: @JohnnyCentOS



--

Message: 4
Date: Mon, 1 Jul 2019 15:55:11 +
From: Johnny Hughes 
To: centos-annou...@centos.org
Subject: [CentOS-announce] CESA-2019:1619 Important CentOS 7 vim
Security Update
Message-ID: <20190701155511.ga31...@bstore1.rdu2.centos.org>
Content-Type: text/plain; charset=us-ascii


CentOS Errata and Security Advisory 2019:1619 Important

Upstream details at : https://access.redhat.com/errata/RHSA-2019:1619

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

x86_64:
afcc9e1a04ec14de245e51c4e6dd12e437d957e25a781d608b722736270c2a04  
vim-common-7.4.160-6.el7_6.x86_64.rpm
a72bf35b8146ccfc708da2c074c5201553eb147194f6067fe0c69a24b53d2155  
vim-enhanced-7.4.160-6.el7_6.x86_64.rpm
9be5f369745afc73c69050c226b286b0ae460634e1f612a7fa1d035d4bb4b1fa  
vim-filesystem-7.4.160-6.el7_6.x86_64.rpm
6f12c145f92c8a73fca7d0a34877325b6d258cc246d39edb5b726a7229ca9240  

[CentOS] How to restore the old network interface name?

2019-07-02 Thread Ralf Prengel

Hello,

I need the device eth0 for one tool using CentOS 7.6.
Using this tutorial
https://www.certdepot.net/rhel7-restore-old-network-interface-name/
doesn't work.


Thanks for a hint.

Ralf



Re: [CentOS] Was, Re: raid 5 install, is ZFS

2019-07-02 Thread Warren Young
On Jul 1, 2019, at 9:44 AM, mark  wrote:
> 
> it was on Ubuntu, but that shouldn't make a difference, I would think

Indeed not.  It’s been years since the OS you were using implied a large set of 
OS-specific ZFS features.

There are still differences among the implementations, but the number of those 
is getting smaller as the community converges on ZoL as the common base.

Over time, the biggest difference among ZFS implementations will be time-based: 
a ZFS pool created in 2016 will have fewer feature flags than one created in 
2019, so the 2019 pool won’t import on older OSes.
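
A quick way to see where a given pool stands, and to create one that stays
importable on older systems (pool and device names below are made up):

$ sudo zpool get all tank | grep feature@     # each feature flag and whether it is disabled/enabled/active
$ sudo zpool create -d tank mirror sdb sdc    # -d creates the pool with every feature flag disabled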

> I pulled one drive, to simulate a drive failure, and it
> rebuilt with the hot spare. Then I pushed the drive I'd pulled back in...
> and it does not look like I've got a hot spare. zpool status shows
> config:

I think you’re expecting more than ZFS tries to deliver here.  Although it’s 
filesystem + RAID + volume manager, it doesn’t also include storage device 
management features.

If you need this kind of thing to just happen automagically, you probably want 
to configure zed:

https://zfsonlinux.org/manpages/0.8.0/man8/zed.8.html

But, if you can spare human cycles to deal with it, you don’t need zed.
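
If you do go the zed route, the relevant knobs live in its config file; a
sketch (variable names as of recent ZoL releases, paths and defaults may differ):

# /etc/zfs/zed.d/zed.rc (excerpt)
ZED_EMAIL_ADDR="root"             # where zed mails fault notifications
ZED_SPARE_ON_IO_ERRORS=1          # activate a hot spare when a device faults on I/O errors
ZED_SPARE_ON_CHECKSUM_ERRORS=10   # ...or after this many checksum errors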

What’s happened here is that you didn’t tell ZFS that the disk is no longer 
part of the pool, so that when it came back, ZFS says, “Hey, I recognize that 
disk!  It belonged to me once.  It must be mine again.”  But then it goes and 
tries to fit it into the pool and finds that there are no gaps to stick it into.

So, one option is to remove that replaced disk from the pool, then reinsert it 
as the new hot spare:

$ sudo zpool remove export1 sdb
$ sudo zpool add export1 spare sdb

The first command removes the ZFS header info from the disk, and the second 
puts it back on, marking it as a spare.

Alternately, you can relieve your prior hot spare (sdl) from its new duty — 
“new sdb” — putting sdb back in its prior place:

$ sudo zpool replace export1 sdl sdb

That does a full resilver of the replacement disk, a cost you already paid for 
with the hot spare failover, but it does have the advantage of keeping the 
disks in alphabetical order by /dev name, as you’d probably expect.

But, rather than get exercised about whether putting sdl between sda and sdc 
makes sense, I’d strongly encourage you to get away from raw /dev/sd? names.  
The fastest path in your setup to logical device names is:

$ sudo zpool export export1
$ sudo zpool import -d /dev/disk/by-serial export1

All of the raw /dev/sd? names will change to /dev/disk/by-serial/* names, which 
I find to be the most convenient form for determining which disk is which when 
swapping out failed disks.  It doesn’t take a very smart set of remote “hands” 
at a site to read serial numbers off of disks to determine which is the faulted 
disk.

The main problem with that scheme is that pulling disks to read their labels 
works best with the pool exported.  If you want to be able to do device 
replacement with the pool online, you need some way to associate particular 
disks with their placement in the server’s drive bays.

To get there, you’d have to be using GPT-partitioned disks.  ZFS normally does 
that these days, creating one big partition that’s optimally-aligned, which you 
can then label with gdisk’s “c” command.

Having done that, you can then do "zpool import -d /dev/disk/by-partlabel" 
instead, which gets you the logical disk naming scheme I’ve spoken of twice in 
the other thread.
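
A rough sketch of that flow for one disk (device, label, and pool names are
made up; do it with the pool exported first):

$ sudo gdisk /dev/sdb      # use 'c' to set the partition's GPT name, e.g. "bay03", then 'w' to write
$ sudo zpool import -d /dev/disk/by-partlabel export1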

If you must use whole-disk vdevs, then I’d at least write the last few digits 
of each drive’s serial number on the drive cage or the end of the drive itself, 
so you can just tell the tech “remove the one marked ab212”.

Note by the way that all of this happened because you reintroduced a 
ZFS-labeled disk into the pool.  That normally doesn’t happen.  Normally, a 
replacment is a brand new disk, without any ZFS labeling on it, so you’d jump 
straight to the “zpool add” step.  The prior hot spare took over, so now you’re 
just giving the pool a hot spare again.


Re: [CentOS] raid 5 install

2019-07-02 Thread Simon Matter via CentOS
>
>
> On 2019-07-01 10:01, Warren Young wrote:
>> On Jul 1, 2019, at 8:26 AM, Valeri Galtsev 
>> wrote:
>>>
>>> RAID function, which boils down to simple, short, easy to debug well
>>> program.
>
> I didn't intend to start software vs hardware RAID flame war when I
> joined somebody's else opinion.
>
> Now, commenting with all due respect to famous person who Warren Young
> definitely is.
>
>>
>> RAID firmware will be harder to debug than Linux software RAID, if only
>> because of easier-to-use tools.
>
> I myself debug neither firmware (or "microcode", speaking the language
> as it was some 30 years ago) nor the Linux kernel. In both cases it is
> someone else who does the debugging.
>
> You are speaking as a person who routinely debugs Linux components. I
> still have to stress that in debugging RAID card firmware one has only the
> small program which this firmware is.
>
> In the case of debugging EVERYTHING that affects reliability of software
> RAID, one has to debug the following:
>
> 1. Linux kernel itself, which is huge;
>
> 2. _all_ the drivers that are loaded when system runs. Some of the
> drivers on one's system may be binary only, like NVIDIA video card
> drives. So, even for those who like Warren can debug all code, these
> still are not accessible.
>
> All of the above can potentially panic kernel (as they all run in kernel
> context), so they all affect reliability of software RAID, not only the
> chunk of software doing software RAID function.
>
>>
>> Furthermore, MD RAID only had to be debugged once, rather that once per
>> company-and-product line as with hardware RAID.
>
> Alas, MD RAID itself is not the only thing that affects reliability of
> software RAID. A panicking kernel has grave effects on software RAID, so
> anything that can panic the kernel also has to be debugged just as thoroughly.
> And it always has to be redone once changes to the kernel or drivers are
> introduced.
>
>>
>> I hope you’re not assuming that hardware RAID has no bugs.  It’s
>> basically a dedicated CPU running dedicated software that’s difficult to
>> upgrade.
>
> That's true, it is a dedicated CPU running a dedicated program, and it keeps
> doing it even if the operating system crashed. Yes, hardware itself can
> be unreliable. But in the case of a RAID card it is only the card itself,
> the failure rate of which in my racks is much smaller than the overall failure
> rate of everything. In the case of a kernel panic, any piece of hardware
> inside the computer in some mode of failure can cause it.
>
> One more thing: apart from the hardware RAID "firmware" program being small
> and logically simple, there is one more factor: it usually runs on a RISC
> architecture CPU, and introducing bugs when programming for a RISC architecture
> is IMHO more difficult than when programming for i386 and amd64
> architectures. Just my humble opinion, carried since the time I was
> programming.
>
>>
>>> if kernel (big and buggy code) is panicked, current RAID operation will
>>> never be finished which leaves the mess.
>>
>> When was the last time you had a kernel panic?  And of those times, when
>> was the last time it happened because of something other than a hardware
>> or driver fault?  If it wasn’t for all this hardware doing strange
>> things, the kernel would be a lot more stable. :)
>
> Yes, I half expected that. When did we last have a kernel crash, and who
> of us is unable to choose reliable hardware, and unable to insist that
> our institution pays a mere 5-10% higher price for a reliable box than it
> would for junk hardware? Indeed, we all run reliable boxes, and I am
> retiring still reliably working machines aged 10-13 years...
>
> However, I would rather suggest to compare not absolute probabilities,
> which, exactly as you said, are infinitesimal. But with relative
> probabilities, I still will go with hardware RAID.
>
>>
>> You seem to be saying that hardware RAID can’t lose data.  You’re
>> ignoring the RAID 5 write hole:
>>
>>  https://en.wikipedia.org/wiki/RAID#WRITE-HOLE
>
> Neither of our RAID cards runs without battery backup.
>
>>
>> If you then bring up battery backups, now you’re adding cost to the
>> system.  And then some ~3-5 years later, downtime to swap the battery,
>> and more downtime.  And all of that just to work around the RAID write
>> hole.
>
> You are absolutely right about a system with hardware RAID being more
> expensive than one with software RAID. I would say, for "small scale
> big storage" boxes (i.e. NOT distributed file systems), hardware RAID
> adds about 5-7% of cost in our case. Now, with hardware RAID all the
> maintenance (what one needs to do in the case of a single failed drive
> replacement routine) takes about 1/10 of the time necessary to deal with
> a similar failure in the case of software RAID. I deal with both, as it
> historically happened, so this is my own observation. Maybe the software
> RAID boxes I have to deal with are too messy (imagine almost two dozen
> software RAIDs of 12-16 drives each on one machine; even the BIOS runs out
> of 

Re: [CentOS] raid 5 install

2019-07-02 Thread Warren Young
On Jul 1, 2019, at 10:10 AM, Valeri Galtsev  wrote:
> 
> On 2019-07-01 10:01, Warren Young wrote:
>> On Jul 1, 2019, at 8:26 AM, Valeri Galtsev  wrote:
>>> 
>>> RAID function, which boils down to simple, short, easy to debug well 
>>> program.
> 
> I didn't intend to start software vs hardware RAID flame war

Where is this flame war you speak of?  I’m over here having a reasonable 
discussion.  I’ll continue being reasonable, if that’s all right with you. :)

> Now, commenting with all due respect to famous person who Warren Young 
> definitely is.

Since when?  I’m not even Internet Famous.

>> RAID firmware will be harder to debug than Linux software RAID, if only 
>> because of easier-to-use tools.
> 
> I myself debug neither firmware (or "microcode", speaking the language as it 
> was some 30 years ago)

There is a big distinction between those two terms; they are not equivalent 
terms from different points in history.  I had a big digression explaining the 
difference, but I’ve cut it as entirely off-topic.

It suffices to say that with hardware RAID, you’re almost certainly talking 
about firmware, not microcode, not just today, but also 30 years ago.  
Microcode is a much lower level thing than what happens at the user-facing 
product level of RAID controllers.

> In both cases it is someone else who does the debugging.

If it takes three times as much developer time to debug a RAID card firmware as 
it does to debug Linux MD RAID, and the latter has to be debugged only once 
instead of multiple times as the hardware RAID firmware is reinvented again and 
again, which one do you suppose ends up with more bugs?

> You are speaking as the person who routinely debugs Linux components.

I have enough work fixing my own bugs that I rarely find time to fix others’ 
bugs.  But yes, it does happen once in a while.

> 1. Linux kernel itself, which is huge;

…under which your hardware RAID card’s driver runs, making it even more huge 
than it was before that driver was added.

You can’t zero out the Linux kernel code base size when talking about hardware 
RAID.  It’s not like the card sits there and runs in a purely isolated 
environment.

It is a testament to how well-debugged the Linux kernel is that your hardware 
RAID card runs so well!

> All of the above can potentially panic kernel (as they all run in kernel 
> context), so they all affect reliability of software RAID, not only the chunk 
> of software doing software RAID function.

When the kernel panics, what do you suppose happens to the hardware RAID card?  
Does it keep doing useful work, and if so, for how long?

What’s more likely these days: a kernel panic or an unwanted hardware restart?  
And when that happens, which is more likely to fail, a hardware RAID without 
BBU/NV storage or a software RAID designed to be always-consistent?

I’m stripping away your hardware RAID’s advantage in NV storage to keep things 
equal in cost: my on-board SATA ports for your stripped-down hardware RAID 
card.  You probably still paid more, but I’ll give you that, since you’re using 
non-commodity hardware.

Now that they’re on even footing, which one is more reliable?

> hardware RAID "firmware" program being small and logically simple

You’ve made an unwarranted assumption.

I just did a blind web search and found this page:

   
https://www.broadcom.com/products/storage/raid-controllers/megaraid-sas-9361-8i#downloads

…on which we find that the RAID firmware for the card is 4.1 MB, compressed.

Now, that’s considered a small file these days, but realize that there are no 
1024 px² icon files in there, no massive XML libraries, no language 
internationalization files, no high-level language runtimes… It’s just millions 
of low-level highly-optimized CPU instructions.

From experience, I’d expect it to take something like 5-10 person-years to 
reproduce that much code.

That’s far from being “small and logically simple.”

> it usually runs on a RISC architecture CPU, and introducing bugs when programming for 
> a RISC architecture is IMHO more difficult than when programming for i386 and 
> amd64 architectures.

I don’t think I’ve seen any such study, and if I did, I’d expect it to only be 
talking about assembly language programming.

Above that level, you’re talking about high-level language compilers, and I 
don’t think the underlying CPU architecture has anything to do with the error 
rates in programs written in high-level languages.

I’d expect RAID firmware to be written in C, not assembly language, which means 
the CPU has little or nothing to do with programmer error rates.

Thought experiment: does Linux have fewer bugs on ARM than on x86_64?

I even doubt that you can dig up a study showing that assembly language 
programming on CISC is significantly more error-prone than RISC programming in 
the first place.  My experience says that error rates in programs are largely a 
function of the number of lines of code, and that puts RISC at a severe 

Re: [CentOS] raid 5 install

2019-07-02 Thread Simon Matter via CentOS
>> You seem to be saying that hardware RAID can’t lose data.  You’re
>> ignoring the RAID 5 write hole:
>>
>>  https://en.wikipedia.org/wiki/RAID#WRITE-HOLE
>>
>> If you then bring up battery backups, now you’re adding cost to the
>> system.  And then some ~3-5 years later, downtime to swap the battery,
>> and more downtime.  And all of that just to work around the RAID write
>> hole.
>
> Yes. Furthermore, with the huge capacity disks in use today, rebuilding
> a RAID 5 array after a disk fails, with all the necessary parity
> calculations, can take days.
> RAID 5 is obsolete, and I'm not the only one saying it.

Needless to say, both hardware and software RAID have the problem above.

Simon



Re: [CentOS] raid 5 install

2019-07-02 Thread Simon Matter via CentOS
> On Mon, 1 Jul 2019, Warren Young wrote:
>
>> If you then bring up battery backups, now you’re adding cost to the
>> system.  And then some ~3-5 years later, downtime to swap the battery,
>> and more downtime.  And all of that just to work around the RAID write
>> hole.
>
> Although batteries have disappeared in favour of NV storage + capacitors,
> meaning you don't have to replace anything on those models.

That's what you think before you have to replace the capacitors module :-)

Simon
