Re: [CentOS] first boot after install - WD_BLACK SN850X 4TB - fails ?

2024-01-19 Thread Warren Young
NVMe requires EFI booting. In some UEFI implementations, if you say "BIOS +
UEFI" or however it puts it, CentOS will put a `/biosboot` partition on the
disk, not `/boot/efi`, giving exactly the symptom you report.

Put it into pure UEFI mode, ensure the partitioning step creates
`/boot/efi`, and I bet your symptom will go away.
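A quick way to confirm which mode the firmware actually used, from a rescue shell or the installed system (a minimal sketch; the directory below exists only when the kernel was booted via UEFI):

```shell
# /sys/firmware/efi is populated by the kernel only on UEFI boots.
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
else
    echo "booted via legacy BIOS"
fi
```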
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] emulate ARM ?

2023-11-15 Thread Warren Young
On Nov 15, 2023, at 20:11, Warren Young  wrote:
> 
> Docker also uses QEMU in this fashion:

I forgot to point out that “docker” is actually an alias for Podman on the test 
system, which fully satisfies your wish for using only things that come with 
the platform, no third-party programs, Podman being a Red Hat project.

My example used a statically-linked binary to simplify cross-platform 
execution, but for larger programs that need third-party libraries, 
containerization gives you another benefit here: a way to cart around the rest 
of the platform needed to run your ARM binaries.


Re: [CentOS] emulate ARM ?

2023-11-15 Thread Warren Young
On Nov 14, 2023, at 13:44, lejeczek via CentOS  wrote:
> 
> How do you emulate AMR arch

With QEMU:

$ uname -r
5.14.0-284.30.1.el9_2.x86_64
$ sudo dnf install qemu-user-static-aarch64
$ docker pull --platform=linux/arm64 tangentsoft/iperf3
$ docker export $(docker create --name iperf3 tangentsoft/iperf3) > iperf3.tar
$ tar xf iperf3.tar 
$ file bin/iperf3
bin/iperf3: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), 
statically linked, BuildID[sha1]=254575ed4ae36c21c317691a8008f3382eb7225e, 
stripped
$ bin/iperf3 -s
---
Server listening on 5201 (test #1)
---

That’s an ARM64 binary running on x86_64 via the magic of the kernel’s binfmt 
feature.

The use of Docker in this context is incidental: it gives me a way to share a 
small yet useful ARM64 binary with you for testing. However, Docker also uses 
QEMU in this fashion:

$ docker run iperf3
WARNING: image platform (linux/arm64) does not match the expected platform 
(linux/amd64)

That’s the same binary, but now running inside the container we created above, 
before extracting the statically-linked binary from it via a tarball. Docker 
yells about the platform mismatch, but then it does run.


Re: [CentOS] Minimising a CentOS installation

2021-08-20 Thread Warren Young
On Aug 20, 2021, at 8:24 AM, Anand Buddhdev  wrote:
> What other things do folk usually remove to make their installation smaller?

Our post-install removal command here is:

dnf -y remove cockpit* pcp*

But we’re old-school Unix geeks, so there you go. :)


Re: [CentOS] Help with default shell

2021-06-23 Thread Warren Young
On Jun 23, 2021, at 7:12 AM, Hooton, Gerard  wrote:
> 
> The users are authenticated using OpenLDAP.
> On LDAP the default shell is csh.
> When  ssh to login it works, i.e. $SHELL = /bin/csh
> Also, when using xrdp it works.
> However, a login from the  keyboard and screen attached computer we get 
> $SHELL = /bin/bash

The shell is a symptom, not the core issue here.  The core issue is that local 
console logins aren’t configured to use LDAP on your system, so they fall back 
to the old flat-file-based user info sources.  (/etc/passwd, /etc/group, 
/etc/shadow…) 

The question then is, do you really *want* local logins to require the LDAP 
server to be up before it’ll accept a login?  If an LDAP package upgrade 
roaches things, do you want to be forced to reboot into single-user mode to fix 
it?  If there’s a network outage between this box and the OpenLDAP server, are 
you going to wait to log in locally as well until the network’s fixed?

Me, I’d just do a “chsh” on the users or a sed pass on /etc/passwd to change 
all the shells locally so they match the LDAP configuration, letting me have it 
both ways.
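A sketch of that sed pass, run against a scratch copy first (the sample user and paths are made up for illustration; point it at /etc/passwd only once you like the result):

```shell
# Work on a copy of the user database, never the live file directly.
printf 'gerard:x:1001:1001::/home/gerard:/bin/bash\n' > /tmp/passwd.copy

# Flip every bash login shell to csh, matching the LDAP configuration.
sed -i 's|/bin/bash$|/bin/csh|' /tmp/passwd.copy

grep ':/bin/csh$' /tmp/passwd.copy   # confirm the change took
```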

However, if you’re bound and determined to have LDAP be the single source of 
all user truth, the bit at the end of Step 2 here looks like it should do that:

https://arthurdejong.org/nss-pam-ldapd/setup

May you live to *not* regret doing that!


Re: [CentOS] Can't upgrade sssd-*

2021-04-09 Thread Warren Young
On Apr 9, 2021, at 9:37 AM, Johnny Hughes  wrote:
> 
> donated machines that are part of the
> mirror.centos.org dns name.

My key incorrect assumption was that this is just a front end, and all of the 
actual file pulls came from other second-level domains.  I didn’t realize you 
were allowing other organizations to masquerade as centos.org.

The usual solution to this sort of problem is to set up another domain; 
centosmirrors.org or similar.  Then you can separately manage the key spaces of 
the two domains.

This sort of design also solves certain types of CORS and XSS problems, such as 
third-parties getting sent cookies for the main site they haven’t actually got 
any business seeing, because the HTTP client can’t tell the difference.

This is why you’ll find your uploads to social media sites being served back 
from domains other than the main user-facing one: it’s user-provided content, 
so they refuse to ship it from the domain that handles authentication.

> we do sign the metadata .. so you can make sure the rpms, no  matter
> their origin, are real if you enable signed repodata 

I wasn’t worried about that.  I just wanted to use HTTPS to hide the RPM data 
from the site’s overly paranoid “translucent” HTTP gateway proxy, so it 
wouldn’t block the download.


Re: [CentOS] Can't upgrade sssd-*

2021-04-05 Thread Warren Young
On Apr 5, 2021, at 8:32 AM, Johnny Hughes  wrote:
> 
> wrt private keys .. we don't want any to live on machines we
> don't physically own.

Yeah, I get that.

What I don’t get is why, if DNF goes to http://foo.centos.org to pull metadata, 
and it tells DNF to go to https://bar.qux.example.edu to download the packages 
specified by that metadata, why must there be any private keys for *.centos.org 
involved on example.edu’s servers?

Surely the sysadmin of bar.qux.example.edu obtains a TLS key pair from some 
trusted CA that certifies that bar.qux.example.edu is valid according to the 
worldwide TLS public PKI.

If we’re talking about package signing keys, surely that all happens on 
centos.org servers, and the resulting RPM packages are distributed as-is, not 
re-signed on each mirror server.


Re: [CentOS] Can't upgrade sssd-*

2021-04-02 Thread Warren Young
On Apr 2, 2021, at 8:46 AM, Johnny Hughes  wrote:
> 
> We just can't risk putting private keys for centos.org on
> machines that are donated.

I guess I don’t understand how the mirror system works, then, because I thought 
DNF/YUM contacted a central server (presumably under centos.org) which then 
selected one or more mirrors with an entirely different Internet domain, with 
none of the actual package traffic being on the centos.org servers, only 
metadata.

While it might be nice to have the metadata secured as well — more than nice, 
since an attacker could do bad stuff by MITM’ing it — my immediate problem 
would be solved if it contacted the mirror over HTTPS, since I haven’t 
configured this box to accept keys minted by any sort of snoopware box on the 
site LAN.

I suppose the site might just block HTTPS entirely if it doesn’t pass through 
their snoopware, but one problem at a time, yes?

Meanwhile, I suppose I’ll just download the packages on another box and 
manually rpm -U them.


Re: [CentOS] Can't upgrade sssd-*

2021-04-01 Thread Warren Young
On Mar 26, 2021, at 7:08 AM, Warren Young  wrote:
> 
> Is anyone else getting this on dnf upgrade?
> 
> [MIRROR] sssd-proxy-2.3.0-9.el8.x86_64.rpm: Interrupted by header callback: 
> Server reports Content-Length: 9937 but expected size is: 143980

The short reply size made me think to try a packet capture, and it turned out 
to be a message from the site’s “transparent” HTTP proxy, telling me that 
content’s blocked.

Rather than fight with site IT over the block list, I have a new question: is 
there any plan for getting HTTPS-only updates in CentOS?  Changing all “http” 
to “https” in my repo conf files just made the update stall, so I assume there 
are mirrors that are still HTTP-only.
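For reference, the blanket http-to-https edit I tried amounts to this; shown here against a scratch file rather than /etc/yum.repos.d, since whether it actually works depends entirely on each mirror supporting HTTPS:

```shell
# Make a throwaway repo file standing in for /etc/yum.repos.d/*.repo.
printf '[base]\nbaseurl=http://mirror.example.org/centos/8/\n' > /tmp/demo.repo

# Rewrite plain-HTTP baseurls to HTTPS.
sed -i 's|^baseurl=http:|baseurl=https:|' /tmp/demo.repo

grep '^baseurl=' /tmp/demo.repo   # now shows the https:// form
```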


[CentOS] Can't upgrade sssd-*

2021-03-26 Thread Warren Young
Is anyone else getting this on dnf upgrade?


[MIRROR] sssd-proxy-2.3.0-9.el8.x86_64.rpm: Interrupted by header callback: 
Server reports Content-Length: 9937 but expected size is: 143980
[MIRROR] sssd-proxy-2.3.0-9.el8.x86_64.rpm: Interrupted by header callback: 
Server reports Content-Length: 9937 but expected size is: 143980
[MIRROR] sssd-proxy-2.3.0-9.el8.x86_64.rpm: Interrupted by header callback: 
Server reports Content-Length: 9937 but expected size is: 143980
[MIRROR] sssd-proxy-2.3.0-9.el8.x86_64.rpm: Interrupted by header callback: 
Server reports Content-Length: 9937 but expected size is: 143980


I can install almost everything else in the latest batch of updates but not any 
of sssd-* or anything directly dependent upon it.  (Basically, gvfs, samba, and 
assorted libraries built atop sssd.)


Re: [CentOS] proper recipe for local mirror

2021-03-04 Thread Warren Young
On Mar 4, 2021, at 7:04 AM, Adrian Sevcenco  
wrote:
> 
> What is the proper solution for a local mirror for centos 7 (or oven 8) 
> repositories?

https://www.osradar.com/how-to-create-centos-8-local-repository-mirrors-with-rsync-nginx/


Re: [CentOS] Transition test report going from CentOS8 to Debian 10.

2021-02-05 Thread Warren Young
On Feb 5, 2021, at 9:03 AM, Lamar Owen  wrote:
> 
>> 1. The package names are often different, and not always differing by an 
>> obvious translation rule.  ...
> 
> I consider this to be very minor in comparison to other items.

If you’re making a wholesale transition, sure, but when you’re maintaining a 
mix of systems and you know what you’re trying to accomplish but can’t remember 
the exact one of six different incantations for accomplishing that on the 
platforms you maintain, it’s enough to frost one’s cookies.

It’s gotten bad enough for me that I now maintain a private wiki listing 
instructions for how to set up a new system, alternate top-level sections 
giving translations of the primary platform’s instructions for each platform I 
have to manage.  Bleah!

(I mined that wiki to compose my prior reply.  Real-world experience here.)

>> 2. Some packages simply won’t be available.  Most often this happens in the 
>> Debian → CentOS direction, but I’ve run into cases going the other way. ...
> 
> Yes, true.  True going from C7 to C8, too, especially if you rely on 
> third-party repositories for packages.

Yes, I lost many packages moving from C7 to C8, too.  However, I’d prefer to 
hold tight to *one* bag of problems rather than gather several bags of problems 
unto my bosom. :)

>> 3. Debian adopted systemd, but it didn’t adopt the rest of the Red Hat 
>> userland tooling.  For instance, it’s firewalld on CentOS, UFW on Ubuntu, 
>> and raw kernel firewall manipulation on Debian unless you install one of 
>> those two.  And then, which?
> 
> That one is more serious for the server than the other two, for sure.

And realize that firewalld is just an example, not the full scope of the 
problem.  Solve that one across platforms, and you’ve got several more to deal 
with next.

> If migrating from CentOS I would probably go with firewalld.  I haven't 
> decided yet in my evaluations.  But I put an ACL on the Cisco 7609's here

How does fobbing the problem off on Cisco help in today’s “deny everything by 
default” world?  Unless you’re lucky enough to be using binary packages that 
take care of all of this for you, you’re still going to have to manually punch 
a hole in the firewall for some service or other, which means you’ve now got to 
learn to do that on every platform you’re supporting, because it probably won’t 
be the same way on any two of them.

> I have one of the most maxxed-out Pentium III-S systems you'll find

Good in winter for an under-desk system, exhaust fan pointed at your frozies 
toesies. :)


Re: [CentOS] Transition test report going from CentOS8 to Debian 10.

2021-02-04 Thread Warren Young
On Feb 4, 2021, at 8:39 AM, Lamar Owen  wrote:
> 
> I posted a pretty complete rundown on the scientific linux users mailing 
> list, so I won't recap it all here.

Link?

> the transition was not any more difficult, really, than moving from CentOS 7 
> to CentOS 8.

That’s not my experience.

I keep several of my packages running on CentOS and Debian (and more) and I 
keep running into several common problems:

1. The package names are often different, and not always differing by an 
obvious translation rule.  For instance, it’s “openldap-devel” on CentOS but 
“libldap2-dev” on Debian, where the normal rule would make it 
“libopenldap-dev”.  Why the difference?  Dunno, but I have to track such things 
down when setting up scripts that do cross-distro builds.  If I automate that 
translation, now I’m setting myself up for a future breakage when the package 
names change again.  (libldap3-dev?)

2. Some packages simply won’t be available.  Most often this happens in the 
Debian → CentOS direction, but I’ve run into cases going the other way.  Just 
for one, I currently have to install NPM from source on Debian because the 
platform version won’t work properly with the platform version of Node, last 
time I tested it.  Why?  Same answer as above.

3. Debian adopted systemd, but it didn’t adopt the rest of the Red Hat userland 
tooling.  For instance, it’s firewalld on CentOS, UFW on Ubuntu, and raw kernel 
firewall manipulation on Debian unless you install one of those two.  And then, 
which?

4. Network configuration is almost entirely different unless you turn off all 
the automation on all platforms, in which case you might as well switch to 
macOS or FreeBSD for all the good your muscle memory and training will do you.

I’m not saying “don’t do it,” but to say it’s as smooth as from CentOS 7 to 8?  
Hard sell.

I’ll give you one mulligan: the changes to the security rules in CentOS 8 
caused a huge upheaval for one of my applications, since it basically stopped 
it from running, being naughty in Red Hat’s omnisciently beneficent eyes.  We 
spent about a year fixing breakages due to 25 years of built-up assumptions 
about what was correct and sensible, which don’t affect us on other Linuxes 
because they didn’t implement the same SELinux rules.

The details aren’t super-important, because the real take-away is this: it’s 
always *something.*

(For those that must know, the biggie was that our systemd-based service used 
to run from /home/$APPNAME but that’s a no-no on C8 now.  Moving it all under 
/opt/$APPNAME and rearranging it all according to FHS rules, then finding and 
fixing all the places we depended on such paths was *painful*.)


Re: [CentOS] Intel/64 CentOS VM running on a Mac M1?

2021-02-04 Thread Warren Young
On Feb 3, 2021, at 5:28 PM, Lists  wrote:
> 
> I had the impression that MacOS' Rosetta II might do what I need

That’s rather difficult when the x86 code in question is on the other side of a 
virtualized CPU.  It’s a double translation, you see: real x86 code running on a 
virtual x86 CPU under your CPU’s virtualization extensions (e.g. Intel VT-x) 
under an Apple M1 ARM64 variant.

That’s not an impossible dance to pull off, but you’ll need three parties 
coordinating the dance steps if you want a high-fidelity CentOS-on-bare-metal 
emulation: Intel, Apple, and your VM technology provider of choice.

If you’re willing to drop one of those three parties out of the equation, you 
have alternatives:

1. Full CPU simulation, as with QEMU.  This should be able to run x86_64 CentOS 
on an M1, but it’ll be like the bad old days of software virtualization, back 
around 2000, where every instruction inside the VM had to be translated into 
native instructions.

2. Cross-compilation to x86 code under macOS, which allows Rosetta II to take 
effect, but now you aren’t running under CentOS proper any more.  Even if you 
port over the whole userland you depend on, you’ve still got the macOS kernel 
under your app, which may differ in significant areas that matter.

> I need to have access to a VM that's binary-compatible 
> with production so that I can make sure it "really really works" before 
> pushing stuff out. 

If “really really works” is defined in terms of automated testing — and if not, 
why not? — then it sounds like you want a CI system, though probably not a 
CI/CD system, if I read your intent properly.

That is, you build and test on macOS with ARM code, commit your changes to 
whatever release repository you maintain now, the CI system picks that up, 
tries to build it, runs the tests, and notifies you if anything fails.  The 
resulting binary packages can then be manually pushed to deployment.

(It’s that last difference that makes this something other than CI/CD.)

Making your code work across CPU types is more work, but it can point out 
hidden assumptions that are better off excised.

For instance, this line of C code has a data race in a multithreaded 
application:

 ++i;

…even though it compiles to a single Intel CPU instruction, even when ‘i’ is 
guaranteed to be stored in a register!

Whether it bites you on Intel gets you way down into niggly implementation 
details, but it’s *statistically guaranteed* to bite you on ARM due to its RISC 
nature, because it’s an explicit load-modify-store sequence requiring 3 or 4 
CPU instructions, and that few only if you don’t add write barriers to fix the 
problem.


Re: [CentOS] rare but repeating system crash in C7

2021-01-02 Thread Warren Young
On Jan 2, 2021, at 11:17 AM, Fred  wrote:
> 
> I assume that the yottamaster device runs Linux, just like 99% of other
> such devices.

99% of NAS boxes, maybe, but not dumb RAID boxes like the one I believe you’re 
referring to.

(And I doubt even that, with the likes of FreeNAS extending down from the 
enterprise space where consumer volume can affect that sort of thing.)

I have more than speculation to back that guess: the available firmware images 
are far too small to contain a Linux OS image, their manuals don’t talk about 
Linux or GPL that I can see, and there’s no place to download their Linux 
source code per the GPL.

While doing this exploration, I’ve run into multiple problems with their web 
site, which strengthens my suspicion that this box is your culprit.  If they’re 
this slipshod with their marketing material, what does that say about their 
engineering department?


Re: [CentOS] rare but repeating system crash in C7

2021-01-02 Thread Warren Young
On Jan 2, 2021, at 9:55 AM, Fred  wrote:
> 
> Plantronics USB headset/microphone?
> Yottamaster RAID-1 storage (USB3)?
> Behringer USB audio interface?
> Logitech wireless mouse?
> Leopold USB keyboard?

HID devices won’t go to sleep when the computer does, else they couldn’t wake 
it back up.  (Keyboard & mouse, mainly.)

The two audio interfaces may or may not sleep.  Try checking their indicator 
LEDs when the computer goes to sleep: I’d expect them to visibly show that 
they’ve gone to sleep if they do.  If they do, then on wake, they *could* do 
this sort of thing.

I’d go after the RAID enclosure first, particularly if it’s hardware RAID, 
since that means it’s “clever,” thus suspect.  Check that you’ve got the 
current firmware:

https://www.yottamaster.com/?route=common/driver

If it’s one of their JBOD models, requiring you to do some sort of software 
RAID, a bug in the corresponding software RAID component would produce a much 
different report in the kernel log. That would instead point to some 
fundamental USB compatibility problem, if this device is indeed the cause. 
Again, check for firmware updates.


Re: [CentOS] rare but repeating system crash in C7

2021-01-02 Thread Warren Young
On Jan 2, 2021, at 7:44 AM, Fred  wrote:
> 
> I'm further guessing that "xhci_hcd" has something to do with USB

Yup: 
https://en.wikipedia.org/wiki/Extensible_Host_Controller_Interface#Virtualization_support

> If so I don't know what it would be...

My guess: you have USB-attached storage that’s waking up when you wiggle the 
mouse, and it’s crashing the bus, kicking the kernel driver over, so the system 
reboots to protect itself.

If not storage, then something else sufficiently complicated, which wakes up 
when you wake the system.

I’d exclude things like optical drives, unless they have disks in them at the 
time this happens.


Re: [CentOS] Trying to find gcc 5

2020-12-04 Thread Warren Young
On Dec 3, 2020, at 5:26 PM, mark  wrote:
> 
> 4's ancient, move to another distro"

Do you mean GCC 4.8.5 from CentOS 7, or GCC 4.4.7 from CentOS 6, or GCC 4.2.1 
from CentOS 5?

If we’re talking about CentOS 6, then even Red Hat agrees with the Calibre 
folks: it’s now officially past time to get off CentOS 6, as of last week.  
CentOS 5?  That’s been gone 3-4 years now.

If you’re speaking of CentOS 7, then we’re talking about a 5-year-old compiler, 
which I wouldn’t call “ancient,” but I’m not surprised that pre-built 
unofficial binaries aren’t targeting it any more, either.

A key pillar of the 10 year support value proposition is that the providers of 
the toolchains will be building new ancillary packages for you with those 
tools, but that only applies for packages in the distro.  I don’t see how you 
can expect that non-Red Hat organizations would be constrained in the same way. 
They didn’t agree to that deal.

I suggest that you build Calibre yourself, or find someone who has done so 
atop CentOS 7.

Beware: the most recent major release of Calibre also requires Python 3.  They 
finally cut off all Python 2 support.

Alternately, upgrade to CentOS 8, which uses GCC 8.


Re: [CentOS] Replacing SW RAID-1 with SSD RAID-1

2020-11-24 Thread Warren Young
On Nov 24, 2020, at 10:05 AM, Simon Matter  wrote:
> 
> Why is a layered approach
> worse than a fully included solution like ZFS?

Just one reason is that you lose visibility of lower-level elements from the 
top level.

You gave the example of a bad block in a RAID.  What current RHEL type systems 
can’t tell you when that happens is which file is affected.

ZFS not only can tell you that; deleting or replacing the file will also fix 
the array. That’s the bottom-most layer (disk surface) telling the top-most 
layer (userspace) there’s a problem, and userspace fixing it by telling the 
bottom-most layer to check again.

Because ZFS is CoW, this doesn’t force the drive to rewrite that sector: a new 
set of sectors is brought into use, and the old ones are released. The bad 
sector isn’t touched again until the filesystem reassigns it.

Red Hat is attempting to fix all this with Stratis, but it’s looking to take 
years and years for them to get there.  ZFS is ready today.

> The tools differ but some
> complexity always remains.

In my experience, ZFS hides a lot of complexity, and it is exceedingly rare to 
need to take a peek behind the curtains.

(And if you do, there’s the zdb command.)



Re: [CentOS] Replacing SW RAID-1 with SSD RAID-1

2020-11-24 Thread Warren Young
On Nov 24, 2020, at 10:43 AM, cen...@niob.at wrote:
> 
> On 24/11/2020 18:32, John Pierce wrote:
>> zpool create newpool mirror sdb sdc mirror sdd sde mirror sdf sdg mirror
>> sdh sdi spare sdj sdk
>> zfs create -o mountpoint=/var/lib/pgsql-11 newpool/postgres11
> 
> This *might* be a valid answer if zfs was supported on plain CentOS...

Since we’re talking about CentOS, “support” here must mean community support, 
as opposed to commercial support, so:


https://openzfs.github.io/openzfs-docs/Getting%20Started/RHEL%20and%20CentOS.html


Re: [CentOS] ssacli start rebuild?

2020-11-14 Thread Warren Young
On Nov 14, 2020, at 5:56 AM, hw  wrote:
> 
> On Wed, 2020-11-11 at 16:38 -0700, Warren Young wrote:
>> On Nov 11, 2020, at 2:01 PM, hw  wrote:
>>> I have yet to see software RAID that doesn't kill the performance.
>> 
>> When was the last time you tried it?
> 
> I'm currently using it, and the performance sucks.

Be specific.  Give chip part numbers, drivers used, whether this is on-board 
software RAID or something entirely different like LVM or MD RAID, etc.  For 
that matter, I don’t even see that you’ve identified whether this is CentOS 6, 
7 or 8.  (I hope it isn't older!)

> Perhaps it's
> not the software itself or the CPU but the on-board controllers
> or other components being incable handling multiple disks in a
> software raid.  That's something I can't verify.

Sure you can.  Benchmark RAID-0 vs RAID-1 in 2, 4, and 8 disk arrays.

In a 2-disk array, a proper software RAID system should give 2x a single disk’s 
performance for both read and write in RAID-0, but single-disk write 
performance for RAID-1.

Such values should scale reasonably as you add disks: RAID-0 over 8 disks gives 
8x performance, RAID-1 over 8 disks gives 4x write but 8x read, etc.

These are rough numbers, but what you’re looking for are failure cases where 
it’s 1x a single disk for read or write.  That tells you there’s a bottleneck 
or serialization condition, such that you aren’t getting the parallel I/O you 
should be expecting.

>> Why would you expect that a modern 8-core Intel CPU would impede I/O
> 
> It doesn't matter what I expect.

It *does* matter if you know what the hardware’s capable of.

TLS is a much harder problem than XOR checksumming for traditional RAID, yet it 
imposes [approximately zero][1] performance penalty on modern server hardware, 
so if your CPU can fill a 10GE pipe with TLS, then it should have no problem 
dealing with the simpler calculations needed by the ~2 Gbit/sec flat-out max 
data rate of a typical RAID-grade 4 TB spinning HDD.

Even with 8 in parallel in the best case where they’re all reading linearly, 
you’re still within a small multiple of the Ethernet case, so we should still 
expect the software RAID stack not to become CPU-bound.

And realize that HDDs don’t fall into this max data rate case often outside of 
benchmarking.  Once you start throwing ~5 ms seek times into the mix, the CPU’s 
job becomes even easier.

[1]: https://stackoverflow.com/a/548042/142454

> 
>>> And where
>>> do you get cost-efficient cards that can do JBOD?
>> 
>> $69, 8 SATA/SAS ports: https://www.newegg.com/p/0ZK-08UH-0GWZ1
> 
> That says it's for HP.  So will you still get firmware updates once
> the warranty is expired?  Does it exclusively work with HP hardware?
> 
> And are these good?

You asked for “cost-efficient,” which I took to be a euphemism for “cheapest 
thing that could possibly work.”

If you’re willing to spend money, then I fully expect you can find JBOD cards 
you’ll be happy with.

Personally, I get servers with enough SFF-8087 SAS connectors on them to 
address all the disks in the system.  I haven’t bothered with add-on SATA cards 
in years.

I use ZFS, so absolute flat-out benchmark speed isn’t my primary consideration. 
 Data durability and data set features matter to me far more.

>>> What has HP been thinking?
>> 
>> That the hardware vs software RAID argument is over in 2020.
> 
> Do you have a reference for that, like a final statement from HP?

Since I’m not posting from an hpe.com email address, I think it’s pretty 
obvious that that is my opinion, not an HP corporate statement.

I base it on observing the Linux RAID market since the mid-90s.  The massive 
consolidation for hardware RAID is a big part of it.  That’s what happens when 
a market becomes “mature,” which is often the step just prior to “moribund.”

> Did they stop developing RAID controllers, or do they ship their
> servers now without them

Were you under the impression that HP was trying to provide you the best 
possible technology for all possible use cases, rather than make money by 
maximizing the ratio of cash in vs cash out?

Just because they’re serving it up on a plate doesn’t mean you hafta pick up a 
fork.


Re: [CentOS] ssacli start rebuild?

2020-11-11 Thread Warren Young
On Nov 11, 2020, at 7:04 PM, Warren Young  wrote:
> 
> zpool mount -d /dev/disk/by-partlabel

Oops, I’m mixing the zpool and zfs commands.  It’d be “zpool import”.

And you do this just once: afterward, the automatic on-boot import brings the 
drives back in using the names they had before, so when you’ve got some 
low-skill set of remote hands in front of the machine, and you’re looking at a 
failure indication in zpool status, you just say “Swap out the drive in the 
third cage, left side, four slots down.”


Re: [CentOS] ssacli start rebuild?

2020-11-11 Thread Warren Young
On Nov 11, 2020, at 6:37 PM, Valeri Galtsev  wrote:
> 
> how do you map failed software RAID drive to physical port of, say, 
> SAS-attached enclosure.

With ZFS, you set a partition label on the whole-drive partition pool member, 
then mount the pool with something like “zpool mount -d 
/dev/disk/by-partlabel”, which then shows the logical disk names in commands 
like “zpool status” rather than opaque “/dev/sdb3” type things.

It is then up to you to assign sensible drive names like “cage-3-left-4” for 
the 4th drive down on the left side of the third drive cage.  Or, maybe your 
organization uses asset tags, so you could label the disk the same way, 
“sn123456”, which you find by looking at the front of each slot.


Re: [CentOS] ssacli start rebuild?

2020-11-11 Thread Warren Young
On Nov 11, 2020, at 2:01 PM, hw  wrote:
> 
> I have yet to see software RAID that doesn't kill the performance.

When was the last time you tried it?

Why would you expect that a modern 8-core Intel CPU would impede I/O in any 
measureable way as compared to the outdated single-core 32-bit RISC CPU 
typically found on hardware RAID cards?  These are the same CPUs, mind, that 
regularly crunch through TLS 1.3 on line-rate fiber Ethernet links, a much 
tougher task than mediating spinning disk I/O.

> And where
> do you get cost-efficient cards that can do JBOD?

$69, 8 SATA/SAS ports: https://www.newegg.com/p/0ZK-08UH-0GWZ1

Search for “LSI JBOD” for tons more options.  You may have to fiddle with the 
firmware to get it to stop trying to do clever RAID stuff, which lets you do 
smart RAID stuff like ZFS instead.

> What has HP been thinking?

That the hardware vs software RAID argument is over in 2020.



Re: [CentOS] dnsmasq centos 7

2020-10-31 Thread Warren Young
On Oct 31, 2020, at 1:22 PM, Strahil Nikolov via CentOS  
wrote:
> 
> Are you sure you have opened 53/udp ?

Good call, but you left out the “how”:

$ sudo firewall-cmd --add-service dns
$ sudo firewall-cmd --add-service dns --permanent

Without the second command, it affects the runtime firewall only, and without 
the first, it doesn’t take effect until the next reboot.

To the OP: DNS needs both TCP *and* UDP service on port 53.  Your telnet test 
is incomplete, and in fact covers only some of the lesser-used code paths in 
DNS servers (zone transfers, etc.).  Most DNS service needs UDP only, and 
expects that to work; there is no fallback to TCP if UDP fails.
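
For example, dig can exercise each transport separately, assuming it is 
installed (the server and zone names here are hypothetical):

```
$ dig +notcp @ns1.example.com example.com A    # plain UDP query, the common path
$ dig +tcp   @ns1.example.com example.com A    # the same query forced over TCP
```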


Re: [CentOS] Question regarding cent OS 7.8.2003 compatibility with large SAS disks

2020-09-25 Thread Warren Young
On Sep 24, 2020, at 10:25 PM, Amey Abhyankar  wrote:
> 
> I referred = https://wiki.centos.org/About/Product
> But slightly confused with the 'maximum file size' row for ext4 FS.

CentOS 7 defaults to XFS, not to ext4, so that row has no bearing on your use 
case.

Although I wouldn’t recommend overriding this default, if you did, I don’t see 
why you’re worried in the first place. 12 TB < 50 TB, the limit for partitions 
created by the stock version of e2fsprogs included with CentOS.


Re: [CentOS] fdisk boot partition

2020-07-01 Thread Warren Young
On Jun 30, 2020, at 1:25 PM, John Pierce  wrote:
> 
> On Tue, Jun 30, 2020 at 12:12 PM Jerry Geis  wrote:
> 
>> I am trying to use CentOS 8 host to boot an image (OS X) that  I created
>> using dd.
>> 
>> First I tried fdisk -l image_file.img ...
> 
> 
> fdisk has been deprecated for quite a long time, I think parted is the
> preferred command line tool now.

Even to the extent that it works, it’d only support MBR partitioning, and you 
almost certainly want GPT for a macOS boot image.

…and then “bootable” flags go out the window anyway, because EFI doesn’t care 
about that.


Re: [CentOS] how does autofs deal with stuck NFS mounts and suspending to RAM?

2020-05-18 Thread Warren Young
On May 18, 2020, at 5:13 AM, hw  wrote:
> 
> Is there a better alternative for mounting remote file systems over 
> unreliable 
> connections?

I don’t have a good answer for you, because if you’d asked me without all this 
backstory whether NFS or SSHFS is more tolerant of bad connections, I’d have 
told you SSHFS.

NFS comes out of the “Unix lab” world, where all of the computers are 
hard-wired to nearby servers.  It gets really annoyed when packet loss starts 
happening, and since it’s down in the kernel, that can mean the whole box locks 
up until NFS gets happy again.

NFS is that way on purpose: it’s often used to provide critical file service 
(e.g. root-on-NFS) so if file I/O stops happening it *must* block and wait out 
the failure, else all I/O dependent on NFS starts failing.

Some of this affects SSHFS as well.  To some extent, the solution to the 
broader problem is “Dropbox” et al.  That is, a solution that was designed 
around the idea that connectivity might not be constant.

This is also why DVCSes like Git have become popular.


Re: [CentOS] CentOS 8 NIS

2020-04-08 Thread Warren Young
On Apr 8, 2020, at 6:42 PM, Mark LaPierre  wrote:
> 
> Does anyone know where I can get NIS for CentOS 8?

$ dnf provides ypserv
...
ypserv-4.0-6.20170331git5bfba76.el8.x86_64 : The NIS (Network Information 
Service) server
Repo: AppStream
Matched from:
Provide: ypserv = 4.0-6.20170331git5bfba76.el8


Re: [CentOS] (SOLVED) YUM (DNF) Possible Confusion Centos 8

2020-01-30 Thread Warren Young
On Jan 30, 2020, at 9:37 AM, Johnny Hughes  wrote:
> 
> On 1/24/20 8:02 AM, Simon Matter via CentOS wrote:
>> 
>> I've never really understood how hiding those solutions behind a wall is a
>> good thing in/for the OpenSource world. Looks like I'm not alone :-)
> 
> A good thing is the ability for someone to be able to pay people actual
> money so that CentOS can actually exist.  There is no CentOS (or
> Scientfic Linux or Oracle Linux) without RHEL.  There is no RHEL if Red
> Hat can not make money.
> 
> If one is not smart enough to support their own install .. the answer is
> .. buy RHEL.

While I think I agree with you in principle, this case doesn’t really fall into 
the category you’re outlining, for several reasons.

1. dnf is complaining that "nothing provides module(perl:5.26)” when Perl is 
evidently installed.  How did it get there if nothing provides it?  I don’t see 
how this is anything but a packaging bug.  If the financial stability of the 
Red Hat subsidiary of IBM, Inc. depends on people paying them for advice on how 
to get around broad-based bugs Red Hat created — or at least allowed to pass QA 
— that’s a perverse incentive that will ultimately damage the subsidiary.

2. I would think that the “support” you speak of, the lifeblood of the Red Hat 
subsidiary, is principally one-to-one, where the recipient is getting value 
they could not easily receive any other way.  That should exclude problems so 
common that they end up in the knowledge base.  That’s one-to-many, which means 
the per-recipient value of the information is amortized.

The bottom line is that I think the community here is acting as an auxiliary QA 
arm for Red Hat, benefiting their paying customers.  Our payment?  I’ll take 
some more CentOS, thanks. :)


Re: [CentOS] YUM (DNF) Possible Confusion Centos 8

2020-01-22 Thread Warren Young
On Jan 22, 2020, at 11:04 AM, david  wrote:
> 
> After the reboot, issue as root:
>  yum -y install perl chrony perl-libwww-perl perl-App-cpanminus gcc

Thank you for boiling that down.  We’ve been seeing the symptom here, too, but 
the trigger was somewhere inside a thousand-line shell script.

We do similar things to what the above test does, so we have all of those 
packages installed in our default install except for perl-libwww-perl.

Note that the symptom doesn’t happen on “yum upgrade”, only on certain other 
operations.


Re: [CentOS] [CentOS-announce] Release for CentOS Linux 8 (1911)

2020-01-16 Thread Warren Young
On Jan 16, 2020, at 12:06 PM, Lamar Owen  wrote:
> 
> ...or maybe even 8.1.1911 (which is part of the name of the DVD ISO file), 
> but officially it's CentOS 8 (1911).

$ lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 8.1.1911 (Core)
Release:        8.1.1911
Codename:       Core



Re: [CentOS] systemctl behaves like it is being piped to less in centos 8?

2019-12-13 Thread Warren Young
On Dec 13, 2019, at 9:39 AM, Warren Young  wrote:
> 
> You can fix the symptom with the software as delivered:

Also, there’s this bit in the default .bashrc on EL8:

# Uncomment the following line if you don't like systemctl's auto-paging 
feature:
# export SYSTEMD_PAGER=


Re: [CentOS] systemctl behaves like it is being piped to less in centos 8?

2019-12-13 Thread Warren Young
On Dec 13, 2019, at 9:27 AM, Stephen John Smoogen  wrote:
> 
> It seems this became the default at some point.
> 
> systemctl -l --no-pager
> 
> is the way to get it without that. The whole does it use a pager, does
> it ellipse, etc has been a long fight where various people complain
> enough to get it one way or another. It usually goes with the group
> that complains the nicest versus the ones who complain the worst :).

You can fix the symptom with the software as delivered:

 $ export SYSTEMD_LESS='-FR'


[CentOS] DebugInfo repo broken on purpose

2019-12-12 Thread Warren Young
This line in /etc/yum.repos.d/CentOS-Debuginfo.repo 

baseurl=http://debuginfo.centos.org/$releasever/$basearch/

…causes commands like “yum search --enablerepo=* foo” to fail with the obscure 
error

Error: Failed to synchronize cache for repo 'base-debuginfo'

Apparently this is because the debug info RPMs aren’t hosted there any more, 
per the page at the top of the site.  However, when I edit that file to point 
to the Facebook mirror linked from the top of the debuginfo.centos.org site, I 
get the same error, even after assorted remediations: dnf makecache, dnf 
update, etc.

Any ideas on how to fix it, hopefully in a way that lands in a CentOS 8.next, 
so it doesn’t have to be fixed manually?

And in the meantime, is there a short syntax for “search all repos other than 
the debuginfo ones”?  I could list every repo in a comma-separated list, but 
ugh.


Re: [CentOS] CentOS 7 as a guest VM

2019-12-03 Thread Warren Young
On Dec 3, 2019, at 1:11 PM, Jerry Geis  wrote:
> 
> Are there other options that this single byte CR over socket is not getting
> seen by my application.

Sure, but without the code, you’re reducing me to blind speculation.  I’m 
offering free debugging services here.

You also haven’t answered the question of whether the VM qualifier in the 
subject line actually affects the symptom.

If the problem only occurs under VMware, are we talking about ESXi or the 
Workstation/Fusion flavor?  (Type 1 vs Type 2 hypervisor?)

And if it also occurs when you run the receiver on C7 on bare metal, then 
you’ve got a generic sockets problem, not a VM problem.


Re: [CentOS] CentOS 7 as a guest VM

2019-12-03 Thread Warren Young
On Dec 3, 2019, at 12:20 PM, Jerry Geis  wrote:
> 
> int flag = 1;
> if(setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) < 0)

So first, I said “don’t do that,” and then you went and did that. :)

But second, I’m guessing you did this on the receiving side, where it has no 
effect under any conditions.  The Nagle algorithm is about delaying the first 
packet on the sender’s side in anticipation of shortly receiving more data that 
can go in the same packet.

Additionally, Nagle’s algorithm only works when there’s unacknowledged data, 
not on the first packet out on a new conn.

Now that we’ve dispensed with Nagle, let’s get down to the actual issue.


Re: [CentOS] CentOS 7 as a guest VM

2019-12-03 Thread Warren Young
On Dec 3, 2019, at 9:18 AM, David G. Miller  wrote:
> 
> On 12/3/19 8:46 AM, Jerry Geis wrote:
>> I am experiencing an issue that my process does not wake out of a select()
>> call when a single character is received in an input file descriptor when
>> running as a VMware guest.

You imply but don’t say that this doesn’t happen when the app is running on 
bare metal.  Is that the case?

>> Anyone ever experienced this ?

No, and I’ve been writing sockets-type code since the days when it wasn’t clear 
whether BSD sockets would win out over AT&T TLI/XTI/STREAMS.

>> once in a while that character is "delayed" getting to my input buffer.

That’s probably the Nagle algorithm:

https://en.wikipedia.org/wiki/Nagle%27s_algorithm

It’s intentional.  You almost never want to disable it.

> Perl apps can either be line buffered or character buffered

I think that’s controlled by the kernel’s terminal driver code, not by Perl.  
Perl is just giving you an alternate configuration to the underlying termios() 
or whatever call controls this.

Anyway, you have to go out of your way to get line-buffered sockets on Linux.  
One way is to bind a socket to a pty, as ssh does, bringing the terminal I/O 
code into it again, but I doubt Jerry’s doing that.

I’d bet Jerry's app is just making assumptions about the way TCP works that 
just aren’t true.

Jerry, please show your sockets setup code and the skeletonized read loop.  I’m 
talking socket(), bind(), setsockopt(), etc.  I want to see every sockets call. 
 Your app logic you’re free to keep hidden away.
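
In the meantime, one low-effort way to watch this from the outside, assuming 
strace is available and “receiver” stands in for your actual process name, is 
to trace the receiver’s socket-related calls with timestamps and see exactly 
when the byte arrives:

```
$ sudo strace -f -tt -e trace=select,read,recvfrom -p $(pidof receiver)
```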


Re: [CentOS] printer only prints one page, if anything

2019-11-13 Thread Warren Young
On Nov 13, 2019, at 6:42 PM, H  wrote:
> 
> Anyone else has the same problem? How do I research this? 

I don’t have any specific guidance for you, but I can tell you what’s happening.

The PDF document is being rasterized before being printed, and then some stage 
along the line is dithering the rasterized form to spread out the errors 
(aliasing) that occur when you rasterize vector art — which includes virtually 
all printed text!

The fix therefore is to either:

1. If printing to B&W, rasterize to the printer’s native resolution in 1-bit 
B&W, not to grayscale.  If you rasterize to the wrong resolution, it has to be 
resized, which usually creates grayscale, and if you rasterize to the right 
resolution but it's antialiased, it still won’t print sharply.

2. Send it through to the printer to be rasterized there instead.  The most 
durable form is Postscript, but if you don’t have a Postscript-capable printer, 
or if it’s just too slow in that mode, pick the right printer language.

The trick is finding out which of the above failure modes is happening in your 
case.  There’s probably 2-4 stages in the CentOS printing process where data is 
handed off from one process to another before it hits the actual printer, and 
they can all be doing these sorts of mistranslation.

That’s why Postscript was such a great idea: the producing application wrote 
out the Postscript rendition of the doc, and it was just sent as unprocessed 
data until it hit the printer, no reinterpretation along the way.


Re: [CentOS] easy way to stop old ssl's

2019-10-15 Thread Warren Young
On Oct 15, 2019, at 12:26 PM, Markus Falb  wrote:
> 
> I guess that I would get A rating from ssllabs.

None of my CentOS systems have Internet-facing HTTP, much less HTTPS, so I 
volunteer you to test it and report back. :)

> I read you saying that FIPS 140-2 is not good enough. Apart from age, why?

It requires that a conforming application speak only protocols that NIST has 
approved, and even then, you can only get FIPS 140-2 certification by 
submitting the software to a third-party validation service, which is very 
expensive and very time consuming.  (I’m seeing numbers like 9 months and US 
$100,000.)  After going through all of that, you aren’t allowed to make *any* 
changes to the covered parts of the software without going through another 
validation process.  

Let’s say you’re a software vendor and someone discovers a vulnerability not 
caught by the FIPS certification process.  You’re a good citizen, so you fix it 
quickly and release that fix promptly.  Then you must re-file for a new 
certification (more $$$) and then wait for the independent testing lab and NIST 
to take months to re-certify your software.  Meanwhile, those insisting on FIPS 
mode have to use the known-vulnerable version — which probably has a public CVE 
filed against it, thus cluing potential attackers into the problem — because 
the new one isn’t FIPS-certified yet.

For another example, elliptic curve crypto is currently getting very popular 
for various reasons, but not all common curve parameters are NIST-certifiable 
under FIPS 140-2.  If you must communicate with an ECC service using 
non-certified params, you either cannot run your app in FIPS mode or you have 
to separately get the other end to become FIPS-certified, which means 
abandoning those params, even though they might be better than what you can get 
under FIPS.

Again, I invite you to do a web search for people running into trouble trying 
to get FIPS-mode apps to communicate with non-FIPS-mode apps.  It’s not hard to 
find people running into problems here.

Here’s some I found:


https://blogs.technet.microsoft.com/secguide/2014/04/07/why-were-not-recommending-fips-mode-anymore/
https://blogs.oracle.com/security/fips-the-crypto-catch-22
https://bugs.chromium.org/p/chromium/issues/detail?id=194867

If giants like Microsoft, Google, and Oracle are having trouble getting and 
maintaining their FIPS certifications, what hope do us little guys have?

If you don’t like responses from big corporations, here’s some clueful 
developers discussing the problems:

https://news.ycombinator.com/item?id=7635321

I don’t have a problem with independent testing and such per se, but when it’s 
a regulatory gatekeeper to what software *can* be written and used, it’s a 
problem when it comes to security.  If we’ve learned anything about security in 
these past decades, it’s that fast reaction to vulnerabilities is critical.


Re: [CentOS] easy way to stop old ssl's

2019-10-12 Thread Warren Young
On Oct 12, 2019, at 4:06 AM, Markus Falb  wrote:
> 
> On 11.10.19 22:40, Warren Young wrote:
>> Just ship a new HTTPS configuration to each server.
> 
> Instead of configuring every application separataly it would be nice if
> "accepted levels of security" could be set system wide.

…which implies that there is some authority that defines “accepted level” the 
way you’d do it if you could be bothered to think through all of the use cases, 
combinations, and implications.

Who is that central organization?  Are you sure their notions match your own?

> With 8 it seems there is such a thing
> 
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/using-the-system-wide-cryptographic-policies_security-hardening
> 
> Although I believe that FIPS mode is also available in 7

That’s FIPS 140-2, a standard from 2001, which is three TLS standards ago.

FIPS 140-3 just barely became effective a few weeks ago, which means it won’t 
be considered for inclusion in RHEL until 9, which I don’t expect to appear 
until 3-4 years from now, by which time FIPS 140-2 will be around 21 years old.

So, we not only have a situation where adopting FIPS 140-2 requires that you 
use badly outdated security technologies, it also means you might not be able 
to communicate with those that do support modern standards, if they’ve dropped 
compatibility with 2001 era tech sometime in the last 18 years.

If we can be well-guided by past events, there’s a better than 50/50 chance 
that any given person on this list won’t even be in IT any more when FIPS 140-4 
comes out.


Re: [CentOS] easy way to stop old ssl's

2019-10-11 Thread Warren Young
On Oct 11, 2019, at 2:52 PM, isdtor  wrote:
> 
>> Yes, breaking changes.  Doing this *will* cut off support for older 
>> browsers.  On purpose.
> 
> Old browsers aren't really the problem. Even ff 45 (?) from CentOS5 will 
> happily access a TLSv1.2-only server.

IE 10 and older won’t, though: https://caniuse.com/#feat=tls1-2

> The problem is users that have old versions of software installed with no 
> TLSv1.2 support. SVN, python 2.7 scripts, etc.

Also true.  There’s a lot of stuff still linked to OpenSSL 1.0.0 and 0.9.8.


Re: [CentOS] easy way to stop old ssl's

2019-10-11 Thread Warren Young
On Oct 11, 2019, at 12:12 PM, Jerry Geis  wrote:
> 
> is there a script that is available that can be ran to bring
> a box up to current "accepted" levels ?

I don’t know why you’d use a script for this at all.  Just ship a new HTTPS 
configuration to each server.  Apache loads all *.conf files in its 
configuration directory, so you might be able to just add another file to the 
existing config set.  If not, then replace the existing config file instead.

If you’re asking for a pre-crafted config, there are bunches of them floating 
around:

   https://httpd.apache.org/docs/2.4/ssl/ssl_howto.html
   
https://www.sslshopper.com/article-how-to-disable-weak-ciphers-and-ssl-2.0-in-apache.html
   https://raymii.org/s/tutorials/Strong_SSL_Security_On_Apache2.html

etc.
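
For reference, the heart of such a config is only a few directives.  This is a 
sketch, not a vetted policy; take the actual protocol and cipher lists from the 
guides above rather than from me:

```
SSLProtocol         all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite      HIGH:!aNULL:!MD5:!3DES
SSLHonorCipherOrder on
```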

I’m also surprised by the premise implied by the question, which is that a 
stable OS vendor would switch HTTPS configurations for you on a point upgrade.  
That’s pretty much the anti-Red Hat position.  If you want local breaking 
changes like this, you develop and test it locally, then deploy the change 
locally.

Yes, breaking changes.  Doing this *will* cut off support for older browsers.  
On purpose.


Re: [CentOS] [PROVENANCE INTERNET] Re: Missing packages in centos8 mirrors or do I miss something?

2019-09-26 Thread Warren Young
On Sep 26, 2019, at 10:13 AM, LAHAYE Olivier  wrote:
> 
> That would have helped me a lot if I had known that docbook-utils-pdf would be 
> dropped. I would have searched for an alternative long ago.

It looks like that package does a combination of xsltproc + FOP.  On EL7, you 
can get both with:

$ sudo yum install fop libxslt

I don’t have an 8 box here to check, but I fully expect that xsltproc is in 8, 
being a necessary component of GNOME.  It isn’t on your list of removed 
packages.

FOP is on that list, but it isn’t really necessary to have an RPM for it, since 
it’s a Java package: just get the distro, unpack it somewhere, and put its 
“fop” wrapper script in the PATH:

https://xmlgraphics.apache.org/fop/

Alternately, you could rebuild Fedora’s package on CentOS 8:

   https://apps.fedoraproject.org/packages/fop

I expect this difference means this package will move to EPEL eventually, once 
someone gets around to it.  EPEL 8 looks rather thin compared to EPEL 7 at the 
moment.

You might find my documentation on multi-step DocBook processing helpful:

https://tangentsoft.com/mysqlpp/dir?name=doc/userman=trunk

The tutorial on that page, plus the xsltproc and fo2pdf calls in the Makefile 
and the fo2pdf script linked from it, should amount to a complete replacement 
for your existing db2pdf method.
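
Skeletonized, that two-step chain is just (stylesheet and file names 
hypothetical):

```
$ xsltproc -o userman.fo fo-style.xsl userman.xml   # DocBook XML -> XSL-FO
$ fop -fo userman.fo -pdf userman.pdf               # XSL-FO -> PDF
```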

As to why Red Hat abandoned this particular package, I’d guess that this sort 
of multi-step processing chain is more common, being more powerful.  For one 
thing, it gets you to HTML as well as to PDF.  For another, use of XML 
technologies orthogonal to DocBook, like XIncludes, often requires going the 
step-by-step processing route.

Also, RHEL8 is purposely paring down much of what goes into core, pushing the 
rest off into Software Collections, EPEL, etc.  That was a major point of the 
original announcement back in December (?) of last year.


Re: [CentOS] CentOS 5 file system read only issue

2019-08-21 Thread Warren Young
On Aug 21, 2019, at 7:35 AM, Xinhuan Zheng  wrote:
> 
> my $s = IO::Select->new( $fh );
> if ( $s->can_write( 10 ) ) {

That’s not designed to do what you hope.  select(2) is a system call intended 
for use on network socket handles, not file handles.  Since socket handles and 
file handles are compatible on a Unix-type system (including CentOS), the call 
doesn’t fail, but it *cannot* report the information you’re hoping to get.

I would first try calling the -w operator:

   print_to_file() if -w $fh;


Re: [CentOS] Giving full administrator privileges through sudo on production systems

2019-08-16 Thread Warren Young
On Aug 15, 2019, at 11:04 PM, Bagas Sanjaya  wrote:
> 
> Based on above cases, is it OK to give group of random users full 
> administrator privileges using sudo, by adding them to sudoers with ALL 
> privileges? Should sudoers call customer service number instead of sysadmin 
> when something breaks?

sudo is a tool for expressing and enforcing a site’s policies regarding 
superuser privilege.

If your sudo configuration expresses and enforces those policies the way you 
want it to, then the configuration is correct.  If it does not, then fix it.

sudo doesn’t tell you what your policies should be.

We can suggest policies to you, but not based only on the information you’ve 
just given us.  To properly advise you, we’d need to know your threat models, 
the risk assessments, and more.  In short, we’d have to become your system 
administrators.


Re: [CentOS] another bizarre thing...

2019-08-06 Thread Warren Young
On Aug 6, 2019, at 8:48 PM, Fred Smith  wrote:
> 
> Setting up as you described earlier, is there a way to allow only
> a single program to drop core?

Of course.

The * in the limits.d file is a “domain” value you can adjust to suit:


https://www.thegeekdiary.com/understanding-etc-security-limits-conf-file-to-set-ulimit/

You’d have to read the systemd docs to figure out the defaults for LimitCORE, 
but I suspect you don’t get cores until you set this on a per-service basis.
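
If you go the per-service route, a systemd drop-in looks something like this 
(the unit and file names are hypothetical; LimitCORE is the systemd spelling):

```
# /etc/systemd/system/myservice.service.d/core.conf
[Service]
LimitCORE=infinity
```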

You can also adjust the sysctl pattern path to put cores somewhere secure.  
That’s the normal use of absolute paths: put the cores into a dropbox directory 
that only root can read but anyone can write to.

Also, I should point out that my first step, removing ABRT, is a heavy-handed 
method. Maybe what you *actually* want to do is learn to cooperate with ABRT 
rather than rip it out entirely.


Re: [CentOS] Win 10 and C 6 CUPS

2019-08-06 Thread Warren Young
On Aug 6, 2019, at 4:41 PM, mark  wrote:
> 
> Is there any way, other than installing CUPS on windows, to get the damn Win 
> laptop to print to my C 6 box, which has CUPS running and a USB laserjet?

Share it over Samba:

   https://wiki.samba.org/index.php/Setting_up_Samba_as_a_Print_Server

You still need to find a driver for the printer on the Windows side, which is 
not a given, but you might be able to make do with a generic driver such as 
PCL5 or PostScript.
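
On the Samba side, the share definition is small.  A minimal sketch; the spool 
path and guest policy vary by site:

```
[printers]
    comment   = CUPS printers via Samba
    path      = /var/spool/samba
    printable = yes
    guest ok  = yes
```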


Re: [CentOS] Question on server speed

2019-08-06 Thread Warren Young
On Aug 6, 2019, at 2:43 PM, Peter Larsen  wrote:
> 
> you may also find that the
> "max speed" in the specification is far from what you get out of your
> hardware.

“May?” :)

That’s about like saying Honda Civics can go 0-60 in 2.8 seconds…when dropped 
off a cliff nose down so you get the benefit of the car’s well-designed 
aerodynamics.

https://www.quora.com/How-fast-will-a-free-fall-take-me-from-0-60-mph

> use eSATA if you need it externally

I’ve had a lot of problems with eSATA, mainly in the delicacy of the cabling 
and connectors.  It’s almost as bad a standard as USB.

Proper Thunderbolt is far more robust and reliable, but you do pay for that.

…which is why USB sucks.  We expect to pay $20 for an 8-port hub, then gripe 
when half the ports die when the cat walks by due to the static EM field it 
emits.

I once went looking for reliable $100 USB hubs, but I couldn’t find anything 
outside the industrial sphere.  There’s the tragedy of USB: I can’t pay more to 
get better quality even if I wanted to, because the market punishes those that 
try.  They’d get 1-star reviews on Amazon because they’re too expensive!


Re: [CentOS] another bizarre thing...

2019-08-06 Thread Warren Young
On Aug 6, 2019, at 7:59 AM, Fred Smith  wrote:
> 
> On Tue, Aug 06, 2019 at 05:27:54AM -0600, Warren Young wrote:
>> On Aug 5, 2019, at 6:57 PM, Fred Smith  wrote:
>>> 
>>> no core file (yes, ulimit is configured)
> 
> yeah, I meant "ulimit -c unlimited" is in effect.

That only affects the shell it’s set for, which isn’t generally important for a 
service, since we no longer start services via shell scripts in the systemd 
world.

> I had no idea systemd had made such a drastic change.

This isn’t a systemd change, it’s a *system* change.  The only reason systemd 
is involved is that it also has its own defaults, just as your shell does, 
overridden by the ulimit command.  Steps 1-3 remove the system limits, then 4 & 
5 remove the systemd limits under that, which can affect your service, if it’s 
being started via systemd.

> or is it that
> someone at RH decided to make it (nearly) impossible to do? I fail
> to see how it is beneficial to anyone to make it so hard to get
> core dump files.

Core dumps are a security risk.  They’re memory images of running processes.  
If you configure your server like I give in my recipe, every process that drops 
core will create a world-readable file in /tmp showing that process’s memory 
state, which means you can recover everything it was doing at the time of the 
crash.

So, if you can find a way to make, say, PAM or sshd drop core, you’ll get live 
login details in debuggable form, available to anyone who can log into that box.

You definitely want core dumps off by default.

Enabling core dumps by default is about as sensible as enabling rsh by 
default.

https://en.wikipedia.org/wiki/Remote_Shell

We stopped doing that on production servers about 20-30 years ago, for more or 
less the same reason.


Re: [CentOS] Question on server speed

2019-08-06 Thread Warren Young
On Aug 6, 2019, at 7:04 AM, Stephen John Smoogen  wrote:
> 
> On Mon, 5 Aug 2019 at 19:49, Warren Young  wrote:
> 
>> On Aug 5, 2019, at 11:25 AM, Stephen John Smoogen 
>> wrote:
>>> 
>>> On Mon, 5 Aug 2019 at 13:17, Jerry Geis  wrote:
>>> 
>>>> an external SSD disk USB3 connected and the machine "freezes"... Why is
>>>> that?
>>>> 
>>> You may have a motherboard which is routing a lot through a single USB
>>> controller.
>> 
>> Ridiculous if true.  Modern OSes solved the blocking I/O problem decades
>> ago.
>> 
> The OS can only do as much as the hardware allows.

Back when we had serial mice, the most common data rate was 1200 bps.  That’s 
0.0012 Mbit/sec.  If your 480 Mbit/sec USB-2 or 5/10/20 Gbit/sec USB-3 bus is 
so jammed up that it can’t trickle through that much data per second from the 
mouse while an SSD on the same bus is blocked on I/O, it’s dreck hardware.


Re: [CentOS] another bizarre thing...

2019-08-06 Thread Warren Young
On Aug 6, 2019, at 5:35 AM, Pete Biggs  wrote:
> 
> On Tue, 2019-08-06 at 05:27 -0600, Warren Young wrote:
>> On Aug 5, 2019, at 6:57 PM, Fred Smith  wrote:
>>> no core file (yes, ulimit is configured)
>> 
>> That’s nowhere near sufficient.  To restore classic core file dumps
>> on CentOS 7, you must:
>> 
> 
> I was under the impression that a SIGKILL doesn't trigger a core dump
> anyway. It just kills the process.

True; you need SIGABRT to force a core to drop.

I posted that because if all he did was set the shell’s ulimit value, the lack 
of core files proves nothing, because there’s half a dozen other things that 
could be preventing them from dropping.
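The signal distinction is easy to see from a shell, since death by signal is reported as exit status 128 plus the signal number (6 for SIGABRT, 9 for SIGKILL):

```shell
# SIGABRT's default action is "terminate with core dump"; SIGKILL cannot
# be caught and never dumps core.  Either way, the parent sees 128 + N:
sh -c 'kill -ABRT $$'; echo "ABRT exit: $?"   # ABRT exit: 134
sh -c 'kill -KILL $$'; echo "KILL exit: $?"   # KILL exit: 137
```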


Re: [CentOS] another bizarre thing...

2019-08-06 Thread Warren Young
On Aug 5, 2019, at 6:57 PM, Fred Smith  wrote:
> 
> no core file (yes, ulimit is configured)

That’s nowhere near sufficient.  To restore classic core file dumps on CentOS 
7, you must:

1. Remove Red Hat’s ABRT system, which wants to catch all of this and handle it 
directly.  Say something like “sudo yum remove abrt*”


2. Override the default sysctl telling where core dumps land by writing this 
file, /etc/sysctl.d/10-core.conf:

kernel.core_pattern = /tmp/core-%e-%p
kernel.core_uses_pid = 1
fs.suid_dumpable = 2

Then apply those settings with “sudo sysctl --system”.

I don’t remember what the default is, which this overrides, but I definitely 
didn’t want it.

You can choose any pattern you like, just remember what permissions the service 
runs under, because that’s the permission needed by the process that actually 
dumps the core to make the file hit the disk.  That’s why I chose /tmp in this 
example: anyone can write there.
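To sanity-check what that pattern will produce, you can expand the two tokens by hand (the executable name and PID here are hypothetical):

```shell
# %e expands to the executable name, %p to the PID.  For a crash of
# "myapp" running as PID 1234, the pattern above yields:
echo '/tmp/core-%e-%p' | sed -e 's/%e/myapp/' -e 's/%p/1234/'
# -> /tmp/core-myapp-1234
```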


3. Raise the limits by writing the following to 
/etc/security/limits.d/10-core.conf:

* hard core unlimited
* soft core unlimited

If this is what you meant by “ulimit,” then great, but I suspect you actually 
meant “ulimit -c unlimited”; until you do the above, that command can have no 
effect.  You also have to log out and back in to make the new limits take 
effect.

Once the above is done, “ulimit -c unlimited” can take effect, but it’s of no 
value at all in conjunction with systemd services, for example, since those 
don’t run under a standard shell, so your .bash_profile and such aren’t even 
exec’d.
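The soft/hard distinction matters here: the hard line in limits.d sets a ceiling, and the soft value is what processes actually start with. A quick interactive sketch:

```shell
# Lowering the soft core-file limit never requires privileges:
ulimit -S -c 0    # disable core dumps for this shell and its children
ulimit -S -c      # prints: 0
# Raising it back up works only as far as the hard limit allows:
ulimit -H -c      # shows the ceiling set in limits.d (or "unlimited")
```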


4. If your program is launched via systemd, then you must edit 
/etc/systemd/system.conf and set

DefaultLimitCORE=infinity

then say “sudo systemctl daemon-reexec”

Case matters; “Core” won’t work.  Ask me how I know. :)


5. If you have a systemd unit file for your service, you have to set a related 
value in there as well:

LimitCORE=infinity

You need both because #4 sets the system-wide cap, while this sets the 
per-service value, which can go no higher than the system cap.
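For example, a minimal unit file might look like this (the service name and binary path are made up for illustration):

```ini
# /etc/systemd/system/mydaemon.service (hypothetical)
[Unit]
Description=Example daemon with core dumps enabled

[Service]
ExecStart=/usr/local/bin/mydaemon
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
```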


6. Restart the service to apply the above two changes.


Yes, it really is that difficult to enable classic core dumps on CentOS 7.  
You’re welcome. :)


Re: [CentOS] Question on server speed

2019-08-05 Thread Warren Young
On Aug 5, 2019, at 11:25 AM, Stephen John Smoogen  wrote:
> 
> On Mon, 5 Aug 2019 at 13:17, Jerry Geis  wrote:
> 
>> Why is it that "all" I am really doing at the moment is copying things to
>> an external SSD disk USB3 connected and the machine "freezes"... Why is
>> that?
>> 
> You may have a motherboard which is routing a lot through a single USB
> controller. When that happens your graphical workstation will slow down
> because your keyboard and mouse events and some other polling has to
> complete before it can do the next thing.

Ridiculous if true.  Modern OSes solved the blocking I/O problem decades ago.

Consider: you may have a gigabit Ethernet connection to the Internet, and it is 
probably throttled to a small fraction of that speed by your ISP, yet you can 
be pumping hundreds of mega*bytes* per second to your SSD while your browser is 
blocked waiting for the remote server to respond.  Further, while one site is 
being slow to respond, another background Internet task can use the idled 
Internet connection.

This is a symptom of a real problem, and it may be well worth chasing it to the 
ground.

Jerry: Try another I/O channel for the same copy.  For example, what happens if 
you rsync the same file set to a remote machine, with the USB SSD connected to 
*that* machine?  Or, if you have a Thunderbolt or FireWire option, try that 
instead.  It might even be worth dropping a PCIe card into the machine for an 
alternative I/O path just to help diagnose this.  If nothing else, it might 
solve the problem.

USB is a terrible standard, emblematic of everything wrong with the PC world.  
We were sold USB-C as the grand unification of Thunderbolt and classic USB, but 
what we actually got are 6+ different and partially incompatible flavors of 
USB-C!


https://people.kernel.org/bleung/how-many-kinds-of-usb-c-to-usb-c-cables-are-there
https://news.ycombinator.com/item?id=20444326


Re: [CentOS] how to increase DNS reliability?

2019-07-26 Thread Warren Young
On Jul 25, 2019, at 5:42 PM, Nataraj  wrote:
> 
> On 7/25/19 4:31 PM, Nataraj wrote:
>> It doesn't really help those clients I can not run name servers on,
>> though.
> 
> Another alternative is to look at the multicast dns (mdns) protocol.

That’s for allowing a device to self-advertise its own name, along with other 
things, like available services.  If you have such devices, then configuring 
the other machines on the network to pay attention to such advertisements 
allows them to see the new names and services when they appear.

…And much more importantly, when they *disappear*, since many 
ZeroConf/Bonjour/Avahi/mDNS speaking devices are mobile and aren’t always 
available.

This protocol is one common way for network printers to advertise their 
services, for example.  (The other common way is SMB/CIFS.)

> I'm pretty sure it's inplemented in avahi daemon

Yes, that’s an implementation of mDNS for POSIX type systems.  

> If your client supports
> it then I would think that all you have to do is enable it.

I’m not sure how this is relevant here.  For mDNS to be the solution to the 
OP’s problems, he’d have to also have mDNS multicasts going out advertising 
services, so the Avahi daemon would have something to offer when a compatible 
program comes along looking for services to connect to.

I suppose you could use mDNS in datacenter type environments, but it’s a long 
way away from the protocol’s original intent.

You could imagine a load balancer that paid attention to mDNS advertisements to 
decide who’s available at the moment.  But I don’t know of any such 
implementation.


Re: [CentOS] using RedHat binary packages?

2019-07-03 Thread Warren Young
On Jul 3, 2019, at 12:43 AM, Harald Dunkel  wrote:
> 
> Are RedHat's binary RPMs "poisoned" somehow, making it impossible for
> CentOS to redistribute RedHat's *binary* packages without going to jail?

RHEL binaries are only available to those with a RHEL subscription.  I don’t 
see anything in skimming through the RHEL license agreement that says that a 
RHEL subscription licensee couldn’t make a copy of those binaries for others, 
but since the act of producing those binaries is a work product, I don’t 
believe Red Hat needs license terms to bring a lawsuit aiming to prevent that: 
simple copyright law should suffice in pretty much every country that matters.

Since the core CentOS employees are now working for Red Hat, it may be possible 
for them to negotiate a separate agreement to avoid this, but what would be Red 
Hat’s incentive to do that?  Faster access to binaries is a selling point of a 
RHEL subscription.

So, if you want the binaries now, you know how to get them.

Even if you sweep aside the copyright and licensing issues somehow, the CentOS 
project *should* be able to bootstrap itself.  It’s not really free software if 
you can’t build it from source, and that requires work to produce.

The fact that it takes months of work by several talented people to produce a 
working and repeatable CentOS build system should tell you that its results are 
copyright protectable IP.  The fact that they’re willing to do this without 
charge to us should never be taken for granted.


Re: [CentOS] Was, Re: raid 5 install, is ZFS

2019-07-02 Thread Warren Young
On Jul 1, 2019, at 9:44 AM, mark  wrote:
> 
> it was on Ubuntu, but that shouldn't make a difference, I would think

Indeed not.  It’s been years since the OS you were using implied a large set of 
OS-specific ZFS features.

There are still differences among the implementations, but the number of those 
is getting smaller as the community converges on ZoL as the common base.

Over time, the biggest difference among ZFS implementations will be time-based: 
a ZFS pool created in 2016 will have fewer feature flags than one created in 
2019, so the 2019 pool won’t import on older OSes.

> I pulled one drive, to simulate a drive failure, and it
> rebuilt with the hot spare. Then I pushed the drive I'd pulled back in...
> and it does not look like I've got a hot spare. zpool status shows
> config:

I think you’re expecting more than ZFS tries to deliver here.  Although it’s 
filesystem + RAID + volume manager, it doesn’t also include storage device 
management features.

If you need this kind of thing to just happen automagically, you probably want 
to configure zed:

https://zfsonlinux.org/manpages/0.8.0/man8/zed.8.html

But, if you can spare human cycles to deal with it, you don’t need zed.
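If you do go the zed route, the relevant knobs live in /etc/zfs/zed.rc. A sketch of the sort of excerpt you’d want (variable names as shipped in ZoL-era zed.rc; check your version’s file):

```shell
# /etc/zfs/zed.rc (excerpt, illustrative)
ZED_EMAIL_ADDR="root"            # where fault notifications go
ZED_SPARE_ON_IO_ERRORS=1         # kick in a hot spare after I/O errors
ZED_SPARE_ON_CHECKSUM_ERRORS=10  # ...or after this many checksum errors
```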

What’s happened here is that you didn’t tell ZFS that the disk is no longer 
part of the pool, so that when it came back, ZFS says, “Hey, I recognize that 
disk!  It belonged to me once.  It must be mine again.”  But then it goes and 
tries to fit it into the pool and finds that there are no gaps to stick it into.

So, one option is to remove that replaced disk from the pool, then reinsert it 
as the new hot spare:

$ sudo zpool remove export1 sdb
$ sudo zpool add export1 spare sdb

The first command removes the ZFS header info from the disk, and the second 
puts it back on, marking it as a spare.

Alternately, you can relieve your prior hot spare (sdl) from its new duty — 
“new sdb” — putting sdb back in its prior place:

$ sudo zpool replace export1 sdl sdb

That does a full resilver of the replacement disk, a cost you already paid for 
with the hot spare failover, but it does have the advantage of keeping the 
disks in alphabetical order by /dev name, as you’d probably expect.

But, rather than get exercised about whether putting sdl between sda and sdc 
makes sense, I’d strongly encourage you to get away from raw /dev/sd? names.  
The fastest path in your setup to logical device names is:

$ sudo zpool export export1
$ sudo zpool import -d /dev/disk/by-id export1

All of the raw /dev/sd? names will change to /dev/disk/by-id/* names, which 
I find to be the most convenient form for determining which disk is which when 
swapping out failed disks.  It doesn’t take a very smart set of remote “hands” 
at a site to read serial numbers off of disks to determine which is the faulted 
disk.

The main problem with that scheme is that pulling disks to read their labels 
works best with the pool exported.  If you want to be able to do device 
replacement with the pool online, you need some way to associate particular 
disks with their placement in the server’s drive bays.

To get there, you’d have to be using GPT-partitioned disks.  ZFS normally does 
that these days, creating one big partition that’s optimally-aligned, which you 
can then label with gdisk’s “c” command.
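Scripted, that labeling step might look like this (device and bay names are hypothetical; sgdisk is the non-interactive sibling of gdisk):

```shell
# Give partition 1 on the disk a GPT name matching its physical bay:
sgdisk -c 1:"left-3" /dev/sdb    # -c / --change-name takes partnum:name
sgdisk -p /dev/sdb               # print the table to verify the name took
```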

Having done that, then you can do “zpool import -d /dev/disk/by-partlabel” 
instead, which gets you the logical disk naming scheme I’ve spoken of twice in 
the other thread.

If you must use whole-disk vdevs, then I’d at least write the last few digits 
of each drive’s serial number on the drive cage or the end of the drive itself, 
so you can just tell the tech “remove the one marked ab212”.

Note by the way that all of this happened because you reintroduced a 
ZFS-labeled disk into the pool.  That normally doesn’t happen.  Normally, a 
replacment is a brand new disk, without any ZFS labeling on it, so you’d jump 
straight to the “zpool add” step.  The prior hot spare took over, so now you’re 
just giving the pool a hot spare again.


Re: [CentOS] raid 5 install

2019-07-02 Thread Warren Young
On Jul 1, 2019, at 10:10 AM, Valeri Galtsev  wrote:
> 
> On 2019-07-01 10:01, Warren Young wrote:
>> On Jul 1, 2019, at 8:26 AM, Valeri Galtsev  wrote:
>>> 
>>> RAID function, which boils down to simple, short, easy to debug well 
>>> program.
> 
> I didn't intend to start software vs hardware RAID flame war

Where is this flame war you speak of?  I’m over here having a reasonable 
discussion.  I’ll continue being reasonable, if that’s all right with you. :)

> Now, commenting with all due respect to famous person who Warren Young 
> definitely is.

Since when?  I’m not even Internet Famous.

>> RAID firmware will be harder to debug than Linux software RAID, if only 
>> because of easier-to-use tools.
> 
> I myself debug neither firmware (or "microcode", speaking the language as it 
> was some 30 years ago)

There is a big distinction between those two terms; they are not equivalent 
terms from different points in history.  I had a big digression explaining the 
difference, but I’ve cut it as entirely off-topic.

It suffices to say that with hardware RAID, you’re almost certainly talking 
about firmware, not microcode, not just today, but also 30 years ago.  
Microcode is a much lower level thing than what happens at the user-facing 
product level of RAID controllers.

> In both cases it is someone else who does the debugging.

If it takes three times as much developer time to debug a RAID card firmware as 
it does to debug Linux MD RAID, and the latter has to be debugged only once 
instead of multiple times as the hardware RAID firmware is reinvented again and 
again, which one do you suppose ends up with more bugs?

> You are speaking as the person who routinely debugs Linux components.

I have enough work fixing my own bugs that I rarely find time to fix others’ 
bugs.  But yes, it does happen once in a while.

> 1. Linux kernel itself, which is huge;

…under which your hardware RAID card’s driver runs, making it even more huge 
than it was before that driver was added.

You can’t zero out the Linux kernel code base size when talking about hardware 
RAID.  It’s not like the card sits there and runs in a purely isolated 
environment.

It is a testament to how well-debugged the Linux kernel is that your hardware 
RAID card runs so well!

> All of the above can potentially panic kernel (as they all run in kernel 
> context), so they all affect reliability of software RAID, not only the chunk 
> of software doing software RAID function.

When the kernel panics, what do you suppose happens to the hardware RAID card?  
Does it keep doing useful work, and if so, for how long?

What’s more likely these days: a kernel panic or an unwanted hardware restart?  
And when that happens, which is more likely to fail, a hardware RAID without 
BBU/NV storage or a software RAID designed to be always-consistent?

I’m stripping away your hardware RAID’s advantage in NV storage to keep things 
equal in cost: my on-board SATA ports for your stripped-down hardware RAID 
card.  You probably still paid more, but I’ll give you that, since you’re using 
non-commodity hardware.

Now that they’re on even footing, which one is more reliable?

> hardware RAID "firmware" program being small and logically simple

You’ve made an unwarranted assumption.

I just did a blind web search and found this page:

   
https://www.broadcom.com/products/storage/raid-controllers/megaraid-sas-9361-8i#downloads

…on which we find that the RAID firmware for the card is 4.1 MB, compressed.

Now, that’s considered a small file these days, but realize that there are no 
1024 px² icon files in there, no massive XML libraries, no language 
internationalization files, no high-level language runtimes… It’s just millions 
of low-level highly-optimized CPU instructions.

From experience, I’d expect it to take something like 5-10 person-years to 
reproduce that much code.

That’s far from being “small and logically simple.”

> it usually runs on RISC architecture CPU, and introduce bugs programming for 
> RISC architecture IMHO is more difficult that when programming for i386 and 
> amd64 architectures.

I don’t think I’ve seen any such study, and if I did, I’d expect it to only be 
talking about assembly language programming.

Above that level, you’re talking about high-level language compilers, and I 
don’t think the underlying CPU architecture has anything to do with the error 
rates in programs written in high-level languages.

I’d expect RAID firmware to be written in C, not assembly language, which means 
the CPU the has little or nothing to do with programmer error rates.

Thought experiment: does Linux have fewer bugs on ARM than on x86_64?

I even doubt that you can dig up a study showing that assembly language 
programming on CISC is significantly more error-prone than RISC programming in 
the first place.  My experience…

Re: [CentOS] raid 5 install

2019-07-01 Thread Warren Young
On Jul 1, 2019, at 9:10 AM, mark  wrote:
> 
> ZFS with a zpoolZ2

You mean raidz2.

> which we set up using the LSI card set to JBOD

Some LSI cards require a complete firmware re-flash to get them into “IT mode” 
which completely does away with the RAID logic and turns them into dumb SATA 
controllers. Consequently, you usually do this on the lowest-end models, since 
there’s no point paying for the expensive RAID features on the higher-end cards 
when you do this.

I point this out because there’s another path, which is to put each disk into a 
single-target “JBOD”, which is less efficient, since it means each disk is 
addressed indirectly via the RAID chipset, rather than as just a plain SATA 
disk.

You took the first path, I hope?

We gave up on IT-mode LSI cards when motherboards with two SFF-8087 connectors 
became readily available, giving easy 8-drive arrays.  No need for the extra 
board any more.

> took about 3 days and
> 8 hours for backing up a large project, while the same o/s, but with xfs
> on an LSI-hardware RAID 6, took about 10 hours less. Hardware RAID is
> faster.

I doubt the speed difference is due to hardware vs software.  The real 
difference you tested there is ZFS vs XFS, and you should absolutely expect to 
pay some performance cost with ZFS.  You’re getting a lot of features in trade.

I wouldn’t expect the difference to be quite that wide, by the way.  That 
brings me back to my guess about IT mode vs RAID JBOD mode on your card.

Anyway, one of those compensating benefits are snapshot-based backups.

Before starting the first backup, set a ZFS snapshot.  Do the backup with a 
“zfs send” of the snapshot, rather than whatever file-level backup tool you 
were using before.  When that completes, create another snapshot and send 
*that* snapshot.  This will complete much faster, because ZFS uses the two 
snapshots to compute the set of changed blocks between the two snapshots and 
sends only the changed blocks.
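In command form, the cycle described above looks roughly like this (pool, dataset, snapshot, and host names are all illustrative):

```shell
# Initial full backup:
zfs snapshot tank/data@2019-07-01
zfs send tank/data@2019-07-01 | ssh backup zfs receive pool/data

# Subsequent incremental: only blocks changed between the snapshots move.
zfs snapshot tank/data@2019-07-02
zfs send -i @2019-07-01 tank/data@2019-07-02 | ssh backup zfs receive pool/data
```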

This is a sub-file level backup, so that if a 1 kB header changes in a 2 GB 
data file, you send only one block’s worth of data to the backup server, since 
you’ll be using a block size bigger than 1 kB, and that header — being a 
*header* — won’t straddle two blocks.  This is excellent for filesystems with 
large files that change in small areas, like databases.

You might say, “I can do that with rsync already,” but with rsync, you have to 
compute this delta on each backup, which means reading all of the blocks on 
*both* sides of the backup.  ZFS snapshots keep that information continuously 
as the filesystem runs, so there is nothing to compute at the beginning of the 
backup.

rsync’s delta compression primarily saves time only when the link between the 
two machines is much slower than the disks on either side, so that the delta 
computation overhead gets swamped by the bottleneck’s delays.

With ZFS, the inter-snapshot delta computation is so fast that you can use it 
even when you’ve got two servers sitting side by side with a high-bandwidth 
link between them.

Once you’ve got a scheme like this rolling, you can do backups very quickly, 
possibly even sub-minute.

And you don’t have to script all of this yourself.  There are numerous 
pre-built tools to automate this.  We’ve been happy users of Sanoid, which does 
both the automatic snapshot and automatic replication parts:

https://github.com/jimsalterjrs/sanoid

Another nice thing about snapshot-based backups is that they’re always 
consistent: just as you can reboot a ZFS based system at any time and have it 
reboot into a consistent state, you can take a snapshot and send it to another 
machine, and it will be just as consistent.

Contrast something like rsync, which is making its decisions about what to send 
on a per-file basis, so that it simply cannot be consistent unless you stop all 
of the apps that can write to the data store you’re backing up.

Snapshot based backups can occur while the system is under a heavy workload.  A 
ZFS snapshot is nearly free to create, and once set, it freezes the data blocks 
in a consistent state.  This benefit falls out nearly for free with a 
copy-on-write filesystem.

Now that you’re doing snapshot-based backups, you’re immune to crypto malware, 
as long as you keep your snapshots long enough to cover your maximum detection 
window. Someone just encrypted all your stuff?  Fine, roll it back.  You don’t 
even have to go to the backup server.

> when one fails, "identify" rarely works, which means use smartctl
> or MegaCli64 (or the lsi script) to find the s/n of the drive, then
> guess…

It’s really nice when you get a disk status report and the missing disk is 
clear from the labels:

   left-1:  OK
   left-2:  OK
   left-4:  OK
   right-1: OK
   right-2: OK
   right-3: OK
   right-4: OK

Hmmm, which disk died, I wonder?  Gotta be left-3!  No need to guess, the 
system just told you in human terms, rather than in abstract hardware terms.

Re: [CentOS] raid 5 install

2019-07-01 Thread Warren Young
On Jul 1, 2019, at 8:26 AM, Valeri Galtsev  wrote:
> 
> RAID function, which boils down to simple, short, easy to debug well program.

RAID firmware will be harder to debug than Linux software RAID, if only because 
the latter comes with easier-to-use debugging tools.

Furthermore, MD RAID only had to be debugged once, rather than once per 
company-and-product line as with hardware RAID.

I hope you’re not assuming that hardware RAID has no bugs.  It’s basically a 
dedicated CPU running dedicated software that’s difficult to upgrade.

> if kernel (big and buggy code) is panicked, current RAID operation will never 
> be finished which leaves the mess.

When was the last time you had a kernel panic?  And of those times, when was 
the last time it happened because of something other than a hardware or driver 
fault?  If it wasn’t for all this hardware doing strange things, the kernel 
would be a lot more stable. :)

You seem to be saying that hardware RAID can’t lose data.  You’re ignoring the 
RAID 5 write hole:

https://en.wikipedia.org/wiki/RAID#WRITE-HOLE

If you then bring up battery backups, now you’re adding cost to the system.  
And then some ~3-5 years later, downtime to swap the battery, and more 
downtime.  And all of that just to work around the RAID write hole.

Copy-on-write filesystems like ZFS and btrfs avoid the write hole entirely, so 
that the system can crash at any point, and the filesystem is always consistent.


Re: [CentOS] raid 5 install

2019-07-01 Thread Warren Young
On Jul 1, 2019, at 7:56 AM, Blake Hudson  wrote:
> 
> I've never used ZFS, as its Linux support has been historically poor.

When was the last time you checked?

The ZFS-on-Linux (ZoL) code has been stable for years.  In recent months, the 
BSDs have rebased their offerings from Illumos to ZoL.  The macOS port, called 
O3X, is also mostly based on ZoL.

That leaves Solaris as the only major OS with a ZFS implementation not based on 
ZoL.

> 1) A single drive failure in a RAID4 or 5 array (desktop IDE)

Can I take by “IDE” that you mean “before SATA”, so you’re giving a data point 
something like twenty years old?

> 2) A single drive failure in a RAID1 array (Supermicro SCSI)

Another dated tech reference, if by “SCSI” you mean parallel SCSI, not SAS.

I don’t mind old tech per se, but at some point the clock on bugs must reset.

> We had to update the BIOS to boot from the working drive

That doesn’t sound like a problem with the Linux MD raid feature.  It sounds 
like the system BIOS had a strange limitation about which drives it was willing 
to consider bootable.

> and possibly grub had to be repaired or reinstalled as I recall

That sounds like you didn’t put GRUB on all disks in the array, which in turn 
means you probably set up the RAID manually, rather than through the OS 
installer, which should take care of details like that for you.

> 3) A single drive failure in a RAID 4 or 5 array (desktop IDE) was not 
> clearly identified and required a bit of troubleshooting to pinpoint which 
> drive had failed.

I don’t know about Linux MD RAID, but with ZFS, you can make it tell you the 
drive’s serial number when it’s pointing out a faulted disk.

Software RAID also does something that I haven’t seen in typical PC-style 
hardware RAID: marry GPT partition drive labels to array status reports, so 
that instead of seeing something that’s only of indirect value like “port 4 
subunit 3” you can make it say “left cage, 3rd drive down”.


Re: [CentOS] raid 5 install

2019-06-28 Thread Warren Young
On Jun 28, 2019, at 8:46 AM, Blake Hudson  wrote:
> 
> Linux software RAID…has only decreased availability for me. This has been due 
> to a combination of hardware and software issues that are are generally 
> handled well by HW RAID controllers, but are often handled poorly or 
> unpredictably by desktop oriented hardware and Linux software.

Would you care to be more specific?  I have little experience with software 
RAID, other than ZFS, so I don’t know what these “issues” might be.

I do have a lot of experience with hardware RAID, and the grass isn’t very 
green on that side of the fence, either.  Some of this will repeat others’ 
points, but it’s worth repeating, since it means they’re not alone in their 
pain:


0. Hardware RAID is a product of the time it was produced.  My old parallel IDE 
and SCSI RAID cards are useless because you can’t get disks with that port type 
any more; my oldest SATA and SAS RAID cards can’t talk to disks bigger than 2 
TB; and of those older hardware RAID cards that still do work, they won’t 
accept a RAID created by a controller of another type, even if it’s from the 
same company.  (Try attaching a 3ware 8000-series RAID to a 3ware 9000-series 
card, for example.)

Typical software RAID never drops backwards compatibility.  You can always 
attach an old array to new hardware.  Or even new arrays to old hardware, 
within the limitations of the hardware, and those limitations aren’t the 
software RAID’s fault.


1. Hardware RAID requires hardware-specific utilities.  Many hardware RAID 
systems don’t work under Linux at all, and of of those that do, not all provide 
sufficiently useful Linux-side utilities.  If you have to reboot into the RAID 
BIOS to fix anything, that’s bad for availability.


2. The number of hardware RAID options is going down over time.  Adaptec’s 
almost out of the game, 3ware was bought by LSI and then had their products all 
but discontinued, and most of the other options you list are rebadged LSI or 
Adaptec.  Eventually it’s going to be LSI or software RAID, and then LSI will 
probably get out of the game, too.  This market segment is dying because 
software RAID no longer has any practical limitations that hardware can fix.


3. When you do get good-enough Linux-side utilities, they’re often not 
well-designed.  I don’t know anyone who likes the megaraid or megacli64 
utilities.  I have more experience with 3ware’s tw_cli, and I never developed 
facility with it beyond pidgin, so that to do anything even slightly uncommon, 
I have to go back to the manual to piece the command together, else risk 
roaching the still-working disks.

By contrast, I find the zfs and zpool commands well-designed and easy to use.  
There’s no mystery why that should be so: hardware RAID companies have their 
expertise in hardware, not software.  Also, “man zpool” doesn’t suck. :)

That coin does have an obverse face, which is that young software RAID systems 
go through a phase where they have to re-learn just how false, untrustworthy, 
unreliable, duplicitous, and mendacious the underlying hardware can be.  But 
that expertise builds up over time, so that a mature software RAID system copes 
quite well with the underlying hardware’s failings.

The inverse expertise in software design doesn’t build up on the hardware RAID 
side.  I assume this is because they fire the software teams once they’ve 
produced a minimum viable product, then re-hire a new team when their old 
utilities and monitoring software gets so creaky that it has to be rebuilt from 
scratch.  Then you get a *new* bag of ugliness in the world.

Software RAID systems, by contrast, evolve continuously, and so usually tend 
towards perfection.

The same problem *can* come up in the software RAID world: witness how much 
wheel reinvention is going on in the Stratis project!  The same amount of 
effort put into ZFS would have been a better use of everyone’s time.

That option doesn’t even exist on the hardware RAID side, though.  Every 
hardware RAID provider must develop their command line utilities and monitoring 
software de novo, because even if the Other Company open-sourced its software, 
that other software can’t work with their proprietary hardware.


4. Because hardware RAID is abstracted below the OS layer, the OS and 
filesystem have no way to interact intelligently with it.

ZFS is at the pinnacle of this technology here, but CentOS is finally starting 
to get this through Stratis and the extensions Stratis has required to XFS and 
LVM.  I assume btrfs also provides some of these benefits, though that’s on 
track to becoming off-topic here.

ZFS can tell you which file is affected by a block that’s bad across enough 
disks that redundancy can’t fix it.  This gives you a new, efficient recovery 
option: restore that file from backup or delete it, allowing the underlying 
filesystem to rewrite the bad block on all disks.  With hardware RAID, fixing 
this requires picking one disk as the 

Re: [CentOS] is "list_del corruption" fix available in Centos ?

2019-05-17 Thread Warren Young
On May 17, 2019, at 9:53 AM, John Hodrien  wrote:
> 
> On Fri, 17 May 2019, James Szinger wrote:
> 
>> On Fri, May 17, 2019 at 3:17 AM John Hodrien  wrote:
>>> RHEL advice would clearly be not to use btrfs.
>> 
>> I'm curious, is there anything in RHEL 8 that would replace BTRFS or
>> ZFS?  I'm experimenting with BTRFS on one system and the snapshot and
>> subvolume features are nice.
> 
> I assume Stratis is the general answer.

It looks like built-in RAID didn’t land in this version, which means we’re 
another 3-5 years from anything that actually works like ZFS or btrfs, which 
combine LVM + RAID + filesystem in a single integrated stack.

According to the docs, you have to lay Stratis over MDRAID in EL8 to get 
storage redundancy:


https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_file_systems/managing-layered-local-storage-with-stratis_managing-file-systems

Snapshots with Stratis appear to work more sanely than in LVM2, so that’s 
something, at least.
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] RHEL 8 released

2019-05-09 Thread Warren Young
On May 9, 2019, at 9:38 AM, Johnny Hughes  wrote:
> 
> As an initial goal, we would love to have a release in a month ..

That’s about the average time for 7.x releases, which I assume are far less 
work to get out than a point-zero.

Is this goal realistic?

I’d consider any ship date before about September to be “success.”


Re: [CentOS] Issues trying to change the selinux context

2019-05-08 Thread Warren Young
On May 8, 2019, at 11:44 AM, mark  wrote:
> 
> Warren Young wrote:
>> On May 8, 2019, at 11:04 AM, mark  wrote:
>> 
>>> 
>>> semanage fcontext -m -t lib_t "/path/smwa/webagent/bin/*.so"
>> 
>> Glob expansion doesn’t happen in double quotes.  Not in Bash, anyway.
> 
> Huh? I thought it didn't occur in single quotes, but did occur in double quotes.

Nope.  The only difference I’m aware of is that single quotes prevent variable 
expansion (echo "hello $USER") while double quotes do not.
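If it helps, the behavior is easy to verify in any Bash session (the scratch
filenames below are purely illustrative):

```shell
# Work in an empty scratch directory so the glob result is predictable.
cd "$(mktemp -d)"
touch a.so b.so

echo *.so          # unquoted: glob expands to "a.so b.so"
echo "*.so"        # double quotes suppress globbing: prints "*.so"
echo '*.so'        # single quotes do too: prints "*.so"

name=world
echo "hello $name" # double quotes still expand variables: "hello world"
echo 'hello $name' # single quotes do not: prints "hello $name"
```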

> semanage insists on having the filespec in quotes.

No, it just takes a single file spec, which may name a single file or be a 
regex.  This, the parens issue, and the symptom you noticed when looking at the 
permissions on individual contained files are all explained in point 2 of the 
section "Changing a Directory's Type" here:


https://docs.fedoraproject.org/en-US/Fedora/11/html/Security-Enhanced_Linux/sect-Security-Enhanced_Linux-SELinux_Contexts_Labeling_Files-Persistent_Changes_semanage_fcontext.html



Re: [CentOS] Issues trying to change the selinux context

2019-05-08 Thread Warren Young
On May 8, 2019, at 11:04 AM, mark  wrote:
> 
> semanage fcontext -m -t lib_t "/path/smwa/webagent/bin/*.so"

Glob expansion doesn’t happen in double quotes.  Not in Bash, anyway.


Re: [CentOS] Issues trying to change the selinux context

2019-05-08 Thread Warren Young
On May 8, 2019, at 9:31 AM, mark  wrote:
> 
> semanage -fcontext -a -t lib_t "//smwa/webagent/bin(/.*).so"

[snip]

> What am I doing wrong?

-fcontext isn’t an option, it’s a verb; drop the dash.

Also, I’m confused by the parens in your file path.  Whether your shell is or 
not is a different question.


Re: [CentOS] Permissions on nginx logs

2019-05-07 Thread Warren Young
On May 7, 2019, at 7:14 AM, Bee.Lists  wrote:
> 
> As per the “7” comment, I always listen to good advice, but usually that 
> advice gets completely derailed with someone saying “nobody should ever be 
> root…”, etc.  Best stated, “some people never let their kids play outside”.  
> I have a neighbour like that.  

Your CentOS box is nowhere near as well-defended as an unattended human child.  
The child has millions of years of evolution providing it with an active 
self-improving immune system, a mammal’s agility, and an apex predator’s 
cunning.

If you want a human analogue to a CentOS box, it’s closer to a premature baby 
in a neonatal intensive care unit.  It requires constant inputs from the 
caregivers and strict adherence to basic guidance like “Don’t open all the 
doors leading outside at once” to keep these cared-for creations alive.


Re: [CentOS] Permissions on nginx logs

2019-05-06 Thread Warren Young
On May 6, 2019, at 10:14 AM, Bee.Lists  wrote:
> 
> I will give 770 a try.

Try 750 first.  You don’t need write access to do what you’re asking.
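For reference, 750 breaks down as rwx for the owner, r-x for the group, and 
nothing for others.  A quick sketch with a throwaway directory:

```shell
# 7 = rwx, 5 = r-x, 0 = ---; so 750 means "owner full, group can read
# and enter, everyone else shut out".
dir="$(mktemp -d)/logdemo"
install -d -m 750 "$dir"   # create the directory with mode 750 in one step
ls -ld "$dir"              # mode column shows drwxr-x---
stat -c '%a' "$dir"        # prints 750
```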

Also, the group membership change won’t take effect until you log out and back 
in.

>  Nobody going to flip now that a single “7” has been posted?

There is a clear analogue to herd immunity here:

https://en.wikipedia.org/wiki/Herd_immunity

When sysadmins of Internet-attached hosts do things to make those hosts less 
secure, that makes them easier to take over, which means the botnets and stolen 
databases get bigger, which puts the rest of us on the Internet at greater risk.

So yeah, I think the rest of us do have some say in how you manage your 
systems’ security.  Not total, of course, but you should not dismiss good 
advice as “flipping.”

In this particular case, the risk is that there is some credential or other 
sensitive info logged by nginx which is now easier for an attacker to get at.  
Those logs are hidden away for that reason and more.

How big that risk is only you can say at this point.  If you’ve got a purely 
static web site, for instance, there’s probably nothing important in that log, 
but if it’s acting as a reverse proxy for a back-end service, nginx might be 
logging passwords and such.


Re: [CentOS] yum groupinstall 'GNOME Desktop' fails

2019-04-17 Thread Warren Young
On Apr 17, 2019, at 11:45 AM, Johnny Hughes  wrote:
> 
> yum groupinstall "GNOME Desktop" "X Window System"
> 
> That should work

I did that plus the other two groups referenced above, each group added 
separately.

I’ve just tried a “groupremove” on everything I added and then added just the 
two groups you mention, both at the same time, and now it works.  Strange.

Thanks!


[CentOS] yum groupinstall "GNOME Desktop" fails

2019-04-17 Thread Warren Young
I’ve got a CentOS 7 VM here that was installed with one of the CLI-only 
presets.  To answer a question in another thread here, I wanted to install a 
GNOME desktop environment in it, so I went searching and found the standard 
instructions for doing that.

The problem is that rebooting the VM gives me a black screen after it finishes 
the text booting parts.  There’s an X cursor, and wiggling the mouse moves it, 
but I get no gray login screen.

Pressing Ctrl-Alt-F2 and logging into a text console works.  Saying “startx” 
brings X back up again — indicating that it does find the proper X drivers and 
such — but now I get just a few default icons on the desktop, but no top and 
bottom toolbars:

https://imgur.com/IMI9Gh9

So, the GNOME shell is also broken.

I’ve repeated the yum groupinstalls for “GNOME Desktop”, “Server with GUI” and 
“Graphical Administration Tools.”  yum claims everything is fine.

Short of installing another VM with a GUI enabled from the start, I’m stuck.

For what it’s worth, the VM system is Parallels, and I do have the Parallels 
Tools installed.

The same symptoms occur when booting into single-user mode and trying to startx 
from there.


Re: [CentOS] Outliner

2019-04-14 Thread Warren Young
On Apr 14, 2019, at 4:42 AM, H  wrote:
> 
> Ideally it should allow saving files in txt, OO and markdown formats…

Since you included Markdown in the list, my initial question was why don’t you 
just write in that format, since the Markdown list features capture most of 
what I want in an outliner.  Then I saw in a later post that you’re using an 
editor (Geany) without intelligent formatting for Markdown.

So that’s my recommendation: switch to a text editor that does intelligent 
things with Markdown like continuing the list when you hit Enter from within a 
list item, adding a level to the list when you hit Tab within a list, returning 
to the prior level with a Shift-Tab, auto-indenting list items when you hit the 
editor’s wrapping limits, etc.

I’m not sure what distinction you’re trying to make by listing “txt” output 
along with Markdown, so I don’t know what transform to suggest.

As for “OO”, I assume that means OpenOffice, in which case what you actually 
mean is ODF, its file format.  And for that, I suggest that you use Pandoc, 
which will get Markdown into that format and many more:

$ pandoc --to odt x.md > x.odt
$ pandoc --list-output-formats

As for the actual editor, there are several choices.  The first one I reached 
for was VSCodium, which is Microsoft Visual Studio Code with the branding, 
telemetry and non-FOSS licensed stuff stripped out.  (Shades of CentOS vs RHEL…)

I’m working with a text-only CentOS VM here and couldn’t get a GUI running on 
it — a problem I’ll take up in a separate thread — so I’ll just point you at 
the VSCodium Linux install instructions and hope they work for you there:

https://github.com/VSCodium/vscodium/releases

Once you’ve got VSCodium running, you’ll need to install the “Markdown All In 
One” plugin.  (Ctrl-Shift-P, install, search for Markdown, select first option 
[currently] listed.)  That will do as described above: auto-number, 
auto-indent, Tab/Shift-Tab to change indent level, etc.

The availability of such plugins is a large part of the reason Code is taking 
over so much of the programmer’s text editor world.  Give it a try.

If VSCodium doesn’t work on CentOS, you could try Visual Studio Code, the 
original project, which probably has better packaging:

   https://code.visualstudio.com/docs/setup/linux

I used that for probably a few years before VSCodium came along.  Don’t be 
scared by the branding: it shares almost nothing with Visual Studio other than 
branding and a parent organization.

If you really want a CLI-only experience, I got a suitable setup working with 
Vim and the Bullets plugin:

   https://github.com/dkarter/bullets.vim

Instead of Tab and Shift-Tab to change indent levels it uses Ctrl-T and Ctrl-D, 
which I find odd, but that’s the sort of affordance you have to give up on when 
you’re working in an ANSI terminal.


Re: [CentOS] When should I reboot?

2019-04-13 Thread Warren Young
On Apr 13, 2019, at 2:32 AM, Kenneth Porter  wrote:
> 
> I reboot when I yum update to a new kernel or systemd, which seems to come 
> out about once a month.

You can use similar logic as in Tony Mountfield’s answer to put off reboots in 
those cases as well.

If the reason for the kernel update is a bug in a Realtek NIC driver but your 
systems all use Intel NICs, you don’t need to reboot.

Let’s get concrete.  Just a few days ago, this CVE was filed against the Linux 
kernel:

   https://nvd.nist.gov/vuln/detail/CVE-2019-11191

I assume CentOS doesn’t ship any a.out binaries, so this bug is of no 
consequence to most CentOS systems.  For it to matter to your systems, your 
threat model must either allow:

1. Arbitrary code upload by someone with root privileges so they can setuid a 
newly uploaded a.out binary.  The only way such a situation is not already Game 
Over would be something like a VPS host where there are multiple “root” 
privilege levels.  If you’re not running such a hosting service, you probably 
don’t care about this bug.

2. Local staff to create a.out binaries and setuid them.  But why would that 
happen?  That’s two very uncommon conditions back to back.  On top of that, the 
threat model then must include the ability for your attacker to run one of 
these binaries; if the threat model is network outsiders only and these are not 
network services, the bug *still* doesn’t affect you.


Now let’s take systemd.

Systemd isn’t a single binary, and most of those binaries don’t run 
continuously.  (On a near-stock CentOS 7 VM I have here, only 5 of the 41 
programs under bin/ in the systemd RPM are running right now.)  If the systemd 
component being updated doesn’t run continuously or can safely be restarted 
individually, you don’t need to reboot.  The component might not be running at 
upgrade time, or it might be easily restarted if it is running.


The glibc updates can also be put off, depending on the bug in question and the 
system’s threat model.  If you deem that the only threats worth responding to 
are those from the network, with everything internal to the server being deemed 
“good,” then the questions become “What’s listening to the network, can it/they 
be restarted, and which ones use affected glibc facilities?”

Let’s take a recent glibc CVE as an example:

https://nvd.nist.gov/vuln/detail/CVE-2019-9169

If your network listening services aren’t doing case-insensitive POSIX regex 
matches, this bug cannot affect them, so under our stated threat model, the 
network services don’t need to be restarted, much less the whole system.

If you have network-listening services that *are* doing case-insensitive POSIX 
regex matches, then I assume the bug must only be happening with *particular* 
regexes, else we’d have learned of this bug decades ago, so your threat model 
must also allow the attacker to provide the regex.  That excludes, for example, 
regexes in your Apache configuration file, unless you’re running a shared web 
hosting service and allow arbitrary changes to the Apache config.

> I know the glibc update was mainly to handle the new Japanese calendar

Not “mainly,” “solely.”

> So my question is more about how shared libraries work and whether anything 
> bad would happen with different forks of running services (mainly the mail 
> suite with dovecot and the various content scanners launched by sendmail) 
> running different versions of the library based on when they were started. 

As Tony said, each running binary continues with its prior copy of glibc and 
newly launched binaries get the new one.
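One way to see this for yourself — a sketch, assuming a Linux box with a 
glibc-based userland — is to look at process memory maps; after a glibc 
update, long-running processes still show the old library path, which the 
kernel marks "(deleted)":

```shell
# The C library mapped into this very grep process (on a glibc system):
grep -m1 -o '/[^ ]*libc[^ ]*' /proc/self/maps

# Processes started before a glibc update still map the replaced copy;
# the unlinked path shows up with a "(deleted)" suffix:
grep -l 'libc.*(deleted)' /proc/[0-9]*/maps 2>/dev/null || echo "none found"
```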

Unless you’ve got a set of binaries that can end up with different glibc 
underpinnings and they are passing around Japanese date strings with the 
assumption that they agree on their interpretation, I can’t see how this can 
affect you.

Every CVE does not affect everybody, but Red Hat has to respond to most[*] of 
those affecting the code bases behind the binaries they build, ship, and 
support, because chances are, not doing so will affect some nontrivial subset 
of their user base.  

But whether the bug affects *you* is a wholly different question.  That’s why 
they publish these advisories, and why those advisories often link to further 
information.  You have to be willing and able to absorb and analyze this 
information if you don’t want to fall back on generic advice like “Reboot on 
every kernel, glibc, or systemd update.”

On the other hand, maybe you have good reason to reboot once a month or so 
anyway, and all of this provides a convenient excuse: “Because security.”  You 
might be subject to an uptime SLA that excludes security reboots, which let you 
slip other maintenance downtime into those reboot windows.


[*] I assume there are conditions that would lead Red Hat to ignore a CVE that 
does affect code it ships, but I have no ready examples.  If it happens, I 
trust their judgement.

Re: [CentOS] In place upgrade of RHEL 8 Beta to CentOS 8?

2019-04-10 Thread Warren Young
On Apr 10, 2019, at 11:12 AM, Leon Fauster via CentOS  wrote:
> 
> Any expectation for the date of the C8 release :-)??

It took 6 months to go from first beta to first release of RHEL 7, and this 
beta is now about 5 months old.  So next month for sure. :)



Re: [CentOS] Kernel panic after removing SW RAID1 partitions, setting up ZFS.

2019-04-10 Thread Warren Young
On Apr 10, 2019, at 9:38 AM, Benjamin Smith  wrote:
> 
> For some reason, you *cannot* have a partition of type GPT and expect 
> Linux to boot. (WT F/H?!?) 

I believe you were trying to make use of a facility invented as part of the GPT 
Protective Partition feature without understanding it first:

   https://en.wikipedia.org/wiki/GUID_Partition_Table#Protective_MBR_(LBA_0)

As a normal user, there is no good cause to be changing an MBR partition’s type 
to GPT in this way.  It’s a feature that only GPT partitioning tools should be 
making use of, and then only to prevent legacy OSes from interfering with 
actual GPT partitioning schemes.

In other words, you’ve misled the boot loader into trying to seek out an 
*actual* GPT partition table, which doesn’t exist, giving the symptom you saw.

I’ve never used ZFS with MBR partitions.  Normally I feed it whole disks, in 
which case the ZoL zpool implementation will create GPT partition tables and 
give the first partition code BF01.  That means type BF *might* be the correct 
value on MBR.

I suspect you could just as well use type 83 (Linux generic) for this, since 
that doesn’t refer to any specific file system.  Properly-written utilities do 
metadata probing to figure out what tools to use with it, so putting ZFS on a 
type 83 MBR partition should be harmless, since only ZFS tools will admit to 
being able to do anything with it.


Re: [CentOS] Seek some help on operating system drivers. Thanks!

2019-04-05 Thread Warren Young
On Apr 5, 2019, at 8:33 AM, Warren Young  wrote:
> 
> On Apr 5, 2019, at 8:24 AM, JICHUN LIU  wrote:
>> 
>> When I use sensors-detect:
>> Sorry, no sensors were detected.
> 
> Instead, try:
> 
>$ sudo ipmitool sensor list

You’ll probably need to install that first, by the way:

$ sudo yum install ipmitool

You can modify the list command like so to show only the relevant information 
here:

$ sudo ipmitool sensor list | grep -i fan


Re: [CentOS] Seek some help on operating system drivers. Thanks!

2019-04-05 Thread Warren Young
On Apr 5, 2019, at 8:24 AM, JICHUN LIU  wrote:
> 
> When I use sensors-detect:
> Sorry, no sensors were detected.

Instead, try:

$ sudo ipmitool sensor list

There are thresholds you can set to make the fans behave sanely, but those of 
us who don’t own such a server already won’t be able to advise until we see 
that output.

There are also HP-proprietary tools you can try:

https://serverfault.com/a/630565/18096

I prefer using the built-in OS facilities myself.


Re: [CentOS] CPAN not working, or is it?

2019-03-11 Thread Warren Young
On Mar 11, 2019, at 6:16 PM, Bruce Ferrell  wrote:
> 
> What I've learned to  do when I have this sort of issue is to pop out of CPAN 
> and into ~/.cpan/build.

If you mean that you do that manually, you don’t have to.  The “look” command 
in the cpan shell or the --look option to cpanm does that automatically.  

That is, it unpacks the module and drops you into a sub-shell where you can 
work with the module manually.

Back when I still had to support CentOS 5, I’d occasionally have to do this to 
get some modules to build because they required local edits or configuration 
overrides that the upstream developers didn’t want to support, its Perl being 
deemed too old to bother with any more.


Re: [CentOS] CPAN not working, or is it?

2019-03-11 Thread Warren Young
On Mar 11, 2019, at 5:24 PM, Pete Biggs  wrote:
> I use both RPM and CPAN and install into system locations.

That’s the advice I’m responding to: you don’t need to install CPAN modules 
only to system locations to make Perl-based programs work.  CPAN’s defaults on 
CentOS 7 are perfectly usable as-is, because CPAN is part of Perl, and Perl 
knows how to search multiple locations for a given package, including in CPAN’s 
default location *when* you run it as root.

> CPAN will install the modules in ~/perl5/

That’s only true when you run cpan or cpanm as a normal user.  If you run it as 
root, cpanm puts modules in a directory Perl finds by default — 
/usr/local/share/perl5 on CentOS 7 — but which doesn’t conflict with CPAN 
modules installed via RPM.

~/perl5 isn’t in @INC by default, though, so that’s why Perl fails to find 
modules installed there by default.

However, there are a bunch of ways to adjust @INC to suit local needs, so 
installing to ~/perl5 isn’t always a problem:

   
https://perlmaven.com/how-to-change-inc-to-find-perl-modules-in-non-standard-locations

A useful variant on one of those methods is to put lib/ in a known location 
relative to the main program, then use the FindBin module to adjust @INC at run 
time via “use lib”:

https://perldoc.perl.org/FindBin.html

The second class of problems I think I covered adequately in my first reply.

> All I was trying to do was to explain to Gary *why* his CPAN installs
> weren't working as expected, and that it is because of the possibility
> of CPAN over-writing RPM installed modules

That’s why you don’t want to be telling CPAN to install to system locations in 
the first place.  RPMs and cpanm-installed libraries should never conflict on 
disk.

They may conflict in their dependency graphs, of course, which again takes us 
back to my advice in the prior reply for managing dependencies.


Re: [CentOS] CPAN not working, or is it?

2019-03-11 Thread Warren Young
On Mar 11, 2019, at 8:01 AM, Gary Stainburn  
wrote:
> 
> Anyone got any ideas what  I need to do?

First, use cpanm instead of the old cpan shell:

$ sudo yum install perl-App-cpanminus

It has a number of advantages:

1. It’s much smarter about chasing dependencies, which is your core problem 
here.

2. It autoconfigures, not requiring all that outdated stuff about “where is 
your FTP program,” “what mirror should I use,” and such.

3. It’s easy to feed it a cpanfile with a curated set of dependencies for a 
one-command “install everything I need here” upgrade:

https://metacpan.org/pod/distribution/Module-CPANfile/lib/cpanfile.pod

4. When it fails, it keeps its output in a log file that’s easy to send to 
mailing lists when asking for help.  Hint. :)

> I started off as always by using RPM's for everything I possibly can.

I try to do that, too.  The claim in another reply that RPM CPAN modules and 
cpan- or cpanm-installed modules cannot work together is incorrect.

What is correct is that the CentOS-provided RPMs are often sufficiently 
outdated that they no longer work with the latest releases that cpanm wants to 
download by default.  The older the CentOS installation, the greater the chance 
of this happening, and the greater the chance that it will happen to a module 
that’s so deeply tied into your dependency tree that it seems to break 
everything.

There are three main solutions:

1. Tell cpanm to download an older version that works with the other 
RPM-installed CPAN modules:

$ sudo cpanm Foo::Bar@1.2.3

1b: Same thing, but in cpanfile syntax:

requires 'Foo::Bar', '== 1.2.3';

cpanfile is very flexible and can be given much more granular rules; see the 
link above.

2. Selectively remove CPAN modules installed via RPM that conflict and let 
cpanm upgrade you to the latest.  This works as long as the newer modules are 
upwards compatible with the remaining RPM-installed CPAN modules.

3. Use a system like App::Fatpacker or Carton to bundle your app and its 
dependencies into a self-contained bundle that doesn’t depend on system CPAN 
modules:

https://metacpan.org/release/Carton
https://metacpan.org/pod/App::FatPacker

I prefer using Carton with cpanfile, but others prefer the way App::Fatpacker 
works.
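To show a bit more of that cpanfile flexibility, here is a slightly fuller 
sketch; the module names are placeholders, not a recommendation:

```perl
# cpanfile — module names below are hypothetical
requires 'Foo::Bar', '== 1.2.3';        # exact pin, as above
requires 'Baz::Qux', '>= 2.0, < 3.0';   # bounded version range
recommends 'Some::Extra';               # optional; newest available

on 'test' => sub {
    requires 'Test::More', '0.98';      # needed only when running tests
};
```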


Re: [CentOS] Network Bridge - error

2019-03-06 Thread Warren Young
On Mar 6, 2019, at 1:44 AM, Larry Sevilla  wrote:
> 
> systemctl status network.service
> https://pastebin.com/KzBqJN65

…which says brctl doesn’t exist, so if you go searching for it, you find that 
it’s in the bridge-utils package.  So, your next step is:

$ sudo yum install bridge-utils

I have no idea whether that solves your entire problem, having never done 
anything with CloudStack, but it’ll at least get you one more step down the 
path.


Re: [CentOS] Systemd's [ OK ] in green beginning of line

2019-03-04 Thread Warren Young
On Mar 4, 2019, at 10:15 PM, Jobst Schmalenbach  wrote:
> 
> I have to (re-)write many bootup scripts to move a bunch of servers from 
> CentOS6 to CentOS7
> 
> In sysvinit the "echo_success" and "echo_failure" used to do this.
> 
> What is the equivalent for systemd?

First off, processes started by systemd unit files don’t necessarily echo 
anything at all, and when they do, that output is captured by journald, not 
sent straight to the system console.

Systemd infers success or failure of the started service in one of several ways:

- process exit code
- contents of a PID file
- existence of a FIFO
- etc.

Your code in the unit file tells systemd which method to use.

Systemd prints out the green “OK” when the rules you’ve set up in the unit file 
match actual conditions on the server.

This is also how systemd can provide automatic service restarts: you’ve told it 
what it looks like when a service is “up”, so it can implicitly detect “down” 
and restart the dead service for you.  In the old scheme, that’s something you 
had to hand-code, and it was outside of the scope of SysVInit besides, so most 
services didn’t have that ability.  With systemd, you can set this up with a 
line or three in the unit file, and systemd then handles it for you.
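As a rough sketch of what that looks like — service name, paths, and Type are 
invented for illustration, not taken from any real package:

```ini
[Unit]
Description=Example daemon (hypothetical)
After=network.target

[Service]
# "forking": systemd considers the service up once the parent process exits
Type=forking
# systemd reads the main PID from this file and watches that process
PIDFile=/run/exampled.pid
ExecStart=/usr/sbin/exampled --daemonize
# restart automatically if the watched process dies abnormally
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```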

You should start reading here:


https://www.freedesktop.org/software/systemd/man/systemd.service.html#Options

The Type value you select affects which “up” detection methods are sensible.  
For example, if you say your service is a “forking” type, it doesn’t make sense 
for systemd to be told to look for a nonzero exit status.

Read the rest of that page as well.  There’s a lot of power to take advantage 
of in systemd.

Resist the temptation to just launch your old shell script via systemd.  It’s 
better to rewrite its imperative logic in declarative form within the unit 
file, so you get full benefit of systemd’s power.


Re: [CentOS] Setting GDM resolution without knowing the monitor specs

2019-03-01 Thread Warren Young
On Feb 21, 2019, at 4:42 PM, Gianluca Cecchi  wrote:
> 
> Create a file 00-monitor.conf under /etc/x11/xorg.conf.d
> 
> Something like this below, using conservative range values for horiz and
> vert syncs

This works fine here on our test monitors, with the exception that the first 
‘x’ needs to be capitalized.

Thanks again!



Re: [CentOS] Setting GDM resolution without knowing the monitor specs

2019-02-22 Thread Warren Young
On Feb 21, 2019, at 12:00 PM, Warren Young  wrote:
> 
> remotely talking someone through changing ifcfg-noisenoise via nano is a 
> minor nightmare, especially now that Confusing Network Device Naming is the 
> default.

A relevant war story might help here.

We were upgrading an old CentOS 5 box in the field.  They refused to ship it 
back to us, and they refused to buy a whole new box, but they had to have the 
newest software.

This being CentOS, “yum upgrade” wasn’t going to get us to CentOS 7.  What to 
do?

So, I logged into it remotely, poked around a bit, and got it to divulge the 
motherboard, CPU, etc. that we’d used on it, and I found that we had a 
nearly-identical box sitting around powered off locally, it having given us 
many years of useful service and then been retired.  Same motherboard, same 
CPU, same RAM, probably even bought within the same year.

So, I dropped a fresh system drive into that box, loaded CentOS 7 and all of 
our stuff onto it, configured the network and everything else under /etc the 
same as the box in the field, and shipped the drive out to the customer.

They put the drive in, booted it up, and it didn’t reappear on their network.  
No remote access, no presence on the LAN.  It wouldn’t even ping.

After a ridiculous amount of remote troubleshooting, it turned out that these 
two motherboards — despite having the same model number and EFI firmware 
version — had a slight difference: the first NIC appeared as enp2s0 and the 
second as enp3s0 on one motherboard, but as enp3s0 and enp4s0 on the other!  
So, one network config wasn’t being applied, and the second was being applied 
to the wrong NIC.

And here I thought the point of [CNDN][1] was to make such replacements more 
reliable than the plug-and-pray logic behind ethN.

This is the sort of reason why I need non-Linux sysadmin types to be able to 
change IPs in the field.


[1]: https://en.wikipedia.org/wiki/Consistent_Network_Device_Naming


Re: [CentOS] Setting GDM resolution without knowing the monitor specs

2019-02-22 Thread Warren Young
On Feb 21, 2019, at 11:13 PM, Simon Matter via CentOS  wrote:
> 
>> On Feb 21, 2019, at 4:42 PM, Gianluca Cecchi 
>> wrote:
>>> 
>>> [root@desktop xorg.conf.d]# cat 00-monitor.conf
> 
> Why not just ship it with text mode login and get rid of all the video
> problems? With a 800x600 resolution I doubt they can do a lot with the GUI
> anyway.

Because I can talk a Windows admin through an IP change over the phone easier 
via Settings > Network than with nano in /etc/sysconfig/network-scripts.  I’ve 
tried it both ways several times, and it’s not even a close thing.

Once I’ve got remote access, the screen gets turned back off, never to turn 
back on for perhaps years at a time.


Re: [CentOS] Setting GDM resolution without knowing the monitor specs

2019-02-21 Thread Warren Young
On Feb 21, 2019, at 4:42 PM, Gianluca Cecchi  wrote:
> 
> [root@desktop xorg.conf.d]# cat 00-monitor.conf

Thanks!  We’ll be building another server next week, so I’ll try this then.



[CentOS] Setting GDM resolution without knowing the monitor specs

2019-02-21 Thread Warren Young
We had a complaint recently from a customer that received a server we shipped 
out that their monitor just showed a black screen.  It turns out that they’d 
hooked it up to an ancient POS with 800x600 as its best resolution, and gdm in 
CentOS 7 apparently assumes at least 1024x768.  It was apparently a major 
hassle for them to hunt down a reasonable monitor for us.

Since we almost never use the GUI on these things, we thought we’d solve that 
problem going forward by shipping these boxes out to use a fixed 800x600, so 
they’ll work on any monitor the user plugs in.

The first problem you run into is that the Settings > Devices > Display GUI 
only affects the login you do it under.  GDM doesn’t pay attention to this, not 
even to the configuration set as root!

Then you go web searching and find solutions involving xrandr and advice to 
just put it in /etc/gdm/Init/Default, but that doesn’t seem to work.

I then got so close when I found the advice to set the monitor configuration 
in the GUI, then copy ~/.config/monitors.xml to /var/lib/gdm/.config.  That 
works…until you plug in a different monitor, because that file has a hard tie 
to the monitor brand, model and *serial number*!

The monitors.xml file looks like this:

<monitors version="2">
  <configuration>
    <logicalmonitor>
      <x>0</x>
      <y>0</y>
      <primary>yes</primary>
      <monitor>
        <monitorspec>
          <connector>VGA-1</connector>
          <vendor>SAM</vendor>
          <product>SyncMaster</product>
          <serial>H1AK50</serial>
        </monitorspec>
        <mode>
          <width>800</width>
          <height>600</height>
          <rate>60</rate>
        </mode>
      </monitor>
    </logicalmonitor>
  </configuration>
</monitors>


I have tried:

1. Removing the <monitorspec> section entirely.  gdm then gives me a black 
screen on “systemctl restart gdm” from a text console.

2. Filling out all fields except <connector> with values like “default”, 
“generic”, “any”, “*”, etc.  GDM decides you couldn’t possibly mean for this 
configuration to apply to that monitor, since it differs, so it ignores you and 
does whatever it likes instead.

3. Lots and lots of web searching for documentation on the file format to see 
if I can find out if there is a different way of specifying “any monitor, I 
don’t care”.

I was about to go reading the source code, but I thought I’d ask here first.

Keep in mind that we ship these systems out without knowing what monitor 
they’ll plug in, and once the systems are out in the field, it’s often 
difficult to get remote access to them.  That means we can’t just log in with 
SSH, probe it with xrandr, and fix it remotely; we’re probably having the 
customer log in via the GUI to fix the problem that prevents us from SSHing in 
in the first place!

It’s fine in our case to pin the configuration to “VGA-1”, because we know what 
connectors are on the server itself.  We just don’t know what the customer will 
plug into it.

I assume there’s a minimum screen resolution for CentOS 7, probably 1024x768, 
but we’re installing these systems with a reasonable screen.  We only need this 
low resolution post-install, and then only so the customer can pull up a 
terminal and maybe run the Network settings tool.  For that, 800x600 is fine.  
You do have to drag the Settings app around a bit to see everything, but it 
works.

Our current fallback is Ctrl-Alt-F2 and nano, which is tolerable for some 
things — “Okay, type ‘ifconfig’…now find the line that has ‘inet’ in it, and 
read that back to me…’ — but remotely talking someone through changing 
ifcfg-noisenoise via nano is a minor nightmare, especially now that Confusing 
Network Device Naming is the default.


Re: [CentOS] C7 basic install, HATE

2019-02-15 Thread Warren Young
On Feb 15, 2019, at 11:08 AM, mark  wrote:
> 
> To say "spend $20..." does not relate to "have to find a workaround to do
> it *today*", nor to "this is a  work system, I'm not driving out to
> Microcenter to buy one”.

What’s your hourly rate?  How much did *not* driving out to Microcenter cost 
your employer?

If you’re salaried, there’s the opportunity cost: what work did you *not* do 
while trying to save that $20 and the hour-long round trip?

RHEL drops old hardware constantly, roughly aligning with its ~10 year support 
window.  It doesn’t surprise me that the early Matrox cards have fallen out of 
support by now.

The last such deprecation to bite me was the 3ware 8000 series cards, last 
supported on EL5 or 6.  When resuscitating such systems, we either have to 
stick with the old OS or upgrade them to 9000 series cards — which won’t attach 
8000 series RAID sets — or switch array technologies entirely.

Doubtless you can throw heroic efforts at getting old X drivers to build with 
current software, but is that a good use of your time, given the alternatives?


Re: [CentOS] Please Recommend Affordable and Reliable Cloud Storage for 50 TB of Data

2019-02-15 Thread Warren Young
On Feb 15, 2019, at 10:00 AM, mark  wrote:
> 
> Warren Young wrote:
>> 
>> The cheapest RAID-friendly drives we’re buying these days are about US
>> $37/TB in low quantities.
> 
> $38/tb? Google shopping shows me a 4TB WD Red at $110.

5400 RPM.

Red Pros are $170 at NewEgg, and we’re using WD Golds at $199.  That’s $50/TB, 
but the $37/TB mark is for higher capacity drives.

Even if we take your numbers and halve them again to get miracle high-quantity 
pricing, the payoff time at the OP’s wished-for $1/TB/year is about 14 years, 
and we haven’t even added in ancillary costs like the enclosure, redundancy, 
power, cooling, networking, staff, drive replacement…

> A two-drive esata bay is under $100.

…which won’t hold 50 TB of data.

Even a 4-drive enclosure isn’t enough, since even with single redundancy, the 
largest drives are 15/16 TB, depending on the technology, so that only gets you 
45 or 48 TB.  And then you’ve got to work out how to use those SMR or MAMR 
drives efficiently.

Stepping back to standard technology 10 TB drives requires 7 of them to get 50 
TB with dual redundancy, so even with miracle pricing, you’re probably talking 
about something like $750 for the raw hardware, which gets paid back in ~15 
years on the OP’s schedule, and then only if all 7 drives last 15 years!
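The arithmetic behind that estimate, hardware only:

```shell
# Rough payoff math: raw drive cost vs. the OP's yearly budget.
hardware_cost=750     # ~7 standard 10 TB drives, miracle pricing
annual_budget=50      # the OP's $50/year wish
echo "$(( hardware_cost / annual_budget )) years to pay off"   # prints "15 years to pay off"
```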

Tell ’im ’e’s *dreamin’!*


Re: [CentOS] Please Recommend Affordable and Reliable Cloud Storage for 50 TB of Data

2019-02-15 Thread Warren Young
On Feb 15, 2019, at 7:56 AM, Yan Li  wrote:
> 
> G Suite Business tier. Buy five users and you get unlimited Google Drive
> storage. That's $50/month.

So, you’re already 12x higher than his budget, and it’ll be going up 20% in 
early April.

On top of that, there’s certainly a transfer rate limit.  I couldn’t find a 
reliable source saying what that limit is, but I found a related limit for G 
Suite here:

https://support.google.com/a/answer/1071518

If that applies to Google Drive as well, it’ll take about 182 years to send 50 
TB.

I can say from personal experience that Google is a bit stingy about such 
things.  They give G Suite basic users 30 GB of storage, but if you try to put 
tens of GB in it, you can only pull that all down a few times a month before 
that user’s account gets locked.  That happened to us with one user that kept 
blowing up his laptop, requiring a rebuild, and thus a re-download of the 
entire IMAP archive he insisted on keeping in the cloud.

If they’re doing that to us, 3 orders of magnitude down from the OP’s target 
value, I think he’ll have a bad time trying to put 50 TB into a single Google 
Drive account.


Re: [CentOS] Please Recommend Affordable and Reliable Cloud Storage for 50 TB of Data

2019-02-15 Thread Warren Young
On Feb 15, 2019, at 1:14 AM, Turritopsis Dohrnii Teo En Ming 
 wrote:
> 
>>> Could you recommend affordable and reliable cloud storage for 50 TB of data?
>> 
> My budget is around USD$50 per year.

The cheapest RAID-friendly drives we’re buying these days are about US $37/TB 
in low quantities.

A big data warehouser will be getting a substantial price break on their 
drives, but even halving the payoff time, you’re still asking the cloud storage 
provider to accept a payoff time in the 18 year range.  And that’s ignoring the 
cost of rack space, computers to run the drives, networking, bandwidth, staff, 
redundancy, drive turnover...

There’s nothing magical about The Cloud that makes everything cheaper.  They 
still have to buy the same components you and I do, then they have to pay 
someone to manage it all, someone else to house it all, etc.

You’re *dreaming*.


Re: [CentOS] C7 basic install, HATE

2019-02-14 Thread Warren Young
On Feb 14, 2019, at 1:47 PM, mark  wrote:
> 
> Any suggestions?

Do a text install: https://unix.stackexchange.com/a/361935/138


Re: [CentOS] C7, firewalld and rich rules

2019-02-01 Thread Warren Young
On Jan 31, 2019, at 3:25 PM, mark  wrote:
> 
> Warren Young wrote:
>> 
>> ...there aren’t automatic programming
>> language conversion tools...
> 
> You mean like the one I meant to use 25 or so years ago, basic2c?

All right, so it’s a bad example, but it’s bad both directions.

The problem of firewall rule conversion isn’t about protecting billions of 
dollars of investment in development by moving from a disfavored, underpowered 
programming language to a faster, better, and rising language.  The economic 
incentive for a firewall rule conversion tool is much smaller.

I don’t think it’s entirely uneconomic to solve this problem.  I see two 
plausible options:

1. Find everyone who has this problem, have them all chip in $1-5, and you’ll 
probably have enough to pay for the development of a tool at least as faithful 
as that BASIC to C translator you mentioned.  We’ve got the crowdfunding 
platforms to make this possible.

2. Find a single organization that’s got this problem badly enough that they’re 
willing to fund the development of this tool from their internal IT/development 
budget.  You might stretch it to two organizations resulting in a pair of 
collaborating developers, but beyond that, you’ve got too many cooks in the 
kitchen for the size of the problem, so you go to #1.

If you’re like me, both look like hard solutions, which is probably a better 
answer to your question than my language translator attempt.


Re: [CentOS] C7, firewalld and rich rules

2019-01-31 Thread Warren Young
On Jan 31, 2019, at 11:12 AM, mark  wrote:
> 
> Why would *ANYONE* think that everyone should just start from scratch,
> taking all the time in the world to get it converted?

If the conversion were simple enough to be easily automated, the new system is 
probably no more than just a syntactic difference away from the old, and thus 
does not provide any interesting new functionality or change in existing 
functionality.

It’s much the same as asking why there aren’t automatic programming language 
conversion tools: we wouldn’t need more than one programming language if they 
all mapped 1:1 to each other, short of going down to the machine code level and 
back up the technology stack.

Pretty much all the other major competing OSes have had at least one 
incompatible shift in their firewall implementations over the years, even that 
supposed bastion of ultimate stability, FreeBSD.  I take that as a sign that 
those designing firewall schemes in the early 1990s didn’t have magical levels 
of foresight when doing their work, so that replacements had to be incompatible 
to provide the functionality we now expect.


Re: [CentOS] Firewalld and iptables

2018-12-14 Thread Warren Young
On Dec 14, 2018, at 3:57 PM, Jon LaBadie  wrote:
> 
>  : Bad rule (does a matching rule exist in that chain?).

That makes sense: the old iptables service installed several default chains, 
and firewalld does as well, but they’re not named the same, and I doubt there’s 
a 1:1 mapping between them.

That’s part of why I advised you to use one or the other, not both.  Another 
reason is that their persistent rule stores use entirely different file 
formats, in different locations.
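You can see the difference for yourself; firewalld generates its own chain set, with names like IN_public_allow and INPUT_direct (typical examples, not a complete list):

```shell
# List the chains firewalld actually created in the running kernel:
sudo iptables -S | grep '^-N'
# Compare with the legacy service's saved ruleset, which only knows the
# built-in INPUT/FORWARD/OUTPUT chains plus whatever you defined yourself:
sudo cat /etc/sysconfig/iptables 2>/dev/null
```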


Re: [CentOS] Firewalld and iptables

2018-12-14 Thread Warren Young
On Dec 14, 2018, at 3:14 PM, Warren Young  wrote:
> 
>alias fcp="sudo firewall-cmd --permanent"

These commands are top-of-mind for me at the moment because I just configured a 
Raspberry Pi based network appliance at home, and installed firewalld on it for 
the purpose because I like it so much better than raw iptables commands.

The default OS for the Raspberry Pi is derived from Debian, which has firewalld 
in its package repository but doesn’t use it by default.  I found it well worth 
going out of my way to get firewalld on that Pi.

Good stuff!


Re: [CentOS] Firewalld and iptables

2018-12-14 Thread Warren Young
On Dec 14, 2018, at 2:30 PM, Jon LaBadie  wrote:
> 
> After a recent large update, firewalld's status contains
> many lines of the form:
> 
>  WARNING: COMMAND_FAILED: '/usr/sbin/iptables…

What’s the rest of the command?

> Checking iptables.service status shows it to be masked.

That’s probably from package iptables-services, which isn’t installed by 
default on purpose. It’s the legacy service from before firewalld was made the 
default.  Use one or the other, not both.

I strongly recommend that you use firewalld instead of the old iptables 
service.  While firewalld is based on the kernel’s iptables facility, it gives 
many additional capabilities that you don’t get with raw iptables commands.  
The old service was little more than a macro system for raw iptables commands.  

Contrast firewalld, which:

1. Integrates with systemd.

2. Understands “services” as opposed to just raw port numbers and such.  This 
is especially valuable with complex services like NFS.  With iptables…well, you 
forgot to allow rpc.portmap again, didn’t you? :)

3. Updates the firewall in real time, live.  You can do this by hand with raw 
iptables commands, but the syntax for inserting new rules at the right place in 
the tables is awkward.  Firewalld automates all of that for you.

4. Add --permanent to a new rule to save it securely to the rule set loaded on 
startup.  Contrast iptables, where you’re often running across blog articles 
and such that talk about saving such rules separately to /etc/rc.local and 
such, or editing /etc/sysconfig/iptables directly.

5. If you prefer the “update global config and reload” pattern of the old 
system-config-firewall scheme from CentOS 6, you can do that with firewalld, 
too:

alias fcp="sudo firewall-cmd --permanent"
fcp --add-service foo
fcp --add-port somename/tcp
…more fcp commands here…
sudo firewall-cmd --reload
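After a reload like that, it’s worth confirming the runtime and permanent views agree (output varies with your zone setup):

```shell
sudo firewall-cmd --list-all               # runtime config of the default zone
sudo firewall-cmd --permanent --list-all   # what loads on the next restart
# The reverse workflow also exists: keep live-added rules without retyping them.
sudo firewall-cmd --runtime-to-permanent
```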




Re: [CentOS] Tools/mechanisms for the management of access permissions in big filebased datasets

2018-11-28 Thread Warren Young
On Nov 28, 2018, at 2:36 AM, Frank Thommen  wrote:
> 
> Our problem is more the management side.  Effectively we are looking for a 
> tool that helps us manage these permissions

I want ACLs to work.  There’s a real problem to solve, which is that the old 
user:group rwx Unix permission system doesn’t let you express common wishes 
like “Angel & Bobby own this file, and groups Cookie and Danish can read and 
write it, and user Egbert can write it.”
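That wish is at least expressible with POSIX ACLs; a sketch using the hypothetical names above:

```shell
# Owners get rwx; groups cookie and danish get read/write; egbert gets write:
setfacl -m u:angel:rwx,u:bobby:rwx shared-file
setfacl -m g:cookie:rw-,g:danish:rw- shared-file
setfacl -m u:egbert:-w- shared-file
getfacl shared-file    # ...and this listing is what you must then mentally parse
```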

The problem is, ACLs are hidden by default with respect to “ls -l”, and when 
you do make them visible with getfacl, you now have a complex mental parsing 
problem to solve before you understand the meaning of the ACL.  Add in ACL 
inheritance and you’ve got a real mess.

Make a facility hidden and complex, and you pretty much guarantee that few will 
use that facility, and those who do will at times create messes they can’t 
properly understand.  A security mechanism that’s most often underused, 
misapplied, or both is a bad system.

FOSS is good at solving such problems, so the only way I can see that tools to 
solve this problem don’t exist is that few actually use ACLs, perhaps because 
of the reasons above.

Who here uses ACLs to good effect?  Are you using more than just 
getfacl/setfacl to do it?


Re: [CentOS] OT: good free email service ?

2018-11-09 Thread Warren Young
On Nov 9, 2018, at 9:22 AM, Vic Chester  wrote:
> 
> https://protonmail.com/

Aside from semi-charitable organizations like that, I wouldn’t expect good free 
email service to exist.  It’s seriously complicated to run a 
properly-configured email server.

The last time I looked into it, there were something like 24 separate RFCs an 
SMTP-only server had to implement, and much of that complexity spills over into 
the administration side, such as DKIM setup.  Then you have everything outside 
of the protocol such as spam filtering, blacklist/greylist/whitelist 
maintenance, TLS key updates, OS updates, etc.

Expect to pay for what you use, either by throwing a whole lot of your own time 
at it or paying someone to spend that time on your behalf.  Unless you’re doing 
this for educational or professional reasons, where the time spent is paid back 
handsomely, it’s probably a better trade to pay someone to handle it for you.


Re: [CentOS] Stupid C7 firewall question

2018-10-25 Thread Warren Young
On Oct 25, 2018, at 9:04 AM, Valeri Galtsev  wrote:
> 
> suppose I made configuration of some machine, which then I am going to 
> replicate just by using kickstart when building new machines. What should I 
> add to kickstart configuration file to make my configured firewalld part 
> reproduced on all newly built machines?

We use shell scripts here to bring new machines up, not kickstart, but my first 
result in a web search looks useful:

https://firewalld.org/documentation/man-pages/firewall-offline-cmd.html
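In kickstart terms, that means running the offline variant from %post; a sketch (the service and port are examples, not your actual configuration):

```shell
%post
# Recreate the firewalld configuration before firewalld ever starts:
firewall-offline-cmd --zone=public --add-service=ssh
firewall-offline-cmd --zone=public --add-port=8080/tcp
%end
```

For the simple cases, kickstart’s own `firewall --enabled --service=...` directive may be all you need.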

