[CentOS] CentOS 7 on PPC64le booting from a MD RAID volume

2017-09-08 Thread Tom Leach
I've been trying to install CentOS 7 AltArch ppc64le onto a new Power 8 
system and I want to configure mdraid for the volumes.  I can get 
everything working if I install to a single disk, but when I configure 
for RAID 1, the system fails to boot.

So, is mdraid1 supported for booting a Power 8 system?
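
For what it's worth, the layout usually suggested on Power is to mirror the filesystems with md while keeping a small PReP boot partition on each disk, outside the md set, so either drive stays bootable. A hedged sketch, with placeholder device names:

  # root filesystem mirrored with md RAID1 (sda3/sdb3 are assumptions)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
  # install the boot code into the PReP partition on *both* disks
  grub2-install /dev/sda1
  grub2-install /dev/sdb1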
Tom Leach
le...@coas.oregonstate.edu
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-es] AYUDA

2017-09-08 Thread Wilmer Arambula
I use (Postfix + postfix-mysql + Dovecot + ClamAV + SpamAssassin + MariaDB +
DKIM + DMARC), and of course the SPF record, all under SSL, and it works
wonderfully.
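
For reference, a minimal sketch of the SSL/TLS part of such a Postfix setup, using postconf (the certificate paths are assumptions):

  # opportunistic TLS for incoming and outgoing SMTP
  postconf -e 'smtpd_tls_security_level = may'
  postconf -e 'smtp_tls_security_level = may'
  postconf -e 'smtpd_tls_cert_file = /etc/pki/tls/certs/mail.example.com.crt'
  postconf -e 'smtpd_tls_key_file = /etc/pki/tls/private/mail.example.com.key'
  systemctl reload postfix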

Regards,

On Fri, Sep 8, 2017 at 17:40, Rhamyro Alcoser A. ()
wrote:

> Warm greetings, dear and much respected friends. Please give me your valuable
> opinion: what characteristics should a reasonably decent mail server have on
> CentOS 7 or Debian? I mean the services and the security measures on the
> server, so that it lasts me a good while and does not keep giving me problems.
>
> Thank you very much for your opinions.
>
> --
>
> *Rhamyro Alcoser A.*
>
> *ITIL & Systems Development*
>
> *Skype: *rhamyr...@outlook.com 
>
> *Quito - Ecuador *
>
>
> *What, then, shall we say to this? If God is for us, who can be against
> us?, Rom 8:31*
> ___
> CentOS-es mailing list
> CentOS-es@centos.org
> https://lists.centos.org/mailman/listinfo/centos-es
>
___
CentOS-es mailing list
CentOS-es@centos.org
https://lists.centos.org/mailman/listinfo/centos-es


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread John R Pierce

On 9/8/2017 2:36 PM, Valeri Galtsev wrote:

With all due respect, John, this is the same situation as a hard drive's cache
not being power-protected in case of a power loss. And hard drives all lie
about a write operation being completed before the data is actually on the
platters. So we could just as well claim that hard drives are not suitable for
RAID. What I meant to find out from the experts is in what respect they claim
SSDs are unsuitable for hardware RAID as opposed to mechanical hard drives.

Am I missing something?


The major difference is that SSDs do a LOT more write buffering, as their
internal write blocks are on the order of a few hundred KB. They also
extensively reorder data on the media, both for wear leveling and to minimize
physical block writes, so there's really no way the host and/or controller can
track what's going on.


Enterprise hard disks do NOT do hidden write buffering; it's all fully
manageable via SAS or SATA commands. Desktop drives tend to lie about it to
achieve better performance. I do NOT use desktop drives in RAIDs.
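
As a concrete illustration of "manageable via SAS or SATA commands", the volatile write cache of a drive can be inspected and toggled from the host (device names are placeholders):

  hdparm -W /dev/sda          # report whether the SATA drive's volatile write cache is enabled
  hdparm -W0 /dev/sda         # turn it off, at a cost in write performance
  sdparm --get=WCE /dev/sdb   # the SAS/SCSI equivalent, via the caching mode page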


...


And one may want to adjust the stripe size to resemble the SSD's internals,
since the default is meant for hard drives, right?


Since the SSD's physical data blocks have no visible relation to logical block
numbers or CHS, it's not practical to do this. I'd use a fairly large stripe
size, like 1MB, so more data can be written sequentially to the same device
(even though the device will scramble it all over as it sees fit).
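
If one does want to experiment with that anyway, md exposes the knob directly; a sketch with placeholder devices (mdadm takes the chunk size in KiB, so 1024 is the 1MB suggested above):

  mdadm --create /dev/md0 --level=10 --chunk=1024 --raid-devices=4 /dev/sd[b-e]1
  mdadm --detail /dev/md0 | grep -i chunk   # confirm the chunk size actually in use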



--
john r pierce, recycling bits in santa cruz

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS-es] AYUDA

2017-09-08 Thread Rhamyro Alcoser A.
Warm greetings, dear and much respected friends. Please give me your valuable
opinion: what characteristics should a reasonably decent mail server have on
CentOS 7 or Debian? I mean the services and the security measures on the
server, so that it lasts me a good while and does not keep giving me problems.

Thank you very much for your opinions.

-- 

*Rhamyro Alcoser A.*

*ITIL & Systems Development*

*Skype: *rhamyr...@outlook.com 

*Quito - Ecuador *


*What, then, shall we say to this? If God is for us, who can be against
us?, Rom 8:31*
___
CentOS-es mailing list
CentOS-es@centos.org
https://lists.centos.org/mailman/listinfo/centos-es


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Valeri Galtsev

On Fri, September 8, 2017 3:06 pm, John R Pierce wrote:
> On 9/8/2017 12:52 PM, Valeri Galtsev wrote:
>> Thanks. That clears the fog a little bit. I would still like to hear
>> manufacturers/models here. My choices would be Areca or LSI (bought out
>> by Intel, so the former LSI chipset and microcode/firmware) and, as the
>> SSD, a Samsung Evo SATA III. Can anyone who has used these in hardware
>> RAID describe any bad experiences?
>
>
> Does the Samsung EVO have supercaps and write-back buffer protection?
> If not, it is in NO way suitable for reliable use in a RAID/server
> environment.

With all due respect, John, this is the same situation as a hard drive's cache
not being power-protected in case of a power loss. And hard drives all lie
about a write operation being completed before the data is actually on the
platters. So we could just as well claim that hard drives are not suitable for
RAID. What I meant to find out from the experts is in what respect they claim
SSDs are unsuitable for hardware RAID as opposed to mechanical hard drives.

Am I missing something?

>
> As far as RAIDing SSDs goes, the ONLY RAID I'd use with them is RAID1
> mirroring (or, if more than 2, RAID10 striped mirrors). And I'd probably
> do it with OS-based software RAID, as that's more likely to support SSD
> TRIM than a hardware RAID card, plus it allows the host to monitor the
> SSDs via SMART, which a hardware RAID card probably hides.

Good, thanks. My 3ware RAIDs, through their 3dm daemon, do warn me about a
SMART status of "fail" (meaning the drive, though still working, should
according to SMART be replaced ASAP). I'm not certain offhand about the LSI
ones (one should be able to query them through the command-line client
utility).
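
For the record, smartmontools can usually reach drives behind both kinds of controller; a hedged sketch (device paths and port/slot numbers are assumptions that depend on the controller):

  smartctl -a -d 3ware,0 /dev/twa0     # first port on a 3ware controller
  smartctl -a -d megaraid,0 /dev/sda   # first slot behind an LSI MegaRAID controller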

>
> I'd also make sure I undercommit the size of the SSD, so if it's a 500GB
> SSD, I'd make absolutely sure to never have more than 300-350GB of data
> on it.  If it's part of a stripe set, the only way to ensure this is to
> partition it so the RAID slice is only 300-350GB.

Great point! And one may want to adjust the stripe size to resemble the SSD's
internals, since the default is meant for hard drives, right?

Thanks, John, that was instructive!

Valeri

>
>
> --
> john r pierce, recycling bits in santa cruz
>
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
>



Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Gordon Messmer

On 09/08/2017 11:06 AM, hw wrote:

Make a test and replace a software RAID5 with a hardware RAID5.  Even with
only 4 disks, you will see an overall performance gain.  I'm guessing that
the SATA controllers they put onto the mainboards are not designed to handle
all the data --- which gets multiplied to all the disks --- and that the
PCI bus might get clogged.  There's also the CPU being burdened with the
calculations required for the RAID, and that may not be displayed by tools
like top, so you can be fooled easily.



That sounds like a whole lot of guesswork, which I'd suggest should 
inspire slightly less confidence than you are showing in it.


RAID parity calculations are accounted under a process named 
md_raid.  You will see time consumed by that code under 
all of the normal process accounting tools, including total time under 
"ps" and current time under "top". Typically, your CPU is vastly faster 
than the cheap processors on hardware RAID controllers, and the 
advantage will go to software RAID over hardware.  If your system is CPU 
bound, however, and you need that extra fraction of a percent of CPU 
cycles that go to calculating parity, hardware might offer an advantage.
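
For example, the md kernel threads show up like any other process (the exact thread name depends on the device and level, e.g. md0_raid5):

  ps -eo pid,comm,time | grep -E 'md[0-9]+_raid'   # cumulative CPU time of the md threads
  top -b -n 1 | grep -E 'md[0-9]+_raid'            # current CPU use of the same threads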


The last system I purchased had its storage controller on a PCIe 3.0 x16 
port, so its throughput to the card should be around 16GB/s.  Yours 
might be different.  I should be able to put roughly 20 disks on that 
card before the PCIe bus is the bottleneck.  If this were a RAID6 
volume, a hardware RAID card would be able to support sustained writes 
to 22 drives vs 20 for md RAID.  I don't see that as a compelling 
advantage, but it is potentially an advantage for a hypothetical 
hardware RAID card.
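
Spelling out that arithmetic with the round numbers above (the per-drive figure is an inferred assumption, not a measurement):

  ~16 GB/s of PCIe bandwidth / ~0.8 GB/s per drive ≈ 20 drives' worth of writes on the bus
  md RAID6:       data and parity both cross the bus  -> ~20 members total
  hardware RAID6: only data crosses the bus, the card
                  generates the 2 parity members      -> ~20 data + 2 parity = 22 members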


When you are testing your 4 disk RAID5 array, microbenchmarks like 
bonnie++ will show you a very significant advantage toward the hardware 
RAID as very small writes are added to the battery-backed cache on the 
card and the OS considers them complete.  However, on many cards, if the 
system writes data to the card faster than the card writes to disks, the 
cache will fill up, and at that point, the system performance can 
suddenly and unexpectedly plummet.  I've run a few workloads where that
happened, and we had to replace the system entirely and use software 
RAID instead.  Software RAID's performance tends to be far more 
predictable as the workload increases.
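
One way to see that behaviour, rather than a burst-only number, is a sustained direct-I/O write that runs longer than the controller cache can absorb; a hedged fio sketch (path, size and runtime are placeholders):

  # throughput that collapses partway through the run is the cache filling up
  fio --name=sustained-write --filename=/mnt/test/fio.dat --size=64G \
      --rw=write --bs=1M --ioengine=libaio --iodepth=16 --direct=1 \
      --runtime=600 --time_based --group_reporting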


Outside of microbenchmarks like bonnie++, software RAID often offers 
much better performance than hardware RAID controllers. Having tested 
systems extensively for many years, my advice is this:  there is no 
simple answer to the question of whether software or hardware RAID is 
better.  You need to test your specific application on your specific 
hardware to determine what configuration will work best.  There are some 
workloads where a hardware controller will offer better write 
performance, since a battery backed write-cache can complete very small 
random writes very quickly.  If that is not the specific behavior of 
your application, software RAID will very often offer you better 
performance, as well as other advantages.  On the other hand, software 
RAID absolutely requires a monitored UPS and tested auto-shutdown in 
order to be remotely reliable, just as a hardware RAID controller 
requires a battery backed write-cache, and monitoring of the battery state.


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Jon Pruente
On Fri, Sep 8, 2017 at 2:52 PM, Valeri Galtsev 
wrote:

>
> manufacturers/models here. My choices would be: Areca or LSI (bought out
> by Intel, so former LSI chipset and microcode/firmware) and as SSD Samsung
>

Intel only purchased the networking component of LSI, Axxia, from Avago.
The RAID division was merged into Broadcom (post Avago merger).
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread John R Pierce

On 9/8/2017 12:52 PM, Valeri Galtsev wrote:

Thanks. That clears the fog a little bit. I would still like to hear
manufacturers/models here. My choices would be Areca or LSI (bought out
by Intel, so the former LSI chipset and microcode/firmware) and, as the SSD,
a Samsung Evo SATA III. Can anyone who has used these in hardware RAID
describe any bad experiences?



Does the Samsung EVO have supercaps and write-back buffer protection?
If not, it is in NO way suitable for reliable use in a RAID/server
environment.


As far as RAIDing SSDs goes, the ONLY RAID I'd use with them is RAID1
mirroring (or, if more than 2, RAID10 striped mirrors). And I'd probably
do it with OS-based software RAID, as that's more likely to support SSD
TRIM than a hardware RAID card, plus it allows the host to monitor the SSDs
via SMART, which a hardware RAID card probably hides.
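
A minimal sketch of that setup with md (device names are placeholders; whether TRIM passes through depends on the kernel and RAID level, so treat the fstrim step as something to verify):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  mkfs.xfs /dev/md0 && mount -o noatime /dev/md0 /srv/data
  fstrim -v /srv/data                            # periodic TRIM instead of the 'discard' mount option
  smartctl -H /dev/sdb && smartctl -H /dev/sdc   # the host still sees each member's SMART health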


I'd also make sure I undercommit the size of the SSD, so if it's a 500GB
SSD, I'd make absolutely sure to never have more than 300-350GB of data
on it.  If it's part of a stripe set, the only way to ensure this is to
partition it so the RAID slice is only 300-350GB.
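
A sketch of that under-committing step (sizes and device are placeholders): leave the tail of the drive unpartitioned so the controller can use it as spare area.

  parted -s /dev/sdb mklabel gpt
  parted -s /dev/sdb mkpart raid 1MiB 350GiB   # ~150GB of a nominal 500GB drive left untouched
  parted -s /dev/sdb print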



--
john r pierce, recycling bits in santa cruz

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Valeri Galtsev

On Fri, September 8, 2017 12:56 pm, hw wrote:
> Valeri Galtsev wrote:
>>
>> On Fri, September 8, 2017 9:48 am, hw wrote:
>>> m.r...@5-cent.us wrote:
 hw wrote:
> Mark Haney wrote:
 
>> BTRFS isn't going to impact I/O any more significantly than, say,
>> XFS.
>
> But mdadm does, the impact is severe.  I know there are ppl saying
> otherwise, but I´ve seen the impact myself, and I definitely
> don´t
> want
> it on that particular server because it would likely interfere with
> other services.
 
 I haven't really been following this thread, but if your requirements
 are
 that heavy, you're past the point that you need to spring some money
 and
 buy hardware RAID cards, like LSI, er, Avago, I mean, who's bought
 them
 more recently?
>>>
>>> Heavy requirements are not required for the impact of md-RAID to be
>>> noticeable.
>>>
>>> Hardware RAID is already in place, but the SSDs are "extra" and, as I
>>> said,
>>> not suited to be used with hardware RAID.
>>
>> Could someone, please, elaborate on the statement that "SSDs are not
>> suitable for hardware RAID".
>
> When you search for it, you'll find that besides wearing out undesirably
> fast --- which apparently can be attributed mostly to less overcommitment
> of the drive --- you may also experience degraded performance over time,
> which can be worse than you would get with spinning disks, or at least not
> much better.

Thanks. That clears the fog a little bit. I would still like to hear
manufacturers/models here. My choices would be Areca or LSI (bought out
by Intel, so the former LSI chipset and microcode/firmware) and, as the SSD,
a Samsung Evo SATA III. Can anyone who has used these in hardware RAID
describe any bad experiences?

I am kind of shying away from "crap" hardware, which in the long run is more
expensive even though it looks cheaper ("Pricegrabber is your enemy", I would
normally say to my users). So I would never consider using poorly/cheaply
designed hardware in a setup one expects performance from (e.g. hardware
RAID based storage). Am I still taking a chance of hitting a "bad" hardware
RAID + SSD combination? Just curious where we actually stand.

Thanks again for fruitful discussion!

Valeri

>
> Add to that the firmware being designed for an entirely different
> application
> and having bugs, and your experiences with surprisingly incompatible
> hardware,
> and you can imagine that using an SSD not designed for hardware RAID
> applications with hardware RAID is a bad idea.  There is a difference like
> night and day between "consumer hardware" and hardware you can actually
> use,
> and that is not only the price you pay for it.
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
>



Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Johnny Hughes
On 09/07/2017 12:57 PM, hw wrote:
> 
> Hi,
> 
> is there anything that speaks against putting a cyrus mail spool onto a
> btrfs subvolume?

This is what Red Hat says about btrfs:

The Btrfs file system has been in Technology Preview state since the
initial release of Red Hat Enterprise Linux 6. Red Hat will not be
moving Btrfs to a fully supported feature and it will be removed in a
future major release of Red Hat Enterprise Linux.

The Btrfs file system did receive numerous updates from the upstream in
Red Hat Enterprise Linux 7.4 and will remain available in the Red Hat
Enterprise Linux 7 series. However, this is the last planned update to
this feature.





signature.asc
Description: OpenPGP digital signature
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread m . roth
Mark Haney wrote:
> On 09/08/2017 01:31 PM, hw wrote:
>> Mark Haney wrote:
>>

>> Probably with the very expensive SSDs suited for this ...
> Possibly, but that's somewhat irrelevant.  I've taken off the shelf SSDs
> and hardware RAID'd them.  If they work for the hell I put them through
> (processing weather data), they'll work for the type of service you're
> saying you have.

> Not true at all.  Maybe 5 years ago SSDs were hit or miss with hardware
> RAID.  Not anymore.  It's just another drive to the system, the
> controllers don't know the difference between a SATA HDD and a SATA SSD.
> Couple that with the low volume of mail, and you should be fine for HW
> RAID.

Actually, with the usage you're talking about, I'm surprised you're using
SATA and not SAS.

   mark

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Mark Haney

On 09/08/2017 01:31 PM, hw wrote:

Mark Haney wrote:

I/O is not heavy in that sense, that´s why I said that´s not the 
application.
There is I/O which, as tests have shown, benefits greatly from low 
latency, which
is where the idea to use SSDs for the relevant data has arisen from.  
This I/O
only involves a small amount of data and is not sustained over long 
periods of time.
What exactly the problem is with the application being slow with 
spinning disks is
unknown because I don´t have the sources, and the maker of the 
application refuses

to deal with the problem entirely.

Since the data requiring low latency will occupy about 5% of the 
available space on
the SSDs and since they are large enough to hold the mail spool for 
about 10 years at
its current rate of growth besides that data, these SSDs could be well 
used to hold

that mail spool.
See, this is the kind of information that would have made this thread 
far shorter.  (Maybe.)  The one thing that you didn't explain is whether 
this application is the one /using/ the mail spool or if you're adding 
Cyrus to that system to be a mail server.

BTRFS isn't going to impact I/O any more significantly than, say, XFS.


But mdadm does, the impact is severe.  I know there are ppl saying 
otherwise,

but I´ve seen the impact myself, and I definitely don´t want it on that
particular server because it would likely interfere with other 
services.  I don´t
know if the software RAID of btrfs is better in that or not, though, 
but I´m
seeing btrfs on SSDs being fast, and testing with the particular 
application has

shown a speedup of factor 20--30.
I never said anything about MD RAID.  I trust that about as far as I 
could throw it.  And having had 5 surgeries on my throwing shoulder 
wouldn't be far.


How else would I create a RAID with these SSDs?

I´ve been using md-RAID for years, and it always worked fine.

That is the crucial improvement.  If the hardware RAID delivers 
that, I´ll use
that and probably remove the SSDs from the machine as it wouldn´t 
even make sense
to put temporary data onto them because that would involve software 
RAID.
Again, if the idea is to have fast primary storage, there are pretty 
large SSDs available now and I've hardware RAIDED SSDs before without 
trouble, though not for any heavy lifting, it's my test servers at 
home. Without an idea of the expected mail traffic, this is all 
speculation.


The SSDs don´t need to be large, and they aren´t.  They are already 
greatly oversized at

512GB nominal capacity.

There´s only a few hundred emails per day.  There is no special 
requirement for their
storage, but there is a lot of free space on these SSDs, and since the 
email traffic is
mostly read-only, it won´t wear out the SSDs.  It simply would make 
sense to put the

mail spool onto these SSDs.

It does have serious stability/data integrity issues that XFS 
doesn't have.  There's no reason not to use SSDs for storage of 
immediate data and mechanical drives for archival data storage.


As for VMs we run a huge Zimbra cluster in VMs on VPC with large 
primary SSD volumes and even larger (and slower) secondary volumes 
for archived mail.  It's all CentOS 6 and works very well.  We 
process 600 million emails a month on that virtual cluster.  All 
EXT4 inside LVM.


Do you use hardware RAID with SSDs?

We do not here where I work, but that was setup LONG before I arrived.


Probably with the very expensive SSDs suited for this ...
Possibly, but that's somewhat irrelevant.  I've taken off the shelf SSDs 
and hardware RAID'd them.  If they work for the hell I put them through 
(processing weather data), they'll work for the type of service you're 
saying you have.


If the SSDs you have aren't suitable for hardware RAID, then they 
aren't good for production level mail spools, IMHO.  I mean, you're 
talking like you're expecting a metric buttload of mail traffic, so 
it stands to reason you'll need really beefy hardware.  I don't think 
you can do what you seem to need on budget hardware. Personally, and 
solely based on this thread alone, if I was building this in-house, 
I'd get a decent server cluster together and build a FC or iSCSI SAN 
to a Nimble storage array with Flash/SSD front ends and large HDDs in 
the back end.  This solves virtually all your problems.  The servers 
will have tiny SSD boot drives (which I prefer over booting from the 
SAN) and then everything else gets handled by the storage back-end.


If SSDs not suitable for RAID usage aren't suitable for production use, then
basically all SSDs not suitable for RAID usage are SSDs that can't be used for
anything that requires something less volatile than a ramdisk.  Experience
with such SSDs contradicts this so far.
Not true at all.  Maybe 5 years ago SSDs were hit or miss with hardware 
RAID.  Not anymore.  It's just another drive to the system, the 
controllers don't know the difference between a SATA HDD and a SATA SSD. 
Couple that with the low volume 

Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread hw

m.r...@5-cent.us wrote:

Mark Haney wrote:

On 09/08/2017 09:49 AM, hw wrote:

Mark Haney wrote:




It depends, i. e. I can´t tell how these SSDs would behave if large
amounts of data would be written and/or read to/from them over extended
periods of time because I haven´t tested that.  That isn´t the
application, anyway.


If your I/O is going to be heavy (and you've not mentioned expected
traffic, so we can only go on what little we glean from your posts),
then SSDs will likely start having issues sooner than a mechanical drive
might.  (Though, YMMV.)  As I've said, we process 600 million messages a
month, on primary SSDs in a VMWare cluster, with mechanical storage for
older, archived user mail.  Archived, may not be exactly correct, but
the context should be clear.


One thing to note, which I'm aware of because I was recently spec'ing out
a Dell server: Dell, at least, offers two kinds of SSDs, one for heavy
write, I think it was, and one for equal r/w. You might dig into that.


But mdadm does, the impact is severe.  I know there are ppl saying
otherwise, but I´ve seen the impact myself, and I definitely don´t want
it on that particular server because it would likely interfere with other
services.  I don´t know if the software RAID of btrfs is better in that
or not, though, but I´m seeing btrfs on SSDs being fast, and testing
with the particular application has shown a speedup of factor 20--30.


Odd, we've never seen anything like that. Of course, we're not handling
the kind of mail you are... but serious scientific computing hits storage
hard, also.


I never said anything about MD RAID.  I trust that about as far as I
could throw it.  And having had 5 surgeries on my throwing shoulder
wouldn't be far.


Why? We have it all over, and have never seen a problem with it. Nor have
I, personally, as I have a RAID 1 at home.



Make a test and replace a software RAID5 with a hardware RAID5.  Even with
only 4 disks, you will see an overall performance gain.  I'm guessing that
the SATA controllers they put onto the mainboards are not designed to handle
all the data --- which gets multiplied to all the disks --- and that the
PCI bus might get clogged.  There's also the CPU being burdened with the
calculations required for the RAID, and that may not be displayed by tools
like top, so you can be fooled easily.
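
A minimal sketch of the software side of such a comparison, so the CPU cost is at least visible while the test runs (devices, mount point and sizes are placeholders):

  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]1
  mkfs.xfs /dev/md0 && mount /dev/md0 /mnt/test
  dd if=/dev/zero of=/mnt/test/big.img bs=1M count=16384 oflag=direct &
  top -b -n 1 | grep -E 'md0_raid5|Cpu'   # the md0_raid5 kernel thread carries the parity work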

Graphics cards have hardware acceleration for a reason.  When was the last
time you tried to do software rendering, and what frame rates did you get? :)
Offloading the I/O to a dedicated controller gives you room for the things
you actually want to do, similar to a graphics card.
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread m . roth
hw wrote:
> Mark Haney wrote:
>> On 09/08/2017 09:49 AM, hw wrote:
>>> Mark Haney wrote:

> Probably with the very expensive SSDs suited for this ...

>>>
>>> That´s because I do not store data on a single disk, without
>>> redundancy, and the SSDs I have are not suitable for hardware RAID.

That's a biggie: are these SSDs consumer grade, or enterprise grade? It
was common knowledge 8-9 years ago that you *never* want consumer grade in
anything that mattered, other than maybe a home PC - they wear out much
sooner.

But then, you can't really use consumer-grade HDs in a server. We like
the NAS-rated ones, like WD Red, which are about 1.33x the price of
consumer grade, and solid... and a lot less than the enterprise-grade ones,
which are about 3x consumer grade.

 mark

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread hw

Valeri Galtsev wrote:


On Fri, September 8, 2017 9:48 am, hw wrote:

m.r...@5-cent.us wrote:

hw wrote:

Mark Haney wrote:



BTRFS isn't going to impact I/O any more significantly than, say, XFS.


But mdadm does, the impact is severe.  I know there are ppl saying
otherwise, but I´ve seen the impact myself, and I definitely don´t
want
it on that particular server because it would likely interfere with
other services.


I haven't really been following this thread, but if your requirements
are
that heavy, you're past the point that you need to spring some money and
buy hardware RAID cards, like LSI, er, Avago, I mean, who's bought them
more recently?


Heavy requirements are not required for the impact of md-RAID to be
noticeable.

Hardware RAID is already in place, but the SSDs are "extra" and, as I
said,
not suited to be used with hardware RAID.


Could someone, please, elaborate on the statement that "SSDs are not
suitable for hardware RAID".


When you search for it, you'll find that besides wearing out undesirably
fast --- which apparently can be attributed mostly to less overcommitment
of the drive --- you may also experience degraded performance over time,
which can be worse than you would get with spinning disks, or at least not
much better.

Add to that the firmware being designed for an entirely different application
and having bugs, and your own experiences with surprisingly incompatible
hardware, and you can imagine that using an SSD not designed for hardware RAID
applications with hardware RAID is a bad idea.  There is a night-and-day
difference between "consumer hardware" and hardware you can actually use,
and the difference is not only the price you pay for it.
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread m . roth
Mark Haney wrote:
> On 09/08/2017 09:49 AM, hw wrote:
>> Mark Haney wrote:

>>
>> It depends, i. e. I can´t tell how these SSDs would behave if large
>> amounts of data would be written and/or read to/from them over extended
>> periods of time because I haven´t tested that.  That isn´t the
>> application, anyway.
>
> If your I/O is going to be heavy (and you've not mentioned expected
> traffic, so we can only go on what little we glean from your posts),
> then SSDs will likely start having issues sooner than a mechanical drive
> might.  (Though, YMMV.)  As I've said, we process 600 million messages a
> month, on primary SSDs in a VMWare cluster, with mechanical storage for
> older, archived user mail.  Archived, may not be exactly correct, but
> the context should be clear.
>
One thing to note, which I'm aware of because I was recently spec'ing out
a Dell server: Dell, at least, offers two kinds of SSDs, one for heavy
write, I think it was, and one for equal r/w. You might dig into that.
>>
>> But mdadm does, the impact is severe.  I know there are ppl saying
>> otherwise, but I´ve seen the impact myself, and I definitely don´t want
>> it on that particular server because it would likely interfere with other
>> services.  I don´t know if the software RAID of btrfs is better in that
>> or not, though, but I´m seeing btrfs on SSDs being fast, and testing
>> with the particular application has shown a speedup of factor 20--30.

Odd, we've never seen anything like that. Of course, we're not handling
the kind of mail you are... but serious scientific computing hits storage
hard, also.

> I never said anything about MD RAID.  I trust that about as far as I
> could throw it.  And having had 5 surgeries on my throwing shoulder
> wouldn't be far.

Why? We have it all over, and have never seen a problem with it. Nor have
I, personally, as I have a RAID 1 at home.


  mark

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread hw

Mark Haney wrote:

On 09/08/2017 09:49 AM, hw wrote:

Mark Haney wrote:

I hate top posting, but since you've got two items I want to comment on, I'll 
suck it up for now.


I do, too, yet sometimes it´s reasonable.  I also hate it when the lines
are too long :)


I'm afraid you'll have to live with it a bit longer.  Sorry.

Having SSDs alone will give you great performance regardless of filesystem.


It depends, i. e. I can´t tell how these SSDs would behave if large amounts of
data would be written and/or read to/from them over extended periods of time 
because
I haven´t tested that.  That isn´t the application, anyway.


If your I/O is going to be heavy (and you've not mentioned expected traffic, so 
we can only go on what little we glean from your posts), then SSDs will likely 
start having issues sooner than a mechanical drive might.  (Though, YMMV.)  As 
I've said, we process 600 million messages a month, on primary SSDs in a VMWare 
cluster, with mechanical storage for older, archived user mail.  Archived, may 
not be exactly correct, but the context should be clear.


I/O is not heavy in that sense, that's why I said that's not the application.
There is I/O which, as tests have shown, benefits greatly from low latency,
which is where the idea to use SSDs for the relevant data has arisen from.
This I/O only involves a small amount of data and is not sustained over long
periods of time.  What exactly the problem is with the application being slow
with spinning disks is unknown because I don't have the sources, and the maker
of the application refuses to deal with the problem entirely.

Since the data requiring low latency will occupy about 5% of the available
space on the SSDs, and since they are large enough to hold the mail spool for
about 10 years at its current rate of growth besides that data, these SSDs
could be well used to hold that mail spool.


BTRFS isn't going to impact I/O any more significantly than, say, XFS.


But mdadm does, the impact is severe.  I know there are people saying otherwise,
but I've seen the impact myself, and I definitely don't want it on that
particular server because it would likely interfere with other services.  I
don't know if the software RAID of btrfs is better in that respect or not,
though, but I'm seeing btrfs on SSDs being fast, and testing with the
particular application has shown a speedup of a factor of 20--30.

I never said anything about MD RAID.  I trust that about as far as I could
throw it.  And having had 5 surgeries on my throwing shoulder, that wouldn't
be far.


How else would I create a RAID with these SSDs?

I've been using md-RAID for years, and it always worked fine.


That is the crucial improvement.  If the hardware RAID delivers that, I'll use
that and probably remove the SSDs from the machine, as it wouldn't even make
sense to put temporary data onto them because that would involve software RAID.

Again, if the idea is to have fast primary storage, there are pretty large SSDs
available now, and I've hardware-RAIDed SSDs before without trouble, though not
for any heavy lifting; it's my test servers at home. Without an idea of the
expected mail traffic, this is all speculation.


The SSDs don't need to be large, and they aren't.  They are already greatly
oversized at 512GB nominal capacity.

There's only a few hundred emails per day.  There is no special requirement for
their storage, but there is a lot of free space on these SSDs, and since the
email traffic is mostly read-only, it won't wear out the SSDs.  It simply would
make sense to put the mail spool onto these SSDs.


It does have serious stability/data integrity issues that XFS doesn't have.  
There's no reason not to use SSDs for storage of immediate data and mechanical 
drives for archival data storage.

As for VMs we run a huge Zimbra cluster in VMs on VPC with large primary SSD 
volumes and even larger (and slower) secondary volumes for archived mail.  It's 
all CentOS 6 and works very well.  We process 600 million emails a month on 
that virtual cluster.  All EXT4 inside LVM.


Do you use hardware RAID with SSDs?

We do not here where I work, but that was setup LONG before I arrived.


Probably with the very expensive SSDs suited for this ...




I can't tell you what to do, but it seems to me you're viewing your setup from 
a narrow SSD/BTRFS standpoint.  Lots of ways to skin that cat.


That's because I do not store data on a single disk, without redundancy, and
the SSDs I have are not suitable for hardware RAID.  So what else is there but
either md-RAID or btrfs when I do not want to use ZFS?  I also do not want to
use md-RAID, hence only btrfs remains.  I also like to use sub-volumes, though
that isn't a requirement (because I can use directories instead and lose the
ability to make snapshots).


If the SSDs you have aren't suitable for hardware RAID, then they aren't good 
for production level mail spools, IMHO.  I mean, you're talking like you're 
expecting a metric buttload 

Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Stephen John Smoogen
On 8 September 2017 at 12:13, Valeri Galtsev  wrote:
>
> On Fri, September 8, 2017 11:07 am, Stephen John Smoogen wrote:
>> On 8 September 2017 at 11:00, Valeri Galtsev 
>> wrote:
>>>
>>> On Fri, September 8, 2017 9:48 am, hw wrote:
 m.r...@5-cent.us wrote:
> hw wrote:
>> Mark Haney wrote:
> 
>>> BTRFS isn't going to impact I/O any more significantly than, say,
>>> XFS.
>>
>> But mdadm does, the impact is severe.  I know there are ppl saying
>> otherwise, but I´ve seen the impact myself, and I definitely
>> don´t
>> want
>> it on that particular server because it would likely interfere with
>> other services.
> 
> I haven't really been following this thread, but if your requirements
> are
> that heavy, you're past the point that you need to spring some money
> and
> buy hardware RAID cards, like LSI, er, Avago, I mean, who's bought
> them
> more recently?

 Heavy requirements are not required for the impact of md-RAID to be
 noticeable.

 Hardware RAID is already in place, but the SSDs are "extra" and, as I
 said,
 not suited to be used with hardware RAID.
>>>
>>> Could someone, please, elaborate on the statement that "SSDs are not
>>> suitable for hardware RAID".
>>>
>>
>> It will depend on the type of SSD and the type of hardware RAID. There
>> are at least 4 different classes of SSD drives with different levels
>> of cache, write/read performance, number of lifetime writes, etc.
>> There are also multiple types of hardware RAID. A lot of hardware RAID
>> will try to even out disk usage in different ways. This means 'moving'
>> the heavily used data from slow parts to fast parts etc etc.
>
> Wow, you learn something every day ;-) Which hardware RAIDs do this moving
> of data (manufacturer/model, please --- believe it or not, I have never
> heard of that ;-). And the "slow part" and "fast part" of what are the data
> being moved between?
>
> Thanks in advance for tutorial!
>

I thought it was HP who had these, but I can't find it.. which means
without references... I get an F. My apologies on that. Thank you for
keeping me honest.

-- 
Stephen J Smoogen.
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Valeri Galtsev

On Fri, September 8, 2017 11:07 am, Stephen John Smoogen wrote:
> On 8 September 2017 at 11:00, Valeri Galtsev 
> wrote:
>>
>> On Fri, September 8, 2017 9:48 am, hw wrote:
>>> m.r...@5-cent.us wrote:
 hw wrote:
> Mark Haney wrote:
 
>> BTRFS isn't going to impact I/O any more significantly than, say,
>> XFS.
>
> But mdadm does, the impact is severe.  I know there are ppl saying
> otherwise, but I´ve seen the impact myself, and I definitely
> don´t
> want
> it on that particular server because it would likely interfere with
> other services.
 
 I haven't really been following this thread, but if your requirements
 are
 that heavy, you're past the point that you need to spring some money
 and
 buy hardware RAID cards, like LSI, er, Avago, I mean, who's bought
 them
 more recently?
>>>
>>> Heavy requirements are not required for the impact of md-RAID to be
>>> noticeable.
>>>
>>> Hardware RAID is already in place, but the SSDs are "extra" and, as I
>>> said,
>>> not suited to be used with hardware RAID.
>>
>> Could someone, please, elaborate on the statement that "SSDs are not
>> suitable for hardware RAID".
>>
>
> It will depend on the type of SSD and the type of hardware RAID. There
> are at least 4 different classes of SSD drives with different levels
> of cache, write/read performance, number of lifetime writes, etc.
> There are also multiple types of hardware RAID. A lot of hardware RAID
> will try to even out disk usage in different ways. This means 'moving'
> the heavily used data from slow parts to fast parts etc etc.

Wow, you learn something every day ;-) Which hardware RAIDs do this moving
of data (manufacturer/model, please --- believe it or not, I have never
heard of that ;-). And the "slow part" and "fast part" of what are the data
being moved between?

Thanks in advance for tutorial!

Valeri

> On an SSD
> all these extra writes aren't needed and so if the hardware RAID
> doesn't know about SSD technology it will wear out the SSD quickly.
> Other hardware raid parts that can cause faster failures on SSD's are
> where it does test writes all the time to see if disks are bad etc.
> Again if you have gone with commodity SSD's this will wear out the
> drive faster than expected and boom bad disks.
>
> That said, some hardware RAID's are supposedly made to work with SSD
> drive technology. They don't do those extra writes, they also assume
> that the disks underneath will read/write in near constant time so
> queueing of data is done differently. However that stuff costs extra
> money and not usually shipped in standard OEM hardware.
>
>
>> Thanks.
>> Valeri
>>
>>>
>>> It remains to be tested how the hardware RAID performs, which may be
>>> even
>>> better than the SSDs.
>>> ___
>>> CentOS mailing list
>>> CentOS@centos.org
>>> https://lists.centos.org/mailman/listinfo/centos
>>>
>>
>>
>> 
>> Valeri Galtsev
>> Sr System Administrator
>> Department of Astronomy and Astrophysics
>> Kavli Institute for Cosmological Physics
>> University of Chicago
>> Phone: 773-702-4247
>> 
>> ___
>> CentOS mailing list
>> CentOS@centos.org
>> https://lists.centos.org/mailman/listinfo/centos
>
>
>
> --
> Stephen J Smoogen.
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
>



Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Stephen John Smoogen
On 8 September 2017 at 11:00, Valeri Galtsev  wrote:
>
> On Fri, September 8, 2017 9:48 am, hw wrote:
>> m.r...@5-cent.us wrote:
>>> hw wrote:
 Mark Haney wrote:
>>> 
> BTRFS isn't going to impact I/O any more significantly than, say, XFS.

 But mdadm does, the impact is severe.  I know there are ppl saying
 otherwise, but I´ve seen the impact myself, and I definitely don´t
 want
 it on that particular server because it would likely interfere with
 other services.
>>> 
>>> I haven't really been following this thread, but if your requirements
>>> are
>>> that heavy, you're past the point that you need to spring some money and
>>> buy hardware RAID cards, like LSI, er, Avago, I mean, who's bought them
>>> more recently?
>>
>> Heavy requirements are not required for the impact of md-RAID to be
>> noticeable.
>>
>> Hardware RAID is already in place, but the SSDs are "extra" and, as I
>> said,
>> not suited to be used with hardware RAID.
>
> Could someone, please, elaborate on the statement that "SSDs are not
> suitable for hardware RAID".
>

It will depend on the type of SSD and the type of hardware RAID. There
are at least 4 different classes of SSD drives with different levels
of cache, write/read performance, number of lifetime writes, etc.
There are also multiple types of hardware RAID. A lot of hardware RAID
will try to even out disk usage in different ways. This means 'moving'
the heavily used data from slow parts to fast parts, etc. On an SSD
all these extra writes aren't needed, so if the hardware RAID
doesn't know about SSD technology it will wear out the SSD quickly.
Other hardware RAID features that can cause faster failures on SSDs are
things like constant test writes to see whether disks are going bad.
Again, if you have gone with commodity SSDs, this will wear out the
drive faster than expected and, boom, bad disks.

That said, some hardware RAIDs are supposedly made to work with SSD
drive technology. They don't do those extra writes, and they also assume
that the disks underneath will read/write in near-constant time, so
queueing of data is done differently. However, that stuff costs extra
money and is not usually shipped in standard OEM hardware.
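
A hedged way to watch for that wear from the host, assuming smartmontools and an SSD that exposes a wear attribute (names vary by vendor, and drives behind a RAID controller need the extra '-d' option):

  smartctl -A /dev/sda | grep -Ei 'wear|lifetime|wearout'
  # e.g. Samsung reports Wear_Leveling_Count, Intel reports Media_Wearout_Indicator;
  # behind an LSI controller something like 'smartctl -A -d megaraid,0 /dev/sda' is needed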


> Thanks.
> Valeri
>
>>
>> It remains to be tested how the hardware RAID performs, which may be even
>> better than the SSDs.
>> ___
>> CentOS mailing list
>> CentOS@centos.org
>> https://lists.centos.org/mailman/listinfo/centos
>>
>
>
> 
> Valeri Galtsev
> Sr System Administrator
> Department of Astronomy and Astrophysics
> Kavli Institute for Cosmological Physics
> University of Chicago
> Phone: 773-702-4247
> 
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos



-- 
Stephen J Smoogen.
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Valeri Galtsev

On Fri, September 8, 2017 9:48 am, hw wrote:
> m.r...@5-cent.us wrote:
>> hw wrote:
>>> Mark Haney wrote:
>> 
 BTRFS isn't going to impact I/O any more significantly than, say, XFS.
>>>
>>> But mdadm does, the impact is severe.  I know there are ppl saying
>>> otherwise, but I´ve seen the impact myself, and I definitely don´t
>>> want
>>> it on that particular server because it would likely interfere with
>>> other services.
>> 
>> I haven't really been following this thread, but if your requirements
>> are
>> that heavy, you're past the point that you need to spring some money and
>> buy hardware RAID cards, like LSI, er, Avago, I mean, who's bought them
>> more recently?
>
> Heavy requirements are not required for the impact of md-RAID to be
> noticeable.
>
> Hardware RAID is already in place, but the SSDs are "extra" and, as I
> said,
> not suited to be used with hardware RAID.

Could someone, please, elaborate on the statement that "SSDs are not
suitable for hardware RAID".

Thanks.
Valeri

>
> It remains to be tested how the hardware RAID performs, which may be even
> better than the SSDs.
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
>



Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Mark Haney

On 09/08/2017 09:49 AM, hw wrote:

Mark Haney wrote:
I hate top posting, but since you've got two items I want to comment 
on, I'll suck it up for now.


I do, too, yet sometimes it´s reasonable.  I also hate it when the lines
are too long :)


I'm afraid you'll have to live with it a bit longer.  Sorry.
Having SSDs alone will give you great performance regardless of 
filesystem.


It depends, i. e. I can´t tell how these SSDs would behave if large 
amounts of
data would be written and/or read to/from them over extended periods 
of time because

I haven´t tested that.  That isn´t the application, anyway.


If your I/O is going to be heavy (and you've not mentioned expected 
traffic, so we can only go on what little we glean from your posts), 
then SSDs will likely start having issues sooner than a mechanical drive 
might.  (Though, YMMV.)  As I've said, we process 600 million messages a 
month, on primary SSDs in a VMWare cluster, with mechanical storage for 
older, archived user mail.  Archived, may not be exactly correct, but 
the context should be clear.





BTRFS isn't going to impact I/O any more significantly than, say, XFS.


But mdadm does, the impact is severe.  I know there are ppl saying 
otherwise,

but I´ve seen the impact myself, and I definitely don´t want it on that
particular server because it would likely interfere with other 
services.  I don´t
know if the software RAID of btrfs is better in that or not, though, 
but I´m
seeing btrfs on SSDs being fast, and testing with the particular 
application has

shown a speedup of factor 20--30.
I never said anything about MD RAID.  I trust that about as far as I 
could throw it.  And having had 5 surgeries on my throwing shoulder 
wouldn't be far.


That is the crucial improvement.  If the hardware RAID delivers that, 
I´ll use
that and probably remove the SSDs from the machine as it wouldn´t even 
make sense

to put temporary data onto them because that would involve software RAID.
Again, if the idea is to have fast primary storage, there are pretty 
large SSDs available now and I've hardware RAIDED SSDs before without 
trouble, though not for any heavy lifting, it's my test servers at home. 
Without an idea of the expected mail traffic, this is all speculation.


It does have serious stability/data integrity issues that XFS doesn't 
have.  There's no reason not to use SSDs for storage of immediate 
data and mechanical drives for archival data storage.


As for VMs we run a huge Zimbra cluster in VMs on VPC with large 
primary SSD volumes and even larger (and slower) secondary volumes 
for archived mail.  It's all CentOS 6 and works very well.  We 
process 600 million emails a month on that virtual cluster.  All EXT4 
inside LVM.


Do you use hardware RAID with SSDs?

We do not here where I work, but that was setup LONG before I arrived.


I can't tell you what to do, but it seems to me you're viewing your 
setup from a narrow SSD/BTRFS standpoint.  Lots of ways to skin that 
cat.


That´s because I do not store data on a single disk, without 
redundancy, and
the SSDs I have are not suitable for hardware RAID.  So what else is 
there but
either md-RAID or btrfs when I do not want to use ZFS?  I also do not 
want to
use md-RAID, hence only btrfs remains.  I also like to use 
sub-volumes, though
that isn't a requirement (because I can use directories instead and lose the
ability to make snapshots).


If the SSDs you have aren't suitable for hardware RAID, then they aren't 
good for production level mail spools, IMHO.  I mean, you're talking 
like you're expecting a metric buttload of mail traffic, so it stands to 
reason you'll need really beefy hardware.  I don't think you can do what 
you seem to need on budget hardware. Personally, and solely based on 
this thread alone, if I was building this in-house, I'd get a decent 
server cluster together and build a FC or iSCSI SAN to a Nimble storage 
array with Flash/SSD front ends and large HDDs in the back end.  This 
solves virtually all your problems.  The servers will have tiny SSD boot 
drives (which I prefer over booting from the SAN) and then everything 
else gets handled by the storage back-end.


In effect this is how our mail servers are setup here.  And they are 
virtual.


I stay away from LVM because that just sucks.  It wouldn´t even have 
any advantage

in this case.

LVM is a joke.  It's always been something I've avoided like the plague.







On 09/08/2017 08:07 AM, hw wrote:


PS:

What kind of storage solutions do people use for cyrus mail spools?  
Apparently
you can not use remote storage, at least not NFS.  That even makes 
it difficult

to use a VM due to limitations of available disk space.

I´m reluctant to use btrfs, but there doesn´t seem to be any 
reasonable alternative.



hw wrote:

Mark Haney wrote:

On 09/07/2017 01:57 PM, hw wrote:


Hi,

is there anything that speaks against putting a cyrus mail spool 
onto a

btrfs subvolume?

I might be the lone voice on 

Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread hw

m.r...@5-cent.us wrote:

hw wrote:

Mark Haney wrote:



BTRFS isn't going to impact I/O any more significantly than, say, XFS.


But mdadm does, the impact is severe.  I know there are ppl saying
otherwise, but I´ve seen the impact myself, and I definitely don´t want
it on that particular server because it would likely interfere with
other services.


I haven't really been following this thread, but if your requirements are
that heavy, you're past the point that you need to spring some money and
buy hardware RAID cards, like LSI, er, Avago, I mean, who's bought them
more recently?


Heavy requirements are not required for the impact of md-RAID to be noticeable.

Hardware RAID is already in place, but the SSDs are "extra" and, as I said,
not suited to be used with hardware RAID.

It remains to be tested how the hardware RAID performs, which may be even
better than the SSDs.
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread m . roth
hw wrote:
> Mark Haney wrote:

>> BTRFS isn't going to impact I/O any more significantly than, say, XFS.
>
> But mdadm does, the impact is severe.  I know there are ppl saying
> otherwise, but I´ve seen the impact myself, and I definitely don´t want
> it on that particular server because it would likely interfere with
> other services.

I haven't really been following this thread, but if your requirements are
that heavy, you're past the point that you need to spring some money and
buy hardware RAID cards, like LSI, er, Avago, I mean, who's bought them
more recently?

   mark

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] login case sensitivity

2017-09-08 Thread James B. Byrne

On Thu, September 7, 2017 14:07, hw wrote:
> Gordon Messmer wrote:
>> On 09/07/2017 08:11 AM, Stephen John Smoogen wrote:
>>> This was always  problematic because DNS hostnames and
>>> email addresses in the RFC standards were case insensitive
>>
>>
>> Not quite.  SMTP is required to treat the "local-part" of the RCPT
>> argument as case-sensitive, and to preserve case when relaying mail.
>>  The destination is allowed to treat addresses according to local
>> policy, but in general SMTP is case sensitive with regard to the
>> user identifier.
>
> Last time I checked, RFCs said that local parts *should not* be case
> sensitive, and cyrus defaulted to treat them case sensitive, which
> is a default that usually needs to be changed because senders of
> messages tend to not pay any attention to the case sensitiveness
> of recipient addresses at all, which then confuses them like any
> other error.
>
>

https://tools.ietf.org/html/rfc5321

Network Working Group                                          J. Klensin
Request for Comments: 5321                                   October 2008
Obsoletes: 2821
Updates: 1123
Updated by: 7504
Category: Standards Track (Draft Standard; errata exist)


. . .
2.4.  General Syntax Principles and Transaction Model

. . .

   Verbs and argument values (e.g., "TO:" or "to:" in the RCPT command
   and extension name keywords) are not case sensitive, with the sole
   exception in this specification of a mailbox local-part (SMTP
   Extensions may explicitly specify case-sensitive elements).  That is,
   a command verb, an argument value other than a mailbox local-part,
   and free form text MAY be encoded in upper case, lower case, or any
   mixture of upper and lower case with no impact on its meaning.

   __The local-part of a mailbox MUST BE treated as case sensitive.__

   Therefore, SMTP implementations MUST take care to preserve the case
   of mailbox local-parts.  In particular, for some hosts, the user
   "smith" is different from the user "Smith".  However, exploiting the
   case sensitivity of mailbox local-parts impedes interoperability and
   is discouraged.  Mailbox domains follow normal DNS rules and are
   hence not case sensitive.
. . .

Case munging of the local part is handled by the local delivery agent
in my experience.  The Cyrus LMTP service can be, and often is,
configured to force lower case munging (imapd.conf
'lmtp_downcase_rcpt: 1') of the local part. That decision is site
specific.
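
For reference, a minimal sketch of turning that on (the option is the one named above; the path assumes the stock CentOS 7 cyrus-imapd package):

  echo 'lmtp_downcase_rcpt: 1' >> /etc/imapd.conf
  systemctl restart cyrus-imapd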

-- 
***  e-Mail is NOT a SECURE channel  ***
Do NOT transmit sensitive data via e-Mail
 Do NOT open attachments nor follow links sent by e-Mail

James B. Byrnemailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread hw

Mark Haney wrote:

I hate top posting, but since you've got two items I want to comment on, I'll 
suck it up for now.


I do, too, yet sometimes it's reasonable.  I also hate it when the lines
are too long :)


Having SSDs alone will give you great performance regardless of filesystem.


It depends, i.e. I can't tell how these SSDs would behave if large amounts of
data were written and/or read to/from them over extended periods of time,
because I haven't tested that.  That isn't the application, anyway.


BTRFS isn't going to impact I/O any more significantly than, say, XFS.


But mdadm does, the impact is severe.  I know there are people saying otherwise,
but I've seen the impact myself, and I definitely don't want it on that
particular server because it would likely interfere with other services.  I
don't know if the software RAID of btrfs is better in that respect or not,
though, but I'm seeing btrfs on SSDs being fast, and testing with the
particular application has shown a speedup of a factor of 20--30.

That is the crucial improvement.  If the hardware RAID delivers that, I'll use
that and probably remove the SSDs from the machine, as it wouldn't even make
sense to put temporary data onto them because that would involve software RAID.


It does have serious stability/data integrity issues that XFS doesn't have.  
There's no reason not to use SSDs for storage of immediate data and mechanical 
drives for archival data storage.

As for VMs we run a huge Zimbra cluster in VMs on VPC with large primary SSD 
volumes and even larger (and slower) secondary volumes for archived mail.  It's 
all CentOS 6 and works very well.  We process 600 million emails a month on 
that virtual cluster.  All EXT4 inside LVM.


Do you use hardware RAID with SSDs?


I can't tell you what to do, but it seems to me you're viewing your setup from 
a narrow SSD/BTRFS standpoint.  Lots of ways to skin that cat.


That´s because I do not store data on a single disk, without redundancy, and
the SSDs I have are not suitable for hardware RAID.  So what else is there but
either md-RAID or btrfs when I do not want to use ZFS?  I also do not want to
use md-RAID, hence only btrfs remains.  I also like to use sub-volumes, though
that isn´t a requirement (because I can use directories instead and lose the
ability to make snapshots).

I stay away from LVM because that just sucks.  It wouldn´t even have any 
advantage
in this case.
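
For reference, the btrfs-only route needs neither mdadm nor LVM; a minimal
sketch, assuming /dev/sdX and /dev/sdY are the two SSDs and /srv/mail is the
mount point (all placeholders), and that wiping the devices is acceptable:

  # mirror both data and metadata across the two SSDs
  mkfs.btrfs -m raid1 -d raid1 /dev/sdX /dev/sdY
  mkdir -p /srv/mail
  mount /dev/sdX /srv/mail
  # one subvolume per purpose, so each can be snapshotted independently
  btrfs subvolume create /srv/mail/spool
  btrfs subvolume create /srv/mail/tmp
  # read-only snapshot, e.g. before an upgrade
  btrfs subvolume snapshot -r /srv/mail/spool /srv/mail/spool-$(date +%F)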





On 09/08/2017 08:07 AM, hw wrote:


PS:

What kind of storage solutions do people use for cyrus mail spools?  Apparently
you can not use remote storage, at least not NFS.  That even makes it difficult
to use a VM due to limitations of available disk space.

I´m reluctant to use btrfs, but there doesn´t seem to be any reasonable 
alternative.


hw wrote:

Mark Haney wrote:

On 09/07/2017 01:57 PM, hw wrote:


Hi,

is there anything that speaks against putting a cyrus mail spool onto a
btrfs subvolume?


I might be the lone voice on this, but I refuse to use btrfs for anything, much 
less a mail spool. I used it in production on DB and Web servers and fought 
corruption issues and scrubs hanging the system more times than I can count.  
(This was within the last 24 months.)  I was told by certain mailing lists, 
that btrfs isn't considered production level.  So, I scrapped the lot, went to 
xfs and haven't had a problem since.

I'm not sure why you'd want your mail spool on a filesystem that seems to hate
being hammered with reads/writes. Personally, on all my mail spools, I use XFS
or EXT4.  Our servers here handle 600 million messages a month without trouble
on those filesystems.

Just my $0.02.


Btrfs appears rather useful because the disks are SSDs, because it
allows me to create subvolumes and because it handles SSDs nicely.
Unfortunately, the SSDs are not suited for hardware RAID.

The only alternative I know is xfs or ext4 on mdadm and no subvolumes,
and md RAID has severe performance penalties which I´m not willing to
afford.

Part of the data I plan to store on these SSDs greatly benefits from
the low latency, making things about 20--30 times faster for an important
application.

So what should I do?





___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-docs] Splitting the container pipeline page in the wiki

2017-09-08 Thread Bamacharan Kundu
On Sep 8, 2017 6:05 PM, "Karanbir Singh"  wrote:

hi Bama

since there are other things going on in the container space around
centos, I was thinking maybe we can setup a /Container page, and then
have a /Container/Registry and a /Container/Pipeline page each. the
pipeline page can talk about the service, code and run setup. and the
registry page can talk about howto get content in there, what content is
already there and urls to the user setup and consumer stuff.

would that work for you ?


That sounds great to me. I am already working to offload redundant content
from /ContainerPipeline to the https://registry.centos.org UI, so
/Container/Registry would contain the introduction and the how-to for the
registry, and /Container/Pipeline would have the content related to the service.

For this we need to update a few links, though. IMO we can go ahead with
this; I will update all the relevant links by Monday.

Regards
Bamacharan

please excuse typos sent from mobile..


this would also unblock content coming up like /Container/Docker etc

and the /Container page can perhaps just be an index pointing to the
relevant info.

regards,

--
Karanbir Singh, Project Lead, The CentOS Project
+44-207-0999389 | http://www.centos.org/ | twitter.com/CentOS
GnuPG Key : http://www.karan.org/publickey.asc
___
CentOS-docs mailing list
CentOS-docs@centos.org
https://lists.centos.org/mailman/listinfo/centos-docs


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread hw

Matty wrote:

I think it depends on who you ask. Facebook and Netflix are using it
extensively in production:

https://www.linux.com/news/learn/intro-to-linux/how-facebook-uses-linux-and-btrfs-interview-chris-mason

Though they have the in-house kernel engineering resources to
troubleshoot problems. When I see quotes like this [1] on the
product's WIKI:

"The parity RAID code has multiple serious data-loss bugs in it. It
should not be used for anything other than testing purposes."


It´s RAID1, not 5/6.  It´s only 2 SSDs.

I do not /need/ to put the mail spool there, but it makes sense because
the data that benefits from the low latency fills only about 5% of them,
and the spool is mostly read, resulting in little wear on the SSDs.

I can probably do a test with that data on the hardware RAID, and if
performance is comparable, I rather put it there than on the SSDs.


I'm reluctant to store anything of value on it. Have you considered
using ZoL? I've been using it for quite some time and haven't lost
data.


Yes, and I´m moving away from ZFS because it remains alien, and the
performance is poor.  ZFS wasn´t designed with performance in mind,
and that shows.

It is amazing that SSDs under Linux still end up being so pointless, and that
there is no production-ready file system available that provides the features
ZFS and btrfs are valued for.  It´s even frustrating that disk access still
continues to defeat performance so much.

Maybe it´s crazy to want to put data onto SSDs with btrfs when the
hardware RAID is also RAID1, chosen for performance and for better resistance
against failures than RAID5 offers.  I guess I really shouldn´t do that.

Now I´m looking forward to the test with the hardware RAID.  A RAID1
of 8 disks may yield even better performance than 2 SSDs in software
RAID1 with btrfs.
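
While that test runs, it may be worth watching per-device latency on both
setups; a rough sketch, assuming the sysstat package is installed and that
/srv/mail is a placeholder for the btrfs mount:

  # extended device statistics every 5 seconds;
  # r_await/w_await are the per-request latencies in milliseconds
  iostat -x 5
  # afterwards, check the btrfs device error counters
  btrfs device stats /srv/mail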
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Mark Haney
I hate top posting, but since you've got two items I want to comment on, 
I'll suck it up for now.


Having SSDs alone will give you great performance regardless of 
filesystem.  BTRFS isn't going to impact I/O any more significantly 
than, say, XFS.  It does have serious stability/data integrity issues 
that XFS doesn't have.  There's no reason not to use SSDs for storage of 
immediate data and mechanical drives for archival data storage.


As for VMs we run a huge Zimbra cluster in VMs on VPC with large primary 
SSD volumes and even larger (and slower) secondary volumes for archived 
mail.  It's all CentOS 6 and works very well.  We process 600 million 
emails a month on that virtual cluster.  All EXT4 inside LVM.


I can't tell you what to do, but it seems to me you're viewing your 
setup from a narrow SSD/BTRFS standpoint.  Lots of ways to skin that cat.



On 09/08/2017 08:07 AM, hw wrote:


PS:

What kind of storage solutions do people use for cyrus mail spools?  
Apparently
you can not use remote storage, at least not NFS.  That even makes it 
difficult

to use a VM due to limitations of available disk space.

I´m reluctant to use btrfs, but there doesn´t seem to be any 
reasonable alternative.



hw wrote:

Mark Haney wrote:

On 09/07/2017 01:57 PM, hw wrote:


Hi,

is there anything that speaks against putting a cyrus mail spool 
onto a

btrfs subvolume?

I might be the lone voice on this, but I refuse to use btrfs for 
anything, much less a mail spool. I used it in production on DB and 
Web servers and fought corruption issues and scrubs hanging the 
system more times than I can count.  (This was within the last 24 
months.)  I was told by certain mailing lists, that btrfs isn't 
considered production level.  So, I scrapped the lot, went to xfs 
and haven't had a problem since.


I'm not sure why you'd want your mail spool on a filesystem that
seems to hate being hammered with reads/writes. Personally, on all
my mail spools, I use XFS or EXT4.  Our servers here handle
600 million messages a month without trouble on those filesystems.


Just my $0.02.


Btrfs appears rather useful because the disks are SSDs, because it
allows me to create subvolumes and because it handles SSDs nicely.
Unfortunately, the SSDs are not suited for hardware RAID.

The only alternative I know is xfs or ext4 on mdadm and no subvolumes,
and md RAID has severe performance penalties which I´m not willing to
afford.

Part of the data I plan to store on these SSDs greatly benefits from
the low latency, making things about 20--30 times faster for an 
important

application.

So what should I do?



--
Mark Haney
Network Engineer at NeoNova
919-460-3330 option 1
mark.ha...@neonova.net
www.neonova.net

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread Matty
I think it depends on who you ask. Facebook and Netflix are using it
extensively in production:

https://www.linux.com/news/learn/intro-to-linux/how-facebook-uses-linux-and-btrfs-interview-chris-mason

Though they have the in-house kernel engineering resources to
troubleshoot problems. When I see quotes like this [1] on the
product's WIKI:

"The parity RAID code has multiple serious data-loss bugs in it. It
should not be used for anything other than testing purposes."

I'm reluctant to store anything of value on it. Have you considered
using ZoL? I've been using it for quite some time and haven't lost
data.
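
For reference, a ZoL mirror is only a couple of commands; a minimal sketch,
assuming the zfs packages are already installed and that /dev/sdX and /dev/sdY
are placeholders for the two disks:

  # create a mirrored pool and a dataset for the mail spool
  zpool create tank mirror /dev/sdX /dev/sdY
  zfs create -o compression=lz4 -o atime=off tank/mailspool
  zpool status tank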

- Ryan
http://prefetch.net

[1] https://btrfs.wiki.kernel.org/index.php/RAID56


On Thu, Sep 7, 2017 at 2:12 PM, Mark Haney  wrote:
> On 09/07/2017 01:57 PM, hw wrote:
>>
>>
>> Hi,
>>
>> is there anything that speaks against putting a cyrus mail spool onto a
>> btrfs subvolume?
>>
> I might be the lone voice on this, but I refuse to use btrfs for anything,
> much less a mail spool. I used it in production on DB and Web servers and
> fought corruption issues and scrubs hanging the system more times than I can
> count.  (This was within the last 24 months.)  I was told by certain mailing
> lists, that btrfs isn't considered production level.  So, I scrapped the
> lot, went to xfs and haven't had a problem since.
>
> I'm not sure why you'd want your mail spool on a filesystem that seems to
> hate being hammered with reads/writes.  Personally, on all my mail spools, I
> use XFS or EXT4.  Our servers here handle 600 million messages a month
> without trouble on those filesystems.
>
> Just my $0.02.
> --
>
> Mark Haney
> Network Engineer at NeoNova
> 919-460-3330 option 1
> mark.ha...@neonova.net
> www.neonova.net
>
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] intel wireless 3165 and CentOS 7.3

2017-09-08 Thread Jerry Geis
I am trying to get wireless working on CentOS 7.3 with intel wireless 3165

ip link
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode
DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0:  mtu 1500 qdisc pfifo_fast state
UP mode DEFAULT qlen 1000
link/ether b8:ae:ed:77:b3:3a brd ff:ff:ff:ff:ff:ff
3: wlan0:  mtu 1500 qdisc noop state DOWN mode DEFAULT
qlen 1000
link/ether 34:02:86:cc:a0:79 brd ff:ff:ff:ff:ff:ff
4: virbr0:  mtu 1500 qdisc noqueue state
DOWN mode DEFAULT qlen 1000
link/ether 52:54:00:70:8f:48 brd ff:ff:ff:ff:ff:ff
5: virbr0-nic:  mtu 1500 qdisc noqueue master virbr0
state DOWN mode DEFAULT qlen 500
link/ether 52:54:00:70:8f:48 brd ff:ff:ff:ff:ff:ff

lspci
00:00.0 Host bridge: Intel Corporation Device 2280 (rev 21)
00:02.0 VGA compatible controller: Intel Corporation Device 22b1 (rev 21)
00:13.0 SATA controller: Intel Corporation Device 22a3 (rev 21)
00:14.0 USB controller: Intel Corporation Device 22b5 (rev 21)
00:1a.0 Encryption controller: Intel Corporation Device 2298 (rev 21)
00:1b.0 Audio device: Intel Corporation Device 2284 (rev 21)
00:1c.0 PCI bridge: Intel Corporation Device 22c8 (rev 21)
00:1c.1 PCI bridge: Intel Corporation Device 22ca (rev 21)
00:1c.2 PCI bridge: Intel Corporation Device 22cc (rev 21)
00:1f.0 ISA bridge: Intel Corporation Device 229c (rev 21)
00:1f.3 SMBus: Intel Corporation Device 2292 (rev 21)
02:00.0 Network controller: Intel Corporation Wireless 3165 (rev 81)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd.
RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)

I have created the ifcfg-ESSID and keys-ESSID files, but I don't even think
the wireless is loading.
ifconfig does not report wlan0 or anything.

What am I missing to get wireless going?
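
A few things that may be worth checking first (a sketch only; the firmware
packaging and the use of NetworkManager are assumptions about this setup):

  # did the iwlwifi driver load and find its firmware?
  dmesg | grep -i iwlwifi
  lsmod | grep iwl
  # the Wireless 3165 uses firmware shipped in linux-firmware
  rpm -q linux-firmware
  # which driver is bound to the PCI device shown by lspci above?
  lspci -k -s 02:00.0
  # if NetworkManager manages the interface, check the radio and scan
  nmcli radio wifi
  nmcli device wifi list

If dmesg complains about a missing *.ucode file, updating linux-firmware
usually sorts that out before anything else is worth tuning.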

Thanks,

Jerry
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS-docs] Splitting the container pipeline page in the wiki

2017-09-08 Thread Karanbir Singh
hi Bama

since there are other things going on in the container space around
centos, I was thinking maybe we can setup a /Container page, and then
have a /Container/Registry and a /Container/Pipeline page each. the
pipeline page can talk about the service, code and run setup. and the
registry page can talk about howto get content in there, what content is
already there and urls to the user setup and consumer stuff.

would that work for you ?

this would also unblock content coming up like /Container/Docker etc

and the /Container page can perhaps just be an index pointing to the
relevant info.

regards,

-- 
Karanbir Singh, Project Lead, The CentOS Project
+44-207-0999389 | http://www.centos.org/ | twitter.com/CentOS
GnuPG Key : http://www.karan.org/publickey.asc



signature.asc
Description: OpenPGP digital signature
___
CentOS-docs mailing list
CentOS-docs@centos.org
https://lists.centos.org/mailman/listinfo/centos-docs


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread hw


PS:

What kind of storage solutions do people use for cyrus mail spools?  Apparently
you can not use remote storage, at least not NFS.  That even makes it difficult
to use a VM due to limitations of available disk space.

I´m reluctant to use btrfs, but there doesn´t seem to be any reasonable 
alternative.


hw wrote:

Mark Haney wrote:

On 09/07/2017 01:57 PM, hw wrote:


Hi,

is there anything that speaks against putting a cyrus mail spool onto a
btrfs subvolume?


I might be the lone voice on this, but I refuse to use btrfs for anything, much 
less a mail spool. I used it in production on DB and Web servers and fought 
corruption issues and scrubs hanging the system more times than I can count.  
(This was within the last 24 months.)  I was told by certain mailing lists, 
that btrfs isn't considered production level.  So, I scrapped the lot, went to 
xfs and haven't had a problem since.

I'm not sure why you'd want your mail spool on a filesystem that seems to hate
being hammered with reads/writes.  Personally, on all my mail spools, I use XFS
or EXT4.  Our servers here handle 600 million messages a month without trouble
on those filesystems.

Just my $0.02.


Btrfs appears rather useful because the disks are SSDs, because it
allows me to create subvolumes and because it handles SSDs nicely.
Unfortunately, the SSDs are not suited for hardware RAID.

The only alternative I know is xfs or ext4 on mdadm and no subvolumes,
and md RAID has severe performance penalties which I´m not willing to
afford.

Part of the data I plan to store on these SSDs greatly benefits from
the low latency, making things about 20--30 times faster for an important
application.

So what should I do?
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cyrus spool on btrfs?

2017-09-08 Thread hw

Mark Haney wrote:

On 09/07/2017 01:57 PM, hw wrote:


Hi,

is there anything that speaks against putting a cyrus mail spool onto a
btrfs subvolume?


I might be the lone voice on this, but I refuse to use btrfs for anything, much 
less a mail spool. I used it in production on DB and Web servers and fought 
corruption issues and scrubs hanging the system more times than I can count.  
(This was within the last 24 months.)  I was told by certain mailing lists, 
that btrfs isn't considered production level.  So, I scrapped the lot, went to 
xfs and haven't had a problem since.

I'm not sure why you'd want your mail spool on a filesystem that seems to hate
being hammered with reads/writes.  Personally, on all my mail spools, I use XFS
or EXT4.  Our servers here handle 600 million messages a month without trouble
on those filesystems.

Just my $0.02.


Btrfs appears rather useful because the disks are SSDs, because it
allows me to create subvolumes and because it handles SSDs nicely.
Unfortunately, the SSDs are not suited for hardware RAID.

The only alternative I know is xfs or ext4 on mdadm and no subvolumes,
and md RAID has severe performance penalties which I´m not willing to
afford.

Part of the data I plan to store on these SSDs greatly benefits from
the low latency, making things about 20--30 times faster for an important
application.

So what should I do?
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] login case sensitivity

2017-09-08 Thread hw

Alexander Dalloz wrote:

Am 07.09.2017 um 20:07 schrieb hw:

Gordon Messmer wrote:

On 09/07/2017 08:11 AM, Stephen John Smoogen wrote:

This was always
problematic because DNS hostnames and email addresses in the RFC
standards were case insensitive



Not quite.  SMTP is required to treat the "local-part" of the RCPT argument as 
case-sensitive, and to preserve case when relaying mail.  The destination is allowed to 
treat addresses according to local policy, but in general SMTP is case sensitive with 
regard to the user identifier.


Last time I checked, RFCs said that local parts *should not* be case sensitive,
and cyrus defaulted to treating them as case sensitive, a default that usually
needs to be changed because senders of messages tend not to pay any attention to
the case sensitivity of recipient addresses at all, which then confuses them like
any other error.


The relevant part from the RFC:

https://www.ietf.org/rfc/rfc5321.txt

2.4.  General Syntax Principles and Transaction Model

   Verbs and argument values (e.g., "TO:" or "to:" in the RCPT command
   and extension name keywords) are not case sensitive, with the sole
   exception in this specification of a mailbox local-part (SMTP
   Extensions may explicitly specify case-sensitive elements).  That is,
   a command verb, an argument value other than a mailbox local-part,
   and free form text MAY be encoded in upper case, lower case, or any
   mixture of upper and lower case with no impact on its meaning.  The
   local-part of a mailbox MUST BE treated as case sensitive.
   Therefore, SMTP implementations MUST take care to preserve the case
   of mailbox local-parts.  In particular, for some hosts, the user
   "smith" is different from the user "Smith".  However, exploiting the
   case sensitivity of mailbox local-parts impedes interoperability and
   is discouraged.  Mailbox domains follow normal DNS rules and are
   hence not case sensitive.


That´s the implementation of the protocol, see my previous post, and:


"
   Any system that includes an SMTP server supporting mail relaying or
   delivery MUST support the reserved mailbox "postmaster" as a case-
   insensitive local name.
"

also from RFC 2821, section 4.5.1.  Of course, this is a special case;
I just can´t find the part which exactly said that local parts should be
treated case insensitively beyond what I found in 2821.  It´s even possible
that it was changed.

If you really want to treat local parts as case sensitive, you can do so.
I´d advise against it.
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] login case sensitivity

2017-09-08 Thread hw

Stephen John Smoogen wrote:

On 7 September 2017 at 16:07, Alexander Dalloz  wrote:

Am 07.09.2017 um 20:07 schrieb hw:


Gordon Messmer wrote:


On 09/07/2017 08:11 AM, Stephen John Smoogen wrote:


This was always
problematic because DNS hostnames and email addresses in the RFC
standards were case insensitive




Not quite.  SMTP is required to treat the "local-part" of the RCPT
argument as case-sensitive, and to preserve case when relaying mail.  The
destination is allowed to treat addresses according to local policy, but in
general SMTP is case sensitive with regard to the user identifier.



Last time I checked, RFCs said that local parts *should not* be case sensitive,
and cyrus defaulted to treating them as case sensitive, a default that usually
needs to be changed because senders of messages tend not to pay any attention to
the case sensitivity of recipient addresses at all, which then confuses them like
any other error.



The relevant part from the RFC:

https://www.ietf.org/rfc/rfc5321.txt

2.4.  General Syntax Principles and Transaction Model

   Verbs and argument values (e.g., "TO:" or "to:" in the RCPT command
   and extension name keywords) are not case sensitive, with the sole
   exception in this specification of a mailbox local-part (SMTP
   Extensions may explicitly specify case-sensitive elements).  That is,
   a command verb, an argument value other than a mailbox local-part,
   and free form text MAY be encoded in upper case, lower case, or any
   mixture of upper and lower case with no impact on its meaning.  The
   local-part of a mailbox MUST BE treated as case sensitive.
   Therefore, SMTP implementations MUST take care to preserve the case
   of mailbox local-parts.  In particular, for some hosts, the user
   "smith" is different from the user "Smith".  However, exploiting the
   case sensitivity of mailbox local-parts impedes interoperability and
   is discouraged.  Mailbox domains follow normal DNS rules and are
   hence not case sensitive.

   for maximum interoperability, a host that expects to receive mail
   SHOULD avoid defining mailboxes where the Local-part requires (or
   uses) the Quoted-string form or where the Local-part is case-
   sensitive.





Thanks for the clarification to my original email. I misremembered
RFC821 and thought it was for the entire part..

   Commands and replies are not case sensitive.  That is, a command or
   reply word may be upper case, lower case, or any mixture of upper and
   lower case.  Note that this is not true of mailbox user names.  For
   some hosts the user name is case sensitive, and SMTP implementations
   must take case to preserve the case of user names as they appear in
   mailbox arguments.  Host names are not case sensitive.


RFC2821, section 4.1.2:

"  for maximum interoperability, a host that expects to receive mail
   SHOULD avoid defining mailboxes where the Local-part requires (or
   uses) the Quoted-string form or where the Local-part is case-
   sensitive.
"

It comes down to this: case preservation is demanded from protocol
implementations, while, pragmatically, local parts are encouraged to be case
insensitive.
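
One way to see how a given destination actually behaves is to probe the RCPT
stage directly; a sketch only, assuming swaks is installed and using
placeholder host and addresses:

  # stop after RCPT TO, so no mail is actually delivered
  swaks --server mail.example.com --from test@example.org \
        --to Smith@example.com --quit-after RCPT
  swaks --server mail.example.com --from test@example.org \
        --to smith@example.com --quit-after RCPT

If one spelling is accepted and the other rejected, that destination really
does enforce case sensitivity on the local part.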


More than a decade ago, I argued that the default used by cyrus should be changed
to treat local parts as case insensitive.  About 2 years ago, that still hadn´t
changed.

So everyone deploying cyrus, be aware.  Other than that, cyrus always worked
flawlessly, and I highly recommend it to everyone needing an IMAP server.
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-virt] change network settings of VM depending on logged in user(s)?

2017-09-08 Thread Mark L Sung

> On 7 Sep 2017, at 21:48, hw  wrote:
> 
> [-=X.L.O.R.D=-] wrote:
>> 1guys,
>> I think it can be done via power management feature if PC idle a period of 
>> time, sleep it.
> 
> Access is exclusively via RDP sessions.  I don´t know if I can get a VM to
> hibernate when idle and to wake up when someone tries to connect; and delaying
> logins, as might occur when the VM needs to wake up first, is not an option.
> Besides, internet access would not be denied at all and only not used when the
> VM is hibernating.
X—> Yup, and just make sure power-save is set to never, so the RDP session
should be ready for RDP call-in requests from remote.
> 
>> For your message: disallowing a user from having internet access can be
>> done via his account profile in Local Security from the Microsoft product
>> itself, or alternatively via a GPO policy at the AD. When he logs on with his
>> username and password, his profile will also download to that PC.
> 
> Thanks, that´s something I need to look into.  It would still be better than
> no restriction at all.
> 
> 
> The intention is to quarantine windoze machines as much as possible for
> security reasons.  Hibernating doesn´t really help with that unless they´re
> never used …
> X —> One thing you can add is: once the user has logged off, use reborn
> software to revert to a clean state or a previous backup snapshot point.

> 
>> Hope that help!
>> 
>> Xlord
>> 
>> -Original Message-
>> From: CentOS-virt [mailto:centos-virt-boun...@centos.org] On Behalf Of hw
>> Sent: Saturday, September 2, 2017 7:50 PM
>> To: centos-virt@centos.org
>> Subject: Re: [CentOS-virt] change network settings of VM depending on logged 
>> in user(s)?
>> 
>> PJ Welsh wrote:
>>> I'm not a M$ expert, but I've seen enough GPO's to believe there is a way 
>>> to do it through Windows.
>>> PJWelsh
>> 
>> For the whole machine?
>> 
>> So far, I´ve only found information regarding blocking access for particular
>> users.  I want it the other way round, i.e. the whole machine usually not
>> having access and allowing access to only a particular user.
>> 
>> Allowing access to only a particular user can (should ideally) involve the 
>> whole machine still not having access.
>> 
>> 
>> Perhaps it seems like an unusual request --- yet the more I think about it, 
>> it seems like it should become the default.  Why should a machine have 
>> internet access all the time rather than only when it´s needed, and when 
>> it´s needed, why not restrict it to exactly what is needed and nothing else.
>> 
>> 
>>> 
>>> On Fri, Sep 1, 2017 at 11:43 AM, hw > 
>>> wrote:
>>> 
>>> 
>>>Hi,
>>> 
>>>is there a way to disable internet access for a windoze 7 VM depending
>>>on which user(s) is/are logged in?
>>> 
>>>It seems windoze 7 doesn´t really support this, especially when you want
>>>to disable internet access for the whole machine, so I´m wondering if
>>>there is a way to do this when the machine is a KVM-VM running on Centos.
>>> 
>>>The whole VM should only have internet access when a particular user logs
>>>in, and preferably for only this particular user.
>>>___
>>>CentOS-virt mailing list
>>>CentOS-virt@centos.org 
>>>https://lists.centos.org/mailman/listinfo/centos-virt
>>> 
>>> 
>>> 
>>> 
>>> 
>>> ___
>>> CentOS-virt mailing list
>>> CentOS-virt@centos.org
>>> https://lists.centos.org/mailman/listinfo/centos-virt
>>> 
>> 
>> ___
>> CentOS-virt mailing list
>> CentOS-virt@centos.org
>> https://lists.centos.org/mailman/listinfo/centos-virt
>> 
>> ___
>> CentOS-virt mailing list
>> CentOS-virt@centos.org 
>> https://lists.centos.org/mailman/listinfo/centos-virt 
>> 
>> 
> 
> ___
> CentOS-virt mailing list
> CentOS-virt@centos.org 
> https://lists.centos.org/mailman/listinfo/centos-virt 
> 
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] updating qemu-img-ev 2.6

2017-09-08 Thread Sandro Bonazzola
2017-09-07 15:04 GMT+02:00 Johnny Hughes :

> On 09/07/2017 04:09 AM, Johan Guldmyr wrote:
> > Hello!
> >
> > http://mirror.centos.org/centos/7/virt/x86_64/kvm-common
> >
> > Latest is now qemu-kvm-ev-2.6.0-28.el7.10.1.x86_64.rpm
> >
> > while http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/
> RHEV/SRPMS/
> >
> > has qemu-kvm-rhev-2.6.0-28.el7_3.12.src.rpm
> >
> > A few questions:
> >  A) Does kvm-common have RPM builds from RHEV's SRPMS?
>

kvm-common qemu-kvm-ev is rebuilt from RHV's SRPMS with a re-branding
patch, see https://gerrit.ovirt.org/80989
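
On an installed system the rebuild lineage is visible straight from the package
metadata; a small sketch, assuming qemu-kvm-ev from the Virt SIG repository is
already installed:

  # Vendor/Build Host show the CentOS rebuild; the changelog shows the RHV base
  rpm -qi qemu-kvm-ev | egrep 'Version|Release|Vendor|Build Host'
  rpm -q --changelog qemu-kvm-ev | head -n 20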


> >  B) Is the plan to keep kvm-common up to date with RHEV?
>

Yes, we are waiting for CentOS 7.4.1708 to be available on the build systems so
we can build the latest available qemu-kvm-ev (currently failing:
http://cbs.centos.org/koji/buildinfo?buildID=19374 )


>
> I am not sure of the answer to your question (the SIG chairman might be
> able to answer the policy question) .. However
>
> We will be moving to 7.4 soon(ish), at which time the older tree (centos
> 7.3.1611) moves into vault .. and the 'qemu-kvm-rhev-2.9.0 el7_4' branch
> will likely start being built.
>
>
> ___
> CentOS-virt mailing list
> CentOS-virt@centos.org
> https://lists.centos.org/mailman/listinfo/centos-virt
>
>


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt