On 9/9/2017 9:47 AM, hw wrote:
Isn't it easier for SSDs to write small chunks of data at a time?
The small chunk might fit into some free space more easily than
a large one which needs to be spread out all over the place.
the SSD collects data blocks being written and when a full flash
Johnny Hughes wrote:
On 09/07/2017 12:57 PM, hw wrote:
Hi,
is there anything that speaks against putting a cyrus mail spool onto a
btrfs subvolume?
This is what Red Hat says about btrfs:
The Btrfs file system has been in Technology Preview state since the
initial release of Red Hat
Gordon Messmer wrote:
On 09/08/2017 11:06 AM, hw wrote:
Make a test and replace a software RAID5 with a hardware RAID5. Even with
only 4 disks, you will see an overall performance gain. I'm guessing that
the SATA controllers they put onto the mainboards are not designed to handle
all the data
> On 09.09.2017 at 19:22, hw wrote:
>
> Mark Haney wrote:
>> On 09/08/2017 01:31 PM, hw wrote:
>>> Mark Haney wrote:
>>>
>>> I/O is not heavy in that sense, that's why I said that's not the
>>> application.
>>> There is I/O which, as tests have shown, benefits greatly from low
Valeri Galtsev wrote:
Thanks. That seems to clear the fog a little bit. I still would like to hear
manufacturers/models here. My choices would be: Areca or LSI (bought out
by Intel, so former LSI chipset and microcode/firmware) and as SSD Samsung
Evo SATA III. Does anyone who used these in hardware
Mark Haney wrote:
On 09/08/2017 01:31 PM, hw wrote:
Mark Haney wrote:
I/O is not heavy in that sense, that's why I said that's not the application.
There is I/O which, as tests have shown, benefits greatly from low latency,
which
is where the idea to use SSDs for the relevant data has arisen
> On Sep 9, 2017, at 12:47 PM, hw wrote:
>
> Isn't it easier for SSDs to write small chunks of data at a time?
SSDs read/write in large-ish (256k-4M) blocks/pages. Seems to me that drive
blocks and hardware RAID strip size and file system block/cluster/extent sizes
and etc
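A rough sketch of how that alignment check might look in practice; the device
names and the 256 KiB strip / 3 data disks are assumptions, not values from the
thread:

  # what the device and the array advertise to the kernel
  cat /sys/block/sda/queue/minimum_io_size
  cat /sys/block/md0/queue/optimal_io_size
  # build the filesystem to match the array geometry,
  # e.g. a 256 KiB strip across 3 data disks
  mkfs.xfs -d su=256k,sw=3 /dev/md0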
m.r...@5-cent.us wrote:
hw wrote:
Mark Haney wrote:
On 09/08/2017 09:49 AM, hw wrote:
Mark Haney wrote:
Probably with the very expensive SSDs suited for this ...
That's because I do not store data on a single disk, without
redundancy, and the SSDs I have are not suitable for hardware
John R Pierce wrote:
And one may want to adjust the stripe size to resemble the SSDs'
internals, as the default is for hard drives, right?
as the SSD physical data blocks have no visible relation to logical block
numbers or CHS, it's not practical to do this. I'd use a fairly large stripe
size, like
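For a software array, one way a "fairly large stripe size" could be set with
mdadm; the 512 KiB chunk and the device names are assumptions, not John's
actual recommendation:

  # 4-disk RAID5 with a 512 KiB chunk per device
  mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=512 /dev/sd[bcde]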
On 9/8/2017 2:36 PM, Valeri Galtsev wrote:
With all due respect, John, this is the same as hard drive cache not being
backed up power-wise in case of power loss. And hard drives all lie about
write operations being completed before the data is actually on the platters.
So we can claim the same: hard
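For what it's worth, the volatile write cache being discussed can be inspected
and, if durability matters more than speed, switched off per drive; a sketch
only, with an example device name:

  # show whether the drive's volatile write cache is enabled
  hdparm -W /dev/sda
  # disable it so completions are not acknowledged from cache
  hdparm -W 0 /dev/sda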
On Fri, September 8, 2017 3:06 pm, John R Pierce wrote:
> On 9/8/2017 12:52 PM, Valeri Galtsev wrote:
>> Thanks. That seems to clear the fog a little bit. I still would like to hear
>> manufacturers/models here. My choices would be: Areca or LSI (bought out
>> by Intel, so former LSI chipset and
On 09/08/2017 11:06 AM, hw wrote:
Make a test and replace a software RAID5 with a hardware RAID5. Even with
only 4 disks, you will see an overall performance gain. I'm guessing that
the SATA controllers they put onto the mainboards are not designed to handle
all the data --- which gets
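If someone wants to reproduce that software-vs-hardware RAID5 comparison, a
minimal fio run against each array is one option; the mount point, file size
and block size here are assumptions, not what was actually measured in the
thread:

  # sequential write throughput with direct I/O on the array mounted at /mnt/test
  fio --name=seqwrite --filename=/mnt/test/fio.dat --size=4G \
      --bs=1M --rw=write --direct=1 --ioengine=libaio --iodepth=16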
On Fri, Sep 8, 2017 at 2:52 PM, Valeri Galtsev
wrote:
>
> manufacturers/models here. My choices would be: Areca or LSI (bought out
> by Intel, so former LSI chipset and microcode/firmware) and as SSD Samsung
>
Intel only purchased the networking component of LSI,
On 9/8/2017 12:52 PM, Valeri Galtsev wrote:
Thanks. That seems to clear the fog a little bit. I still would like to hear
manufacturers/models here. My choices would be: Areca or LSI (bought out
by Intel, so former LSI chipset and microcode/firmware) and as SSD Samsung
Evo SATA III. Does anyone who
On Fri, September 8, 2017 12:56 pm, hw wrote:
> Valeri Galtsev wrote:
>>
>> On Fri, September 8, 2017 9:48 am, hw wrote:
>>> m.r...@5-cent.us wrote:
hw wrote:
> Mark Haney wrote:
>> BTRFS isn't going to impact I/O any more significantly than, say,
>> XFS.
>
> But
On 09/07/2017 12:57 PM, hw wrote:
>
> Hi,
>
> is there anything that speaks against putting a cyrus mail spool onto a
> btrfs subvolume?
This is what Red Hat says about btrfs:
The Btrfs file system has been in Technology Preview state since the
initial release of Red Hat Enterprise Linux 6.
Mark Haney wrote:
> On 09/08/2017 01:31 PM, hw wrote:
>> Mark Haney wrote:
>>
>> Probably with the very expensive SSDs suited for this ...
> Possibly, but that's somewhat irrelevant. I've taken off-the-shelf SSDs
> and hardware RAID'd them. If they work for the hell I put them through
>
On 09/08/2017 01:31 PM, hw wrote:
Mark Haney wrote:
I/O is not heavy in that sense, that's why I said that's not the application.
There is I/O which, as tests have shown, benefits greatly from low latency,
which is where the idea to use SSDs for the relevant data has arisen from.
This I/O
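To put a number on "benefits greatly from low latency", a single-threaded 4 KiB
random-read run reports the completion latencies directly; the parameters below
are illustrative assumptions:

  # 4 KiB random reads at queue depth 1; look at the clat percentiles in the output
  fio --name=randread --filename=/mnt/test/fio.dat --size=4G --bs=4k \
      --rw=randread --direct=1 --ioengine=libaio --iodepth=1 \
      --runtime=60 --time_based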
m.r...@5-cent.us wrote:
Mark Haney wrote:
On 09/08/2017 09:49 AM, hw wrote:
Mark Haney wrote:
It depends, i.e. I can't tell how these SSDs would behave if large
amounts of data were written and/or read to/from them over extended
periods of time, because I haven't tested that. That
hw wrote:
> Mark Haney wrote:
>> On 09/08/2017 09:49 AM, hw wrote:
>>> Mark Haney wrote:
> Probably with the very expensive SSDs suited for this ...
>>>
>>> That's because I do not store data on a single disk, without
>>> redundancy, and the SSDs I have are not suitable for hardware RAID.
Valeri Galtsev wrote:
On Fri, September 8, 2017 9:48 am, hw wrote:
m.r...@5-cent.us wrote:
hw wrote:
Mark Haney wrote:
BTRFS isn't going to impact I/O any more significantly than, say, XFS.
But mdadm does; the impact is severe. I know there are ppl saying
otherwise, but I've seen the
Mark Haney wrote:
> On 09/08/2017 09:49 AM, hw wrote:
>> Mark Haney wrote:
>>
>> It depends, i.e. I can't tell how these SSDs would behave if large
>> amounts of data were written and/or read to/from them over extended
>> periods of time, because I haven't tested that. That isn't the
>>
Mark Haney wrote:
On 09/08/2017 09:49 AM, hw wrote:
Mark Haney wrote:
I hate top posting, but since you've got two items I want to comment on, I'll
suck it up for now.
I do, too, yet sometimes it's reasonable. I also hate it when the lines
are too long :)
I'm afraid you'll have to live
On 8 September 2017 at 12:13, Valeri Galtsev wrote:
>
> On Fri, September 8, 2017 11:07 am, Stephen John Smoogen wrote:
>> On 8 September 2017 at 11:00, Valeri Galtsev
>> wrote:
>>>
>>> On Fri, September 8, 2017 9:48 am, hw wrote:
On Fri, September 8, 2017 11:07 am, Stephen John Smoogen wrote:
> On 8 September 2017 at 11:00, Valeri Galtsev
> wrote:
>>
>> On Fri, September 8, 2017 9:48 am, hw wrote:
>>> m.r...@5-cent.us wrote:
hw wrote:
> Mark Haney wrote:
>> BTRFS isn't going
On 8 September 2017 at 11:00, Valeri Galtsev wrote:
>
> On Fri, September 8, 2017 9:48 am, hw wrote:
>> m.r...@5-cent.us wrote:
>>> hw wrote:
Mark Haney wrote:
>>>
> BTRFS isn't going to impact I/O any more significantly than, say, XFS.
But mdadm
On Fri, September 8, 2017 9:48 am, hw wrote:
> m.r...@5-cent.us wrote:
>> hw wrote:
>>> Mark Haney wrote:
>>
BTRFS isn't going to impact I/O any more significantly than, say, XFS.
>>>
>>> But mdadm does; the impact is severe. I know there are ppl saying
>>> otherwise, but I've seen the
On 09/08/2017 09:49 AM, hw wrote:
Mark Haney wrote:
I hate top posting, but since you've got two items I want to comment
on, I'll suck it up for now.
I do, too, yet sometimes it's reasonable. I also hate it when the lines
are too long :)
I'm afraid you'll have to live with it a bit longer.
m.r...@5-cent.us wrote:
hw wrote:
Mark Haney wrote:
BTRFS isn't going to impact I/O any more significantly than, say, XFS.
But mdadm does; the impact is severe. I know there are ppl saying
otherwise, but I've seen the impact myself, and I definitely don't want
it on that particular server
hw wrote:
> Mark Haney wrote:
>> BTRFS isn't going to impact I/O any more significantly than, say, XFS.
>
> But mdadm does; the impact is severe. I know there are ppl saying
> otherwise, but I've seen the impact myself, and I definitely don't want
> it on that particular server because it would
Mark Haney wrote:
I hate top posting, but since you've got two items I want to comment on, I'll
suck it up for now.
I do, too, yet sometimes it's reasonable. I also hate it when the lines
are too long :)
Having SSDs alone will give you great performance regardless of filesystem.
It
Matty wrote:
I think it depends on who you ask. Facebook and Netflix are using it
extensively in production:
https://www.linux.com/news/learn/intro-to-linux/how-facebook-uses-linux-and-btrfs-interview-chris-mason
Though they have the in-house kernel engineering resources to
troubleshoot
I hate top posting, but since you've got two items I want to comment on,
I'll suck it up for now.
Having SSDs alone will give you great performance regardless of
filesystem. BTRFS isn't going to impact I/O any more significantly
than, say, XFS. It does have serious stability/data integrity
I think it depends on who you ask. Facebook and Netflix are using it
extensively in production:
https://www.linux.com/news/learn/intro-to-linux/how-facebook-uses-linux-and-btrfs-interview-chris-mason
Though they have the in-house kernel engineering resources to
troubleshoot problems. When I see
PS:
What kind of storage solutions do people use for cyrus mail spools? Apparently
you cannot use remote storage, at least not NFS. That even makes it difficult
to use a VM due to limitations of available disk space.
I'm reluctant to use btrfs, but there doesn't seem to be any reasonable
Mark Haney wrote:
On 09/07/2017 01:57 PM, hw wrote:
Hi,
is there anything that speaks against putting a cyrus mail spool onto a
btrfs subvolume?
I might be the lone voice on this, but I refuse to use btrfs for anything, much
less a mail spool. I used it in production on DB and Web servers
On 09/07/2017 01:57 PM, hw wrote:
Hi,
is there anything that speaks against putting a cyrus mail spool onto a
btrfs subvolume?
I might be the lone voice on this, but I refuse to use btrfs for
anything, much less a mail spool. I used it in production on DB and Web
servers and fought
Hi,
is there anything that speaks against putting a cyrus mail spool onto a
btrfs subvolume?
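For reference, the mechanics behind the question are just this; the pool mount
point and device are assumptions, and /var/spool/imap is the usual cyrus-imapd
default spool location on CentOS:

  # on an existing btrfs filesystem mounted at /mnt/pool
  btrfs subvolume create /mnt/pool/imap-spool
  # mount the subvolume at the cyrus spool location
  mount -o subvol=imap-spool,noatime /dev/sdb1 /var/spool/imap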