Awesome idea, thank you all!
Xlord
From: CentOS-docs [mailto:centos-docs-boun...@centos.org] On Behalf Of
Bamacharan Kundu
Sent: Friday, September 8, 2017 9:43 PM
To: Karanbir Singh
Cc: Akemi Yagi (toracat); Mail list for wiki articles
On 2017-09-09, Nicolas Kovacs wrote:
>
> So, in other words, there's no need to worry if a little swap is used
> when the system's been running non-stop for a couple months?
As long as your system isn't thrashing swap it's totally fine. From
what you've written it doesn't
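A quick way to tell harmless swap activity from actual thrashing (a sketch, assuming a stock CentOS 7 install):

  # Sample memory counters every 2 seconds, five times; watch the si/so
  # columns (KiB swapped in/out per second). Occasional small numbers are
  # harmless; large numbers on every sample mean the system is thrashing.
  vmstat 2 5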
On 09/09/2017 04:43 AM, Nicolas Kovacs wrote:
I read up a bit on RAM consumption, and now I wonder if flushing the
memory cache regularly is a good idea.
Typically, no. Whatever pages of memory the system has swapped out are
pages that the kernel's counters indicate are less often used than
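For anyone who wants to measure the effect rather than take it on faith, the kernel's standard cache-dropping interface is below; note the cache simply refills as files are read again, so this is a diagnostic tool, not a tuning knob:

  sync                               # write out dirty pages first
  echo 3 > /proc/sys/vm/drop_caches  # 1=page cache, 2=dentries/inodes, 3=both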
On 9/9/2017 9:47 AM, hw wrote:
Isn't it easier for SSDs to write small chunks of data at a time?
The small chunk might fit into some free space more easily than
a large one which needs to be spread out all over the place.
the SSD collects data blocks being written and when a full flash
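The sizes the drive actually advertises can be checked directly; note that many consumer SSDs report 0 for the optional hints, and /dev/sda below is just an example device:

  lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda      # logical vs. physical sector size
  cat /sys/block/sda/queue/optimal_io_size    # preferred I/O size, if reported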
Johnny Hughes wrote:
On 09/07/2017 12:57 PM, hw wrote:
Hi,
is there any reason not to put a Cyrus mail spool onto a btrfs
subvolume?
This is what Red Hat says about btrfs:
The Btrfs file system has been in Technology Preview state since the
initial release of Red Hat
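If you do try it anyway, a minimal sketch of a dedicated subvolume for the spool (assuming the usual Cyrus spool path and that /var/spool already sits on btrfs; chattr +C disables copy-on-write, which some recommend for rewrite-heavy workloads, at the cost of data checksumming):

  btrfs subvolume create /var/spool/imap
  chattr +C /var/spool/imap   # only affects files created afterwards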
Gordon Messmer wrote:
On 09/08/2017 11:06 AM, hw wrote:
Try replacing a software RAID5 with a hardware RAID5. Even with
only 4 disks, you will see an overall performance gain. I'm guessing that
the SATA controllers they put onto mainboards are not designed to handle
all the data
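An easy way to make that test reproducible is to run the same fio job against each array and compare the IOPS lines (the mount point and file name are placeholders):

  fio --name=raid5test --filename=/mnt/raid5/testfile --size=4G \
      --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 \
      --direct=1 --runtime=60 --time_based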
> Am 09.09.2017 um 19:22 schrieb hw :
>
> Mark Haney wrote:
>> On 09/08/2017 01:31 PM, hw wrote:
>>> Mark Haney wrote:
>>>
>>> I/O is not heavy in that sense, that's why I said that's not the
>>> application.
>>> There is I/O which, as tests have shown, benefits greatly from low
Valeri Galtsev wrote:
Thanks. That clears the fog a bit. I would still like to hear
manufacturers/models here. My choices would be: Areca or LSI (bought out
by Intel, so former LSI chipset and microcode/firmware) and, as the SSD,
a Samsung Evo SATA III. Does anyone who used these in hardware
Mark Haney wrote:
On 09/08/2017 01:31 PM, hw wrote:
Mark Haney wrote:
I/O is not heavy in that sense, that's why I said that's not the application.
There is I/O which, as tests have shown, benefits greatly from low latency,
which
is where the idea to use SSDs for the relevant data has arisen
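Latency is easy to quantify before buying anything: a queue depth of 1 exposes raw access latency, and fio prints completion-latency percentiles at the end of the run (the file name is a placeholder):

  fio --name=lat --filename=/mnt/data/testfile --size=1G \
      --rw=randread --bs=4k --iodepth=1 --direct=1 \
      --runtime=30 --time_based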
Le 09/09/2017 à 18:11, hw a écrit :
> It's nothing unusual and a good thing when it means that stuff not
> needed in memory is swapped out.
>
> What is the supposed advantage of emptying the cache? I've had
> servers whose uptime counter overflowed after years of uptime
> and never
> On Sep 9, 2017, at 12:47 PM, hw wrote:
>
> Isn't it easier for SSDs to write small chunks of data at a time?
SSDs read/write in large-ish (256k-4M) blocks/pages. Seems to me that drive
blocks, hardware RAID stripe size, file system block/cluster/extent sizes,
etc.
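Lining those layers up is mostly a matter of telling the file system the array geometry at mkfs time; a sketch for XFS on a hypothetical 4-disk RAID5 (3 data disks, 256 KiB chunk):

  mkfs.xfs -d su=256k,sw=3 /dev/md0   # su = stripe unit, sw = stripe width in data disks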
m.r...@5-cent.us wrote:
hw wrote:
Mark Haney wrote:
On 09/08/2017 09:49 AM, hw wrote:
Mark Haney wrote:
Probably with the very expensive SSDs suited for this ...
That's because I do not store data on a single disk, without
redundancy, and the SSDs I have are not suitable for hardware
John R Pierce wrote:
And one may want to adjust the stripe size to match the SSD's
internals, since the default is meant for hard drives, right?
as the SSD's physical data blocks have no visible relation to logical block
numbers or CHS, it's not practical to do this. I'd use a fairly large stripe
size, like
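With Linux md the chunk size is picked at creation time, and current mdadm already defaults to 512 KiB (device names below are placeholders):

  mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=512 /dev/sd[bcde]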
Nicolas Kovacs wrote:
Le 09/09/2017 à 15:14, Robert Nichols a écrit :
Every system that runs continuously for more than a few days will have
some pages that were used once when some long-running process started
and were never referenced again. Those pages will eventually migrate out
to swap,
Le 09/09/2017 à 15:14, Robert Nichols a écrit :
> Every system that runs continuously for more than a few days will have
> some pages that were used once when some long-running process started
> and were never referenced again. Those pages will eventually migrate out
> to swap, and that's the best
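To see which processes those once-used pages belong to, a rough one-liner (/proc/*/status is world-readable, so no root needed):

  grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -k2 -n | tail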
On 09/09/2017 07:55 AM, Nicolas Kovacs wrote:
Le 09/09/2017 à 14:41, Phil Perry a écrit :
Why were you surprised? Linux systems use the available RAM, surely you
understand that?
I'm surprised because my system used the available RAM and then it even
began to swap.
Of course there is the
Le 09/09/2017 à 14:41, Phil Perry a écrit :
> Why were you surprised? Linux systems use the available RAM, surely you
> understand that?
I'm surprised because my system used the available RAM and then it even
began to swap.
>
> Of course there is the possibility that you have discovered a bug
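If the swapping still bothers you, the knob to reach for is vm.swappiness, not drop_caches; lowering it makes the kernel prefer reclaiming cache over swapping anonymous pages (the default on CentOS 7 is 60):

  sysctl vm.swappiness=10
  echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf   # persist across reboots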
On 09/09/17 12:43, Nicolas Kovacs wrote:
Hi,
A few days ago I checked the health of my main public server running
CentOS 7, a quad-core machine with 16 GB RAM. It had been running
non-stop for 65 days, hosting only a handful of services (BIND, NTP,
Apache, Postfix, Dovecot) for two domains.
I
Hi,
A few days ago I checked the health of my main public server running
CentOS 7, a quad-core machine with 16 GB RAM. It had been running
non-stop for 65 days, hosting only a handful of services (BIND, NTP,
Apache, Postfix, Dovecot) for two domains.
I was surprised to see that RAM consumption
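On CentOS 7, free already separates cache from real usage; the figure to watch is the "available" column, not "used" (the numbers below are purely illustrative):

  $ free -h
                total        used        free      shared  buff/cache   available
  Mem:            15G        2.1G        341M        201M         13G         12G
  Swap:          7.9G        112M        7.8G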
Zimbra is a good option; it has a fairly simple installer.
Regards
Wilmer Arambula wrote:
>I use (postfix + postfixmysql + dovecot + clamav + spamassassin + mariadb +
>dkim + dmarc) and of course the SPF record, all under SSL, and it works
>like a charm
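For reference, a published SPF record can be checked from the shell (example.com stands in for the real zone):

  dig +short TXT example.com | grep spf1
  # e.g. "v=spf1 mx -all"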