On 29/04/2015 09:31 AM, Götz Reinicke - IT Koordinator wrote:
Hi,
maybe someone has a working solution and information on that:
I installed the most recent MySQL community release on a server and get a lot
of 'errno: 24 - Too many open files' errors.
There are suggestions to increase the
-----Original Message-----
From: Jim Perrin
Sent: Tuesday, April 28, 2015 20:45
On 04/28/2015 06:05 PM, Akemi Yagi wrote:
On Tue, Apr 28, 2015 at 3:10 PM, Johnny Hughes joh...@centos.org wrote:
CentOS is not approved for DoD use. In fact, CentOS is not now, nor has
it ever been
Hi,
maybe someone has a working solution and information on that:
I installed the most recent MySQL community release on a server and get a lot
of 'errno: 24 - Too many open files' errors.
There are suggestions to increase the open_files_limit, change/add that
to /etc/security/limits.conf and modify the
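Before raising anything, it helps to see which layer is actually constraining you. A read-only sketch (the mysqld process name in the last line is an assumption about your setup):

```shell
# Read-only checks of the current file-descriptor limits; safe to run anywhere.
ulimit -n                    # soft limit for this shell
ulimit -Hn                   # hard limit for this shell
cat /proc/sys/fs/file-max    # kernel-wide ceiling (Linux)
# Limits of the running daemon (assumes the process is named mysqld):
# grep 'open files' /proc/$(pidof mysqld)/limits
```

If the daemon's effective limit is already low, no amount of my.cnf tuning will raise it past what the service manager handed it.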
CentOS Errata and Bugfix Advisory 2015:0915
Upstream details at : https://rhn.redhat.com/errata/RHBA-2015-0915.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
i386:
CentOS Errata and Bugfix Advisory 2015:0916
Upstream details at : https://rhn.redhat.com/errata/RHBA-2015-0916.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
i386:
Send CentOS-announce mailing list submissions to
centos-annou...@centos.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.centos.org/mailman/listinfo/centos-announce
or, via email, send a message with subject or body 'help' to
Gotz,
This is due to systemd; it overrules your settings. Adding a drop-in file
to the systemd config fixes it:
[root@mysql2 ~]# cat /etc/systemd/system/mariadb.service.d/limits.conf
[Service]
LimitNOFILE=1
LimitMEMLOCK=1
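For anyone applying this, a sketch of the drop-in approach. The limit value 65535 is an assumption (pick one for your workload), and the file is built under $PWD here so it can be inspected without root; the real target is /etc/systemd/system/mariadb.service.d/limits.conf:

```shell
# Build the drop-in under $PWD for inspection; on a real host write it to
# /etc/systemd/system/mariadb.service.d/ instead (needs root).
UNIT_DIR="$PWD/mariadb.service.d"
mkdir -p "$UNIT_DIR"
printf '[Service]\nLimitNOFILE=65535\n' > "$UNIT_DIR/limits.conf"
cat "$UNIT_DIR/limits.conf"
# After installing the real file:
#   systemctl daemon-reload && systemctl restart mariadb
#   systemctl show mariadb -p LimitNOFILE   # confirm the new value
```

The daemon-reload step matters: systemd only rereads unit files and drop-ins on reload, so editing the file alone changes nothing for the running service.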
On Wed, Apr 29, 2015 at 8:31 AM, Götz Reinicke - IT Koordinator
On Tue, Apr 28, 2015 at 4:05 PM, Akemi Yagi amy...@gmail.com wrote:
Incidentally, someone has just started a thread related to DoD in the
RH community discussion section, entitled "A DoD version of RHEL - A
money maker for RH? Maybe!":
https://access.redhat.com/comment/913243
A new comment
On Wed, Apr 29, 2015 at 10:51 AM, m.r...@5-cent.us wrote:
The server in this case isn't a Linux box with an ext4 file system - so
that won't help ...
What kind of filesystem is it? I note that xfs also has barrier as a mount
option.
The server is a NetApp FAS6280. It's using NetApp's
m.r...@5-cent.us wrote:
Matt Garman wrote:
We have a compute cluster of about 100 machines that do a read-only
NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on
these boxes are analysis/simulation jobs that constantly read data off
the NAS.
snip
*IF* I understand you,
--On Wednesday, April 29, 2015 08:35:29 AM -0500 Matt Garman
matthew.gar...@gmail.com wrote:
All indications are that CentOS 6 seems to be much more aggressive
in how it does NFS reads. And likewise, CentOS 5 was very polite,
to the point that it basically got starved out by the introduction
I have noanacron installed on a fresh CentOS 7 install.
I added these settings, too:
nano /etc/cron.d/0hourly
*/5 * * * * root run-parts /etc/cron.fiveminutes
*/1 * * * * root run-parts /etc/cron.minute
0,30 * * * * root run-parts /etc/cron.halfhour
and then created the directories for it. Now
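The setup described can be sketched end to end. The sample job name is made up, and everything is created under $PWD here so it runs without root (the real directories live directly under /etc):

```shell
# Create the custom run-parts directories referenced by the cron entries,
# plus one sample job. run-parts skips files that are not executable.
BASE="$PWD/etc"
for d in cron.fiveminutes cron.minute cron.halfhour; do
    mkdir -p "$BASE/$d"
done
cat > "$BASE/cron.fiveminutes/sample-job" <<'EOF'
#!/bin/sh
echo "five-minute job ran"
EOF
chmod 755 "$BASE/cron.fiveminutes/sample-job"
# Run it by hand once to confirm cron will be able to:
"$BASE/cron.fiveminutes/sample-job"
```

Running a job by hand as root is a quick way to separate "cron never fired" from "the script itself fails".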
Carl,
By default my.cnf has to obey the OS limits, so in this case the order
of precedence is: systemd > /etc/security/limits* > /etc/my*.
On Wed, Apr 29, 2015 at 3:22 PM, Carl E. Hartung carlh04...@gmail.com
wrote:
Hi Johan,
Does systemd also overrule /etc/my.cnf?
Thx!
Carl
On Wed, 29 Apr 2015
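Johan's precedence claim is easy to verify from /proc, which reports the limits a process actually received regardless of which layer set them (mysqld as the daemon name is an assumption):

```shell
# /proc/<pid>/limits shows the effective limits, whatever set them.
grep 'Max open files' /proc/self/limits
# For the database daemon on a real host:
# grep 'Max open files' /proc/$(pidof mysqld)/limits
```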
Check the SELinux context of the directories?
30.4.2015 12.19 ap. Matt matt.mailingli...@gmail.com kirjoitti:
I'm staring at the free CentOS images on AWS, and seeing that whoever
set those up elected to use a partition for /dev/xvda1 rather than
taking advantage of Amazon's tendency to use /dev/xvda, /dev/xvdb,
etc. for each disk and use those directly as a file system.
The result is that if you elect
Dear Doc Admins,
My name is Earl Ramirez and I have a particular interest in the
'hardening' SIG, so I would like to know if it's possible for me to
have write access to the hardening SIG page [0]. My goal is to kick off the
draft and as we come together to decide the goals and direction
Should the URL match the word used in the subject? [0]
[0] http://wiki.centos.org/SpecialInterestGroup/Hardening
jerry
We have a compute cluster of about 100 machines that do a read-only
NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on
these boxes are analysis/simulation jobs that constantly read data off
the NAS.
We recently upgraded all these machines from CentOS 5.7 to CentOS 6.5.
We did a
On 29 April 2015 at 14:48, Jerry Amundson jamun...@gmail.com wrote:
Should the URL match the word used in the subject? [0]
[0] http://wiki.centos.org/SpecialInterestGroup/Hardening
jerry
Thanks, Jerry. I shall tactfully refrain from mentioning who originally
created that page using novel
Thank you for clarifying this, Johan. Very much appreciated!
On Wed, 29 Apr 2015 22:28:00 +0200
Johan Kooijman wrote:
You may want to look at NFSometer and see if it can help.
Haven't seen that, will definitely give it a try!
Try nfsstat -cn on the clients to see if any particular NFS operations
occur more or less frequently on the C6 systems.
Also look at the lookupcache option found in man nfs:
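To make the lookupcache suggestion concrete, a hedged fstab sketch; the server, export path, and mount point are placeholders, 'all' is the default, and 'pos' or 'none' trade cache hits for tighter coherence:

```
# /etc/fstab -- read-only NFS mount with explicit caching options (sketch)
filer:/export  /mnt/data  nfs  ro,lookupcache=all,actimeo=60  0 0
```

Comparing `nfsstat -cn` output on a C5 and a C6 client running the same workload should show whether the C6 boxes are issuing more lookups or reads per unit of work.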
On Wed, Apr 29, 2015 at 10:36 AM, Devin Reade g...@gno.org wrote:
Have you looked at the client-side NFS cache? Perhaps the C6 cache
is either disabled, has fewer resources, or is invalidating faster?
(I don't think that would explain the C5 starvation, though, unless
it's a secondary effect
This is not really a problem at all.
When you launch your image for the first time, you can specify a larger /
volume size and the cloud-init tools will take care of the rest.
This is well documented in the AWS user guides.
-- Kelly Prescott
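Concretely, the root volume size is chosen at launch time via a block-device mapping; a sketch of the mapping (device name, size, and volume type are assumptions), which cloud-init's growpart then expands on first boot:

```
[
  {
    "DeviceName": "/dev/sda1",
    "Ebs": { "VolumeSize": 40, "VolumeType": "gp2", "DeleteOnTermination": true }
  }
]
```

This JSON can be passed to `aws ec2 run-instances --block-device-mappings` or entered in the console's storage step.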
On Wed, 29 Apr 2015, Nico Kadel-Garcia wrote:
I'm
To follow up, I will give an example.
Here is the listing for the official CentOS AMI:
IMAGE ami-96a818fe aws-marketplace/CentOS 7 x86_64 (2014_09_29) EBS
HVM-b7ee8a69-ee97-4a49-9e68-afaee216db2e-ami-d2a117ba.2
aws-marketplace available public [marketplace: