I have had two ceph monitor nodes generate swap space alerts this week.
Looking at the memory, I see ceph-mon using a lot of memory and most of the
swap space. My ceph nodes have 128 GB of memory with 2 GB of swap (I know the
memory/swap ratio is odd).
When I get the alert, I see the following
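A useful way to check whether that memory is live usage or just pages that
tcmalloc has not returned to the OS is the mon's tell interface. A sketch,
with mon.a as a placeholder for the monitor id:

    # show tcmalloc heap statistics for the monitor
    ceph tell mon.a heap stats
    # ask tcmalloc to hand freed pages back to the OS
    ceph tell mon.a heap release
    # compact the mon's leveldb store, which can also shrink its footprint
    ceph tell mon.a compact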
> On 09.02.2017 20:11, Jim Kilborn wrote:
>
>> I am trying to figure out how to allow my users to have sudo on their
>> workstation, but not have that root access to the ceph kernel mounted volume.
>
> I do not think that CephFS is meant to be mounted on human users'
> workstations.
From: <j...@suse.de>
Sent: Thursday, February 9, 2017 3:06 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-mon memory issue jewel 10.2.5 kernel 4.4
Hi Jim,
On 02/08/2017 07:45 PM, Jim Kilborn wrote:
> I have had two ceph monitor nodes generate swap space alerts this week,
> running jewel 10.2.5 on 4.4.x kernel.
>
> It seems that with every release there are more and more problems with ceph
> (((, which is a shame.
>
> Andrei
>
> - Original Message -
>> From: "Jim Kilborn" <j...@kilborns.com>
>> To: "ceph-users" <ceph-users@list
Does cephfs have an option for root squash, like nfs mounts do?
I am trying to figure out how to allow my users to have sudo on their
workstation, but not have that root access to the ceph kernel mounted volume.
Can’t seem to find anything. Using cephx for the mount, but can’t find a “root
squash” equivalent.
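The closest I have found is pinning the client key to a subtree with mds path
caps, which is not root squash but at least limits what any client (root
included) can reach. A sketch, with placeholder client, path, and pool names:

    # create a key that can only reach the /home subtree of the filesystem
    ceph auth get-or-create client.workstation \
        mon 'allow r' \
        mds 'allow rw path=/home' \
        osd 'allow rw pool=cephfs_data'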
Hello all…
I am setting up a ceph cluster (jewel) on a private network. The compute nodes
are all running centos 7 and mounting the cephfs volume using the kernel
driver. The ceph storage nodes are dual connected to the private network, as
well as our corporate network, as some users need to
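The mounts on the compute nodes are the stock kernel-driver form, something
like the following (monitor addresses, user name, and secret file are
placeholders):

    # kernel CephFS mount authenticated with cephx
    mount -t ceph 10.0.1.1:6789,10.0.1.2:6789,10.0.1.3:6789:/ /mnt/cephfs \
        -o name=cephuser,secretfile=/etc/ceph/cephuser.secret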
I am finishing testing our new cephfs cluster and wanted to document a failed
osd procedure.
I noticed that when I pulled a drive to simulate a failure and ran through the
replacement steps, the osd had to be removed from the crushmap in order to
initialize the new drive as the same osd.
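The sequence I ended up documenting looks roughly like this (osd.12 and
/dev/sdX are placeholders for the failed osd id and the replacement device):

    # stop the dead osd and remove it from the cluster maps
    systemctl stop ceph-osd@12
    ceph osd out 12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12
    # prepare the replacement disk; the lowest free id (12 here) gets reused
    ceph-disk prepare /dev/sdX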
Thanks again
From: Reed Dier <reed.d...@focusvq.com>
Sent: Wednesday, September 14, 2016 1:39 PM
To: Jim Kilborn <j...@kilborns.com>
Cc: ceph-users@lists.ceph.com
I’m new to cephfs and ceph completely, so I’m in that steep learning-curve
phase.
Thanks again
From: Gregory Farnum <gfar...@redhat.com>
Sent: Thursday, September 8, 2016 6:05 PM
To: Jim Kilborn <j...@kilborns.com>
Cc: Wido den Hollander
I have a replicated cache pool and metadata pool which reside on ssd drives,
with a size of 2, backed by an erasure-coded data pool.
The cephfs filesystem was in a healthy state. I pulled an SSD drive to perform
an exercise in osd failure.
The cluster recognized the ssd failure, and replicated
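Watching it happen was just the usual read-only status commands:

    ceph -s              # overall health and recovery progress
    ceph osd tree        # the pulled ssd shows as down/out
    ceph health detail   # per-pg detail while objects re-replicate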
A simple issue I can’t track down with the cache tier. Thanks for taking the
time…
I set up a new cluster with an ssd cache tier. My cache tier is on 1TB ssds
with 2 replicas. It just fills up my cache until the ceph filesystem stops
allowing access.
I even set the target_max_bytes to 1048576 (1GB) and still
Please disregard this. I had an error in my target_max_bytes that was causing
the issue. I now have it evicting the cache.
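For the archive: 1048576 bytes is only 1 MiB, not 1 GB, which is an easy slip
to make with these byte-valued settings. The corrected form looks something
like this (pool name and sizes are placeholders):

    # target_max_bytes is in bytes; ~800 GiB shown here
    ceph osd pool set cachepool target_max_bytes 858993459200
    # start flushing dirty objects at 40% and evicting at 80% of the target
    ceph osd pool set cachepool cache_target_dirty_ratio 0.4
    ceph osd pool set cachepool cache_target_full_ratio 0.8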
From: Jim Kilborn <j...@kilborns.com>
Sent: Tuesday, September 20
From: John Spray <jsp...@redhat.com>
Sent: Wednesday, October 19, 2016 12:16 PM
To: Jim Kilborn <j...@kilborns.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] New cephfs cluster performance issues- Jewel - cache
pressure, capability release, poor iostat await avg queue size
I have set up a new linux cluster to allow migration from our old SAN-based
cluster to a new cluster with ceph.
All systems running centos 7.2 with the 3.10.0-327.36.1 kernel.
I am basically running stock ceph settings, with just turning the write cache
off via hdparm on the drives, and
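The write-cache change is per device, roughly as follows (run for each data
drive; /dev/sdX is a placeholder):

    # disable the drive's volatile write cache
    hdparm -W 0 /dev/sdX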
From: Christian Balzer <ch...@gol.com>
Sent: Wednesday, October 19, 2016 7:54 PM
To: ceph-users@lists.ceph.com
Cc: Jim Kilborn <j...@kilborns.com>
Subject: Re: [ceph-users] New cephfs cluster performance issues- Jewel - cache
pressure, capability release, poor iostat await avg queue size
From: Jim Kilborn <j...@kilborns.com>
Sent: Thursday, October 20, 2016 10:20 AM
To: Christian Balzer <ch...@gol.com>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] New cephfs cluster performance issues- Jewel - cache
pressure, capability release, poor iostat await avg queue size
On Oct 19, 2016, at 7:54 PM, Christian Balzer <ch...@gol.com> wrote:
Hello,
On Wed, 19 Oct 2016 12:28:28 +0000 Jim Kilborn wrote:
I have setup a new linux cluster to allow migration from
From: John Spray <jsp...@redhat.com>
Sent: Wednesday, October 19, 2016 9:10 AM
To: Jim Kilborn <j...@kilborns.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] New cephfs cluster performance issues- Jewel - cache
pressure, capability release, poor iostat await avg queue size
From: Jim Kilborn <j...@kilborns.com>
Sent: Wednesday, January 4, 2017 9:19 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] client.admin accidentally removed caps/permissions
Hello:
I was trying to fix a problem with mds caps, and caused my admin user to have
no mon caps.
I ran:
ceph auth caps client.admin mds 'allow *'
I didn’t realize I had to pass the mon and osd caps as well. Now, when I try to
run any command, I get
2017-01-04 08:58:44.009250 7f5441f62700
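In case anyone else strands themselves the same way: the monitor's own key
still authenticates even when client.admin has lost its mon caps, so the caps
can be restored from a mon host. A sketch (the keyring path is a placeholder
for your mon's data directory):

    # run on a monitor node, authenticating as the mon. identity
    ceph -n mon. --keyring /var/lib/ceph/mon/ceph-$(hostname -s)/keyring \
        auth caps client.admin \
        mon 'allow *' osd 'allow *' mds 'allow *'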