This is the second development checkpoint release of Luminous, the next
long-term stable release.
Major changes from 12.0.0
-------------------------
* The original librados rados_objects_list_open (C) and objects_begin
(C++) object listing API, deprecated in Hammer, has finally been removed.
Jason, sorry for the typo in your email address in my last mail...
On 29/03/2017, 00:36, Jason Dillaman wrote:
While that could certainly be a feature added to "rbd
info", it will take a while for it to reach full use, since
it would rely on new versions of librbd / krbd.
On Tue, Mar 28, 2017 at 8:44 PM, Brady Deetz wrote:
> Thanks John. Since we're on 10.2.5, the mds package has a dependency on
> 10.2.6
>
> Do you feel it is safe to perform a cluster upgrade to 10.2.6 in this state?
Yes, it shouldn't be an issue to upgrade the whole system to 10.2.6.
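As a rough sketch, a Jewel point-release upgrade on an RPM-based node usually
means updating all of the ceph packages in one transaction and then restarting
daemons in the usual order (monitors, then OSDs, then MDS); the package list
and unit names below are illustrative:

    # upgrade all ceph packages together on each node
    sudo yum update ceph ceph-base ceph-common ceph-mon ceph-osd ceph-mds
    # restart monitors first, then OSDs, then the MDS
    sudo systemctl restart ceph-mon.target
    sudo systemctl restart ceph-osd.target
    sudo systemctl restart ceph-mds.target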
Thanks John. Since we're on 10.2.5, the mds package has a dependency on
10.2.6
Do you feel it is safe to perform a cluster upgrade to 10.2.6 in this state?
[root@mds0 ceph-admin]# rpm -Uvh ceph-mds-10.2.6-1.gdf5ca2d.el7.x86_64.rpm
error: Failed dependencies:
ceph-base =
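That dependency error is what you get when upgrading a single RPM by hand:
ceph-mds requires a ceph-base of exactly the same version. A sketch of
resolving it by upgrading the interdependent packages in one transaction (the
exact package set may differ on your system):

    sudo rpm -Uvh ceph-base-10.2.6-*.rpm ceph-common-10.2.6-*.rpm \
        ceph-selinux-10.2.6-*.rpm ceph-mds-10.2.6-*.rpm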
Hi Christian,
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Christian Balzer
> Sent: 28 March 2017 00:59
> To: ceph-users@lists.ceph.com
> Cc: Nick Fisk
> Subject: Re: [ceph-users] New hardware for OSDs
>
>
> Hello,
>
On Tue, Mar 28, 2017 at 7:12 PM, Brady Deetz wrote:
> Thank you very much. I've located the directory whose layout is against
> that pool. I've dug around to attempt to create a pool with the same ID as
> the deleted one, but for fairly obvious reasons, that option doesn't seem to
Thank you very much. I've located the directory whose layout is against
that pool. I've dug around to attempt to create a pool with the same ID as
the deleted one, but for fairly obvious reasons, that option doesn't seem to exist.
On Tue, Mar 28, 2017 at 1:08 PM, John Spray wrote:
On Tue, Mar 28, 2017 at 6:45 PM, Brady Deetz wrote:
> If I follow the recommendations of this doc, do you suspect we will recover?
>
> http://docs.ceph.com/docs/jewel/cephfs/disaster-recovery/
You might, but it's overkill and introduces its own risks -- your
metadata isn't
Did you at some point add a new data pool to the filesystem, and then
remove the pool? With a little investigation I've found that the MDS
currently doesn't handle that properly:
http://tracker.ceph.com/issues/19401
John
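For anyone in a similar spot: directory and file layouts (including the data
pool they point at) can be inspected from a client mount via the CephFS
virtual xattrs, which is one way to find what still references a deleted
pool. The mount point and paths below are placeholders:

    # show a directory's layout, including its data pool
    getfattr -n ceph.dir.layout /mnt/cephfs/some/dir
    # individual files carry their own layout
    getfattr -n ceph.file.layout /mnt/cephfs/some/dir/file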
On Tue, Mar 28, 2017 at 6:11 PM, John Spray wrote:
>
On Tue, Mar 28, 2017 at 5:54 PM, Brady Deetz wrote:
> Running Jewel 10.2.5 on my production CephFS cluster and came in to find this
> ceph status
>
> [ceph-admin@mds1 brady]$ ceph status
> cluster 6f91f60c-7bc0-4aaa-a136-4a90851fbe10
> health HEALTH_WARN
> mds0:
Running Jewel 10.2.5 on my production CephFS cluster and came in to find this
ceph status
[ceph-admin@mds1 brady]$ ceph status
    cluster 6f91f60c-7bc0-4aaa-a136-4a90851fbe10
     health HEALTH_WARN
            mds0: Behind on trimming (2718/30)
            mds0: MDS in read-only mode
     monmap e17:
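For context, an MDS that has hit a write error stays in read-only mode until
the daemon is restarted, so the usual sequence is to fix the underlying
problem and then bounce the MDS; the unit name below is a placeholder:

    # after addressing the root cause, restart the MDS to clear read-only mode
    sudo systemctl restart ceph-mds@mds0
    # then watch the trimming backlog drain
    ceph status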
While that could certainly be a feature added to "rbd
info", it will take a while for it to reach full use, since
it would rely on new versions of librbd / krbd.
Additionally, access and modified timestamps would require sending out
an update notification so that other
This is something we have talked about in the past -- and I think it
would be a good addition. The caveat is that we would then have
config-level, per-pool, and per-image (via rbd image-meta) settings,
plus command-line/environment variable configuration overrides.
I think there needs to be clear
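For what it's worth, the per-image level mentioned above already works today
through image-meta keys prefixed with conf_; the pool and image names below
are placeholders:

    # override a librbd option for just this image
    rbd image-meta set rbd/myimage conf_rbd_cache false
    # show the overrides currently set on the image
    rbd image-meta list rbd/myimage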
Just adding some anecdotal input. It likely won't be ultimately helpful
beyond being a +1.
Seemingly, we also have the same issue since enabling exclusive-lock on
images. We experienced these messages at a large scale when making a CRUSH
map change a few weeks ago that resulted in many many VMs
> On 28 March 2017 at 16:52, Gregory Farnum wrote:
>
> CephFS files are deleted asynchronously by the mds once there are no more
> client references to the file (NOT when the file is unlinked -- that's not
> how posix works). If the number of objects isn't going down
Eric,
If you already have debug level 20 logs captured from one of these
events, I would love to be able to take a look at them to see what's
going on. Depending on the size, you could either attach the log to a
new RBD tracker ticket [1] or use the ceph-post-file helper to upload
a large file.
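A sketch of how a client-side log like that is usually captured and uploaded;
the paths and the description string are placeholders:

    # in the [client] section of ceph.conf on the hypervisor:
    #   debug rbd = 20
    #   log file = /var/log/ceph/client.$name.$pid.log
    # then hand the resulting log to the developers
    ceph-post-file -d "exclusive-lock issue" /var/log/ceph/client.admin.12345.log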
CephFS files are deleted asynchronously by the mds once there are no more
client references to the file (NOT when the file is unlinked -- that's not
how posix works). If the number of objects isn't going down after a while,
restarting your samba instance will probably do the trick.
-Greg
On Tue,
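One way to watch that asynchronous purging make progress is the stray
counters in the MDS perf dump; the daemon name below is a placeholder:

    # unlinked-but-not-yet-purged inodes; this count should fall over time
    ceph daemon mds.mds0 perf dump mds_cache | grep stray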
Nope, all OSDs are running 0.94.9
On 28/03/17 14:53, Brian Andrus wrote:
Well, you said you were running v0.94.9, but are there any OSDs
running pre-v0.94.4 as the error states?
On Tue, Mar 28, 2017 at 6:51 AM, Jaime Ibar wrote:
On
Hi, maybe I got something wrong or haven't fully understood it yet.
I have some pools and created some test rbd images which are mounted on
a samba server.
After the test I deleted all the files on the samba server.
But "ceph df detail" and "ceph -s" still show used space.
The OSDs
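Worth noting: if those RBD images carry a filesystem, deleting files inside
it does not by itself return space to the cluster; the freed blocks have to
be discarded down to RADOS. A sketch, assuming the whole stack (filesystem
and krbd/librbd) supports discard; the mount point is a placeholder:

    # release freed filesystem blocks back to ceph, run where the image is mounted
    sudo fstrim /mnt/rbd-share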
Well, you said you were running v0.94.9, but are there any OSDs running
pre-v0.94.4 as the error states?
On Tue, Mar 28, 2017 at 6:51 AM, Jaime Ibar wrote:
>
>
> On 28/03/17 14:41, Brian Andrus wrote:
>
> What does
> # ceph tell osd.* version
>
> ceph tell osd.21 version
>
On 28/03/17 14:41, Brian Andrus wrote:
What does
# ceph tell osd.* version
ceph tell osd.21 version
Error ENXIO: problem getting command descriptions from osd.21
reveal? Any pre-v0.94.4 hammer OSDs running as the error states?
Yes, as this is the first one I tried to upgrade.
The other
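Besides "ceph tell", the cluster also keeps the metadata each OSD reported
when it booted, which can reveal a version even when a daemon will not answer
tell; the OSD id below is a placeholder:

    # version reported live by every OSD that responds
    ceph tell osd.* version
    # boot-time metadata recorded by the monitors for one OSD
    ceph osd metadata 21 | grep ceph_version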
We also ran into this problem on upgrading Ubuntu from 14.04 to 16.04.
The service file is not being automatically created. The issue was
resolved with the following steps:
$ sudo systemctl enable ceph-mon@your-hostname
Created symlink from
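For completeness, the same enable-and-start dance applies to the other daemon
types on a node; the hostname and OSD id below are placeholders (mon units
are per-hostname, OSD units per-id):

    sudo systemctl enable ceph-mon@$(hostname -s)
    sudo systemctl start ceph-mon@$(hostname -s)
    sudo systemctl enable ceph-osd@12
    sudo systemctl start ceph-osd@12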
What does
# ceph tell osd.* version
reveal? Any pre-v0.94.4 hammer OSDs running as the error states?
On Tue, Mar 28, 2017 at 1:21 AM, Jaime Ibar wrote:
> Hi,
>
> I did change the ownership to user ceph. In fact, OSD processes are running
>
> ps aux | grep ceph
> ceph
> On 27 March 2017 at 21:49, Richard Hesse wrote:
>
> Has anyone run their Ceph OSD cluster network on IPv6 using SLAAC? I know
> that ceph supports IPv6, but I'm not sure how it would deal with the
> address rotation in SLAAC, permanent vs outgoing address, etc.
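For reference, the v6-related bits of ceph.conf look something like the
sketch below; the addresses are placeholders. The monitors' addresses are
pinned in the monmap, which is exactly what makes SLAAC's rotating privacy
addresses awkward, so mons would normally be pinned to the stable
EUI-64-derived address:

    [global]
    ms bind ipv6 = true
    mon host = [2001:db8::11], [2001:db8::12], [2001:db8::13]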
Hi,
I did change the ownership to user ceph. In fact, OSD processes are running
ps aux | grep ceph
ceph      2199  0.0  2.7 1729044 918792 ?   Ssl  Mar27   0:21
/usr/bin/ceph-osd --cluster=ceph -i 42 -f --setuser ceph --setgroup ceph
ceph      2200  0.0  2.7 1721212 911084 ?   Ssl
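The ownership change that pairs with --setuser/--setgroup is normally applied
recursively to the data and log directories while the daemons are stopped; a
sketch:

    # with the daemons stopped, hand the directories to the ceph user
    sudo chown -R ceph:ceph /var/lib/ceph
    sudo chown -R ceph:ceph /var/log/ceph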
On Mon, Mar 27, 2017 at 11:17 PM, Peter Maloney
<peter.malo...@brockmann-consult.de> wrote:
> I can't guarantee it's the same as my issue, but from that it sounds the
> same.
>
> Jewel 10.2.4, 10.2.5 tested
> hypervisors are proxmox qemu-kvm, using librbd
> 3 ceph nodes with mon+osd on each
>
>
On Tue, Mar 28, 2017 at 4:22 PM, Marcus Furlong wrote:
> On 22 March 2017 at 19:36, Brad Hubbard wrote:
>> On Wed, Mar 22, 2017 at 5:24 PM, Marcus Furlong wrote:
>
>>> [435339.965817] [ cut here ]
>>>
On 22 March 2017 at 19:36, Brad Hubbard wrote:
> On Wed, Mar 22, 2017 at 5:24 PM, Marcus Furlong wrote:
>> [435339.965817] [ cut here ]
>> [435339.965874] WARNING: at fs/xfs/xfs_aops.c:1244
>> xfs_vm_releasepage+0xcb/0x100 [xfs]()
I've copied Dan who may have some thoughts on this and has been
involved with this code.
On Tue, Mar 28, 2017 at 3:58 PM, Mika c wrote:
> Hi Brad,
> Thanks for your help. I found that was my problem: I forgot to include
> the word "keyring" in the file name.
>
> And sorry to
Hi Brad,
Thanks for your help. I found that was my problem: I forgot to include the
word "keyring" in the file name.
And sorry to bother you again. Is it possible to create a minimum-privilege
client for the API to run as?
Best wishes,
Mika
2017-03-24 19:32 GMT+08:00 Brad Hubbard:
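On the minimum-privilege question: caps can be limited per daemon type when
the client is created. A sketch of a read-only client confined to a single
pool; the client name, pool, and caps are illustrative and should be tuned to
what the API actually needs:

    ceph auth get-or-create client.api mon 'allow r' \
        osd 'allow r pool=rbd' -o /etc/ceph/ceph.client.api.keyring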