Re: [ceph-users] CephFS removal.

2015-02-12 Thread Gregory Farnum
What version of Ceph are you running? The procedure has varied a bit between releases.

But I think you want to just turn off the MDS daemon and run the fail
command — deactivate is actually the command for removing a logical
MDS from the cluster, and you can't do that for a lone MDS because
there's nobody to hand its data off to. I'll make a ticket to clarify
this. Once you've done that you should be able to delete the file system.
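
In practice the sequence is roughly the following (a minimal sketch, assuming a single MDS holding rank 0 and a file system named 'data'; exact service handling and command syntax may differ slightly between releases):

    # stop the ceph-mds daemon on its host (however your deployment manages services)
    sudo service ceph stop mds

    # mark rank 0 as failed instead of trying to deactivate it
    ceph mds fail 0

    # with no MDS active, the file system can then be removed
    ceph fs rm data --yes-i-really-mean-it
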
-Greg



Re: [ceph-users] CephFS removal.

2015-02-12 Thread warren.jeffs
I am running 0.87. In the end I just wiped the cluster and started again - it was quicker.

Warren



[ceph-users] CephFS removal.

2015-02-12 Thread warren.jeffs
Hi All,

I'm having a few problems removing CephFS file systems.

I want to remove my current pools (they were used for test data), wiping all current data, and start a fresh file system on my current cluster.

I have looked over the documentation but I can't find anything on this. I have an object store pool, which I don't want to remove - but I'd like to remove the CephFS file system pools and recreate them.


My CephFS file system is called 'data'.

Running ceph fs delete data returns: Error EINVAL: all MDS daemons must be 
inactive before removing filesystem

To make an MDS inactive I believe the command is: ceph mds deactivate 0

Which returns: telling mds.0 135.248.53.134:6809/16692 to deactivate

Checking the status of the MDS using ceph mds stat returns: e105: 1/1/0 up {0=node2=up:stopping}

This has been sitting at this status for the whole weekend with no change. I 
don't have any clients connected currently.

When I try to just remove the pools manually, it's not allowed, as there is a CephFS file system on them.

I'm happy that all of the failsafes to stop someone removing a pool are working correctly.

If this is currently not doable, is there a way to quickly wipe a CephFS file system? Using rm from a kernel client is really slow.
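
What I would ideally do is drop the file system and recreate its pools rather than deleting files one by one, roughly as sketched below. The pool names cephfs_data and cephfs_metadata and the PG count of 64 are only placeholders for whatever the cluster actually uses, and this assumes a release new enough to have the ceph fs rm/new commands:

    # remove the file system once no MDS is active
    ceph fs rm data --yes-i-really-mean-it

    # drop and recreate the pools that backed it
    ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
    ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 64

    # create a fresh file system on the new pools (metadata pool first)
    ceph fs new data cephfs_metadata cephfs_data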

Many thanks

Warren Jeffs


Re: [ceph-users] CephFS removal.

2015-02-12 Thread Gregory Farnum
Oh, hah, your initial email had a very delayed message
delivery...probably got stuck in the moderation queue. :)

On Thu, Feb 12, 2015 at 8:26 AM,  warren.je...@stfc.ac.uk wrote:
 I am running 0.87. In the end I just wiped the cluster and started again - it was quicker.

 Warren


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com