On 9/4/2012 11:59 AM, Tommi Virtanen wrote:
On Fri, Aug 31, 2012 at 11:58 PM, Andrew Thompson <andre...@aktzero.com> wrote:
Looking at old archives, I found this thread which shows that to mount a
pool as cephfs, it needs to be added to mds:

http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/5685

I started a `rados cppool data tempstore` a couple of hours ago. When it
finishes, will I need to remove the current pool from mds somehow (other than
just deleting the pool)?

Is `ceph mds add_data_pool <poolname>` still required? (It's not listed in
`ceph --help`.)
If the pool you are trying to grow pg_num for really is a CephFS data
pool, I fear a "rados cppool" is nowhere near enough to perform a
migration. My understanding is that each inode stored in CephFS (on the
ceph-mds'es) records which pool its file data resides in; shoveling the
objects into another pool with "rados cppool" doesn't change those
pointers, so removing the old pool will just break the filesystem.
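
To make that concrete, a rough check (the mount point and file path here are just examples, and this assumes the old `cephfs` helper is still installed) is to ask for the layout the MDS recorded for a file:

    # Show the striping layout for one file; the pool id printed here is
    # where clients will look for its objects, no matter where
    # "rados cppool" copied them.
    cephfs /mnt/ceph/some/file show_layout

Every inode keeps naming the old pool's id, so once that pool is deleted the data is unreachable even though copies still sit in the new pool.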

Before we go too far down this road: is your problem pool *really*
being used as a CephFS data pool? Given that it's not named "data"
and you're only now asking about "ceph mds add_data_pool", that seems
unlikely.
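
A quick way to check, assuming `ceph mds dump` on your release still prints a data_pools field:

    # The mdsmap lists the pool ids the MDS stores file data in;
    # pool 0 is the default "data" pool created by mkcephfs.
    ceph mds dump | grep data_pools

    # Map those pool ids back to pool names.
    ceph osd dump | grep '^pool'

If your pool's id shows up in data_pools, it really is a CephFS data pool; `ceph mds add_data_pool` is only needed to add a *new* pool to that list.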

Well, I guess it's time to wipe this cluster and start over.

Yes, it was my `data` pool I was trying to grow. After renaming and removing the original data pool, I can `ls` my folders/files, but not access them.

I attempted a tar backup beforehand, so unless it flaked out, I should be able to recover data.

I was concerned that the small number of PGs mkcephfs creates by default would be an issue, so I was trying to raise it a bit. I'm not going to have 100+ OSDs or petabytes of data. I just want a relatively safe place to store my files that I can easily extend as needed.
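
What I had hoped for was something along these lines (a sketch; the pool name and PG counts are only examples, and growing pg_num on an existing pool is only supported on newer releases):

    # Pick a sensible pg_num when the pool is created...
    ceph osd pool create mypool 256

    # ...or, where PG splitting is supported, grow an existing pool in place
    # (pgp_num then needs to be raised to match).
    ceph osd pool set data pg_num 256
    ceph osd pool set data pgp_num 256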

So far, I'm 0 and 5... I keep blowing up the filesystem, one way or another.

--
Andrew Thompson
http://aktzero.com/
