On Fri, Aug 31, 2012 at 11:58 PM, Andrew Thompson andre...@aktzero.com wrote:
Looking at old archives, I found this thread which shows that to mount a
pool as cephfs, it needs to be added to mds:
http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/5685
I started a `rados cppool` ...
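If I'm reading that older thread right, the copy-and-attach sequence is roughly the following (the pool names and pool id are made-up examples, and the `ceph mds add_data_pool` syntax should be double-checked against your ceph version):

    rados mkpool data2               # destination pool has to exist before cppool
    rados cppool data data2          # copy every object from `data` into `data2`
    ceph osd dump | grep data2       # note the new pool's id in the output
    ceph mds add_data_pool 5         # tell the MDS it may place file data in pool 5 (example id)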
On Tue, Sep 4, 2012 at 9:19 AM, Andrew Thompson andre...@aktzero.com wrote:
Yes, it was my `data` pool I was trying to grow. After renaming and removing
the original data pool, I can `ls` my folders/files, but not access them.
Yup, you're seeing ceph-mds being able to access the metadata pool, ...
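The split Tommi is describing is that the directory tree and file names live in the `metadata` pool, while file contents are striped into objects in the data pool, so `ls` keeps working even when reads fail. One way to check where a given file's data actually is (pool names and the inode value are examples; the object-naming convention below is how cephfs names data objects as far as I know):

    ls -i somefile                            # print the file's inode number, e.g. 1099511627776
    printf '%x\n' 1099511627776               # convert it to hex -> 10000000000
    rados -p data stat 10000000000.00000000   # first object of that file in the old pool...
    rados -p data2 stat 10000000000.00000000  # ...or in the copied pool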
On 8/31/2012 11:05 PM, Sage Weil wrote:
Sadly you can't yet adjust pg_num for an active pool. You can create a
new pool with:
    ceph osd pool create <name> <pg_num>
I would aim for 20 * num_osd, or thereabouts; see
http://ceph.com/docs/master/ops/manage/grow/placement-groups/
Then you ...
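For reference, a sketch of that guideline plus the pool swap discussed above (pool names and the arithmetic are examples, not from the thread):

    NUM_OSD=$(ceph osd ls | wc -l)         # number of OSDs in the cluster
    PG_NUM=$((20 * NUM_OSD))               # ~20 PGs per OSD, per Sage's guideline
    ceph osd pool create data-new $PG_NUM
    rados cppool data data-new             # copy the old pool's objects across
    ceph osd pool rename data data-old     # keep the original until everything checks out
    ceph osd pool rename data-new data

Note that (as this thread shows) the MDS tracks its data pool by id rather than by name, so for an existing cephfs the rename alone isn't enough; the new pool still has to be made known on the mds side.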
On Fri, 31 Aug 2012, Xiaopong Tran wrote:
Hi,
Ceph storage usage on each disk in the cluster is very unbalanced. On each
node, the data seems to go to one or two disks, while the other disks
are almost empty.
I can't find anything wrong with the crush map; it's just the
default for now. Attached is the crush map.
Here is the current situation: ...
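When chasing this kind of imbalance, the first things worth looking at are probably whether the crush weights match the disk sizes and what the raw usage per OSD looks like. A hedged sketch (the data-directory path is the default and may need adjusting for your setup):

    ceph osd tree                       # crush weights and up/in state for every osd
    df -h /var/lib/ceph/osd/ceph-*      # raw utilisation per osd data directory (default path)
    ceph pg dump | tail -n 20           # the osdstat lines near the end show kb used/avail per osd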
On Fri, 31 Aug 2012, Andrew Thompson wrote:
Have you been reweight-ing osds? I went round and round with my cluster a
few days ago reloading different crush maps, only to find that it was
re-injecting a ...
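For anyone trying to reproduce that crush-map reload cycle, a minimal sketch (the osd id and file names are examples):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt    # decompile the map to editable text
    # ... edit crush.txt ...
    crushtool -c crush.txt -o crush.new    # recompile
    ceph osd setcrushmap -i crush.new      # inject the new map into the cluster
    ceph osd reweight 3 1.0                # undo an earlier reweight override on osd.3 (example id)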