Am I reading this incorrectly?
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
kevin.wei...@imc-chicago.com
Thanks Gregory,
One point that was a bit unclear in documentation is whether or not this
equation for PGs applies to a single pool, or the entirety of pools.
Meaning, if I calculate 3000 PGs, should each pool have 3000 PGs or should
all the pools ADD UP to 3000 PGs? Thanks!
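For reference, a minimal sketch of the arithmetic behind a number like 3000, assuming the commonly cited rule of thumb of roughly 100 PGs per OSD divided by the replica count, with the result rounded up to a power of two (the per-pool vs. cluster-wide question above is exactly what this sketch does not settle):

```python
def total_pg_count(num_osds, pool_size=3, pgs_per_osd=100):
    """Rule-of-thumb PG count: (OSDs * pgs_per_osd) / pool_size,
    rounded up to the next power of two. Parameter names here are
    illustrative, not a Ceph API."""
    raw = num_osds * pgs_per_osd / pool_size
    power = 1
    while power < raw:
        power *= 2
    return power

# 90 OSDs with 3 replicas gives a raw value of 3000,
# which rounds up to 4096.
print(total_pg_count(90))
```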
Hi guys,
I have an OSD in my cluster that is near full at 90%, but we're using a little
less than half the available storage in the cluster. Shouldn't this be balanced
out?
All of the disks in my cluster are identical and therefore all have the same
weight (each drive is 2TB and the automatically generated weight is 1.82 for
each one).
Would the procedure here be to reduce the weight, let it rebal, and then put
the weight back to where it was?
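The 1.82 figure is consistent with the generated weight being the drive's capacity expressed in TiB (binary), while "2TB" is the decimal vendor size, a quick check of that assumption:

```python
# A 2 TB drive as marketed uses decimal units (10^12 bytes);
# expressed in TiB (2^40 bytes) that capacity is about 1.82,
# matching the auto-generated CRUSH weight mentioned above.
tb = 2 * 10**12       # 2 TB, decimal bytes
tib = tb / 2**40      # same capacity in TiB
print(round(tib, 2))  # 1.82
```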
this is in bytes.
From: Kurt Bauer kurt.ba
Hi Josh,
We did map it directly to the host, and it seems to work just fine. I
think this is a problem with how the container is accessing the rbd module.
VMs don't go
down when there is a problem with the cluster?
The kernel is 3.11.4-201.fc19.x86_64, and the image format is 1. I did,
however, try a map with an RBD that was format 2. I got the same error.
...@inktank.com
Date: Tuesday, August 27, 2013 9:42 AM
To: Kevin Weiler
kevin.wei...@imc-chicago.com
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-deploy pushy
: NOKEY
/usr/bin/env
gdisk
or
pushy = 0.5.3
python(abi) = 2.7
python-argparse
python-distribute
python-pushy = 0.5.3
rpmlib(CompressedFileNames) = 3.0.4-1
rpmlib(PayloadFilesHavePrefix) = 4.0-1
It seems to require both pushy AND python-pushy.
). The spec file looks fine in
the ceph-deploy git repo; perhaps you just need to rerun the package/repo
generation? Thanks!
Hi again Ceph devs,
I'm trying to deploy ceph using puppet and I'm hoping to add my osds
non-sequentially. I spoke with dmick on #ceph about this and we both agreed it
doesn't seem possible given the documentation. However, I have an example of a
ceph cluster that was deployed using
creating
the client.admin key so it doesn't need capabilities? Thanks again!
On 8/3/13