On Thu, Nov 6, 2014 at 12:00 PM, Luca Mazzaferro
luca.mazzafe...@rzg.mpg.de wrote:
Dear Users,
Hi Luca,
On the admin node, the ceph health command or ceph -w hangs forever.
I'm not a ceph expert either, but this is usually an indication that
the monitors are not running.
How many MONs?
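For reference, a quick way to confirm whether the monitors are actually
running is to check for the ceph-mon process on each monitor host and query
its admin socket (a sketch; using the short hostname as the mon id assumes a
default deployment):

  # run on each monitor host:
  $ ps aux | grep [c]eph-mon                         # is the daemon up at all?
  $ sudo ceph daemon mon.$(hostname -s) mon_status   # query via the admin socket

If the daemons are up but mon_status shows them stuck in probing/electing,
the problem is quorum rather than the admin node.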
On Fri, Dec 5, 2014 at 2:24 AM, Anthony Alba ascanio.al...@gmail.com wrote:
Hi Cephers,
Have any of you decided to put Giant into production instead of Firefly?
This is very interesting to me too: we are going to deploy a large
ceph cluster on Ubuntu 14.04 LTS, and so far what I have found
On Fri, Dec 5, 2014 at 4:25 PM, David Moreau Simard dmsim...@iweb.com wrote:
What are the kernel versions involved?
We have Ubuntu precise clients talking to an Ubuntu trusty cluster without
issues - with tunables optimal.
0.88 (Giant) and 0.89 have been working well for us as far as the client
On Fri, Dec 5, 2014 at 4:25 PM, Nick Fisk n...@fisk.me.uk wrote:
This is probably due to the Kernel RBD client not being recent enough. Have
you tried upgrading your kernel to a newer version? 3.16 should contain all
the relevant features required by Giant.
I would rather tune the tunables, as
http://kernel.ubuntu.com/~kernel-ppa/mainline/
and I believe 3.16 should be available in the 14.04.2 release, which should
be released early next year.
Nick
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Antonio Messina
Sent: 05 December 2014 15:38
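For anyone following along: if upgrading the kernel is not an option, the
CRUSH tunables can be dropped to a profile an older krbd understands. A
minimal sketch (which profile works depends on how old the client kernel is,
and changing tunables will trigger data movement):

  $ ceph osd crush show-tunables     # see what the cluster currently uses
  $ ceph osd crush tunables bobtail  # or "legacy"; expect rebalancing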
On Fri, Dec 5, 2014 at 4:59 PM, Sage Weil s...@newdream.net wrote:
On Fri, 5 Dec 2014, Antonio Messina wrote:
On Fri, Dec 5, 2014 at 2:24 AM, Anthony Alba ascanio.al...@gmail.com wrote:
Hi Cephers,
Have any of you decided to put Giant into production instead of Firefly?
This is very
Hi all, just an update
After setting chooseleaf_vary_r to 0 _and_ removing a pool with
erasure coding, I was able to run rbd map.
Thank you all for the help
.a.
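In case it helps others hitting the same rbd map failure: the tunable can be
flipped by round-tripping the CRUSH map (a sketch; on a populated cluster
this triggers data movement):

  $ ceph osd getcrushmap -o crushmap.bin
  $ crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt: change "tunable chooseleaf_vary_r 1" to 0
  $ crushtool -c crushmap.txt -o crushmap.new
  $ ceph osd setcrushmap -i crushmap.new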
On Fri, Dec 5, 2014 at 5:07 PM, Antonio Messina
antonio.mess...@s3it.uzh.ch wrote:
On Fri, Dec 5, 2014 at 4:59 PM, Sage Weil s...@newdream.net wrote:
On Fri, Dec 5, 2014 at 5:24 PM, Sage Weil s...@newdream.net wrote:
The v2 rule means you have a crush rule for erasure coding. Do you have
an EC pool in your cluster?
Yes indeed. I didn't know an EC pool was incompatible with the current
kernel; I had only tested it with rados bench and VMs, I guess.
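For anyone wanting to check up front: a quick way to see whether a cluster
has erasure-coded pools or EC CRUSH rules before trying krbd (plain status
commands):

  $ ceph osd dump | grep pool    # EC pools show "erasure" in the listing
  $ ceph osd crush rule dump     # EC rules are visible here as well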
On Sun, Dec 7, 2014 at 1:51 PM, René Gallati c...@gallati.net wrote:
Hello Antonio,
I use aptly to manage my repositories and mix and match (and snapshot / pin)
specific versions and non-standard packages, but as far as I know, the
kernel from
I didn't know aptly, thank you for mentioning it.
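For reference, a minimal aptly workflow for pinning a repo to a snapshot (the
mirror name and URL below are examples, and publishing requires a GPG key):

  $ aptly mirror create ceph-giant http://ceph.com/debian-giant/ trusty main
  $ aptly mirror update ceph-giant
  $ aptly snapshot create ceph-giant-2014-12 from mirror ceph-giant
  $ aptly publish snapshot ceph-giant-2014-12

The snapshot is immutable, so clients keep seeing the same package set until
you deliberately publish a newer one.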
On Wed, Mar 25, 2015 at 6:06 PM, Robert LeBlanc rob...@leblancnet.us wrote:
I don't know much about ceph-deploy, but I know that ceph-disk has
problems automatically adding an SSD OSD when there are journals of
other disks already on it. I've had to partition the disk ahead of
time and pass
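What that pre-partitioning can look like, as a sketch (device names, sizes,
and the host are examples; the typecode is the Ceph journal partition GUID):

  # carve a 20G journal partition on the SSD, tagged as a ceph journal
  $ sudo sgdisk --new=1:0:+20G \
      --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdf
  # then hand the data disk and journal partition to ceph-deploy explicitly
  $ ceph-deploy osd prepare node1:/dev/sdb:/dev/sdf1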
Hi all,
I'm trying to install ceph on a 7-nodes preproduction cluster. Each
node has 24x 4TB SAS disks (2x dell md1400 enclosures) and 6x 800GB
SSDs (for cache tiering, not journals). I'm using Ubuntu 14.04 and
ceph-deploy to install the cluster, I've been trying both Firefly and
Giant and
On Thu, Apr 23, 2015 at 11:18 AM, Jake Grimmett j...@mrc-lmb.cam.ac.uk wrote:
Dear All,
I have multiple disk types (15k & 7k) on each ceph node, which I assign to
different pools, but have a problem: whenever I reboot a node, the OSDs
move in the CRUSH map.
I just found out that you can
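The usual way to stop OSDs from being moved back to the default location on
reboot is to disable the automatic CRUSH update on start, or point it at a
location hook. A minimal ceph.conf sketch (the hook path is hypothetical):

  [osd]
  osd crush update on start = false
  # or, to compute a per-OSD location at start time:
  # osd crush location hook = /usr/local/bin/ceph-crush-location-custom

With the update disabled, whatever bucket you moved the OSD into by hand
stays put across reboots.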
On Wed, Mar 25, 2015 at 6:37 PM, Robert LeBlanc rob...@leblancnet.us wrote:
As far as the foreign journal, I would run dd over the journal
partition and try it again. It sounds like something didn't get
cleaned up from a previous run.
I wrote zeros on the journal device and re-created the journal
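For completeness, the zeroing and journal re-creation looked roughly like
this (the device and OSD id are examples):

  $ sudo dd if=/dev/zero of=/dev/sdf1 bs=1M count=100 oflag=direct
  $ sudo ceph-osd -i 12 --mkjournal   # rebuild the journal for osd.12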
On Mon, Aug 3, 2015 at 5:10 PM, Quentin Hartman
qhart...@direwolfdigital.com wrote:
The problem with this kind of monitoring is that there are so many possible
metrics to watch and so many possible ways to watch them. For myself, I'm
working on implementing a couple of things:
- Watching error
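As a starting point, even a dumb cron check against ceph health catches a
lot; a sketch (the mail address is a placeholder):

  #!/bin/sh
  # alert whenever the cluster is anything other than HEALTH_OK
  status=$(ceph health)
  case "$status" in
    HEALTH_OK*) : ;;
    *) echo "$status" | mail -s "ceph health alert" ops@example.com ;;
  esac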
I used rclone[0] to sync from filesystem to SWIFT. Although it's a
plain SWIFT cluster, I'm sure it works with RGW/S3 as well.
[0] http://rclone.org/
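A minimal invocation for that kind of sync, assuming a remote named "ceph"
has already been defined (of type swift or s3; names and paths are examples):

  $ rclone config                              # one-time: define the "ceph" remote
  $ rclone sync /srv/data ceph:backup-bucket   # mirror the local dir into the bucket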
2016-12-28 22:12 GMT+01:00 Robin H. Johnson:
> On Wed, Dec 28, 2016 at 09:31:57PM +0100, Marc Roos wrote:
>> Is it possible to