On Tue, 21 Nov 2017 10:52:43 +0200,
Rudi Ahlers wrote:
> [snip, snap]
>
> Maybe I'm confusing the terminology. I have created a DB and WAL
> device for my Bluestore, but I presume that's not a journal
> (anymore?).
They never were, please read further in the
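For reference, BlueStore OSDs have no FileStore-style journal at all; the DB and WAL devices are given at OSD creation time. A minimal sketch with ceph-volume (device paths are hypothetical, adjust to your hardware):

```shell
# Create a BlueStore OSD with separate DB and WAL devices.
# /dev/sdb, /dev/nvme0n1p1 and /dev/nvme0n1p2 are placeholders.
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1 \
    --block.wal /dev/nvme0n1p2
```

If --block.wal is omitted, the WAL simply lives on the DB device.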
On Wed, 15 Nov 2017 19:46:48 +,
Shawn Edwards wrote:
> On Wed, Nov 15, 2017, 11:07 David Turner
> wrote:
>
> > I'm not going to lie. This makes me dislike Bluestore quite a
> > bit. Using multiple OSDs to an SSD journal allowed for you to
If yes, how to handle
> them in the CRUSH map (I have different categories of OSD hosts for
> different use cases, split by appropriate CRUSH rules).
>
> Thanks
>
> Martin
>
> -Original Message-
> From: Loris Cuoghi [mailto:loris.cuo...@artific
On Mon, 3 Jul 2017 11:30:04 +,
Martin Emrich wrote:
> Hi!
>
> Thanks for the hint, but I get this error:
>
> [ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No
> such file or directory: 'ceph.conf'; has `ceph-deploy new` been run
> in this
Hello,
On 17/01/2017 at 13:38, Kingsley Tart wrote:
How did you find the fuse client performed?
I'm more interested in the fuse client because I'd like to use CephFS
for shared volumes, and my understanding of the kernel client is that it
uses the volume as a block device.
I think you're
>
> On Mon, Jan 16, 2017 at 9:07 AM, Patrick McGarry
<pmcga...@redhat.com> wrote:
>> Hey cephers,
>>
>> Please bear with us as we migrate ceph.com as there may be some
>> outages. They should be quick and over soon. Thanks!
>>
>>
>> --
>>
Hello,
On 16/01/2017 at 11:50, Stéphane Klein wrote:
Hi,
I have two OSD and mon nodes.
I'm going to add a third OSD and mon to this cluster, but before that I
want to fix this error:
>
> [SNIP SNAP]
You've just created your cluster.
With the standard CRUSH rules you need one OSD on three
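With the default CRUSH rule (replica size 3, one copy per host), a two-host cluster can never place all three copies, so PGs stay degraded. A sketch of the usual stopgap, assuming the default pool name "rbd" (lowering the replica count is not recommended for production):

```shell
# With only two hosts, reduce the replica count so placement can succeed.
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1
# Verify that placement groups go active+clean:
ceph -s
```

The cleaner fix is of course adding a third OSD host.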
Hello,
On 30/08/2016 at 14:08, Steffen Weißgerber wrote:
Hello,
after correcting the configuration of different QEMU VMs with RBD disks
(we removed the cache=writethrough option to get the default
writeback mode), we see strange behaviour after restarting the VMs.
For most of them the
they at least are started at boot time.
Great! But only if the disks keep their device names, right?
Best regards,
Karsten
2016-04-27 13:57 GMT+02:00 Loris Cuoghi <l...@stella-telecom.fr>:
On 27/04/2016 13:51, Karsten Heymann wrote:
Hi Loris,
thank you for your feedback. As I plan
really would expect ceph to work
out of the box with the version in jessie, so I'd rather try to find
the cause of the problem and help fixing it.
That's my plan too, updating was just a troubleshooting method. ;)
best regards
2016-04-27 13:36 GMT+02:00 Loris Cuoghi <l...@stella-telecom
Hi Karsten,
I've had the same experience updating our test cluster (Debian 8) from
Infernalis to Jewel.
I've updated udev/systemd to the version in testing (so, from 215 to 229),
and it worked much better at reboot.
So... Are the udev rules written for the udev version in RedHat (219) or
Hi Jason,
On 22/03/2016 14:12, Jason Dillaman wrote:
We actually recommend that OpenStack be configured to use writeback cache [1].
If the guest OS is properly issuing flush requests, the cache will still
provide crash-consistency. By default, the cache will automatically start up
in
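The behaviour described above maps to librbd's client-side cache options; a sketch of the relevant ceph.conf fragment (values shown are my understanding of the defaults in that era, verify against your release's documentation):

```ini
[client]
rbd cache = true
# Stay in writethrough mode until the guest issues its first flush,
# proving that it sends flushes at all; only then switch to writeback.
rbd cache writethrough until flush = true
```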
Hi Wido,
On 22/03/2016 13:52, Wido den Hollander wrote:
Hi,
I've been looking on the internet regarding two settings which might influence
performance with librbd.
When attaching a disk with Qemu you can set a few things:
- cache
- aio
The default for libvirt (in both CloudStack and
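For illustration, the two knobs correspond to the driver element of a libvirt disk definition. A hypothetical RBD-backed disk (hostname, image name and device target invented; cephx auth elements omitted for brevity):

```xml
<!-- cache= and io= are the two settings discussed above. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback' io='native'/>
  <source protocol='rbd' name='rbd/vm-disk-1'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```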
would like to prove us wrong? :)
On 15/03/2016 22:33, David Casier wrote:
> Hi,
> Maybe (not tested) :
> [osd ]allow * object_prefix ?
>
>
>
> 2016-03-15 22:18 GMT+01:00 Loris Cuoghi <l...@stella-telecom.fr>:
>> Hi David,
>>
>> One pool per virtual
; 2016-02-12 3:34 GMT+01:00 Loris Cuoghi <l...@stella-telecom.fr>:
>> Hi!
>>
>> We are on version 9.2.0, 5 mons and 80 OSDs distributed on 10 hosts.
>>
>> How could we twist cephx capabilities so as to forbid our KVM+QEMU+libvirt
>> hosts any RBD creati
On 07/03/2016 17:58, Jason Dillaman wrote:
> Documentation of these new RBD features is definitely lacking and
I've opened a tracker ticket to improve it [1].
>
> [1] http://tracker.ceph.com/issues/15000
>
Hey, thank you Jason! :)
> That's disheartening to hear that your RBD images were
On 05/03/2016 09:35, Lindsay Mathieson wrote:
> On 5/03/2016 3:31 AM, Christoph Adomeit wrote:
>> I just updated our ceph-cluster to infernalis and now I want to
>> enable the new image features.
>
>
> Semi related - is there a description of these new features some where?
>
Been there, done
Hi!
We are on version 9.2.0, 5 mons and 80 OSDs distributed on 10 hosts.
How could we twist cephx capabilities so as to forbid our KVM+QEMU+libvirt
hosts any RBD creation capability?
We currently have an rbd-user key like so :
caps: [mon] allow r
caps: [osd] allow x
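For context, a commonly used restrictive RBD client key of that era looked roughly like the sketch below (the pool name "vms" is hypothetical). Note that a client with rwx on a pool can still create images in it, which is exactly the difficulty this thread is circling around:

```shell
# Grant read-only mon access, and OSD access limited to one pool,
# plus class-read on the rbd_children prefix (needed for cloning).
ceph auth caps client.rbd-user \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms'
```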
Pool ;)
On 10/02/2016 06:43, Shinobu Kinjo wrote:
What is poll?
Rgds,
Shinobu
- Original Message -
From: "Swapnil Jain"
To: ceph-users@lists.ceph.com
Sent: Wednesday, February 10, 2016 2:20:08 PM
Subject: [ceph-users] Max Replica Size
Hi,
What is the maximum
On 28/01/2016 11:06, Sándor Szombat wrote:
Hello all!
I checked CephFS, but unfortunately it is still beta. So I have started
looking at RBD. Is it possible to create an image in a pool, map it as a
block device (for example /dev/rbd0), format it like an HDD, and mount
it on 2 hosts? I tried
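The mapping part is straightforward; the catch is the filesystem on top. A sketch (image and mount point names are hypothetical):

```shell
# Create and map an RBD image, then put a regular filesystem on it.
rbd create --size 10240 rbd/shared-disk
rbd map rbd/shared-disk          # typically appears as /dev/rbd0
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt
# Mounting ext4/XFS read-write from two hosts at once WILL corrupt it;
# shared access needs a cluster filesystem (OCFS2, GFS2) or CephFS.
```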
On 25/01/2016 15:28, Mihai Gheorghe wrote:
As far as I know you will not lose data, but it will be inaccessible
until you bring the journal back online.
http://www.sebastien-han.fr/blog/2014/11/27/ceph-recover-osds-after-ssd-journal-failure/
After this, we should be able to restart the
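The procedure in the linked post boils down to roughly these steps (the OSD id 0 is hypothetical, and --flush-journal only works while the old journal is still readable):

```shell
# Stop the OSD, flush what remains of the old journal if possible,
# point the journal symlink at the replacement device, then
# initialize the new journal and restart.
ceph-osd -i 0 --flush-journal
ceph-osd -i 0 --mkjournal
systemctl start ceph-osd@0   # or your init system's equivalent
```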
On 25/01/2016 11:04, Sam Huracan wrote:
> Hi Cephers,
>
> When a Ceph write is made, does it write to all File Stores of the Primary
> OSD and Secondary OSDs before sending an ACK to the client, or does it write
> to the journal of each OSD and send the ACK without writing to the File Store?
>
> I think it would write to
On 22/12/2015 20:03, koukou73gr wrote:
Even the cheapest stuff nowadays has some more or less decent wear
leveling algorithm built into their controller so this won't be a
problem. Wear leveling algorithms cycle the blocks internally so wear
evens out on the whole disk.
But it would wear
On 22/12/2015 09:42, yuyang wrote:
Hello, everyone,
[snip snap]
Hi
> If the SSD failed or down, can the OSD work?
> Is the osd down or only can be read?
If you don't have a journal anymore, the OSD has already quit, as it
can't continue writing, nor can it assure data consistency, since
On 17/12/2015 13:57, Loris Cuoghi wrote:
On 17/12/2015 13:52, Burkhard Linke wrote:
Hi,
On 12/17/2015 01:41 PM, Dan Nica wrote:
And the osd tree:
$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.81180 root default
-2 21.81180 host rimu
0 7.27060
On 17/12/2015 13:52, Burkhard Linke wrote:
Hi,
On 12/17/2015 01:41 PM, Dan Nica wrote:
And the osd tree:
$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.81180 root default
-2 21.81180 host rimu
0 7.27060 osd.0 up 1.0 1.0
On 17/12/2015 16:47, Misa wrote:
Hello everyone,
does it make sense to create an SSD-only pool from OSDs without journals?
From my point of view, the SSDs are so fast that an OSD journal on the SSD
will not make much of a difference.
Cheers
Misa
On 25/11/2015 14:37, Emmanuel Lacour wrote:
On 24/11/2015 21:48, Gregory Farnum wrote:
Yeah, this is the old "two copies in one rack, a third copy elsewhere"
replication scheme that lots of stuff likes but CRUSH doesn't really
support. Assuming new enough clients and servers (some of the
ts cluster.
Thanks All :)
On 02/11/2015 16:14, Gregory Farnum wrote:
Regardless of what the crush tool does, I wouldn't muck around with the
IDs of the OSDs. The rest of Ceph will probably not handle it well if
the crush IDs don't match the OSD numbers.
-Greg
On Monday, November 2, 2
Hi All,
We're currently on version 0.94.5 with three monitors and 75 OSDs.
I've peeked at the decompiled CRUSH map, and I see that all ids are
commented with '# Here be dragons!', or more literally: '# do not
change unnecessarily'.
Now, what would happen if an incautious user would happen
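For anyone experimenting, crushtool can validate an edited map offline before it is injected into the cluster; a sketch of the usual cycle:

```shell
# Extract and decompile the current CRUSH map.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt (leave the ids alone) ...
crushtool -c crushmap.txt -o crushmap.new
# Dry-run placements against the modified map before injecting it.
crushtool -i crushmap.new --test --show-statistics
ceph osd setcrushmap -i crushmap.new
```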
On 02/11/2015 12:47, Wido den Hollander wrote:
On 02-11-15 12:30, Loris Cuoghi wrote:
Hi All,
We're currently on version 0.94.5 with three monitors and 75 OSDs.
I've peeked at the decompiled CRUSH map, and I see that all ids are
commented with '# Here be dragons!', or more literally