Hi,
Someone asked me if he could get access to the BTRFS defragmenter we
used for our Ceph OSDs. I took a few minutes to put together a small
GitHub repository with:
- the defragmenter I've been asked about (tested on 7200 rpm drives and
designed to put low IO load on them),
- the scrub
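For illustration only (this is not the tool from the repository), a
low-IO defragmentation pass over an OSD data directory could look
roughly like this, throttled with ionice and a pause between files;
paths are placeholders:

  find /var/lib/ceph/osd/ceph-0/current -type f -print0 |
  while IFS= read -r -d '' f; do
      ionice -c3 btrfs filesystem defragment -t 32M "$f"
      sleep 0.5    # crude throttle to keep 7200 rpm drives responsive
  done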
On 19/02/2016 17:17, Don Laursen wrote:
>
> Thanks. To summarize
>
> Your data, images+volumes = 27.15% space used
>
> Raw used = 81.71% used
>
>
>
> This is a big difference that I can't account for. Can anyone? So is
> your cluster actually full?
>
I believe this is the pool size being
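(A quick sanity check, assuming a replicated pool with size 3: 27.15 % x 3
= 81.45 % of raw space, close to the reported 81.71 %.)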
Hi,
On 18/03/2016 20:58, Mark Nelson wrote:
> FWIW, from purely a performance perspective Ceph usually looks pretty
> fantastic on a fresh BTRFS filesystem. In fact it will probably
> continue to look great until you do small random writes to large
> objects (like say to blocks in an RBD
On 12/04/2016 01:40, Lindsay Mathieson wrote:
> On 12/04/2016 9:09 AM, Lionel Bouton wrote:
>> * If the journal is not on a separate partition (SSD), it should
>> definitely be re-created NoCoW to avoid unnecessary fragmentation. From
>> memory: stop OSD, touch j
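A sketch of that kind of sequence, with placeholder OSD id and paths (not
necessarily the exact commands in the truncated message above):

  systemctl stop ceph-osd@0
  ceph-osd -i 0 --flush-journal               # flush what's left in the journal
  rm /var/lib/ceph/osd/ceph-0/journal
  touch /var/lib/ceph/osd/ceph-0/journal      # recreate it empty...
  chattr +C /var/lib/ceph/osd/ceph-0/journal  # ...and mark it NoCoW while empty
  ceph-osd -i 0 --mkjournal
  systemctl start ceph-osd@0

chattr +C only takes effect on files that are still empty, hence the
rm/touch before setting it.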
Hi,
On 11/04/2016 23:57, Mark Nelson wrote:
> [...]
> To add to this on the performance side, we stopped doing regular
> performance testing on ext4 (and btrfs) sometime back around when ICE
> was released to focus specifically on filestore behavior on xfs.
> There were some cases at the time
On 19/03/2016 18:38, Heath Albritton wrote:
> If you google "ceph bluestore" you'll be able to find a couple slide
> decks on the topic. One of them by Sage is easy to follow without the
> benefit of the presentation. There's also the "Redhat Ceph Storage
> Roadmap 2016" deck.
>
> In any
Hi,
On 20/03/2016 15:23, Francois Lafont wrote:
> Hello,
>
> On 20/03/2016 04:47, Christian Balzer wrote:
>
>> That's not protection, that's an "uh-oh, something is wrong, you better
>> check it out" notification, after which you get to spend a lot of time
>> figuring out which is the good
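A sketch of how that is typically investigated on recent releases; the pg
id below is a placeholder:

  ceph health detail                          # lists the inconsistent PGs
  rados list-inconsistent-obj 2.1f --format=json-pretty
  ceph pg repair 2.1f    # only after identifying which copy is the good one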
Hi,
On 12/07/2016 02:51, Brad Hubbard wrote:
> [...]
This is probably a fragmentation problem: typical rbd access patterns
cause heavy BTRFS fragmentation.
>>> To the extent that operations take over 120 seconds to complete? Really?
>> Yes, really. I had these too. By default
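One rough way to check whether fragmentation is the culprit (a sketch;
paths are placeholders and filefrag extent counts are only approximate on
BTRFS):

  find /var/lib/ceph/osd/ceph-0/current -type f -size +4M -exec filefrag {} + \
      | sort -t: -k2 -n | tail -20    # the 20 most fragmented large files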
Hi,
On 19/07/2016 13:06, Wido den Hollander wrote:
>> On 19 July 2016 at 12:37, M Ranga Swami Reddy wrote:
>>
>>
>> Thanks for the correction... so even if one OSD reaches 95% full, the
>> total Ceph cluster IO (R/W) will be blocked... Ideally read IO should
>> work...
>
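To see how close individual OSDs are to the thresholds (sketch; the
option names and values below are the Jewel-era defaults):

  ceph osd df          # per-OSD utilisation and variance
  ceph health detail   # near-full / full warnings with the OSD ids
  # the thresholds are mon_osd_nearfull_ratio (0.85) and
  # mon_osd_full_ratio (0.95)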
On 11/07/2016 04:48, 한승진 wrote:
> Hi cephers.
>
> I need your help for some issues.
>
> The Ceph cluster version is Jewel (10.2.1), and the filesystem is btrfs.
>
> I run 1 Mon and 48 OSDs in 4 nodes (each node has 12 OSDs).
>
> I've experienced one of the OSDs killing itself.
>
> It always issued
On 11/07/2016 11:56, Brad Hubbard wrote:
> On Mon, Jul 11, 2016 at 7:18 PM, Lionel Bouton
> <lionel-subscript...@bouton.name> wrote:
>> On 11/07/2016 04:48, 한승진 wrote:
>>> Hi cephers.
>>>
>>> I need your help for some issues.
>
Hi,
On 29/06/2016 12:00, Mario Giammarco wrote:
> Now the problem is that ceph has put out two disks because scrub has
> failed (I think it is not a disk fault but due to mark-complete)
There is something odd going on. I've only seen deep-scrub failing (i.e.
detecting one inconsistency and
Hi,
On 29/06/2016 18:33, Stefan Priebe - Profihost AG wrote:
>> On 28.06.2016 09:43, Lionel Bouton
>> <lionel-subscript...@bouton.name> wrote:
>>
>> Hi,
>>
>> On 28/06/2016 08:34, Stefan Priebe - Profihost AG wrote:
>>> [...]
>>
On 19/11/2016 00:52, Brian :: wrote:
> This is like your mother telling you not to cross the road when you were 4
> years of age but not telling you it was because you could be flattened
> by a car :)
>
> Can you expand on your answer? If you are in a DC with AB power,
> redundant UPS, dual feed
Hi,
On 10/01/2017 19:32, Brian Andrus wrote:
> [...]
>
>
> I think the main point I'm trying to address is - as long as the
> backing OSD isn't egregiously handling large amounts of writes and it
> has a good journal in front of it (that properly handles O_DSYNC [not
> D_SYNC as Sebastien's
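The usual way to qualify a journal SSD is a single synchronous 4k write
stream, along these lines (sketch; the device path is a placeholder and
writing to a raw device is destructive; fio's --sync=1 opens with O_SYNC,
which is at least as strict as O_DSYNC):

  fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based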
On 07/01/2017 14:11, kevin parrikar wrote:
> Thanks for your valuable input.
> We were using these SSDs in our NAS box (Synology) and they were giving
> 13k IOPS for our fileserver in RAID1. We had a few spare disks which we
> added to our ceph nodes hoping that they would give good performance same
On 13/04/2017 17:47, mj wrote:
> Hi,
>
> On 04/13/2017 04:53 PM, Lionel Bouton wrote:
>> We use rbd snapshots on Firefly (and Hammer now) and I didn't see any
>> measurable impact on performance... until we tried to remove them.
>
> What exactly do you mean w
Hi,
On 13/04/2017 10:51, Peter Maloney wrote:
> [...]
> Also more things to consider...
>
> Ceph snapshots really slow things down.
We use rbd snapshots on Firefly (and Hammer now) and I didn't see any
measurable impact on performance... until we tried to remove them. We
usually have at
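When snapshot removal does hurt client IO, the usual knob on
filestore-era clusters is the snap trim throttle, e.g. (sketch; the value
is illustrative):

  ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'
  # persist the same setting in the [osd] section of ceph.conf if it helps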
On 18/04/2017 11:24, Jogi Hofmüller wrote:
> Hi,
>
> thanks for all you comments so far.
>
> On Thursday 13.04.2017 at 16:53 +0200, Lionel Bouton wrote:
>> Hi,
>>
>> On 13/04/2017 10:51, Peter Maloney wrote:
>>> Ceph snapshots really
On 04/07/2017 19:00, Jack wrote:
> You may just upgrade to Luminous, then replace filestore by bluestore
You don't just "replace" filestore by bluestore on a production cluster:
you transition over several weeks/months from the first to the second.
The two must be rock stable and have
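On Luminous the per-OSD replacement cycle looks roughly like this
(sketch; id, device and the exact ceph-volume invocation depend on the
deployment and release):

  ceph osd out 42
  # wait until all PGs are active+clean again
  systemctl stop ceph-osd@42
  ceph osd destroy 42 --yes-i-really-mean-it
  ceph-volume lvm zap /dev/sdX --destroy
  ceph-volume lvm create --bluestore --data /dev/sdX --osd-id 42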
On 30/06/2017 18:48, Sage Weil wrote:
> On Fri, 30 Jun 2017, Lenz Grimmer wrote:
>> Hi Sage,
>>
>> On 06/30/2017 05:21 AM, Sage Weil wrote:
>>
>>> The easiest thing is to
>>>
>>> 1/ Stop testing filestore+btrfs for luminous onward. We've recommended
>>> against btrfs for a long time and are
On 13/11/2017 15:47, Oscar Segarra wrote:
> Thanks Mark, Peter,
>
> For clarification, the configuration with RAID5 is having many servers
> (2 or more) with RAID5 and CEPH on top of it. Ceph will replicate data
> between servers. Of course, each server will have just one OSD daemon
>
Hi,
On 22/02/2018 23:32, Mike Lovell wrote:
> hrm. intel has, until a year ago, been very good with ssds. the
> description of your experience definitely doesn't inspire confidence.
> intel also dropping the entire s3xxx and p3xxx series last year before
> having a viable replacement has been
On 31/05/2018 14:41, Simon Ironside wrote:
> On 24/05/18 19:21, Lionel Bouton wrote:
>
>> Unfortunately I just learned that Supermicro found an incompatibility
>> between this motherboard and SM863a SSDs (I don't have more information
>> yet) and they proposed S4
On 11/12/2018 15:51, Konstantin Shalygin wrote:
>
>> Currently I plan a migration of a large VM (MS Exchange, 300 Mailboxes
>> and 900GB DB) from qcow2 on ext4 (RAID1) to an all-flash Ceph luminous
>> cluster (which already holds lots of images).
>> The server has access to both local and
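One hedged way to do the actual copy, once the VM is shut down (paths,
pool and image name are placeholders):

  qemu-img convert -p -f qcow2 -O raw \
      /var/lib/libvirt/images/exchange.qcow2 rbd:rbd/exchange-disk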
On 13/05/2019 16:20, Kevin Flöh wrote:
> Dear ceph experts,
>
> [...] We have 4 nodes with 24 osds each and use 3+1 erasure coding. [...]
> Here is what happened: One osd daemon could not be started and
> therefore we decided to mark the osd as lost and set it up from
> scratch. Ceph started
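For reference, a 3+1 profile of that kind is typically created along
these lines (sketch; names and pg counts are placeholders, and with m=1 a
single failed OSD already removes all redundancy margin for the affected
PGs):

  ceph osd erasure-code-profile set ec-3-1 k=3 m=1 crush-failure-domain=host
  ceph osd pool create ecpool 1024 1024 erasure ec-3-1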