Hi Vlad,
On Wednesday, July 27, 2016, Vladislav Bolkhovitin wrote:
>
> Alex Gorbachev wrote on 07/27/2016 10:33 AM:
> > One other experiment: just running blkdiscard against the RBD block
> > device completely clears it, to the point where the rbd-diff method
> > reports 0 blocks utilized. So t
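The observation above (blkdiscard driving reported utilization to zero) can be checked with the usual rbd-diff method. A minimal sketch, assuming a hypothetical mapped device `/dev/rbd0` backed by image `rbd/myimage`:

```shell
# Discard every block of the mapped RBD device (destroys its data!),
# then sum the extents reported by "rbd diff" to measure utilization.
blkdiscard /dev/rbd0
rbd diff rbd/myimage | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
```

After the discard, the awk sum over the second column (extent length in bytes) should come out at or near 0 MB.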
On 2016-07-30 14:04, Marius Vaitiekunas wrote:
Hi,
We had a similar issue. If you use radosgw and have large buckets,
this PG could hold a bucket index.
Hello Marius,
thanks for your hint.
But it seems I forgot to mention that we are using Ceph only as
RBD for our virtual machines.
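For an RBD-only cluster, one way to see whether a problematic PG holds image data is to map each object in the pool to its PG. A slow but simple sketch, assuming a hypothetical pool name `rbd` and PG id `2.1f` (substitute your own):

```shell
# List every object in the pool and print those that map to the PG in
# question. "ceph osd map" prints the PG id in parentheses, e.g.
# "... -> pg 2.9a3c (2.1f) -> up ...". Slow on large pools.
PGID="2.1f"
rados -p rbd ls | while read obj; do
  ceph osd map rbd "$obj" | grep -qF "($PGID)" && echo "$obj"
done
```

The object name prefixes (e.g. `rbd_data.<id>.*`) then tell you which images are affected.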
Hi Richard,
It would be useful to know what you're currently using for storage, as that
would help in recommending a strategy. My guess is an all-CephFS setup
might be best for your use case. I haven't tested this myself, but I'd mount
CephFS on the OSD nodes with the FUSE client and export it over NFS.
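A minimal sketch of that setup, assuming a hypothetical mount point `/mnt/cephfs` (note that re-exporting a FUSE filesystem via the kernel NFS server requires an explicit `fsid=` in the export options):

```shell
# Mount CephFS via the FUSE client on the exporting node.
ceph-fuse /mnt/cephfs

# Export it over NFS; fsid= is required when exporting FUSE mounts.
echo '/mnt/cephfs *(rw,sync,fsid=1,no_subtree_check)' >> /etc/exports
exportfs -ra
```

Clients can then mount the export with an ordinary `mount -t nfs` against the exporting node.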
> Op 30 juli 2016 om 8:51 schreef Richard Thornton :
>
>
> Hi,
>
> Thanks for taking a look, any help you can give would be much appreciated.
>
> In the next few months or so I would like to implement Ceph for my
> small business because it sounds cool and I love tinkering.
>
> The requiremen
Hi there,
I upgraded my cluster to Jewel recently, built object maps for every image, and
recreated all snapshots to use the fast-diff feature for backups.
Unfortunately I am still getting the following error message on rbd du:
root@host:/backups/ceph# rbd du vm-208-disk-2
warning: fast-diff map is i
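That warning usually means the fast-diff/object map is flagged invalid on the image or one of its snapshots. A rebuild sketch, using the image name from the post (the snapshot loop and column parsing are assumptions about your `rbd snap ls` output):

```shell
# Rebuild the object map (and with it the fast-diff map) for the image
# itself and for each of its snapshots.
rbd object-map rebuild vm-208-disk-2
rbd snap ls vm-208-disk-2 | awk 'NR>1 { print $2 }' | while read snap; do
  rbd object-map rebuild vm-208-disk-2@"$snap"
done
```

Once no image or snapshot carries the invalid flag, `rbd du` should stop printing the warning.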