To close this thread, I would like to thank everyone who contributed
their knowledge to my problem, although the final decision was not to
attempt any sort of recovery, since the effort required would have been
tremendous with ambiguous results (to say the least).
Jason, Ilya, Brad, David, George,
On Mon, Aug 8, 2016 at 11:47 PM, Jason Dillaman wrote:
> On Mon, Aug 8, 2016 at 5:39 PM, Jason Dillaman wrote:
>> Unfortunately, for v2 RBD images, this image name to image id mapping
>> is stored in the LevelDB database within the OSDs and I don't know,
The image's associated metadata is removed from the directory once the
image is removed. Also, the default librbd log level will not log an
image's internal id. Therefore, unfortunately, the only way to
proceed is as I previously described.
On Wed, Aug 10, 2016 at 2:48 AM, Brad Hubbard
On Wed, Aug 10, 2016 at 3:16 PM, Georgios Dimitrakakis
wrote:
>
> Hello!
>
> Brad,
>
> Is that possible with the default logging, or is verbose logging needed?
>
> I've managed to get the UUID of the deleted volume from OpenStack but don't
> really know how to get the
Hello!
Brad,
Is that possible with the default logging, or is verbose logging needed?
I've managed to get the UUID of the deleted volume from OpenStack but
don't really know how to get the offsets and OSD maps, since "rbd info"
doesn't provide any information for that volume.
Is it possible
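For what it's worth, the image name can usually be reconstructed from the OpenStack UUID alone, since Cinder's RBD driver names backing images "volume-<UUID>"; a minimal sketch (the UUID below is hypothetical, and the log path is distro-dependent):

```shell
# Hypothetical UUID recovered from OpenStack; substitute your own.
UUID="3f2a9c1e-0000-0000-0000-000000000000"

# Cinder's RBD driver names the backing image "volume-<UUID>",
# so the deleted image's name can be derived directly:
IMAGE="volume-${UUID}"
echo "$IMAGE"

# With the name/UUID in hand, grep the volume service log around the
# deletion time (path is an assumption, adjust to your deployment):
# grep "$UUID" /var/log/cinder/volume.log
```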
On Tue, Aug 9, 2016 at 7:39 AM, George Mihaiescu wrote:
> Look in the cinder db, the volumes table, to find the UUID of the deleted
> volume.
You could also look through the logs at the time of the delete and I
suspect you should
be able to see how the rbd image was
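A hedged sketch of that database lookup (database name, credentials, and the volume's display name are assumptions; the cinder `volumes` table keeps soft-deleted rows with `deleted = 1`):

```shell
# Query the cinder database for the deleted volume's UUID.
# Adjust credentials/DB name to your deployment.
mysql -u cinder -p cinder -e "
  SELECT id, deleted_at
  FROM volumes
  WHERE display_name = 'my-critical-volume'
    AND deleted = 1;"
```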
On Mon, Aug 8, 2016 at 9:39 PM, Georgios Dimitrakakis
wrote:
> Dear David (and all),
>
> the data are considered very critical, hence all this effort to
> recover them.
>
> Although the cluster hasn't been fully stopped, all user actions have. I
> mean services are
On Mon, Aug 8, 2016 at 5:39 PM, Jason Dillaman wrote:
> Unfortunately, for v2 RBD images, this image name to image id mapping
> is stored in the LevelDB database within the OSDs and I don't know,
> offhand, how to attempt to recover deleted values from there.
Actually, to
All RBD images use a backing RADOS object to facilitate mapping
between the external image name and the internal image id. For v1
images this object would be named "<image name>.rbd" and for v2 images
this object would be named "rbd_id.<image name>". You would need to
find this deleted object first in order to start
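Purely as an illustration of that mapping (pool and image names below are hypothetical, and this only works while the objects still exist), the objects can be inspected with `rados` and the internal id seen via `rbd info`:

```shell
# List the name->id mapping objects for v2 images in the pool:
rados -p rbd ls | grep '^rbd_id\.'

# On an image that still exists, the internal id shows up in the
# block_name_prefix (rbd_data.<id> for v2 images):
rbd info rbd/myimage | grep block_name_prefix

# The id object itself can be dumped for inspection:
rados -p rbd get rbd_id.myimage /tmp/myimage.id
```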
Look in the cinder db, the volumes table, to find the UUID of the deleted
volume.
If you go through your OSDs and look for the directories for PG index 20, you
might find some fragments from the deleted volume, but it's a long shot...
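A sketch of that long shot on a FileStore OSD (the pool id 20 and the mount path are assumptions; on-disk object file names embed the rbd data prefix, though FileStore escapes some characters):

```shell
# Search the PG directories of pool 20 on each FileStore OSD for
# leftover object files belonging to RBD images:
find /var/lib/ceph/osd/ceph-*/current/20.*_head -type f \
     -name '*rbd*data*' 2>/dev/null
```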
> On Aug 8, 2016, at 4:39 PM, Georgios Dimitrakakis
Dear David (and all),
the data are considered very critical, hence all this effort to
recover them.
Although the cluster hasn't been fully stopped, all user actions have.
I mean services are running, but users are not able to read/write/delete.
The deleted image was the exact same size
Hi,
On 08.08.2016 10:50, Georgios Dimitrakakis wrote:
Hi,
On 08.08.2016 09:58, Georgios Dimitrakakis wrote:
Dear all,
I would like your help with an emergency issue but first let me
describe our environment.
Our environment consists of 2 OSD nodes with 10x 2TB HDDs each and
3 MON nodes
That will come down to the pool the RBD was in; the crush rule for that pool
will dictate which OSDs store its objects. In a standard config that RBD will
likely have objects on every OSD in your cluster.
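For completeness, the cluster can be asked directly which OSDs a given object maps to (pool and object name here are hypothetical):

```shell
# Runs the object name through CRUSH and prints its PG and acting OSD set:
ceph osd map rbd rbd_data.1234abcd.0000000000000000
```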
On 8 Aug 2016 9:51 a.m., "Georgios Dimitrakakis"
wrote:
Dear all,
I would like your help with an emergency issue but first let me
describe our environment.
Our environment consists of 2 OSD nodes with 10x 2TB HDDs each and 3 MON
nodes (2 of them are the OSD nodes as well), all with ceph version 0.80.9
(b5a67f0e1d15385bc0d60a6da6e7fc810bde6047)