> # sketch for rebuilding a sparse image from its RADOS objects; the
> # pool name "rbd" and the rbd_data object-name prefix are placeholders
> BLOCK_SIZE=512
> for x in $(rados -p rbd ls | grep rbd_data. | sort); do
>   rados -p rbd get ${x} tmp_object
>   SUFFIX=0x$(echo ${x} | cut -d. -f3)
>   # object number * 4 MiB (0x400000) = byte offset within the image
>   OFFSET=$(($SUFFIX * 0x400000 / ${BLOCK_SIZE}))
>   echo ${x} @ ${OFFSET}
>   dd conv=notrunc if=tmp_object of=rbd_export seek=${OFFSET} bs=${BLOCK_SIZE}
> done
>
> On Thu, Sep 22, 2016 at 5:27 AM, Fran Barrera <
> the
> object number and it represents the byte offset within the image (4MB
> * object number = byte offset assuming default 4MB object size and no
> fancy striping enabled).
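> For example, an object whose hex suffix is 0x5 (object number 5, a
> made-up value) starts at 5 * 4194304 = 20971520 bytes into the image,
> i.e. seek=40960 blocks with the 512-byte dd block size used above.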
>
> You should be able to script something up to rebuild a sparse image
> with whatever data is still available.
Hello,
I have a Ceph Jewel cluster with 4 OSDs and only one monitor, integrated
with Openstack Mitaka.
Two OSDs were down; one of them was recovered with a service restart. The
cluster began to recover and was OK. Finally, the disk of the other OSD was
corrupted and the solution was a format and
> Please refer to http://docs.ceph.com/docs/hammer/rbd/qemu-rbd/ for
> examples.
>
> Cheers,
> Kees
>
> On 13-07-16 09:18, Fran Barrera wrote:
> > Can you explain how you do this procedure? I have the same problem
> > with the large images and snapshots.
> >
>
Hello,
Can you explain how you do this procedure? I have the same problem with the
large images and snapshots.
This is what I do:
# qemu-img convert -f qcow2 -O raw image.qcow2 image.img
# openstack image create --disk-format raw --file image.img image
But the image.img is too large.
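Presumably, following the qemu-rbd examples referenced elsewhere in this
thread, the oversized intermediate file could be skipped by converting
straight into an RBD image (the pool name "images" and the target image
name are placeholders, not values confirmed in the thread):

# qemu-img convert -f qcow2 -O raw image.qcow2 rbd:images/image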
Thanks,
Fran.
2016-07-13 8:29
Hello,
You only need to create a pool and authentication in Ceph for Cinder.
Your configuration should be like this (this is an example configuration
with Ceph Jewel and Openstack Mitaka):
[DEFAULT]
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool =
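The remaining rbd_* options typically follow the Ceph rbd-openstack
guide; a sketch using that guide's default names (the pool, user, and
UUID are placeholders, not values confirmed in this thread):

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_user = cinder
rbd_secret_uuid = <uuid registered with libvirt>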
OK, I understand, so I'll create a new mon to allow me to stop mon.a.
Thanks,
Fran.
2016-07-07 17:46 GMT+02:00 Joao Eduardo Luis <j...@suse.de>:
> On 07/07/2016 04:39 PM, Fran Barrera wrote:
>
>> Yes, this is the problem.
>>
>
> Well, you lose quorum once you s
Yes, this is the problem.
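With two monitors, quorum requires a strict majority, i.e. both of
them, so stopping either one halts the cluster; three monitors is the
usual minimum for tolerating a failure. Quorum membership can be
checked with (assuming access to the admin keyring):

# ceph quorum_status --format json-pretty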
2016-07-07 17:34 GMT+02:00 Joao Eduardo Luis <j...@suse.de>:
> On 07/07/2016 04:31 PM, Fran Barrera wrote:
>
>> Hello,
>>
>> Yes, I've added two monitors but the error persists. In the error I see
>> only the IP of the f
Hello,
Have you configured these two parameters in cinder.conf?
rbd_user
rbd_secret_uuid
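For reference, rbd_secret_uuid is the UUID under which the
client.cinder key was registered with libvirt; a sketch of that step as
described in the Ceph rbd-openstack guide (the file and key names are
the guide's examples):

# virsh secret-define --file secret.xml
# virsh secret-set-value --secret <uuid> --base64 $(cat client.cinder.key)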
Regards.
2016-07-07 15:39 GMT+02:00 Gaurav Goyal :
> Hello Mr. Kees,
>
> Thanks for your response!
>
> My setup is
>
> Openstack Node 1 -> controller + network + compute1 (Liberty
On 07/07/2016 04:17 PM, Fran Barrera wrote:
>
>> Hi all,
>>
>> I have an AIO cluster setup with only one monitor and now I've created
>> another monitor on another server following this doc
>> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/ but
Hi all,
I have an AIO cluster setup with only one monitor, and now I've created
another monitor on another server following this doc
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/ but my
problem is that if I stop the AIO monitor, the cluster stops working. It
seems like Ceph is not
> On 22.06.2016 at 11:33, Fran Barrera wrote:
> > Hi all,
> >
>
Hi all,
I have a couple of questions about the deployment of Ceph.
This is what I plan:
Private Net - 10.0.0.0/24
Public Net - 192.168.1.0/24
Ceph server:
- eth1: 192.168.1.67
- eth2: 10.0.0.67
Openstack server:
- eth1: 192.168.1.65
- eth2: 10.0.0.65
ceph.conf
- mon_host: 10.0.0.67
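For this layout, the network-related part of ceph.conf would presumably
look like the sketch below (assuming the 10.0.0.0/24 net is meant to
carry Ceph client traffic; option names are the standard ones from the
Ceph network configuration reference):

[global]
mon_host = 10.0.0.67
public network = 10.0.0.0/24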
Hello,
The problem was in the Ceph documentation. "default_store = rbd" must be in
the "glance_store" section and not in the default section for Openstack
Mitaka and Ceph Jewel.
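The working glance-api.conf fragment then presumably looks like this
(the pool and user names are the rbd-openstack guide's defaults, not
values confirmed in this thread):

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf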
Thanks,
Fran.
2016-06-15 11:54 GMT+02:00 Fran Barrera <franbarre...@gmail.com>:
> H
>>
>> we're using kvm (Ubuntu 14.04, libvirt 1.2.12 )
>>
>> -Jon
>>
>> :
>> :Regards, I
>> :
>> :2016-06-14 17:38 GMT+02:00 Jonathan D. Proulx <j...@csail.mit.edu>:
>> :
>> :> On Tue, Jun 14, 2016 at 02:15:45PM +0200, Fra
Hi all,
I have a problem integrating Glance with Ceph.
Openstack Mitaka
Ceph Jewel
I've followed the Ceph doc (
http://docs.ceph.com/docs/jewel/rbd/rbd-openstack/) but when I try to list
or create images, I get the error "Unable to establish connection to
http://IP:9292/v2/images", and in the
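"Unable to establish connection" is a client-side connection failure,
so a quick first check is whether anything is listening on that
endpoint at all (IP is the same placeholder as in the error message):

# curl -v http://IP:9292/v2/images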
> Purge all ceph packages from this node and remove user/group 'ceph',
>> then retry.
>>
>> On 06/13/2016 02:46 PM, Fran Barrera wrote:
>> > [ceph-admin][WARNIN] usermod: user ceph is currently used by process
>> 1303
>>
>> _
Hi all,
I have a problem installing Ceph Jewel with ceph-deploy (1.5.33) on Ubuntu
14.04.4 (an OpenStack instance).
This is my setup:
ceph-admin
ceph-mon
ceph-osd-1
ceph-osd-2
I've followed these steps from the ceph-admin node:
I have the user "ceph" created on all nodes and access via SSH key.
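The fix quoted further up amounts to something like the following on
the affected node (a sketch for Ubuntu 14.04; run as root after
stopping any leftover ceph processes):

# apt-get purge 'ceph*'
# deluser ceph
# delgroup ceph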