Hey :)
that means the volume is still in use ("# open 1" in lvdisplay). Make sure it's not,
by checking for processes that hold it open: qemu-nbd, tgtd, etc...
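A sketch of how to find what keeps an LVM volume open (the volume path below is just one example taken from later in this thread; adjust to your own names):

```shell
# Sketch: find out what keeps a Cinder LVM volume open.
LV=/dev/cinder-volumes/volume-316f77c6-bf13-4ea4-9b98-028198f3922f

sudo lvs -o lv_name,lv_attr cinder-volumes    # an 'o' in the attr bits means "open"
sudo dmsetup info -c                          # per-device open count
sudo fuser -v "$LV"                           # PIDs holding the device node
ps aux | grep -E '[q]emu-nbd|[t]gtd|[i]scsi'  # typical holders on a Cinder node
```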
On 13 Nov 2013, at 4:50, Guilherme Russi wrote:
Hello Razique, I'm here opening this thread again. I've done some cinder
deletes, but when I try to create other storages it says there's no
space to create a new volume.
Here is part of my vgdisplay output:
Alloc PE / Size 52224 / 204,00 GiB
Free PE / Size 19350 / 75,59 GiB
And here is my lvdisplay:
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-06ccd141-91c4-45e4-b21f-595f4a36779b
VG Name cinder-volumes
LV UUID wdqxVd-GgUQ-21O4-OWlR-sRT3-HvUA-Q8j9kL
LV Write Access read/write
LV snapshot status source of /dev/cinder-volumes/_snapshot-04e8414e-2c0e-4fc2-8bff-43dd80ecca09 [active]
LV Status available
# open 0
LV Size 10,00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1
--- Logical volume ---
LV Name /dev/cinder-volumes/_snapshot-04e8414e-2c0e-4fc2-8bff-43dd80ecca09
VG Name cinder-volumes
LV UUID EZz1lC-a8H2-1PlN-pJTN-XAIm-wW0q-qtUQOc
LV Write Access read/write
LV snapshot status active destination for /dev/cinder-volumes/volume-06ccd141-91c4-45e4-b21f-595f4a36779b
LV Status available
# open 0
LV Size 10,00 GiB
Current LE 2560
COW-table size 10,00 GiB
COW-table LE 2560
Allocated to snapshot 0,00%
Snapshot chunk size 4,00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:3
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-ca36920e-938e-4ad1-b9c4-74c1e28abd31
VG Name cinder-volumes
LV UUID b40kQV-P8N4-R6jt-k97Z-I2a1-9TXm-5GXqfz
LV Write Access read/write
LV Status available
# open 1
LV Size 60,00 GiB
Current LE 15360
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:4
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-70be4f36-10bd-4877-b841-80333ccfe985
VG Name cinder-volumes
LV UUID 2YDrMs-BrYo-aQcZ-8AlX-A4La-HET1-9UQ0gV
LV Write Access read/write
LV Status available
# open 1
LV Size 1,00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:5
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-00c532bd-91fb-4a38-b340-4389fb7f0ed5
VG Name cinder-volumes
LV UUID MfVOuB-5x5A-jne3-H4Ul-4NP8-eI7b-UYSYE7
LV Write Access read/write
LV Status available
# open 0
LV Size 1,00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:6
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-ae133dbc-6141-48cf-beeb-9d6576e57a45
VG Name cinder-volumes
LV UUID 53w8j3-WT4V-8m52-r6LK-ZYd3-mMHA-FtuyXV
LV Write Access read/write
LV Status available
# open 0
LV Size 1,00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:7
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-954d2f1b-837b-4ba5-abfd-b3610597be5e
VG Name cinder-volumes
LV UUID belquE-WxQ2-gt6Y-WlPE-Hmq3-B9Am-zcYD3P
LV Write Access read/write
LV Status available
# open 0
LV Size 60,00 GiB
Current LE 15360
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:8
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-05d037d1-4e61-4419-929a-fe340e00e1af
VG Name cinder-volumes
LV UUID Pt61e7-l3Nu-1IdX-T2sb-0GQD-PhS6-XtIIUj
LV Write Access read/write
LV Status available
# open 1
LV Size 1,00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:9
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-316f77c6-bf13-4ea4-9b98-028198f3922f
VG Name cinder-volumes
LV UUID e46mBx-CRps-HYKk-aJsc-XFRd-B1Rv-UVk8gT
LV Write Access read/write
LV Status available
# open 1
LV Size 60,00 GiB
Current LE 15360
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:10
Do you know how I can remove all of them? When I try, for example:
lvremove /dev/cinder-volumes/volume-316f77c6-bf13-4ea4-9b98-028198f3922f
I get:
Can't remove open logical volume "volume-316f77c6-bf13-4ea4-9b98-028198f3922f"
Thank you again.
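For reference, a sketch of the removal order that usually clears this error (names are taken from the lvdisplay output above; the tgt step assumes the default tgtadm iSCSI helper). Where possible, prefer cinder delete / cinder snapshot-delete so the Cinder database stays in sync with LVM:

```shell
# 1. Remove the snapshot before its origin volume.
sudo lvremove /dev/cinder-volumes/_snapshot-04e8414e-2c0e-4fc2-8bff-43dd80ecca09

# 2. For volumes showing "# open 1": detach them first
#    (nova volume-detach, or tear down the iSCSI export).
sudo tgt-admin -s    # list active targets and sessions

# 3. Deactivate, then remove.
sudo lvchange -an /dev/cinder-volumes/volume-316f77c6-bf13-4ea4-9b98-028198f3922f
sudo lvremove /dev/cinder-volumes/volume-316f77c6-bf13-4ea4-9b98-028198f3922f
```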
2013/11/8 Guilherme Russi <[email protected]>
Thanks very much again.
Best regards.
2013/11/8 Razique Mahroua <[email protected]>
Oh yeah, true!
I'm not sure "conductors" exist yet for Cinder, which means that, for now,
every node needs direct access to the database.
Glad to hear it's working :)
On 08 Nov 2013, at 08:53, Guilherme Russi <[email protected]> wrote:
Hello again Razique, I've found the problem: I needed to add the grants
in MySQL for my other IP. Now it's working really well :D
I've found this link too, if someone needs it:
http://docs.openstack.org/admin-guide-cloud/content//managing-volumes.html
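The grants in question look roughly like this (a sketch: the IP is a placeholder for the second node's address, and the password must match the sql_connection line in cinder.conf). Run it on the controller's MySQL:

```shell
# Placeholder IP: the cinder-volume node's address on the management network.
mysql -u root -p <<'SQL'
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'192.168.3.2'
    IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
SQL
```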
Thank you so much, and if you need me just let me know.
Best regards.
Guilherme.
2013/11/8 Guilherme Russi <[email protected]>
Hello Razique, I have a couple of doubts. Do you know if I need to do
something else that isn't in the link you sent me? I'm asking because I
followed the configuration but it's not working. Here is what I did:
I installed cinder-volume on the second computer, the one that has the HD,
and changed its cinder.conf. I also changed the master's cinder.conf
as follows:
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
sql_connection = mysql://cinder:password@localhost/cinder
api_paste_confg = /etc/cinder/api-paste.ini
#iscsi_helper=iscsiadm
#iscsi_helper = ietadm
iscsi_helper = tgtadm
volume_name_template = volume-%s
#volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
iscsi_ip_address = 192.168.3.1
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
# Rabbit authorization
rabbit_host = localhost
rabbit_port = 5672
rabbit_hosts = $rabbit_host:$rabbit_port
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = password
#rabbit_virtual_host = /nova
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
#rpc_backend = cinder.rpc.impl_kombu
enabled_backends=orion-1,orion-4
[orion-1]
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
[orion-4]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
The cinder.conf on the second computer is like this, but with the IPs
changed to the controller's IP (it runs cinder-api). When I run
service cinder-volume restart on the second computer, its status is
stop/waiting.
Any ideas?
Thanks :)
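A sketch of the first things to check when cinder-volume dies right after start (the log path assumes a default Ubuntu packaging install, and 192.168.3.1 stands in for the controller's IP):

```shell
# On the second node: the usual cause is that the service cannot reach
# MySQL or RabbitMQ on the controller. The traceback is in the log.
sudo tail -n 50 /var/log/cinder/cinder-volume.log

nc -zv 192.168.3.1 3306    # MySQL reachable?
nc -zv 192.168.3.1 5672    # RabbitMQ reachable?

# On the controller: has the volume service ever checked in?
cinder-manage host list
```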
2013/11/8 Razique Mahroua <[email protected]>
sure :)
On 08 Nov 2013, at 05:39, Guilherme Russi <[email protected]> wrote:
Oh great! I'll try here and send you the results.
Thanks very much :)
2013/11/8 Razique Mahroua <[email protected]>
If I’m not mistaken, you only need to install the "cinder-volume"
service, which will report its status to your main node :)
On 08 Nov 2013, at 05:34, Guilherme Russi <[email protected]> wrote:
Great! I was reading the link and I have one question: do I need to
install Cinder on the other computer too?
Thanks :)
2013/11/8 Razique Mahroua <[email protected]>
Ok, in that case: with Grizzly you can use the "multi-backend"
feature:
https://wiki.openstack.org/wiki/Cinder-multi-backend
and that should do it :)
On 08 Nov 2013, at 05:29, Guilherme Russi <[email protected]> wrote:
It is a hard disk. My scenario is one controller (where I have my
Cinder storage and my Quantum networking) and four compute nodes.
2013/11/8 Razique Mahroua <[email protected]>
Ok!
What is your actual Cinder backend? Is it a hard disk, a SAN, a
network volume, etc.?
On 08 Nov 2013, at 05:20, Guilherme Russi <[email protected]> wrote:
Hi Razique, thank you for answering. I want to expand my Cinder
storage; is that the block storage? I'll use the storage to give
VMs more hard disk space.
Regards.
Guilherme.
2013/11/8 Razique Mahroua <[email protected]>
Hi Guilherme !
Which storage do you precisely want to expand?
Regards,
Razique
On 08 Nov 2013, at 04:52, Guilherme Russi <[email protected]> wrote:
Hello guys, I have a Grizzly deployment running fine with 5 nodes,
and I want to add more storage to it. My question is: can I install
a new HD in another computer, one that is not the controller, and
link that HD to my Cinder so it can serve as storage too?
The computer where I will install the new HD is on the same network
as my cloud. I'm asking because I haven't seen a question like this here.
Does anybody know how to do that? Have a clue? Any help is welcome.
Thank you all.
Best regards.
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack