Hi all,
Just FYI, here is a sample script I'm using for testing os-brick which
attaches/detaches the cinder volume to the host using cinderclient and
os-brick:
https://gist.github.com/tsekiyama/ee56cc0a953368a179f9
Running "python attach.py <volume-uuid>" will attach the volume to the
host it is executed on and then detach it.
- delete() will delete the specified volume.
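For reference, here is a minimal sketch of what the command-line
interface of such a helper might look like. The actual implementation is
in the gist above; the option names and helper functions below are
assumptions for illustration, not the script's real API:

```python
# Hypothetical sketch of the helper's command-line interface. The real
# script (see the gist above) uses python-cinderclient and os-brick to
# do the actual attach/detach work against a running Cinder service.
import argparse


def build_parser():
    parser = argparse.ArgumentParser(
        description="Attach a Cinder volume to this host, then detach it")
    parser.add_argument("volume_id", help="UUID of the Cinder volume")
    parser.add_argument("--delete", action="store_true",
                        help="delete the volume instead of attaching it")
    return parser


def parse_args(argv=None):
    # The real work would happen after parsing: call cinderclient's
    # initialize_connection() and os-brick's connect_volume() /
    # disconnect_volume() with the returned connection properties.
    return build_parser().parse_args(argv)
```

For example, parse_args(["<volume-uuid>"]) would select the
attach/detach path, and adding --delete would select deletion.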
The image volume will be placed in the tenant specified by
cinder_store_tenant_name, or in the current user's tenant if
cinder_store_tenant_name is not set.
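In glance-api.conf terms, that lookup might be configured roughly like
this. This is only a sketch: apart from cinder_store_tenant_name itself,
the section and option names here are assumptions for illustration.

```ini
# Sketch of the relevant glance-api.conf settings (hypothetical values).
[glance_store]
stores = cinder
default_store = cinder
# Tenant that owns the image volumes; if unset, the current
# user's tenant is used instead.
cinder_store_tenant_name = glance-images
```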
Any comments are much appreciated, thanks.
Regards,
Tomoki Sekiyama
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Michael,
These weren't on the list of exceptions at today's nova meeting. Please
add them.
On 1/8/15, 11:48, Tomoki Sekiyama <tomoki.sekiy...@hds.com> wrote:
Hi,
I have submitted 2 nova-specs [1][2] related to Cinder volumes iSCSI
multipath/failover improvement.
These specs are both up for review:
[1] https://review.openstack.org/#/c/134299/
[2] https://review.openstack.org/#/c/137468/
cinder-specs:
[3] https://review.openstack.org/#/c/136500/
[4] https://review.openstack.org/#/c/131502/
Regards,
Tomoki Sekiyama
Is there any reason not to merge this?
Or, is there an alternative way to cap the bandwidth consumed by Glance?
I appreciate any information about this.
Thanks,
Tomoki Sekiyama
be
a separate facility running as a proxy in front of glance.”
Thanks,
Arnaud
On Aug 8, 2014, at 1:23 PM, Russell Bryant <rbry...@redhat.com> wrote:
On 08/08/2014 04:17 PM, Jay Pipes wrote:
On 08/08/2014 08:49 AM, Tomoki Sekiyama wrote:
Hi all,
I'm considering how I can apply blkio throttling to the image copy:
attach the image file to a loop device, set a write bandwidth limit on
that device in a blkio cgroup (e.g. cgset -r
blkio.throttle.write_bps_device="7:0 1000" test), and after copying the
volume, detach the loop device. (losetup --detach /dev/loop0)
Interesting. I tried this locally and confirmed it's feasible.
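Spelled out as a dry-run shell sketch of that procedure (the image path,
the 7:0 device number, and the cgroup name are placeholders; the script
only prints each command instead of executing it, since losetup and
cgset need root):

```shell
#!/bin/sh
# Dry-run sketch of the throttled volume-copy procedure discussed above.
# Paths, the 7:0 major:minor device number, and the "test" cgroup name
# are placeholders, not values from a real deployment.
run() { echo "+ $*"; }   # print instead of executing (a real run needs root)

run losetup /dev/loop0 /var/lib/glance/images/image.raw       # 1. attach image to a loop device
run cgcreate -g blkio:/test                                   # 2. create a blkio cgroup
run cgset -r blkio.throttle.write_bps_device="7:0 1000" test  # 3. cap writes to 1000 B/s
run cgexec -g blkio:/test dd if=/dev/zero of=/dev/loop0 bs=1M # 4. copy under the cap
run losetup --detach /dev/loop0                               # 5. detach when done
```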
Thanks,
Tomoki Sekiyama