We have implemented PURE in the OOO deployment I have; I am going to push to
production in the next week or so with dual PURE. Their docs had some
configuration that was not correct, and I am not sure if you followed that or
not. I have in fact created a Pure Ansible playbook to deploy the Pure
FlashArray. I c
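In case it helps, the playbook boils down to something like the following. This
is only a minimal sketch using the purefa_* modules that ship with Ansible; the
array address, API token, host name and WWN below are placeholders, not our
real values:

---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a test volume on the array (name/size are illustrative)
      purefa_volume:
        name: cinder-test-vol
        size: 100G
        fa_url: 10.0.0.50
        api_token: "{{ pure_api_token }}"

    - name: Register a compute host with its FC WWNs (values are placeholders)
      purefa_host:
        host: compute-01
        wwns:
          - "21:00:00:24:ff:54:8f:01"
        fa_url: 10.0.0.50
        api_token: "{{ pure_api_token }}"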
Nothing special, Purity 4.1.5. These are not brand new arrays; we have had them
for a while (FA-450). 8Gb FC switches (Force10). All hosts are zoned; whether
the initiators are created manually on the Pure or configured by Cinder does
not seem to matter. The weird thing is everything seems to be working fine excep
What’s the config on PURE?
> On May 29, 2018, at 12:03 PM, Steven D. Searles wrote:
>
> Hello everyone, I am seeing a strange issue with cinder block live migration
> and libvirt and looking for some assistance.
>
> Environment: OpenStack Pike
> OS: Ubuntu 16.04LTS
> Cinder FC Driver: Pure
I think it'd be worth filing a bug against the "openstack" client... most of the
clients try to be compatible with any server version.
Probably best to include the details from the run with the --debug option for
both the new and old versions of the client.
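For example, something along these lines (substitute whatever command is
actually failing; the debug trace goes to stderr, and the old-client path is
only illustrative):

openstack --debug image list 2> new-client-debug.log
/path/to/old-client/bin/openstack --debug image list 2> old-client-debug.log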
Chris
On 05/29/2018 10:36 AM, Ken D'
Hello everyone, I am seeing a strange issue with cinder block live migration
and libvirt and looking for some assistance.
Environment: OpenStack Pike
OS: Ubuntu 16.04LTS
Cinder FC Driver: Pure Storage
Cinder FC Driver: Dell Compellent
libvirtd (libvirt) 3.6.0
Cinder-volume 2:11.1.0-0ubuntu1~cloud
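For reference, a Pure FC backend in cinder.conf is normally configured roughly
like this (a sketch only; the backend names, address and token are placeholders
rather than our exact config, and the Compellent stanza is omitted):

[DEFAULT]
enabled_backends = pure-fc,compellent-fc

[pure-fc]
volume_backend_name = pure-fc
volume_driver = cinder.volume.drivers.pure.PureFCDriver
san_ip = 10.0.0.50
pure_api_token = <array-api-token>
use_multipath_for_image_xfer = True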
On 2018-05-25 13:52, r...@italy1.com wrote:
> Use the --debug option to see what calls are going on and which one fails.
Thanks! That did the trick. It turned out the image that was causing the
failure was one that has been stuck in the queued state since July and has no
associated name. The lack of a name is ca