On 17 February 2017 at 17:05, Marcus Furlong wrote:
> On 17 February 2017 at 16:47, Rikimaru Honjo wrote:
>> Hi all,
>>
>> I found and reported an unhelpful behavior of the "openstack server migrate" command
>> when I maintained my environment.[1]
>>
On 17 February 2017 at 16:47, Rikimaru Honjo wrote:
> Hi all,
>
> I found and reported an unhelpful behavior of the "openstack server migrate" command
> when I maintained my environment.[1]
> But, I'm wondering which solution is better.
> Do you have opinions about the following
Hi all,
Can anyone help me with the following error?
"An auth plugin is required to fetch a token"
It occurs when running neutron net-list on the controller node.
Note: we have deployed VIO 3 (VMware Integrated OpenStack).
I've checked the configuration of keystone authentication details in
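This error generally means the client could not find credentials to build a Keystone session with. A minimal sketch of the environment variables the CLI expects -- the endpoint, user, and password values below are placeholders for illustration, not values from this thread:

```shell
# Hypothetical credential setup; substitute your own VIO endpoint and
# credentials. Every value below is a placeholder.
export OS_AUTH_URL=https://vio.example.com:5000/v3   # Keystone endpoint (placeholder)
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=secret                            # placeholder
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

# With the variables set, the client can build an auth plugin:
neutron net-list
```

Typically these lines live in an RC file (often downloadable from Horizon) that you source before running CLI commands.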
Gluster for block storage is definitely not a good choice, especially for
VMs and OpenStack in general. Also, there are rumors all over the place
that Red Hat will start to "phase out" Gluster in favor of CephFS, the "last
frontier" of the so-called "Unicorn Storage" (Ceph does everything). But
when
Same experience here. Gluster 'failover' time was an issue for us as well
(rebooting one of the Gluster nodes caused unacceptable locking/timeouts for a
period of time). Ceph has worked well for us for both nova-ephemeral and
cinder volume as well as Glance. Just make sure you stay well ahead of
Hello All ,
For a long time we have been testing Ceph, and today we also wanted to test GlusterFS.
Interestingly, with a single client we cannot get the IOPS from Gluster that we
get from the Ceph cluster. (From Ceph we get a max of 35K IOPS for 100% random
write, while Gluster gave us 15-17K.)
But interesting thing
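A 100% random-write test like the one described could be expressed as an fio job file; the block size, queue depth, and file size below are illustrative assumptions, not the poster's actual settings:

```ini
; Hypothetical fio job approximating a 100% random-write IOPS test.
; All sizes and depths here are illustrative assumptions.
[randwrite-test]
rw=randwrite
bs=4k
direct=1
ioengine=libaio
iodepth=32
numjobs=4
size=1g
runtime=60
time_based
group_reporting
```

Run it with `fio randwrite.fio` against the mount or block device under test; single-client numbers like those above are sensitive to iodepth and numjobs in particular.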
Hello All ,
For a long time we have been testing Ceph, from Firefly to Kraken, and have
tried to optimise many things that I guess are very common, like testing tcmalloc
versions 2.1 and 2.4, jemalloc, setting debug levels to 0/0, op_tracker and such
others, and I believe with our hardware we have almost reached the end of
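The debug settings mentioned above ("debugs 0/0") are normally applied in ceph.conf; a sketch of the common pattern follows -- the exact subsystem list here is an assumption, to be tuned to whatever is noisy in your own logs:

```ini
# Illustrative ceph.conf fragment: silencing per-subsystem debug logging.
# The subsystems listed are examples; "0/0" sets both the log level and
# the in-memory gather level to zero.
[global]
debug ms = 0/0
debug osd = 0/0
debug filestore = 0/0
debug journal = 0/0
debug auth = 0/0
```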
Hey LDTers,
Snuck up on us, but our Feb meeting is later today. See you all in
#openstack-operators.
Thanks!
VW
Sent from my iPhone
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
The three new compute nodes that you added are empty, so most likely
the new instances are scheduled to those three (3 attempts) and
something goes wrong.
With admin rights, do:
openstack server show <uuid>
This should give you the info about the compute node where the
instance was scheduled. Check
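The triage above can be sketched as follows, assuming admin credentials are loaded; the log paths are typical packaged defaults and may differ in your deployment:

```shell
# Sketch of the triage steps above; <uuid> is the instance ID.
# Which compute host was chosen, and what state the server is in:
openstack server show <uuid> -c OS-EXT-SRV-ATTR:host -c status

# On the controller, look for scheduling failures for that instance
# (log path is a common default, not guaranteed):
grep -i "<uuid>" /var/log/nova/nova-scheduler.log

# On the chosen compute node, check what went wrong during spawn:
grep -i "<uuid>" /var/log/nova/nova-compute.log
```

A "No valid host was found" entry in the scheduler log versus an error in nova-compute.log distinguishes a scheduling problem from a spawn failure on the node itself.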