You don't need to run swift-dispersion-populate more than once - all it
does is put a bunch of objects into some percentage of your ring's
partitions. The number of partitions in a ring is fixed at creation [1] -
only the device to which each partition is assigned changes with a rebalance.
If you're running 100% coverage with dispersion report, then running it once
per day seems reasonable.
If you're running something smaller, like 1-10%, then doing it once per hour
might be reasonable.
The point is, make it automatic and integrate it into your normal ops metrics.
Track it over
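The coverage arithmetic behind that advice can be sketched as follows. This is illustrative only, not Swift code; the function name is hypothetical, and the parameters mirror the ring's part power and the dispersion_coverage setting described above.

```python
# Rough sketch of the dispersion-coverage arithmetic described above.
# Illustrative helper, not a Swift API.

def partitions_probed(part_power: int, dispersion_coverage: float) -> int:
    """Number of partitions a populate run drops objects into."""
    total_partitions = 2 ** part_power  # fixed at ring creation
    return int(total_partitions * dispersion_coverage / 100.0)

# A part-power-18 ring has 262144 partitions; 1% coverage probes ~2621
# of them, which is why a smaller coverage can be reported more often.
print(partitions_probed(18, 1.0))    # 2621
print(partitions_probed(18, 100.0))  # 262144
```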
Hi all,
I recently installed an all-in-one openstack environment using packstack.
I just realized that my instances' disks are limited to 2 TB even though
the volume is bigger.
Is there a reason for that?
Thank you very much
Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute o
We are running a 2 region Swift cluster with write affinity.
I've just managed to deceive myself with the dispersion report :-( . The
last run of the populate was in early December, and the corresponding report
happily shows 100%. All good - seemingly.
However probing the actual distribution of a num
It is likely because this has been tested with QEMU only. I think you might
want to bring this up with the Nova team.
Sent from my iPhone
> On Jan 12, 2017, at 11:28 AM, Eugen Block wrote:
>
> I'm not sure if this is the right spot, but I added some log statements into
> driver.py.
> First,
With 4 physical CPU cores, you'll have 8 with HT, so you likely don't need
anything managing overcommitting. But you also have memory contention in
your design. OpenStack provides resource limits to prevent the
contention from happening but does not get into managing VM priorities. The
assumption
This is a scheduler thing. KVM/Linux has its own scheduler built in for
any process that needs to share CPU cycles, aside from the filter scheduler
used by OpenStack. My understanding is that any optimizations assigned to
resource consumption will not be handled by OpenStack but with manual
tweaks.
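The placement arithmetic being described can be sketched numerically. This is an illustration, not Nova code; cpu_allocation_ratio has commonly defaulted to 16.0 in Nova, but check your own configuration.

```python
# Illustration of the filter-scheduler arithmetic discussed above
# (not Nova code; cpu_allocation_ratio value is an assumption).

def schedulable_vcpus(physical_cores: int, threads_per_core: int,
                      cpu_allocation_ratio: float) -> int:
    host_vcpus = physical_cores * threads_per_core  # what the host reports
    return int(host_vcpus * cpu_allocation_ratio)

# 4 physical cores with HT report 8 vCPUs; at a 16.0 ratio the scheduler
# would place up to 128 vCPUs on the host. How those vCPUs then share
# actual CPU time is left to the KVM/Linux scheduler, as noted above.
print(schedulable_vcpus(4, 2, 16.0))  # 128
```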
Hi,
I have provided the answer to that on
https://ask.openstack.org/en/question/101374 where you asked this question as
well.
That elaborates slightly on 'There are facilities to allow one VM or another to
have CPU priority' which James mentioned below.
For RAM such things as Memory Tuning
ht
I'm not sure if this is the right spot, but I added some log
statements into driver.py.
First, there's this if-block:
if (self._host.has_min_version(MIN_LIBVIRT_LIVESNAPSHOT_VERSION,
MIN_QEMU_LIVESNAPSHOT_VERSION,
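The approach of instrumenting that version gate with log statements can be sketched generically. Everything below is illustrative: the function, the version tuples, and the constants' values are stand-ins, not the actual driver.py code.

```python
# Generic sketch of adding debug logging around a min-version gate like
# the one quoted above. Illustrative stand-in, not Nova's driver.py.
import logging

LOG = logging.getLogger(__name__)

MIN_LIBVIRT_LIVESNAPSHOT_VERSION = (1, 3, 0)  # illustrative values
MIN_QEMU_LIVESNAPSHOT_VERSION = (1, 3, 0)

def can_live_snapshot(libvirt_version, qemu_version):
    """Return whether both versions meet the live-snapshot minimums,
    logging the inputs and the decision for troubleshooting."""
    ok = (libvirt_version >= MIN_LIBVIRT_LIVESNAPSHOT_VERSION
          and qemu_version >= MIN_QEMU_LIVESNAPSHOT_VERSION)
    LOG.debug("live snapshot check: libvirt=%s qemu=%s -> %s",
              libvirt_version, qemu_version, ok)
    return ok
```

Tuple comparison makes the version check element-by-element, which is why the versions are modeled as tuples here.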
On 01/12/2017 05:31 AM, Balazs Gibizer wrote:
Hi,
The flavor field of the Instance object is a lazy-loaded field and the
projects field of the Flavor object is also lazy-loaded. Now it seems to
me that when the Instance object lazy loads instance.flavor then the
created Flavor object is orphaned
Yes, I truncated the file and uploaded it:
http://dropcanvas.com/ta7nu
(First time I used this service, please give me feedback if this
doesn't work for you)
I see the "Beginning cold snapshot process" message, but I don't know
why. Any help is appreciated!
Regards,
Eugen
Zitat von Moha
With the OVS driver, how can you even get a VM up and running in OpenStack
without the risk of melting the entire network down?
Both interfaces of the BITW VM are sitting on br-int. The risk of BUM
packets looping between the two interfaces is very high. Even if the BITW VM
does not forward
Hi,
I'm pretty sure one can, via overriding source_repository element settings
[0] with
export DIB_REPOREF_ironic_agent=stable/mitaka
[0]
https://github.com/openstack/diskimage-builder/blob/7fc4856c6a0f5d63cdba2ee30ea7c7d762676bb6/elements/source-repositories/README.rst#override-per-source
Cheers,
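The naming convention behind that override (per the source-repositories README linked above: DIB_REPOREF_ followed by the source name, with dashes replaced by underscores) can be captured in a tiny helper. This helper is illustrative and not part of diskimage-builder.

```python
# Helper illustrating the DIB_REPOREF_<name> override convention used
# above (dashes in the source name become underscores). Illustrative,
# not part of diskimage-builder.

def dib_reporef_var(source_name: str) -> str:
    return "DIB_REPOREF_" + source_name.replace("-", "_")

print(dib_reporef_var("ironic-agent"))  # DIB_REPOREF_ironic_agent
# i.e. in the shell: export DIB_REPOREF_ironic_agent=stable/mitaka
```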
Thanks Pavlo! After downgrading my IPA to the Mitaka branch, my Ironic seems
to work fine now. But another problem: can we specify the IPA version when we
create an image via DIB?
On Thu, Jan 12, 2017 at 3:42 PM, Pavlo Shchelokovskyy <
pshchelokovs...@mirantis.com> wrote:
> Hi,
>
> you shouldn't use the latest m
On Thu, Jan 12, 2017 at 2:31 PM, Balazs Gibizer
wrote:
Hi,
The flavor field of the Instance object is a lazy-loaded field and
the projects field of the Flavor object is also lazy-loaded. Now it
seems to me that when the Instance object lazy loads instance.flavor
then the created Flavor objec
Hi,
The flavor field of the Instance object is a lazy-loaded field and the
projects field of the Flavor object is also lazy-loaded. Now it seems
to me that when the Instance object lazy loads instance.flavor then the
created Flavor object is orphaned [1] therefore
instance.flavor.projects wil
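The orphaning problem being described can be sketched generically: a parent object lazy-loads a child but does not hand it a context, so the child cannot lazy-load its own fields afterwards. All class and attribute names below are illustrative stand-ins, not Nova's actual versioned-object code.

```python
# Generic sketch of the orphaned-lazy-load problem described above.
# Illustrative only; not Nova's object code.

class Flavor:
    def __init__(self, context=None):
        self._context = context

    @property
    def projects(self):
        if self._context is None:
            # An "orphaned" object has no context to load from.
            raise RuntimeError("orphaned Flavor: cannot lazy-load projects")
        return ["demo-project"]  # stand-in for a DB fetch

class Instance:
    def __init__(self, context):
        self._context = context

    @property
    def flavor(self):
        # Bug sketch: the lazily created Flavor is not given the parent's
        # context, so flavor.projects cannot be loaded afterwards.
        return Flavor(context=None)

inst = Instance(context="request-context")
try:
    inst.flavor.projects
except RuntimeError as exc:
    print(exc)  # orphaned Flavor: cannot lazy-load projects
```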
Would you be able to share the logs of a full snapshot run with the compute
node in debug?
Sent from my iPhone
> On Jan 12, 2017, at 7:47 AM, Eugen Block wrote:
>
> That's strange, I also searched for this message, but nothing there. I have
> debug logs enabled on compute node but I don't see
That's strange, I also searched for this message, but nothing there. I
have debug logs enabled on compute node but I don't see anything
regarding ceph. No matter what I do, my instance is always shut down
before a snapshot is taken. What else can I try?
Zitat von John Petrini :
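For reference, the debug logging mentioned here is typically enabled on the compute node with a config change like the following (a sketch; restart the nova-compute service afterwards):

```ini
# /etc/nova/nova.conf on the compute node
[DEFAULT]
debug = True
```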
Mohammed,
What are the facilities for CPU? That's the initial question: how can I do
it, if this is possible?
Best Regards
Tech-corps IT Engineer
Ivan Derbenev
Phone: +79633431774
-Original Message-
From: James Downs [mailto:e...@egon.cc]
Sent: Thursday, January 12, 2017 12:56 AM
To: Ivan Derbene