Li Ma,
This is interesting. In general I am in favor of expanding the scope of any
read/write separation capabilities that we have. I'm not clear on what exactly
you are proposing; hopefully you can answer some of my questions inline.
The thing I had thought of immediately was detection of whether
https://review.openstack.org/#/c/93466/
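The read/write separation being discussed can be sketched in a few lines. This is a toy, not oslo.db's actual API: the `RoutedDB` class and its verb-sniffing dispatch are my own illustration, using stdlib sqlite3 connections as stand-ins for a master and a replica.

```python
import sqlite3

# Statements that must go to the writable master; everything else
# can be served by a (possibly lagging) read replica.
WRITE_VERBS = ("insert", "update", "delete", "create", "drop", "alter", "replace")

class RoutedDB:
    """Toy read/write splitter: route each statement by its leading verb."""

    def __init__(self, master_conn, replica_conn):
        self.master = master_conn
        self.replica = replica_conn

    def execute(self, sql, params=()):
        verb = sql.lstrip().split(None, 1)[0].lower()
        conn = self.master if verb in WRITE_VERBS else self.replica
        return conn.execute(sql, params)
```

A real implementation would also have to pin reads that immediately follow a write to the master (or wait for the replica to catch up), which is exactly where backend-specific detection, e.g. for Galera, would come in.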
On Sun, Aug 10, 2014 at 10:30 PM, Li Ma skywalker.n...@gmail.com wrote:
not sure if I said that :). I know extremely little about galera.
Hi Mike Bayer, I'm so sorry I mistook you for Mike Wilson in my last
post. :-) My apologies to Mike Wilson as well.
I’d totally
the dispatch router discussion, but it
does dampen my enthusiasm a bit not knowing how to fix issues beyond scale
:-(.
-Mike Wilson
[1]
http://www.openstack.org/summit/portland-2013/session-videos/presentation/using-openstack-in-a-traditional-hosting-environment
[2]
http://www.openstack.org/summit
flag to disable/enable this behavior? Maybe I am oversimplifying things...
you tell me.
-Mike Wilson
On Mon, Dec 9, 2013 at 3:01 PM, Vasudevan, Swaminathan (PNB Roseville)
swaminathan.vasude...@hp.com wrote:
Hi Folks,
We are in the process of defining the API for the Neutron Distributed
and other members of
the team on this? I would also be happy to pitch in towards whatever
solution is decided on provided we can rescue the poor deployers :-).
-Mike Wilson
[1] https://bugs.launchpad.net/neutron/+bug/1214115
[2] https://review.openstack.org/43275
On Mon, Mar 3, 2014 at 3:10 PM, Sergey Skripnick sskripn...@mirantis.com wrote:
I can run multiple compute service in same hosts without containers.
Containers give you a nice isolation and another way to try a more
realistic scenario, but my initial goal now is to be able to simulate many
Hangouts worked well at the nova mid-cycle meetup. Just make sure you have
your network situation sorted out beforehand. Bandwidth and firewalls are
what come to mind immediately.
-Mike
On Tue, Mar 11, 2014 at 9:34 AM, Tom Creighton
tom.creigh...@rackspace.com wrote:
When the Designate team
Undeleting things is an important use case in my opinion. We do this in our
environment on a regular basis. In that light, I'm not sure it would be
appropriate just to log the deletion and get rid of the row. I would like
to see it go to an archival table where it is easily restored.
-Mike
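The archival-table idea can be sketched concretely. This is a minimal illustration using stdlib sqlite3; the table layout, column names, and `shadow_` prefix are assumptions for the example, not any project's actual schema.

```python
import sqlite3

def archive_deleted_rows(conn, table, limit=100):
    """Move soft-deleted rows (deleted_at IS NOT NULL) into a shadow
    table.  The live table stays small, yet a row remains restorable
    by moving it back the other way."""
    shadow = f"shadow_{table}"
    rows = conn.execute(
        f"SELECT id, data, deleted_at FROM {table} "
        f"WHERE deleted_at IS NOT NULL LIMIT ?", (limit,)).fetchall()
    for row in rows:
        conn.execute(f"INSERT INTO {shadow} VALUES (?, ?, ?)", row)
        conn.execute(f"DELETE FROM {table} WHERE id = ?", (row[0],))
    conn.commit()
    return len(rows)
```

Restore is the symmetric operation, which is what makes this friendlier to undelete than logging the deletion and dropping the row outright.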
The restore use case is for sure inconsistently implemented and used. I
think I agree with Boris that we treat it as separate and just move on with
cleaning up soft delete. I imagine most deployments don't like having most
of the rows in their tables be useless, making db access slow? That being
After a read-through, it seems pretty good.
+1
On Thu, Mar 13, 2014 at 1:42 PM, Boris Pavlovic bpavlo...@mirantis.com wrote:
Hi stackers,
As a result of discussion:
[openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion
(step by step)
resources / Delayed Deletion !=
Soft deletion.
Best regards,
Boris Pavlovic
On Thu, Mar 13, 2014 at 9:21 PM, Mike Wilson geekinu...@gmail.com wrote:
For some guests we use the LVM imagebackend and there are times
when
Hi Yatin,
I'm glad you are thinking about the drawbacks that the zmq-receiver causes. I
want to give you a reason to keep the zmq-receiver and get your feedback.
The way I think about the zmq-receiver is a tiny little mini-broker that
exists separate from any other OpenStack service. As such, it's
+1 to what Chris suggested. Zombie state that doesn't affect quota, but
doesn't create more problems by trying to reuse resources that aren't
available. That way we can tell the customer that things are deleted, but
we don't need to break our cloud by screwing up future schedule requests.
-Mike
and manufacturing engineering type papers. I'll do more
research on this.
However, this does fit under performance for sure, it is not unrelated at
all. If there is a chance to incorporate this into a performance session I
think this is where it belongs.
-Mike Wilson
On Mon, Oct 14, 2013 at 9:53 PM
So, I observe a consensus here that long migrations suck, +1 to that.
I also observe a consensus that we need to get no-downtime schema changes
working. It seems super important. Also +1 to that.
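No-downtime schema changes usually follow an expand/backfill pattern, which can be sketched briefly. This toy runs against stdlib sqlite3; the `users` table, `email` column, and backfill rule are invented purely for illustration.

```python
import sqlite3

def add_column_online(conn, batch=50):
    """Expand/backfill sketch of a no-downtime schema change:
    1) add the new column as nullable (cheap, no long lock),
    2) backfill it in small batches, committing between batches
       so writers are never blocked for long,
    3) only afterwards start reading/enforcing the new column."""
    conn.execute("ALTER TABLE users ADD COLUMN email TEXT")   # step 1: expand
    while True:                                               # step 2: backfill
        rows = conn.execute(
            "SELECT id, name FROM users WHERE email IS NULL LIMIT ?",
            (batch,)).fetchall()
        if not rows:
            break
        for uid, name in rows:
            conn.execute("UPDATE users SET email = ? WHERE id = ?",
                         (f"{name}@example.com", uid))
        conn.commit()  # short transactions keep the migration unobtrusive
```

The point is that no single statement rewrites the whole table, so the service can keep running while the migration grinds through.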
Getting back to the original review, it got -2'd because Michael would like
to make sure that the
+1
I also have tenants asking for this :-). I'm interested to see a blueprint.
-Mike
On Tue, Oct 29, 2013 at 1:24 PM, Jay Pipes jaypi...@gmail.com wrote:
On 10/29/2013 02:25 PM, Justin Hammond wrote:
We have been considering this and have some notes on our concept, but we
haven't made a
on our end to do properly.
All that being said, I am very interested in what NOSQL DBs can do for us.
-Mike Wilson
[1] https://review.openstack.org/#/c/43151/
[2] https://blueprints.launchpad.net/nova/+spec/db-mysqldb-impl
On Mon, Nov 18, 2013 at 12:35 PM, Mike Spreitzer mspre...@us.ibm.com wrote:
Hi Kanthi,
Just to reiterate what Kyle said, we do have an internal implementation
using flows that looks very similar to security groups. Jun Park was the
guy that wrote this and is looking to get it upstreamed. I think he'll be
back in the office late next week. I'll point him to this thread
On Tue, Nov 19, 2013 at 1:43 AM, Mike Wilson geekinu...@gmail.com wrote:
I've been thinking about this use case for a DHT-like design. I think I
want to do what other people have alluded to here and try to intercept
problematic requests like this one in some sort of pre-stage before
sending to a ring segment. In this case the pre-stage could decide to send this
off to a
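The ring-segment routing behind this idea can be sketched with a standard consistent-hash ring; a "pre-stage" would then sit in front of `get_node()` and divert the requests that don't map cleanly to a single segment. The class and parameter names below are my own invention, not any proposed design.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: each node gets `replicas` points
    on the ring, and a key routes to the first point at or after its
    own hash (wrapping around)."""

    def __init__(self, nodes, replicas=64):
        self._ring = sorted(
            (int(hashlib.md5(f"{node}:{i}".encode()).hexdigest(), 16), node)
            for node in nodes for i in range(replicas))
        self._hashes = [h for h, _ in self._ring]

    def get_node(self, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        idx = bisect.bisect(self._hashes, h) % len(self._ring)
        return self._ring[idx][1]
```

Because adding a node only changes ownership of the ring segments adjacent to its points, most keys stay put, which is what makes handling the few "scatter" requests specially in a pre-stage worthwhile.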
I agree heartily with the availability and resiliency aspect. For me, that
is the biggest reason to consider a NOSQL backend. The other potential
performance benefits are attractive to me also.
-Mike
On Wed, Nov 20, 2013 at 9:06 AM, Soren Hansen so...@linux2go.dk wrote:
2013/11/18 Mike
Hotel information has been posted. Look forward to seeing you all in
February :-).
-Mike
On Mon, Nov 25, 2013 at 8:14 AM, Russell Bryant rbry...@redhat.com wrote:
Greetings,
Other groups have started doing mid-cycle meetups with success. I've
received significant interest in having one
Just some added info for that talk, we are using qpid as our messaging
backend. I have no data for RabbitMQ, but our schedulers are _always_
behind on processing updates. It may be different with rabbit.
-Mike
On Tue, Jul 23, 2013 at 1:56 PM, Joe Gordon joe.gord...@gmail.com wrote:
On Jul
doesn't do this type of thing
already. It _must_ be something that everyone wants. But #2 may be quicker
and easier to implement, my $.02.
-Mike Wilson
On Thu, Jul 25, 2013 at 2:21 PM, Joe Gordon joe.gord...@gmail.com wrote:
Hi All,
We have recently hit some performance issues with nova
So back at the Portland summit, Jun Park and I presented about some of
our difficulties scaling OpenStack with the Folsom release:
http://www.openstack.org/summit/portland-2013/session-videos/presentation/using-openstack-in-a-traditional-hosting-environment
One of the main obstacles we ran
or reroute requests, but the API set is not very large, so it's
a very doable task. That being said, in our environment we use a single
neutron-server with another standing by as backup. It's not as performant
as we'd like it to be, but it hasn't stopped us from growing so far.
-Mike Wilson
P.S