Great question. Some use cases, like the guest agent, would like to see 
something around ~20ms if the agent needs to respond to requests from a 
control surface/panel while a user clicks around. I spoke with a social media 
company that was also interested in low latency, simply because they have a large 
volume of messages they need to slog through in a timely manner or they will 
fall behind (long-polling or websocket support was also something they would like to see).

Other use cases should be fine with, say, 100ms. I want to say Heat’s needs 
probably fall into that latter category, but I’m only speculating.

Some other feedback we got a while back was that people would like a knob to 
tweak queue attributes, e.g. the tradeoff between durability and performance. 
That led to work on queue “flavors”, which Flavio has been working on this past 
cycle, so I’ll let him chime in on that.

From: Joe Gordon
Reply-To: OpenStack Dev 
Date: Wednesday, September 17, 2014 at 2:32 PM
To: OpenStack Dev 
Subject: Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

Can you further quantify what you would consider too slow? Is 100ms too slow?