Thanks for the reply, Angus,

DDoS attacks are definitely a concern we are trying to address here. My
assumptions are based on a solution that is engineered for this type of
thing. Are you more concerned with network I/O during a DoS attack, or
with storing the logs? Under my proposal, the length of time logs are
retained would be configurable, so the operator can choose whether to
keep the logs after processing or not. The network I/O of pumping logs
out is a concern of mine, however.

Sampling seems like the go-to solution for gathering usage data, but I
was looking for something different because sampling can get messy and
can be inaccurate for certain metrics. Depending on the sampling rate,
it can miss spikes in traffic when you are gathering gauge metrics such
as active connections/sessions; logs would be 100% accurate in this
case. Also, I'm assuming LBaaS will emit events (CREATE, UPDATE,
SUSPEND, DELETE, etc.), and combining sampling with events gets
complicated. Combining logs with events is arguably less complicated
because logs are high-granularity: that granularity lets you split the
logs cleanly at the event times. Since sampling has a fixed cadence, you
would have to perform a "manual" sample at the time of each event (i.e.,
added complexity).
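To make the gauge-metric point concrete, here is a minimal sketch (the
numbers and the 10-second poll interval are hypothetical, not tied to any
actual LBaaS implementation) showing how fixed-cadence polling of active
connections can miss a short burst that per-connection logs would capture:

```python
# Hypothetical illustration: fixed-cadence sampling of a gauge metric
# (active connections) vs. the exact value recoverable from logs.

# Per-second active-connection counts over one minute, with a
# 3-second burst peaking at 900 around t=34.
gauge = [50] * 60
gauge[33:36] = [400, 900, 400]

# Poll the gauge every 10 seconds (fixed cadence), starting at t=0.
sampled = [gauge[t] for t in range(0, 60, 10)]

peak_from_samples = max(sampled)  # the burst falls between polls
peak_from_logs = max(gauge)       # logs record every connection

print(peak_from_samples, peak_from_logs)  # 50 900
```

Every poll lands on a quiet second, so the sampled peak is 50 while the
true peak was 900; the coarser the cadence, the worse this gets.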

At the end of the day there is no free lunch, so more insight is
appreciated. Thanks for the feedback.


On 10/27/14 6:55 PM, "Angus Lees" <> wrote:

>On Wed, 22 Oct 2014 11:29:27 AM Robert van Leeuwen wrote:
>> > I'd like to start a conversation on usage requirements and have a few
>> > suggestions. I advocate that, since we will be using TCP and
>> > based protocols, we inherently enable connection logging for load
>> > balancers for several reasons:
>> Just request from the operator side of things:
>> Please think about the scalability when storing all logs.
>> e.g. we are currently logging http requests to one load balanced
>> (that would be a fit for LBAAS). It is about 500 requests per second,
>> which adds up to 40GB per day (in elasticsearch). Please make sure
>> whatever solution is chosen can cope with machines doing 1000s of
>> requests per second...
>And to take this further, what happens during a DoS attack (either syn
>flood or full connections)?  How do we ensure that we don't lose our
>logging and/or amplify the DoS attack?
>One solution is sampling, with a tunable knob for the sampling rate -
>tunable per-vip.  This still increases linearly with attack traffic,
>unless you use time-based sampling (1-every-N-seconds rather than
>1-every-N-packets).
>One of the advantages of (eg) polling the number of current sessions is
>that the cost of that monitoring is essentially fixed regardless of the
>number of connections passing through.  Numerous other metrics (rate of
>new connections, etc) also have this property and could presumably be
>used for accurate billing - without amplifying attacks.
>I think we should be careful about whether we want logging or metrics
>for accurate billing.  Both are useful, but full logging is only really
>needed for ad-hoc debugging (important! but different).
> - Gus
>OpenStack-dev mailing list
