[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-06 Thread nickwallen
Github user nickwallen commented on the issue:

https://github.com/apache/metron/pull/940
  
+1 The unified topology works great.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-06 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
Ok, README is updated with the new topology diagram.  Let me know if 
there's anything else.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-06 Thread nickwallen
Github user nickwallen commented on the issue:

https://github.com/apache/metron/pull/940
  
That's great, @cestella.  Many thanks.  I will run it up in the lab. No 
problem.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread ottobackwards
Github user ottobackwards commented on the issue:

https://github.com/apache/metron/pull/940
  
Maybe the issue has to do with our keys, and their distribution as the size 
gets larger?  Maybe at larger sizes we get more collisions and end up 
calling equals() more often, or something.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread ottobackwards
Github user ottobackwards commented on the issue:

https://github.com/apache/metron/pull/940
  
This should have a diagram and documentation equivalent to those of the 
original split/join strategy (I believe as shown above).



---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
@nickwallen Ok, I refactored the abstraction to separate some concerns, 
rename a few things, and collapse some of the more onerous abstractions.  Also 
updated the javadocs.  

Can you give it another look and see what you think?  We probably should 
also give it another smoketest in the lab to make sure I didn't do something 
dumb.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
Ahhh, that makes sense.  I bet we were getting killed by small allocations 
in the caching layer.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread ben-manes
Github user ben-manes commented on the issue:

https://github.com/apache/metron/pull/940
  
Caffeine doesn't allocate on read, so that would make sense. I saw a [25x 
boost](https://github.com/google/guava/issues/2063#issuecomment-107169736) 
(compared to 
[current](https://github.com/google/guava/issues/2063#issue-82444927)) when 
porting the buffers to Guava.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
I actually suspect GC as well.  We switched the garbage collector to 
G1GC and saw throughput gains, but not nearly the kind of gains we got by 
dropping in Caffeine to replace Guava.
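For context, the swap is nearly a drop-in at the API level. A hypothetical sketch (the capacity, expiry, and `enrich` loader are illustrative stand-ins, not Metron's actual settings; note that Guava's and Caffeine's `LoadingCache` are distinct types with the same simple name, so the imports differ):

```java
// Guava (before): loader supplied via an anonymous CacheLoader.
LoadingCache<String, Object> guavaCache = CacheBuilder.newBuilder()
    .maximumSize(100_000)                    // illustrative capacity
    .expireAfterWrite(10, TimeUnit.MINUTES)  // illustrative expiry
    .build(new CacheLoader<String, Object>() {
      @Override
      public Object load(String key) {
        return enrich(key);  // hypothetical loader function
      }
    });

// Caffeine (after): same builder shape, lambda loader, and far less
// allocation on the read path.
LoadingCache<String, Object> caffeineCache = Caffeine.newBuilder()
    .maximumSize(100_000)
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .build(key -> enrich(key));
```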


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread ben-manes
Github user ben-manes commented on the issue:

https://github.com/apache/metron/pull/940
  
Interesting. Then I guess at that size the reads, rather than the writes, 
become the bottleneck. Perhaps it is incurring a lot more GC overhead, causing 
more collections? Each CLQ addition requires allocating a new queue node. That 
and the cache entry probably get promoted to old gen due to the high churn 
rate, causing everything to slow down. Probably isn't too interesting to 
investigate vs. swapping libraries :)


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
In this case, the loader isn't doing anything terribly expensive, though it 
may in the future (incur an HBase get or some more expensive computation).


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread ben-manes
Github user ben-manes commented on the issue:

https://github.com/apache/metron/pull/940
  
Internally Guava uses a `ConcurrentLinkedQueue` and an `AtomicInteger` to 
record its size, per segment. When a read occurs, it records that in the queue 
and then drains it under the segment's lock (via tryLock) to replay the events. 
This is similar to Caffeine, which uses optimized structures instead. I 
intended the CLQ & counter as baseline scaffolding for replacement, as it is an 
obvious bottleneck, but I could never get it replaced despite advocating for 
it. The penalty of draining the buffers is amortized, but unfortunately this 
buffer isn't capped.

Since there would be a higher hit rate with a larger cache, the reads would 
be recorded more often. Perhaps the contention there and the penalty of draining 
the queue are more observable than a cache miss. That's still surprising, since a 
cache miss usually means more expensive I/O. Is the loader doing expensive work in 
your case?

Caffeine gets around this problem by using more optimal buffers and being 
lossy (on reads only) if it can't keep up. By default it delegates the 
amortized maintenance work to a ForkJoinPool to avoid user-facing latencies, 
since you'll want those variances to be tight. Much of that could be backported 
onto Guava for a nice boost.
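The record-then-drain pattern described above can be sketched in plain Java (a toy illustration, not Guava's actual code; the eviction-policy hook and drain counter are stand-ins added for clarity):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantLock;

// Toy sketch of the per-segment read-buffer pattern: reads are recorded in
// an unbounded queue, then replayed against the eviction policy only when
// the segment lock can be acquired without blocking.
public class SegmentReadBuffer<K> {
  private final Queue<K> readBuffer = new ConcurrentLinkedQueue<>();
  private final ReentrantLock evictionLock = new ReentrantLock();
  private int drainedCount = 0;

  // Called on every cache read; the CLQ add allocates a new queue node,
  // which is the small-allocation cost discussed in the thread.
  public void recordRead(K key) {
    readBuffer.add(key);
    drainIfPossible();
  }

  private void drainIfPossible() {
    // tryLock: if another thread holds the lock, skip draining rather
    // than block the reader.
    if (evictionLock.tryLock()) {
      try {
        K key;
        while ((key = readBuffer.poll()) != null) {
          // In a real cache this would reorder the entry in the LRU chain.
          drainedCount++;
        }
      } finally {
        evictionLock.unlock();
      }
    }
  }

  public int drainedCount() {
    return drainedCount;
  }
}
```

Note the queue is unbounded: under heavy read traffic it can grow between drains, which is the uncapped-buffer issue described above.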


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
We actually did increase the concurrency level for Guava to 64; that is 
what confused us as well.  The hash code is mostly standard and should be evenly 
distributed (the key is pretty much a POJO).
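A hypothetical example of such a POJO-style key (the class and field names are made up for illustration), where `Objects.hash` over all fields yields the kind of standard, evenly distributed hash code described:

```java
import java.util.Objects;

// Hypothetical POJO-style cache key: equals/hashCode over all fields,
// delegating to Objects.hash for a standard, well-distributed hash code.
public final class EnrichmentKey {
  private final String sensorType;
  private final String field;
  private final String value;

  public EnrichmentKey(String sensorType, String field, String value) {
    this.sensorType = sensorType;
    this.field = field;
    this.value = value;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof EnrichmentKey)) return false;
    EnrichmentKey that = (EnrichmentKey) o;
    return Objects.equals(sensorType, that.sensorType)
        && Objects.equals(field, that.field)
        && Objects.equals(value, that.value);
  }

  @Override
  public int hashCode() {
    return Objects.hash(sensorType, field, value);
  }
}
```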


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread ben-manes
Github user ben-manes commented on the issue:

https://github.com/apache/metron/pull/940
  
Guava defaults to a `concurrencyLevel` of 4, given its age and a desire not 
to abuse memory in low-concurrency situations. You probably want to increase it 
to 64 in a heavy workload, which yields a ~4x throughput gain on reads. It won't 
scale much higher, since it has internal bottlenecks, and I could never get 
patches reviewed to fix those.

I've only noticed overall throughput scale with the number of threads, and 
never realized there was a capacity constraint on its performance. One should 
expect some, since the older hash table design results in more collisions, whereas 
CHMv8 does much better there. Still, I would have expected it to even out 
enough unless you have a bad hash code?
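For reference, the setting being discussed is a knob on Guava's cache builder (the capacity here is illustrative):

```java
// concurrencyLevel sets the number of internal segments (Guava's default is 4);
// raising it reduces lock contention under a read-heavy, multi-threaded load.
Cache<String, Object> cache = CacheBuilder.newBuilder()
    .concurrencyLevel(64)   // sized for a heavily concurrent topology
    .maximumSize(100_000)   // illustrative capacity
    .build();
```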


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
The interesting thing we found was that Guava seems to do poorly 
when the number of items in the cache gets large.  When we scaled the test down 
(830 distinct IP addresses chosen randomly and sent in at a rate of 200k events 
per second, with a cache size of 100), the cache kept up; but when we scaled the 
test up (300k distinct IP addresses chosen randomly and sent in at a rate of 
200k events per second, with a cache size of 100k), it didn't. 


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread ben-manes
Github user ben-manes commented on the issue:

https://github.com/apache/metron/pull/940
  
That makes sense. A uniform distribution will, of course, degrade all 
policies to random replacement, so the test is then about how well the 
implementations handle concurrency. Most often caches exhibit a Zipfian 
distribution (the 80-20 rule), so our bias towards frequency is a net gain. We have 
observed a few rare cases where frequency is a poor signal and LRU is optimal, 
and we are exploring adaptive techniques to dynamically tune the cache based on 
the workload's characteristics. These cases don't seem to occur in many 
real-world scenarios that we know of, but it is always nice to know what users 
are experiencing and how much better (or worse) we perform than a standard LRU 
cache.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
We were being purposefully unkind to the cache in the tests.  The load 
simulation chose an IP address at random to present, so each IP had an equal 
probability of being selected.  Whereas, in real traffic, we expect a coherent 
working set.  Not sure of the exact hit rates, though.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread ben-manes
Github user ben-manes commented on the issue:

https://github.com/apache/metron/pull/940
  
Do you know what the hit rates were, for the same data set, between Guava 
and Caffeine? The caches use different policies, so it is always interesting to 
see how they handle given workloads. As we continue to refine our adaptive 
algorithm, W-TinyLFU, it's handy to know what types of workloads to investigate. 
(P.S. We have a simulator for re-running persisted traces, if that's useful for 
your tuning.)


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread nickwallen
Github user nickwallen commented on the issue:

https://github.com/apache/metron/pull/940
  
I completed some fairly extensive performance testing comparing this new 
Unified topology against the existing Split-Join implementation.  The 
difference was dramatic. 

- The Unified topology _performed roughly 3.4 times faster than Split-Join._

Both topologies in this side-by-side test included the same fixes, 
including the fix for the Guava cache problem addressed in #947. The tests 
included two enrichments:
* GeoIP enrichment: `geo := GEO_GET(ip_dst_addr)`
* Compute-only Stellar enrichment: `local := IN_SUBNET(ip_dst_addr, 
'192.168.0.0/24')`

The number one driver of performance is the cache hit rate, which is 
heavily dependent on what your data looks like.  With these enrichments, that's 
driven by how varied the `ip_dst_addr` is in the data.  

I tested both of these topologies with different sets of data intended to 
either increase or decrease that cache hit rate.  The differences between the 
two topologies were fairly consistent across the different data sets. 

When running these topologies, reasonably well-tuned, on the same data, I 
was able to consistently maintain 70,000 events per second with the Split/Join 
topology.  In the same environment, I was able to maintain 312,000 events per 
second using the Unified topology.  

The raw throughput numbers are relative and depend on how much hardware you 
are willing to throw at the problem.  I was running on 3 nodes dedicated to 
running the Enrichment topology only.  But with the same data, on the same 
hardware, the difference was 3.4 times.  That's big.

Pushing as much as you can into a single executor and avoiding network hops 
is definitely the way to go here.



---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-05 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
I ran this up with vagrant and ensured:
* Normal Stellar still works in field transformations as well as enrichments
* swapped in and out new enrichments live
* swapped in and out new threat intel live

Are there any other pending issues here beyond a report of the performance 
impact?


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-02 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
Just FYI, as part of the performance experimentation in the lab here, we 
found that one major impediment to scale was the Guava cache in this topology 
when the cache becomes non-trivial in size (e.g. 10k+ entries).  Swapping in 
[Caffeine](https://github.com/ben-manes/caffeine) immediately had a 
substantial effect.  I created #947 to migrate the split/join infrastructure to 
use Caffeine as well and will look at the performance impact of that change.  I 
wanted to separate that work from here, as it may be that Guava performance is 
fine outside of an explicit threadpool like we have here.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-02 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
@arunmahadevan Thanks for chiming in, Arun.   I would say that most of the 
enrichment work is I/O bound, and we try to avoid it whenever possible with a 
time-evicted LRU cache in front of the enrichments.  We don't always know a 
priori what enrichments users are doing, per se, as their individual 
enrichments may be expressed via Stellar.  The threads here are entirely 
managed via a fixed threadpool service in Storm, and the threadpool is shared 
across all of the executors running in-process on the worker, so we try to 
minimize the thread count.
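A minimal, hypothetical sketch of that shared fixed-pool pattern (the pool size, class name, and lookup are stand-ins, not Metron's actual service):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: one fixed pool per worker process, shared by all
// executors, with each I/O-bound enrichment submitted as an async task.
public class EnrichmentPool {
  private static final ExecutorService POOL = Executors.newFixedThreadPool(8);

  // Submit an enrichment lookup; the caller joins on the returned Future.
  public static Future<String> enrichAsync(String value) {
    return POOL.submit(() -> lookup(value));
  }

  private static String lookup(String value) {
    return "enriched:" + value;  // stand-in for an HBase get or similar I/O
  }

  public static void shutdown() {
    POOL.shutdown();  // let in-flight tasks finish, then release the threads
  }
}
```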


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-02 Thread arunmahadevan
Github user arunmahadevan commented on the issue:

https://github.com/apache/metron/pull/940
  
Managing threadpools within a bolt isn't fundamentally wrong; we have seen 
some use cases where this is done. However, we have been putting effort into 
reducing the overall number of threads created internally within Storm, since the 
thread context switches were causing performance bottlenecks. I assume the 
threadpool threads are mostly IO/network bound, so it should not cause too much 
harm.

Do you need multiple threads because the enrichments involve external DB 
lookups and are time consuming?  Maybe you could compare the performance of 
maintaining a thread pool vs. increasing the bolt's parallelism to achieve a 
similar effect. 

Another option might be to prefetch the enrichment data and load it into 
each bolt so that you might not need separate threads to do the enrichment.

If you are able to manage without threads, that would be preferable. Even 
otherwise it's not that bad, as long as you don't create too many threads and 
they are cleaned up properly. (We have had some cases where the internal threads 
were causing workers to hang.)


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-02 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
@ottobackwards I haven't sent an email to the storm team, but I did run the 
PR past a Storm committer I know and asked his opinion prior to submitting 
the PR.  The general answer was something to the effect of `The overall goal 
should be to reduce the network shuffle unless it's really required.`  Also, the 
notion of using an external threadpool didn't seem to be fundamentally 
offensive.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-02 Thread ottobackwards
Github user ottobackwards commented on the issue:

https://github.com/apache/metron/pull/940
  
Have we thought about sending a mail to the Storm dev list to ask if anyone 
has done this?  Potential issues?


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-02 Thread ottobackwards
Github user ottobackwards commented on the issue:

https://github.com/apache/metron/pull/940
  
If we integrated Storm with YARN this would also be a problem, as our 
resource management may be at odds with YARN's.  I think?

What would be nice is if Storm could manage the pool and we could just use 
it.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-03-02 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
@mraliagha It's definitely a tradeoff.  This is why this is offered as a 
complement to the original split/join topology.  Keep in mind, also, that this 
architecture enables use cases that the other would prevent or make extremely 
difficult and/or network intensive, such as multi-level Stellar statements 
rather than the 2 levels we have now.  We are undergoing some preliminary 
testing in the lab right now, which @nickwallen alluded to, to compare the two 
approaches under at least synthetic load, and we will report back.

Ultimately, I think this boils down to the efficiencies gained by avoiding 
network hops and whether those provide an outsized impact.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-02-28 Thread mraliagha
Github user mraliagha commented on the issue:

https://github.com/apache/metron/pull/940
  
@cestella Thanks, Casey. Wouldn't this solution still be hard to tune? 
Thread pool tuning, and probably the contention between these threads and normal 
Storm workers, makes tuning hard for a production platform with tons of 
feeds/topologies. Storm resource management is very basic at this stage when it 
comes to absorbing spikes, and having a separate thread pool transfers the 
complexity from one place to another. 


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-02-28 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
@nickwallen Sounds good.  When scale tests are done, can we make sure that 
we also include #944 ?


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-02-27 Thread nickwallen
Github user nickwallen commented on the issue:

https://github.com/apache/metron/pull/940
  
I'd hold off on merging this until we can get it tested at some decent scale. 
 Unless it already has been?  Otherwise, I don't see a need to merge this until 
we know it actually addresses a problem.


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-02-26 Thread merrimanr
Github user merrimanr commented on the issue:

https://github.com/apache/metron/pull/940
  
I tested this in full dev and it worked as expected.  +1


---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-02-23 Thread cestella
Github user cestella commented on the issue:

https://github.com/apache/metron/pull/940
  
The current architecture is described in this diagram:

![Enrichment architecture](https://github.com/apache/metron/raw/master/metron-platform/metron-enrichment/enrichment_arch.png)

In short, for each message each splitter will:
* Inspect the configs for the sensor 
* For each sensor, extract the fields required for enrichment and send them 
to the appropriate enrichment bolt (e.g. HBase, geo, Stellar)
  * If one enrichment enriches k fields, then k messages will be sent to 
the enrichment bolt
  * In the case of Stellar, each Stellar subgroup will be a separate message
  * The original message is sent directly to the join bolt
* The enrichment bolts do the enrichment and send the additional fields and 
values onward to be joined with the original message
* The join bolt asynchronously collects the subresults and joins them 
with the original message
  * The join bolt has an LRU cache to hold subresults until all results 
arrive
  * Tuning performance involves tuning this cache (max size and time until 
eviction)
  * Tuning this can be complex because the cache has to be large enough to 
handle spikes in traffic
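A simplified, hypothetical sketch of the join bolt's bookkeeping described above (the class and method names are made up, and the real bolt also evicts by time, not just size):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy sketch: subresults are held in a size-bounded LRU map until all
// expected pieces for a message arrive, then merged into one result.
public class JoinBuffer {
  private final int expectedPieces;
  private final Map<String, List<String>> pending;

  public JoinBuffer(int expectedPieces, int maxSize) {
    this.expectedPieces = expectedPieces;
    // An access-ordered LinkedHashMap with a size cap approximates the
    // LRU cache; entries evicted early are the tuning hazard noted above.
    this.pending = new LinkedHashMap<String, List<String>>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<String, List<String>> eldest) {
        return size() > maxSize;
      }
    };
  }

  /** Returns the joined subresults once all pieces have arrived, else null. */
  public List<String> accept(String messageId, String subresult) {
    List<String> pieces = pending.computeIfAbsent(messageId, k -> new ArrayList<>());
    pieces.add(subresult);
    if (pieces.size() == expectedPieces) {
      pending.remove(messageId);
      return pieces;
    }
    return null;
  }
}
```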




---


[GitHub] metron issue #940: METRON-1460: Create a complementary non-split-join enrich...

2018-02-22 Thread mraliagha
Github user mraliagha commented on the issue:

https://github.com/apache/metron/pull/940
  
Is there any document somewhere that shows how the previous approach was 
implemented? I would like to understand the previous architecture in detail, 
because some of the pros/cons didn't make sense to me. Maybe I can help 
predict what the impact will be. Thanks. 


---