Folks, it's not my habit, but playing around with running Kettle transformations on Flink w/ Beam was so cool that I had to blog about it.
http://sandbox.kettle.be/wordpress/index.php/2019/02/24/kettle-beam-update-0-5-0/

Allow me to again extend my thanks to all the developers involved. Some really cool things are happening right now.

Version 0.5.0 of Kettle Beam now supports all Kettle steps, including third-party connectors like SalesForce, SAP, Neo4j and so on. Obviously they don't always make sense in a big data context, but side-loading data for in-memory lookups and the like can be very useful in many scenarios. For the batched output I also managed to get performance on par with expectations, specifically for Neo4j, since I work for the company after all.

I really appreciate all the help I've received so far in getting to this point. In record time we've gone from conceptual work to something we can consider stable. Apache Beam has really made a huge difference.

Cheers,
Matt

---
Matt Casters <[email protected]>
Senior Solution Architect, Kettle Project Founder
