Hi

2016-09-27 23:03 GMT+02:00 Mike Sofen <mso...@runbox.com>:

> Hi gang,
>
> On PG 9.5.1 on Linux, I’m running some large ETL operations, migrating data
> from a legacy MySQL system into PG, upwards of 250m rows in a transaction
> (it’s on a big box). It’s always a 2-step operation: first extract the raw
> MySQL data and pull it onto the target big box into staging tables that
> match the source; the second step reads the landed dataset and transforms
> it into the final formats, linking to newly generated ids, compressing big
> subsets into jsonb documents, etc.
>
> While I could break it into smaller chunks, it hasn’t been necessary, and
> it doesn’t eliminate my need: how to view the state of a transaction in
> flight, seeing how many rows have been read or inserted (is that possible
> for a transaction in flight?), memory allocations across the various PG
> processes, etc.
>
> Possible or a hallucination?
>
> Mike Sofen (Synthetic Genomics)

some years ago I used a trick:

http://okbob.blogspot.cz/2014/09/nice-unix-filter-pv.html#links

Regards

Pavel