Greetings HAWQ community,

There's an unanswered question about using Spark with HAWQ to analyze a large table.
I realize this is more of a Spark question than a HAWQ question, but it comes from the same user. If someone has an idea, please offer an answer:

http://stackoverflow.com/questions/33004441/setting-spark-memory-allocations-for-extracting-125-gb-of-data-executorlostfai

-Greg

--
Greg Chase
Director of Big Data Communities
http://www.pivotal.io/big-data
Pivotal Software
http://www.pivotal.io/
650-215-0477
@GregChase
Blog: http://geekmarketing.biz/
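For anyone glancing at this before clicking through: the question involves an ExecutorLostFailure while pulling roughly 125 GB of data through Spark. That error often means an executor exceeded its YARN container's memory limit, so the usual first knobs are the memory settings passed to spark-submit. A minimal sketch of that tuning, where all values and the jar name are illustrative (not taken from the question):

```shell
# Illustrative spark-submit invocation for a large extract on YARN.
# Raising executor memory and the off-heap memoryOverhead is a common
# first response to ExecutorLostFailure on big reads; the specific
# numbers below are placeholders, not a recommendation for this job.
spark-submit \
  --master yarn \
  --num-executors 10 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  your-extract-job.jar
```

The right values depend on the cluster's container sizes and how the 125 GB is partitioned, which is why the linked question is worth a look from someone with Spark tuning experience.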
