Raja10D commented on issue #4865:
URL: https://github.com/apache/hop/issues/4865#issuecomment-2649969102

   > "large" in the context of an Excel file is not what is considered "large" 
in the context of a Spark cluster. Reading/writing Excel files on a distributed 
Spark cluster sounds like a square peg, round hole problem to me, but it 
shouldn't be impossible. You'll need to check your beam + spark configuration 
to tweak your pipeline.
   
   Got it. I was able to run my Hop pipelines on a Spark cluster.
   I still have a small doubt: some Hop transforms, such as **CSV File Input**, do not support the Spark engine. Why is that transform not supported on the Spark engine?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
