Does stateful Structured Streaming work on a standalone Spark cluster with a
few nodes? Does it need HDFS? If not, how can I get it working without HDFS?
Regards
Srini
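As a rough illustration of the question, here is a minimal PySpark sketch of a stateful streaming query on a standalone cluster. This is not from the thread: the master URL, socket source, and checkpoint path are all hypothetical. The key point is that the checkpoint location must be an HDFS-compatible, fault-tolerant store that every node can reach (for example a shared NFS mount or an object store such as S3), so HDFS itself is not strictly required.

```python
# Hypothetical sketch: stateful Structured Streaming without HDFS.
# Assumes a standalone master at spark://master-host:7077 and a shared
# NFS mount visible to all executors; both are assumptions, not facts
# from the thread.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("spark://master-host:7077")   # standalone cluster manager
         .appName("stateful-demo")
         .getOrCreate())

# Stateful aggregation: a running count per distinct input line.
counts = (spark.readStream
          .format("socket")
          .option("host", "localhost")
          .option("port", 9999)
          .load()
          .groupBy("value")
          .count())

# checkpointLocation stores the state and write-ahead log. Without HDFS,
# point it at a path shared by all nodes (NFS) or an s3a:// URI.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .option("checkpointLocation",
                 "/mnt/shared-nfs/checkpoints/stateful-demo")
         .start())
query.awaitTermination()
```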
Hi,
Partitioning works with Teradata, but your user may have core and memory
restrictions, so please adjust the number of queries hitting Teradata in
parallel based on the partitions used in your query.
I am able to extract data from on-premise Teradata to S3 in 3 hours, which
from teradata export and
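For context on why the partition count maps to parallel queries: Spark's JDBC source takes `partitionColumn`, `lowerBound`, `upperBound`, and `numPartitions` options and turns the column range into one WHERE clause per partition, each issued as a separate concurrent query against the database. The function below is a simplified sketch of that splitting logic (Spark's actual predicate generation differs in details, e.g. NULL handling); the column name is hypothetical.

```python
# Simplified sketch (not Spark's exact code) of how a JDBC read is split
# into numPartitions range predicates -- one per parallel query.
def partition_predicates(column, lower, upper, num_partitions):
    stride = (upper - lower) // num_partitions
    preds = []
    for i in range(num_partitions):
        lo = lower + i * stride
        if i == 0:
            # First partition also catches values below lowerBound.
            preds.append(f"{column} < {lo + stride}")
        elif i == num_partitions - 1:
            # Last partition catches everything from its start upward.
            preds.append(f"{column} >= {lo}")
        else:
            preds.append(f"{lo} <= {column} AND {column} < {lo + stride}")
    return preds

# Each predicate becomes one concurrent session against Teradata, so
# num_partitions should respect the user's core/memory limits.
print(partition_predicates("emp_id", 0, 100, 4))
```

Lowering `numPartitions` is therefore the direct knob for reducing concurrent load on a restricted Teradata user.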
Hi,
Yeah, we generally read files from HDFS or object stores like S3, GCS, etc.,
where files cannot be updated.
Regards
Gourav
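To make the "files cannot be updated" point concrete, here is a hedged sketch of a file-source streaming read. The bucket, paths, and schema are hypothetical. Spark's file stream source lists the input directory and processes each file exactly once, assuming it is immutable after arrival; the usual pattern is for producers to write to a temporary name and then atomically rename or move the finished file into the watched directory.

```python
# Hypothetical sketch: streaming from an immutable file store (s3a paths
# are assumptions, not from the thread).
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("file-source-demo").getOrCreate()

# File stream sources require an explicit schema.
schema = StructType([
    StructField("id", LongType()),
    StructField("event", StringType()),
])

# Producers should move completed files into this directory atomically,
# so Spark never observes a partially written or later-modified file.
events = (spark.readStream
          .schema(schema)
          .json("s3a://my-bucket/events/"))

query = (events.writeStream
         .format("console")
         .option("checkpointLocation", "s3a://my-bucket/checkpoints/events")
         .start())
```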
On Sun, 7 Jun 2020, 22:36 Jungtaek Lim wrote:
> Hi Nick,
>
> I guess that's by design - Spark assumes the input file will not be
> modified once it is placed on the inpu