Instead of writing to the console, you need to write to the memory sink for the result to be queryable:
.format("memory")
.queryName("tableName")
https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#output-sinks
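The two calls above can be sketched in context. This is a minimal, hypothetical example, not the original poster's job: the "rate" source stands in for the real input so the snippet is self-contained, and the app/table names are made up.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: write a stream to the in-memory sink, then query it with SQL.
val spark = SparkSession.builder()
  .master("local[2]")
  .appName("MemorySinkExample")
  .getOrCreate()

val stream = spark.readStream
  .format("rate")                 // built-in test source: (timestamp, value) rows
  .option("rowsPerSecond", "5")
  .load()

val query = stream.writeStream
  .format("memory")               // keep results in an in-memory table
  .queryName("tableName")         // the table name used in SQL queries
  .outputMode("append")
  .start()

query.processAllAvailable()       // let at least one batch complete

// The sink is now queryable like any registered table:
val n = spark.sql("SELECT count(*) FROM tableName").first().getLong(0)
println(s"rows so far: $n")

query.stop()
```

Note the memory sink holds the whole result in driver memory, so it is meant for debugging and tests, not production.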
From: Aakash Basu
Sent: Friday, April 6, 20
If you are looking for a Spark scheduler that runs on top of Kubernetes, then
this is the way to go:
https://github.com/apache/spark/blob/master/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala
You can also have a look
Yes Yinan I’m looking for a Scala program which submits a Spark job to a
k8s cluster by running spark-submit programmatically
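For submitting programmatically from Scala, Spark ships a launcher API (`org.apache.spark.launcher.SparkLauncher`) that wraps spark-submit. A minimal sketch, assuming a cluster-mode k8s submission; the master URL, image, and jar path are placeholders:

```scala
import org.apache.spark.launcher.SparkLauncher

// Hypothetical configuration: replace the API server URL, container image,
// and application jar with real values for your cluster.
val launcher = new SparkLauncher()
  .setMaster("k8s://https://my-k8s-apiserver:6443")
  .setDeployMode("cluster")
  .setAppResource("local:///opt/spark/examples/jars/spark-examples.jar")
  .setMainClass("org.apache.spark.examples.SparkPi")
  .setConf("spark.kubernetes.container.image", "my-registry/spark:latest")

// startApplication() spawns spark-submit (it needs SPARK_HOME set) and returns
// a SparkAppHandle whose state can be polled or observed via listeners:
// val handle = launcher.startApplication()
```

The handle-based API is usually preferable to shelling out to spark-submit yourself, since it reports application state back to the calling program.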
On Wednesday, April 4, 2018, Yinan Li wrote:
> Hi Kittu,
>
> What do you mean by "a Scala program"? Do you mean a program that submits
> a Spark job to a k8s cluster by running spark-submit?
Hi Panagiotis,
I did that, but it still prints the result of the first query and waits
for new data; it doesn't even go on to the next one.
*Data -*
$ nc -lk 9998
1,2
3,4
5,6
7,8
*Result -*
---
Batch: 0
---
++
|a
Hello Aakash,
When you use query.awaitTermination you are pretty much blocking there
waiting for the current query to stop or throw an exception. In your case
the second query will not even start.
What you could do instead is remove all the blocking calls and use
spark.streams.awaitAnyTermination
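The fix above can be sketched as follows. This is a minimal, self-contained example, not the original job: the "rate" source and the query names are placeholders. The key point is that both queries are started before anything blocks.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[2]")
  .appName("TwoStreamingQueries")
  .getOrCreate()

val src = spark.readStream.format("rate").option("rowsPerSecond", "1").load()

// Start BOTH queries first; start() is non-blocking.
val q1 = src.writeStream.format("console").queryName("first").start()
val q2 = src.writeStream
  .format("memory").queryName("second").outputMode("append").start()

val activeBefore = spark.streams.active.length   // both queries are running

// q1.awaitTermination() here would block before q2 makes visible progress.
// Instead, wait on the shared manager. The no-arg overload blocks until any
// query stops; the timeout overload is used here so the sketch terminates:
spark.streams.awaitAnyTermination(5000)

q1.stop(); q2.stop()
```

In a real job you would call the no-arg `spark.streams.awaitAnyTermination()` (or loop over it) so the driver stays up while all queries run.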
Any help? This is urgent; could someone please clarify this doubt?
-- Forwarded message --
From: Aakash Basu
Date: Thu, Apr 5, 2018 at 3:18 PM
Subject: [Structured Streaming] More than 1 streaming in a code
To: user
Hi,
If I have more than one writeStream in a code, which operates
Any help? This is urgent; could someone please clarify this doubt?
-- Forwarded message --
From: Aakash Basu
Date: Mon, Apr 2, 2018 at 1:01 PM
Subject: [Structured Streaming Query] Calculate Running Avg from Kafka feed
using SQL query
To: user , "Bowden, Chris" <
chris.bow...@microfo
Any help? This is urgent; could someone please clarify this doubt?
-- Forwarded message --
From: Aakash Basu
Date: Thu, Apr 5, 2018 at 2:28 PM
Subject: [Structured Streaming] How to save entire column aggregation to a
file
To: user
Hi,
I want to save an aggregate to a file without
Any help? This is urgent; could someone please clarify this doubt?
-- Forwarded message --
From: Aakash Basu
Date: Thu, Apr 5, 2018 at 2:50 PM
Subject: Spark Structured Streaming Inner Queries fails
To: user
Hi,
Why are inner queries not allowed in Spark Streaming? Spark assumes t
Hello Cesar,
can you add some details, like: the number of columns, the average number of
rows in the DataFrames, the time spent computing the plan with all the unions,
and the time needed to perform the action?
Thanks,
Alessandro
On 5 April 2018 at 23:22, Cesar wrote:
> Thanks for your answers.
>
> The suggested met