Jacek Laskowski created SPARK-20599:
---------------------------------------

             Summary: KafkaSourceProvider should work with ConsoleSink
                 Key: SPARK-20599
                 URL: https://issues.apache.org/jira/browse/SPARK-20599
             Project: Spark
          Issue Type: Improvement
          Components: SQL, Structured Streaming
    Affects Versions: 2.3.0
            Reporter: Jacek Laskowski
            Priority: Minor


I think the following should just work.

{code}
spark.
  read.  // <-- it's a batch query not streaming query if that matters
  format("kafka").
  option("subscribe", "topic1").
  option("kafka.bootstrap.servers", "localhost:9092").
  load.
  write.
  format("console").  // <-- that's not supported currently
  save
{code}
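
For comparison, the streaming variant below does work today since {{ConsoleSinkProvider}} is a {{StreamSinkProvider}} (a minimal sketch, assuming the same broker at {{localhost:9092}} and topic {{topic1}}):

{code}
// Streaming counterpart that the console sink already supports:
// read from Kafka with readStream and write with writeStream/start.
val query = spark.
  readStream.
  format("kafka").
  option("subscribe", "topic1").
  option("kafka.bootstrap.servers", "localhost:9092").
  load.
  writeStream.
  format("console").
  start

query.awaitTermination()
{code}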

The above combination of {{kafka}} source and {{console}} sink leads to the 
following exception:

{code}
java.lang.RuntimeException: org.apache.spark.sql.execution.streaming.ConsoleSinkProvider does not allow create table as select.
  at scala.sys.package$.error(package.scala:27)
  at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:479)
  at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:93)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:93)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:610)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)
  ... 48 elided
{code}
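
The batch path {{DataFrameWriter.save}} goes through {{DataSource.write}}, which only accepts providers that implement {{CreatableRelationProvider}}, while {{ConsoleSinkProvider}} is only a {{StreamSinkProvider}}, hence the "does not allow create table as select" error. A minimal sketch of one way to support the batch case; the method body is an assumption modelled on what the streaming console sink prints per batch, not the actual implementation:

{code}
import org.apache.spark.sql.{DataFrame, SQLContext, SaveMode}
import org.apache.spark.sql.sources.{BaseRelation, CreatableRelationProvider}
import org.apache.spark.sql.types.StructType

// Minimal relation handed back to DataSource.write; nothing can be read back from the console.
case class ConsoleRelation(override val sqlContext: SQLContext, data: DataFrame)
  extends BaseRelation {
  override def schema: StructType = data.schema
}

// Sketch: the provider would additionally mix in CreatableRelationProvider
// (on top of the existing StreamSinkProvider) so that a batch save is allowed.
class ConsoleSinkProvider extends CreatableRelationProvider {
  override def createRelation(
      sqlContext: SQLContext,
      mode: SaveMode,
      parameters: Map[String, String],
      data: DataFrame): BaseRelation = {
    // Mirror the streaming console sink's numRows/truncate options for a one-off batch write.
    val numRows = parameters.get("numRows").map(_.toInt).getOrElse(20)
    val truncate = parameters.get("truncate").map(_.toBoolean).getOrElse(true)
    data.show(numRows, truncate)
    ConsoleRelation(sqlContext, data)
  }
}
{code}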


