ad.RLock' object
gf> Can you please tell me how to do this?
gf> Or at least give me some advice?
gf> Sincerely,
gf> FARCY Guillaume.
gf> -
gf> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
-
ders.java:178)
AS> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
AS> Thanks
AS>
AS> Amit
--
With best wishes, Alex Ott
http://alexott.net/
Twitter: alexott_en (English), alexott (Russian)
>> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>> at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
>> at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:286)
>> at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:209)
>> obj.test_ingest_incremental_data_batch1()
>> File "C:\Users\agundapaneni\Development\ModernDataEstate\tests\test_mdefbasic.py", line 56, in test_ingest_incremental_data_batch1
>> mdef.ingest_incremental_data('example', entity, self.schemas['studentattendance'], 'school_year')
>> File "C:\Users\agundapaneni\Development\ModernDataEstate/src\MDEFBasic.py", line 109, in ingest_incremental_data
>> query.awaitTermination()  # block until query is terminated with stop() or with an error; a StreamingQueryException is thrown if an exception occurs
>> File "C:\Users\agundapaneni\Development\ModernDataEstate\.tox\default\lib\site-packages\pyspark\sql\streaming.py", line 101, in awaitTermination
>> return self._jsq.awaitTermination()
>> File "C:\Users\agundapaneni\Development\ModernDataEstate\.tox\default\lib\site-packages\py4j\java_gateway.py", line 1309, in __call__
>> return_value = get_return_value(
>> File "C:\Users\agundapaneni\Development\ModernDataEstate\.tox\default\lib\site-packages\pyspark\sql\utils.py", line 117, in deco
>> raise converted from None
>> pyspark.sql.utils.StreamingQueryException: org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter$.checkFieldNames(Lscala/collection/Seq;)V
>>
>> === Streaming Query ===
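A note on reading that exception: the trailing `checkFieldNames(Lscala/collection/Seq;)V` is a JVM method descriptor (a method taking a `scala.collection.Seq` and returning void), so the JVM failed to find that exact signature at runtime. That usually points to a binary-compatibility mismatch on the classpath, e.g. a third-party jar compiled against a different Spark release than the `pyspark` in the environment; aligning those versions is typically the fix, not changing the data. Since the method that could not be resolved validates Parquet field names, a rough pre-check of column names can also be done in plain Python. This is a hedged sketch only: the character set below is an assumption based on the names Parquet builds of Spark have historically rejected; the authoritative rules live in `ParquetSchemaConverter` itself.

```python
# Characters assumed to be rejected in Parquet column names (illustrative,
# not the authoritative Spark rule set).
INVALID_CHARS = set(' ,;{}()\n\t=')

def check_field_names(names):
    """Return the column names that would fail the assumed Parquet rules."""
    return [n for n in names if any(c in INVALID_CHARS for c in n)]

# Example: 'student id' contains a space, so it is flagged.
bad = check_field_names(['school_year', 'student id'])
```

Running such a check before starting the stream surfaces bad column names as a clear Python error instead of a Py4J-wrapped StreamingQueryException.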
second one does not.
S> Is there any solution to the problem of being able to write to multiple sinks in Continuous Trigger Mode using Structured Streaming?
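For context: the continuous trigger supports only a restricted set of sinks and operations, and a commonly suggested workaround is to fall back to a micro-batch trigger and use `foreachBatch` to fan each batch out to several sinks (whether that still meets the latency goal is a separate question). The fan-out idea can be sketched in plain Python, with list-appending callables standing in for Spark sinks; in real PySpark this function would be passed to `df.writeStream.foreachBatch(...)`, and the names here are illustrative assumptions, not Spark API.

```python
def fan_out_batch(batch, sinks):
    """Write one micro-batch to every sink in turn; a production version
    would also handle per-sink failures and idempotent re-writes."""
    for write in sinks:
        write(batch)

# Two stand-in "sinks" collecting whatever they are asked to write.
kafka_sink, parquet_sink = [], []
fan_out_batch([{'id': 1}], [kafka_sink.append, parquet_sink.append])
```

The key design point is that one function sees each batch exactly once and performs all writes, instead of running two separate streaming queries against the same source.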
AS> connector.
AS> Thanks
AS> Amit
ugh a dependency is specified. Is there any way to fix this? Zeppelin version is
s> 0.9.0, Spark version is 2.4.6, and Kafka version is 2.4.1. I have specified
s> the dependency in the packages and added a jar file that contains the
s> kafka-0-10 streaming connector.
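A frequent cause of "class not found even though a dependency is specified" is a connector artifact whose Spark or Scala version does not match the runtime. For Spark 2.4.6, which is built against Scala 2.11 by default, the coordinates would be (as an assumption to verify against the actual Spark build in use):

```properties
spark.jars.packages  org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.6
```

Note also that specifying the package *and* separately adding a jar of a different connector version can itself cause classpath conflicts; it is safer to pick one mechanism.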
tasks...
Srinivas V at "Sat, 18 Apr 2020 10:32:33 +0530" wrote:
SV> Thank you Alex. I will check it out and let you know if I have any questions.
SV> On Fri, Apr 17, 2020 at 11:36 PM Alex Ott wrote:
SV> http://shop.oreilly.com/product/0636920047568.do has quite go
out best cluster size and number of executors and cores
required.